Introduction
======================
You have generated an MD-SAL module.

* You should be able to successfully run ```mvn clean install``` on this project.

Next Steps
======================
* Run a ```mvn clean install``` if you haven't already. This will generate some code from the yang models.
* Modify the model yang file under the model project.
* Follow the comments in the generated provider class to wire your new provider into the generated 
code.
* Modify the generated provider model to respond to and handle the yang model. Depending on what
you added to your model you may need to inherit additional interfaces or make other changes to
the provider model.

Generated Bundles
======================
* model
    - Provides the yang model for your application. This is your primary northbound interface.
* provider
    - Provides a template implementation for a provider to respond to your yang model.
* features
    - Defines a karaf feature. If you add dependencies on third-party bundles then you will need to
      modify the features.xml to list out the dependencies.
* installer
    - Bundles all of the jars and third party dependencies (minus ODL dependencies) into a single
      .zip file.

Usage
======================
## Purpose
The purpose of this ODL feature is to support geo-redundancy through a series of ODL integrated health checks and tools.

## Properties File
On initialization gr-toolkit expects to find a file named ```gr-toolkit.properties``` located in the ```SDNC_CONFIG``` directory. The properties file should contain:
- ```akka.conf.location```
    - The path to the akka.conf configuration file.
- ```adm.useSsl```
    - true/false; Determines whether to use HTTP or HTTPS when making requests to the Admin Portal.
- ```adm.fqdn```
    - The FQDN or URL of the site's Admin Portal.
- ```adm.healthcheck```
    - The URL path of the Admin Portal's health check page.
- ```adm.port.http```
    - The HTTP port for the Admin Portal.
- ```adm.port.ssl```
    - The HTTPS port for the Admin Portal.
- ```controller.credentials```
    - username:password; The credentials used to make requests to the ODL controller.
- ```controller.useSsl```
    - true/false; Determines whether to use HTTP or HTTPS when making requests to the controller.
- ```controller.port.http```
    - The HTTP port for the ODL Controller.
- ```controller.port.ssl```
    - The HTTPS port for the ODL Controller.
- ```controller.port.akka```
    - The port used for Akka communications on the ODL Controller.
- ```mbean.cluster```
    - The Jolokia path for the Akka Cluster MBean.
- ```mbean.shardManager```
    - The Jolokia path for the Akka ShardManager MBean.
- ```mbean.shard.config```
    - The Jolokia path for the Akka Shard MBean. This should be templated to look like ```/jolokia/read/org.opendaylight.controller:Category=Shards,name=%s,type=DistributedConfigDatastore```. GR Toolkit will use this template with information pulled from the Akka ShardManager MBean.
- ```site.identifier```
    - A unique identifier for the site the ODL Controller resides on.
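
For reference, a minimal example of what such a file might look like. All values below are placeholders, and the ```mbean.cluster``` and ```mbean.shardManager``` paths are illustrative assumptions; only the ```mbean.shard.config``` template is taken from the description above.

```properties
# Example gr-toolkit.properties (placeholder values)
akka.conf.location=/opt/opendaylight/configuration/initial/akka.conf
adm.useSsl=true
adm.fqdn=admin-portal.example.com
adm.healthcheck=/healthcheck
adm.port.http=8080
adm.port.ssl=8443
controller.credentials=admin:admin
controller.useSsl=false
controller.port.http=8181
controller.port.ssl=8443
controller.port.akka=2550
mbean.cluster=/jolokia/read/akka:type=Cluster
mbean.shardManager=/jolokia/read/org.opendaylight.controller:Category=ShardManager,name=shard-manager-config,type=DistributedConfigDatastore
mbean.shard.config=/jolokia/read/org.opendaylight.controller:Category=Shards,name=%s,type=DistributedConfigDatastore
site.identifier=site-1
```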

## Site Identifier
Returns the unique identifier of the site the ODL Controller resides on.

> ### Input: None
> 
> ### Output
> ```json
> {
>   "output": {
>     "id": "UNIQUE_IDENTIFIER_HERE",
>     "status": "200"
>   }
> }
> ```

## Admin Health
Returns HEALTHY/FAULTY based on whether a 200 response is received from the Admin Portal's health check page.
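
In other words, the health value is derived directly from the HTTP status of the health check request. A minimal sketch of that mapping (the helper name is ours, not part of the plugin):

```python
def admin_health_from_status(status_code: int) -> str:
    """Map the Admin Portal health check HTTP status to a health value."""
    # Any non-200 response (or no response at all) is treated as FAULTY.
    return "HEALTHY" if status_code == 200 else "FAULTY"

print(admin_health_from_status(200))  # HEALTHY
print(admin_health_from_status(503))  # FAULTY
```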

> ### Input: None
> 
> ### Output
> ```json
> {
>   "output": {
>     "status": "200",
>     "health": "HEALTHY"
>   }
> }
> ```

## Database Health
Returns HEALTHY/FAULTY based on whether DbLib can obtain a writable connection from its pool.

> ### Input: None
> 
> ### Output
> ```json
> {
>   "output": {
>     "status": "200",
>     "health": "HEALTHY"
>   }
> }
> ```

## Cluster Health
Uses Jolokia queries to determine shard health and voting status. In a 3 ODL node configuration, 2 FAULTY nodes constitute a FAULTY site. In a 6 node configuration it is assumed that there are 2 sites consisting of 3 nodes each.
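
The per-site roll-up described above can be sketched as follows. The member structure is a simplified assumption based on the output payload below, not the plugin's internal types:

```python
def site_health(members):
    """Return HEALTHY/FAULTY for one site's group of ODL cluster members.

    A member is treated as FAULTY when it is unreachable or not up;
    2 or more FAULTY members out of 3 constitute a FAULTY site.
    """
    faulty = sum(1 for m in members if m["unreachable"] or not m["up"])
    return "FAULTY" if faulty >= 2 else "HEALTHY"

site1 = [
    {"role": "member-1", "unreachable": False, "up": True},
    {"role": "member-2", "unreachable": False, "up": True},
    {"role": "member-3", "unreachable": True,  "up": False},
]
print(site_health(site1))  # HEALTHY (only one faulty member)
```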

> ### Input: None
> 
> ### Output
> ```json
> {
>   "output": {
>     "site1-health": "HEALTHY",
>     "members": [
>       {
>         "address": "member-3.node",
>         "role": "member-3",
>         "unreachable": false,
>         "voting": true,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-3-shard-default-config"
>           }
>         ]
>       },
>       {
>         "address": "member-1.node",
>         "role": "member-1",
>         "unreachable": false,
>         "voting": true,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-1-shard-default-config"
>           }
>         ]
>       },
>       {
>         "address": "member-5.node",
>         "role": "member-5",
>         "unreachable": false,
>         "voting": false,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-5-shard-default-config"
>           }
>         ]
>       },
>       {
>         "address": "member-2.node",
>         "role": "member-2",
>         "unreachable": false,
>         "leader": [
>           {
>             "shard": "member-2-shard-default-config"
>           }
>         ],
>         "commit-status": [
>           {
>             "shard": "member-5-shard-default-config",
>             "delta": 148727
>           },
>           {
>             "shard": "member-4-shard-default-config",
>             "delta": 148869
>           }
>         ],
>         "voting": true,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-2-shard-default-config"
>           }
>         ]
>       },
>       {
>         "address": "member-4.node",
>         "role": "member-4",
>         "unreachable": false,
>         "voting": false,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-4-shard-default-config"
>           }
>         ]
>       },
>       {
>         "address": "member-6.node",
>         "role": "member-6",
>         "unreachable": false,
>         "voting": false,
>         "up": true,
>         "replicas": [
>           {
>             "shard": "member-6-shard-default-config"
>           }
>         ]
>       }
>     ],
>     "status": "200",
>     "site2-health": "HEALTHY"
>   }
> }
> ```

## Site Health
Aggregates data from Admin Health, Database Health, and Cluster Health, and returns a simplified payload containing the health of a site. A FAULTY Admin Portal or database health status constitutes a FAULTY site; in a 3 ODL node configuration, 2 FAULTY nodes constitute a FAULTY site. If any portion of the health check registers as FAULTY, the entire site is designated FAULTY. In a 6 node configuration these health checks are also performed cross site.
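
Conceptually the aggregation is a logical AND of the individual checks: a site is HEALTHY only when every check passes. A sketch under that assumption (the function name is ours):

```python
def aggregate_site_health(admin, database, cluster):
    """A site is HEALTHY only if the Admin Portal, database, and cluster
    checks all report HEALTHY; any FAULTY component makes the site FAULTY."""
    checks = (admin, database, cluster)
    return "HEALTHY" if all(c == "HEALTHY" for c in checks) else "FAULTY"

print(aggregate_site_health("HEALTHY", "HEALTHY", "HEALTHY"))  # HEALTHY
print(aggregate_site_health("HEALTHY", "FAULTY", "HEALTHY"))   # FAULTY
```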

> ### Input: None
> 
> ### Output
> ```json
> {
>   "output": {
>     "sites": [
>       {
>         "id": "SITE_1",
>         "role": "ACTIVE",
>         "health": "HEALTHY"
>       },
>       {
>         "id": "SITE_2",
>         "role": "STANDBY",
>         "health": "FAULTY"
>       }
>     ],
>     "status": "200"
>   }
> }
> ```

## Halt Akka Traffic
Inserts iptables rules to block Akka traffic to/from a specific node on a specified port.
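
The effect is roughly equivalent to inserting DROP rules for each node/port pair in the request. A sketch of the rule construction; the exact command shapes are our assumption of typical iptables usage, not necessarily the plugin's invocation:

```python
def halt_rules(node_info):
    """Build iptables commands that drop Akka traffic to/from each node:port."""
    cmds = []
    for entry in node_info:
        node, port = entry["node"], entry["port"]
        # Block inbound traffic from the node on the Akka port...
        cmds.append(f"iptables -I INPUT -s {node} -p tcp --dport {port} -j DROP")
        # ...and outbound traffic to it; "Resume" would delete (-D) these rules.
        cmds.append(f"iptables -I OUTPUT -d {node} -p tcp --dport {port} -j DROP")
    return cmds

for cmd in halt_rules([{"node": "your.odl.node", "port": "2550"}]):
    print(cmd)
```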

> ### Input:
> ```json
> {
>   "input": {
>     "node-info": [
>       {
>         "node": "your.odl.node",
>         "port": "2550"
>       }
>     ]
>   }
> }
> ```
> 
> ### Output
> ```json
> {
>   "output": {
>     "status": "200"
>   }
> }
> ```

## Resume Akka Traffic
Removes iptables rules to allow Akka traffic to/from a specific node on a specified port.

> ### Input:
> ```json
> {
>   "input": {
>     "node-info": [
>       {
>         "node": "your.odl.node",
>         "port": "2550"
>       }
>     ]
>   }
> }
> ```
> 
> ### Output
> ```json
> {
>   "output": {
>     "status": "200"
>   }
> }
> ```

## Failover
Only usable in a 6 ODL node configuration. Determines which site is active/standby, switches voting to the standby site, and isolates the old active site. If ```backupData=true```, an MD-SAL export will be scheduled and backed up to a Nexus server (requires the ccsdk.sli.northbound.daexim-offsite-backup feature).

> ### Input:
> ```json
> {
>   "input": {
>     "backupData": "true"
>   }
> }
> ```
> 
> ### Output
> ```json
> {
>   "output": {
>     "status": "200",
>     "message": "Failover complete."
>   }
> }
> ```