.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
*************
Policy on OOM
*************
.. contents::
:depth: 3
The Policy capabilities of the new Beijing release under OOM are described here.
Installation
^^^^^^^^^^^^
Follow the OOM installation instructions at http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/index.html
Overview of the running system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Upon initialization, you should see the following pods, one instance for each Policy component: PAP, PDP-X, BRMSGW, PDP-D, policydb, and nexus.
Note the "-0" suffix for PDP-X and PDP-D components, which will be increased as they are scaled out to improve runtime performance and reliability.
.. code-block:: bash
:caption: verify pods
kubectl get pods --all-namespaces -o=wide
onap dev-brmsgw-5dbc4c8dc4-llk5s 1/1 Running 0 18m 10.42.120.43 k8sx
onap dev-drools-0 1/1 Running 0 18m 10.42.60.27 k8sx
onap dev-nexus-7d96568f5f-qp5td 1/1 Running 0 18m 10.42.172.8 k8sx
onap dev-pap-8587696769-vwj6k 2/2 Running 0 18m 10.42.19.137 k8sx
onap dev-pdp-0 2/2 Running 0 18m 10.42.144.218 k8sx
onap dev-policydb-587d55bdff-4f5dz 1/1 Running 0 18m 10.42.12.242 k8sx
You will also see a service for every component:
.. code-block:: bash
:caption: verify services
kubectl get services --all-namespaces
onap brmsgw NodePort 10.43.209.173 <none> 9989:30216/TCP 24m
onap drools NodePort 10.43.27.92 <none> 6969:30217/TCP,9696:30221/TCP 24m
onap nexus NodePort 10.43.19.171 <none> 8081:30236/TCP 24m
onap pap NodePort 10.43.9.166 <none> 8443:30219/TCP,9091:30218/TCP 24m
onap pdp ClusterIP None <none> 8081/TCP 24m
onap policydb ClusterIP None <none> 3306/TCP 24m
Config and Decision policy requests will be distributed across PDP-Xs through the *pdp* service. PDP-X clients (such as DCAE) should configure their URLs to go through the *pdp* service. Their requests will be distributed across the available PDP-X replicas.
The PDP-Xs can also be accessed individually (dev-pdp-0, or dev-pdp-N if scaled out), but it is preferable for external PDP-X clients to interface through the service.
PDP-Ds are also accessible as a group through the service IP, but DMaaP is the main means of communication with other ONAP components.
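As an illustration of service-based access, the sketch below queries the *pdp* service by its cluster DNS name from within any pod in the *onap* namespace; the service distributes the request across the PDP-X replicas. The endpoint, headers, and default credentials are the same ones used in the query examples later in this document.
.. code-block:: bash
:caption: Querying policies through the "pdp" service DNS name (sketch)
# Run from inside any pod in the onap namespace (e.g. the pap container).
# "pdp" resolves to the ClusterIP service fronting all PDP-X replicas.
curl --silent -X POST \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header 'ClientAuth: cHl0aG9uOnRlc3Q=' \
--header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' \
--header 'Environment: TEST' \
-d '{"policyName": ".*"}' \
http://pdp:8081/pdp/api/getConfig | python -m json.tool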
Healthchecks
^^^^^^^^^^^^
Verify that the policy healthcheck passes when run through the robot framework:
.. code-block:: bash
:caption: robot healthcheck
~/oom/kubernetes/robot/ete-k8s.sh onap health 2> /dev/null | grep PASS
..
Basic Policy Health Check | PASS |
..
A policy healthcheck with more detailed output can be performed by invoking the drools service directly, here from the OOM VM through its NodePort:
.. code-block:: none
:caption: PDP-D service (more detailed) healthcheck
# Using default credentials for the healthcheck service.
# To change the default username and passwords for this service,
# please modify configuration pre-installation at:
# oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/keys/feature-healthcheck.conf
curl --silent --user 'healthcheck:zb!XztG34' -X GET http://localhost:30217/healthcheck | python -m json.tool
{
"details": [
{
"code": 200,
"healthy": true,
"message": "alive",
"name": "PDP-D",
"url": "self"
},
{
"code": 200,
"healthy": true,
"message": "",
"name": "PAP",
"url": "http://pap:9091/pap/test"
},
{
"code": 200,
"healthy": true,
"message": "",
"name": "PDP",
"url": "http://pdp:8081/pdp/test"
}
],
"healthy": true
}
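The same healthcheck is reachable from within the cluster through the *drools* service name on its internal port; a sketch, assuming the default credentials above:
.. code-block:: bash
:caption: In-cluster healthcheck through the "drools" service (sketch)
# Run from within any pod in the onap namespace; "drools" resolves to the drools service.
# 6969 is the internal service port mapped to NodePort 30217 (see the service listing above).
curl --silent --user 'healthcheck:zb!XztG34' -X GET http://drools:6969/healthcheck | python -m json.tool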
PDP-X Active/Active Pool
^^^^^^^^^^^^^^^^^^^^^^^^
The policy engine UI (console container in the pap pod) can be used to check that the PAP and the PDP-Xs are synchronized.
The console URL is accessible at ``http://<oom-vm>:30219/onap/login.htm``. Select the *PDP* menu entry on the left side panel under *Policy*.
.. image:: srmPdpxPdpMgmt.png
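The console NodePort (30219 above) can be confirmed by inspecting the *pap* service:
.. code-block:: bash
:caption: Looking up the console NodePort
kubectl get service pap -n onap
# NAME   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE
# pap    NodePort   10.43.9.166   <none>        8443:30219/TCP,9091:30218/TCP   24m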
After initialization, there will be no policies loaded into the policy subsystem. This can be verified by accessing the Editor tab in the UI.
PDP-D Active/Active Pool
^^^^^^^^^^^^^^^^^^^^^^^^
The PDP-D replicas will come up with the amsterdam controller installed in brainless mode (no maven coordinates), since the controller has not yet been associated with a set of drools rules to run (the control loop rules).
The following command can be issued against each of the PDP-D replica IPs:
.. code-block:: bash
:caption: Querying the rules association for a PDP-D replica
# Using default credentials for the drools telemetry service.
# To change the default username and passwords for this service,
# please modify configuration pre-installation at:
# oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/base.conf
curl --silent --user '@1b3rt:31nst31n' -X GET http://<drools-replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool
{
"alive": false,
"artifactId": "NO-ARTIFACT-ID",
"brained": false,
"canonicalSessionNames": [],
"container": null,
"groupId": "NO-GROUP-ID",
"locked": false,
"recentSinkEvents": [],
"recentSourceEvents": [],
"sessionNames": [],
"version": "NO-VERSION"
}
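To check all replicas in one pass, the replica IPs can be pulled from kubectl and iterated over; a sketch, noting that the IP column position may vary with the kubectl version:
.. code-block:: bash
:caption: Querying the rules association of every PDP-D replica (sketch)
# Default drools telemetry credentials, as above.
# With "kubectl get pods -n onap -o wide" the pod IP is the 6th column.
for ip in $(kubectl get pods -n onap -o wide | grep drools | awk '{print $6}'); do
echo "== PDP-D at ${ip} =="
curl --silent --user '@1b3rt:31nst31n' -X GET \
"http://${ip}:9696/policy/pdp/engine/controllers/amsterdam/drools" | python -m json.tool
done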
Before Installing Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^
In large multi-node OOM Kubernetes installations of full ONAP, components have been observed to experience DNS and connectivity problems across pods and services. Eventually the system becomes stable and ready to be used. Single-node, smaller installations do not seem to have these issues. Give the system enough time to initialize properly before pushing policies.
Make sure the policy subsystem is initialized by:
1. Verify that the "PDP Management" screen shows the pooled PDP-X as "UP_TO_DATE". If the PDP-X does not show the correct state, restart the faulty one to force re-synchronization with the PAP.
.. code-block:: bash
:caption: Force re-synchronization of a PDP-X
kubectl exec -it dev-pdp-0 --container pdp -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop; policy.sh start"
# bounce the BRMSGW as well since it synchronizes with PDP-Xs via websockets:
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop; policy.sh start"
2. Verify service name resolution is OK across policy components
.. code-block:: bash
:caption: Verify policy services connectivity
# pick any policy pod to run these tests from:
# kubectl get pods --all-namespaces -o=wide
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping policydb"
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping pdp"
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping drools"
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping nexus"
kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping message-router"
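3. Optionally, wait for the healthchecks to pass before pushing policies. The sketch below polls the PDP-D healthcheck from the OOM VM, using the default healthcheck credentials and the NodePort of this installation:
.. code-block:: bash
:caption: Polling the PDP-D healthcheck until it passes (sketch)
# A simple readiness gate: retry until the healthcheck responds and
# reports no unhealthy components.
until out=$(curl --silent --fail --user 'healthcheck:zb!XztG34' http://localhost:30217/healthcheck) \
&& ! echo "$out" | grep -q '"healthy": false'; do
echo "policy subsystem not healthy yet, retrying ..."
sleep 30
done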
Installing Policies
^^^^^^^^^^^^^^^^^^^
The OOM default installation comes with no policies pre-configured. A sample script, used by the integration teams to load policies supporting all four use cases, is located at */tmp/policy-install/config/push-policies.sh* in the pap container within the pap pod. This script can be modified for your particular installation; for example, if you are only interested in the vCPE use case, remove the REST API calls unrelated to vCPE. For the vFW use case, you may want to edit the encoded operational policy to point to the proper resourceID in your installation.
The *push-policies.sh* script can be executed as follows:
.. code-block:: bash
:caption: Installing the default policies
# NOTE: If modifications are required to the /tmp/policy-install/config/push-policies.sh, it should be copied
# to a different location, for example /tmp as /tmp/policy-install/config directory is read-only.
kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c "export PRELOAD_POLICIES=true; /tmp/policy-install/config/push-policies.sh"
..
Create BRMSParam Operational Policies
..
Create BRMSParamvFirewall Policy
..
Transaction ID: ef08cc65-9950-4478-a4ab-0f3bc2519f60 --Policy with the name com.Config_BRMS_Param_BRMSParamvFirewall.1.xml was successfully created.Create BRMSParamvDNS Policy
..
Transaction ID: 52e33efe-ba66-47de-b404-8d441107d8a9 --Policy with the name com.Config_BRMS_Param_BRMSParamvDNS.1.xml was successfully created.Create BRMSParamVOLTE Policy
..
Transaction ID: f13072b7-6258-4c16-99da-f908d29363ec --Policy with the name com.Config_BRMS_Param_BRMSParamVOLTE.1.xml was successfully created.Create BRMSParamvCPE Policy
..
Transaction ID: 616f970a-b45e-40f7-88cd-d63000d22cca --Policy with the name com.Config_BRMS_Param_BRMSParamvCPE.1.xml was successfully created.Create MicroService Config Policies
Create MicroServicevFirewall Policy
..
Transaction ID: 4c143a15-20af-408a-9285-bc7940261829 --Policy with the name com.Config_MS_MicroServicevFirewall.1.xml was successfully created.Create MicroServicevDNS Policy
..
Transaction ID: 1e54ae73-509b-490e-bf62-1fea7989fd5f --Policy with the name com.Config_MS_MicroServicevDNS.1.xml was successfully created.Create MicroServicevCPE Policy
..
Transaction ID: 32239868-bab2-4e12-9fd9-81a0ed4a6b1c --Policy with the name com.Config_MS_MicroServicevCPE.1.xml was successfully created.Creating Decision Guard policy
..
Transaction ID: b43cb9d5-42c7-4654-aacf-d4898c4d13bb --Policy with the name com.Decision_AllPermitGuard.1.xml was successfully created.Push Decision policy
..
Transaction ID: 3c1e4ae6-6991-415b-9f2d-c665a8c5a026 --Policy 'com.Decision_AllPermitGuard.1.xml' was successfully pushed to the PDP group 'default'.Pushing BRMSParam Operational policies
..
Transaction ID: 58d26d03-b5b8-4fd3-b2df-1411a1c36420 --Policy 'com.Config_BRMS_Param_BRMSParamvFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvDNS
..
Transaction ID: 0854e54a-504b-4f06-bc2f-30f491cb9f5a --Policy 'com.Config_BRMS_Param_BRMSParamvDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamVOLTE
..
Transaction ID: d33c7dde-5c99-4dab-b4ff-9988473cd88d --Policy 'com.Config_BRMS_Param_BRMSParamVOLTE.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvCPE
..
Transaction ID: e8c8a73e-127c-4318-9e59-3cae9dcbe011 --Policy 'com.Config_BRMS_Param_BRMSParamvCPE.1.xml' was successfully pushed to the PDP group 'default'.Pushing MicroService Config policies
..
Transaction ID: ec0429d7-e35f-4978-8a6c-40d2b5b3be61 --Policy 'com.Config_MS_MicroServicevFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevDNS
..
Transaction ID: f7072f05-7b74-45b5-9bd3-99b7f8023e3e --Policy 'com.Config_MS_MicroServicevDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevCPE
..
Transaction ID: 6d47db63-7956-4f5f-ab34-aeb5a124a90d --Policy 'com.Config_MS_MicroServicevCPE.1.xml' was successfully pushed to the PDP group 'default'.
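If the script does need to be modified, it must first be copied out of the read-only config directory, as noted above. A sketch of that workflow, using the pod and container names of this example installation:
.. code-block:: bash
:caption: Copying push-policies.sh out of the read-only config directory (sketch)
# Copy the script to a writable location inside the pap container, edit it, then run it.
kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c \
"cp /tmp/policy-install/config/push-policies.sh /tmp/push-policies.sh"
# ... edit /tmp/push-policies.sh inside the container as needed ...
kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c \
"export PRELOAD_POLICIES=true; /tmp/push-policies.sh"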
The pushed policies can eventually be viewed through the Policy UI:
.. image:: srmEditor.png
As part of the policy push process, the brmsgw component composes drools rules artifacts and publishes them to the nexus repository at ``http://<oom-vm>:30236/nexus/``.
.. image:: srmNexus.png
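Besides the web UI, the published artifact can be checked over HTTP with a directory listing of the maven layout path. This is a sketch; the ``releases`` repository name is an assumption of this example and may differ per installation:
.. code-block:: bash
:caption: Checking the published rules artifact in nexus (sketch)
# The path follows the maven groupId/artifactId layout; adjust the repository name as needed.
curl --silent "http://<oom-vm>:30236/nexus/content/repositories/releases/org/onap/policy-engine/drools/amsterdam/policy-amsterdam-rules/"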
At the same time, each PDP-D replica receives notifications for each new version of the policies to run for the amsterdam controller. The following command can be run against each replica to verify that the amsterdam controller is now associated with the latest rules version:
.. code-block:: none
:caption: Querying the rules association of a PDP-D replica
# Using default credentials for the drools telemetry service.
# To change the default username and passwords for this service,
# please modify configuration pre-installation at:
# oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/base.conf
curl --silent --user '@1b3rt:31nst31n' -X GET http://<replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool
{
"alive": true,
"artifactId": "policy-amsterdam-rules",
"brained": true,
"groupId": "org.onap.policy-engine.drools.amsterdam",
"locked": false,
"modelClassLoaderHash": 1223551265,
"recentSinkEvents": [],
"recentSourceEvents": [],
"sessionCoordinates": [
"org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.4.0:closedloop-amsterdam"
],
"sessions": [
"closedloop-amsterdam"
],
"version": "0.4.0"
}
Likewise, for verification purposes, each PDP-X replica can be queried directly to retrieve policy information.
The following commands can be used to query a policy through the pdp service:
.. code-block:: bash
:caption: Querying the "pdp" service for the vFirewal policy
# Open a shell into the pap pod
ubuntu@k8sx:~$ kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap bash
# In this example the vFirewall policy is queried.
policy@dev-pap-8587696769-vwj6k:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vFirewall.*"}' http://pdp:8081/pdp/api/getConfig | python -m json.tool
[
{
"config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevFirewall\",\"description\":\"MicroService vFirewall Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"vFirewallBroadcastPackets\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":300,\"direction\":\"LESS_OR_EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ONSET\"},{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":700,\"direction\":\"GREATER_OR_EQUAL\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
"matchingConditions": {
"ConfigName": "SampleConfigName",
"Location": "SampleServiceLocation",
"ONAPName": "DCAE",
"service": "tca_policy",
"uuid": "test"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_MS_MicroServicevFirewall.1.xml",
"policyType": "MicroService",
"policyVersion": "1",
"property": null,
"responseAttributes": {},
"type": "JSON"
},
{
"config": .....
"matchingConditions": {
"ConfigName": "BRMS_PARAM_RULE",
"ONAPName": "DROOLS"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_BRMS_Param_BRMSParamvFirewall.1.xml",
"policyType": "BRMS_PARAM",
"policyVersion": "1",
"property": null,
"responseAttributes": {
"controller": "amsterdam"
},
"type": "OTHER"
}
]
The following commands can be used to query a specific PDP-X replica directly:
.. code-block:: bash
:caption: Querying PDP-X 0 for the vCPE policy
# open a shell into the pap pod
ubuntu@k8sx:~$ kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap bash
# in this example the vCPE policy is queried.
curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.144.218:8081/pdp/api/getConfig | python -m json.tool
[
{
"config": ...,
"matchingConditions": {
"ConfigName": "BRMS_PARAM_RULE",
"ONAPName": "DROOLS"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
"policyType": "BRMS_PARAM",
"policyVersion": "1",
"property": null,
"responseAttributes": {
"controller": "amsterdam"
},
"type": "OTHER"
},
{
"config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
"matchingConditions": {
"ConfigName": "SampleConfigName",
"Location": "SampleServiceLocation",
"ONAPName": "DCAE",
"service": "tca_policy",
"uuid": "test"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_MS_MicroServicevCPE.1.xml",
"policyType": "MicroService",
"policyVersion": "1",
"property": null,
"responseAttributes": {},
"type": "JSON"
}
]
PDP-X Resiliency
^^^^^^^^^^^^^^^^
A PDP-X container failure can be simulated by either:
a) performing a"policy.sh stop" operation within the PDP-X container, which in fact will shutdown the PDP-X service, and eventually will be detected by the liveness checks, or
b) by plainly deleting the corresponding pod.
In the following example, the PDP-X 0 is forced to fail.
.. code-block:: bash
:caption: Causing PDP-X 0 service to fail
# In these scenarios the liveness check will fail and recovery actions will take place.
# Alternative 1: In this scenario we shutdown the PDP-X 0 service, so the liveness monitored ports will be down
# (but the pod is up) and corrective measures will be applied
ubuntu@k8sx:~$ kubectl exec -it dev-pdp-0 --container pdp -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop;"
pdplp: STOPPING ..
pdp: STOPPING ..
# Alternative 2: Brute force delete of the PDP-X 0 pod.
ubuntu@k8sx:~$ kubectl delete pod dev-pdp-0 -n onap
pod "dev-pdp-0" deleted
Upon detection of the service being down through the liveness check, the container will be restarted. Note the **restart count** when querying the status of the pods:
.. code-block:: bash
:caption: Checking PDP-X 0 restart count
ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
onap dev-brmsgw-5dbc4c8dc4-llk5s 1/1 Running 0 3d 10.42.120.43 k8sx
onap dev-drools-0 1/1 Running 0 3d 10.42.60.27 k8sx
onap dev-nexus-7d96568f5f-qp5td 1/1 Running 0 3d 10.42.172.8 k8sx
onap dev-pap-8587696769-vwj6k 2/2 Running 0 3d 10.42.19.137 k8sx
onap dev-pdp-0 2/2 Running 0 3d 10.42.144.218 k8sx
onap dev-policydb-587d55bdff-4f5dz 1/1 Running 0 3d 10.42.12.242 k8sx
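The liveness probe failure and the resulting container restart can also be confirmed in the pod events; a sketch:
.. code-block:: bash
:caption: Inspecting pod events for PDP-X 0 (sketch)
# The Events section at the bottom lists liveness probe failures
# and the container restarts they trigger.
kubectl describe pod dev-pdp-0 -n onap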
During the restart process, the PAP component detects that PDP-X 0 is down, and its state is reflected accordingly in the PDP Management screen:
.. image:: srmPdpxResiliencyPdpMgmt1.png
After PDP-X 0 synchronizes itself with the PAP, the screen is updated to show that it is back alive.
.. image:: srmPdpxResiliencyPdpMgmt2.png
At that point, the PDP-X can be queried for policies again, either directly or through the service.
.. code-block:: bash
:caption: Query PDP-X 1 for vCPE policy
# in this example we perform the vCPE query from the OOM VM
# the default installation credentials are used for querying the vCPE policy
ubuntu@k8sx:~$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.233.111:8081/pdp/api/getConfig | python -m json.tool
[
{
"config": "..",
"matchingConditions": {
"ConfigName": "BRMS_PARAM_RULE",
"ONAPName": "DROOLS"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
"policyType": "BRMS_PARAM",
"policyVersion": "1",
"property": null,
"responseAttributes": {
"controller": "amsterdam"
},
"type": "OTHER"
},
{
"config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
"matchingConditions": {
"ConfigName": "SampleConfigName",
"Location": "SampleServiceLocation",
"ONAPName": "DCAE",
"service": "tca_policy",
"uuid": "test"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_MS_MicroServicevCPE.1.xml",
"policyType": "MicroService",
"policyVersion": "1",
"property": null,
"responseAttributes": {},
"type": "JSON"
}
]
PDP-D Resiliency
^^^^^^^^^^^^^^^^
A PDP-D container failure can be simulated by either:
a) performing a"policy stop" operation within the PDP-D pod, which in fact will shutdown the PDP-D service, and eventually will be detected by the liveness checks, or
b) by plainly deleting the corresponding pod.
In the following example, the PDP-D 0 is forced to fail.
.. code-block:: bash
:caption: Causing PDP-D 0 to fail
# In these scenarios the liveness check will fail and recovery actions will take place.
# Alternative 1: in this scenario we shutdown the PDP-D 0 policy process, so the liveness monitored ports
# will be down (but the pod is up) and corrective measures will be applied
ubuntu@k8sx:~/oom/kubernetes$ kubectl exec -it dev-drools-0 --container drools -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy stop"
[drools-pdp-controllers]
L []: Stopping Policy Management... Policy Management (pid=3284) is stopping... Policy Management has stopped.
Upon detection of the service being down through the liveness check, the container will be restarted. Note the restart count when querying the status of the pods:
.. code-block:: bash
:caption: Checking PDP-D 0 restart count
ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide | grep drool
onap dev-drools-0 0/1 Running 0 1d 10.42.10.21 k8sx
..
ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide | grep drools
onap dev-drools-0 1/1 Running 1 1d 10.42.10.21 k8sx <-- note restart count
Verify that the restarted PDP-D 0 comes up with the appropriate policies loaded by checking its maven coordinates:
.. code-block:: bash
:caption: Verifying restarted PDP-D points to policies pre-failure.
ubuntu@k8sx:~$ curl --silent --user '@1b3rt:31nst31n' -X GET http://10.42.10.21:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool
{
"alive": true,
"artifactId": "policy-amsterdam-rules",
"brained": true,
"groupId": "org.onap.policy-engine.drools.amsterdam",
"locked": false,
"modelClassLoaderHash": 189820624,
"recentSinkEvents": [],
"recentSourceEvents": [],
"sessionCoordinates": [
"org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.4.0:closedloop-amsterdam"
],
"sessions": [
"closedloop-amsterdam"
],
"version": "0.4.0"
}
PDP-X Scaling
^^^^^^^^^^^^^
To scale out the PDP-X, set the replica count appropriately.
In the tests below, we work with the OOM policy component in isolation and scale the PDP-X pool with one additional replica, PDP-X 1.
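Since the PDP-X pool is a StatefulSet, it can also be scaled directly with kubectl for a quick test. This is a sketch, and any change made this way will be overridden by the next helm upgrade, so helm (below) is the persistent mechanism:
.. code-block:: bash
:caption: Scaling the PDP-X StatefulSet directly (sketch, non-persistent)
# The StatefulSet name dev-pdp is taken from the helm output below.
kubectl scale statefulset dev-pdp -n onap --replicas=2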
.. code-block:: bash
:caption: Scaling a PDP-X
ubuntu@k8sx:~$ helm upgrade -i dev local/onap --namespace onap --set global.pullPolicy=IfNotPresent --set policy.pdp.replicaCount=2
Release "dev" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Jun 4 15:19:05 2018
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbc-pg-primary ClusterIP 10.43.29.226 <none> 5432/TCP 2d
dbc-pg-replica ClusterIP 10.43.202.168 <none> 5432/TCP 2d
dbc-postgres ClusterIP 10.43.181.134 <none> 5432/TCP 2d
dmaap-bc NodePort 10.43.254.230 <none> 8080:30241/TCP,8443:30242/TCP 2d
message-router-kafka ClusterIP 10.43.69.159 <none> 9092/TCP 2d
message-router-zookeeper ClusterIP None <none> 2181/TCP 2d
message-router NodePort 10.43.123.102 <none> 3904:30227/TCP,3905:30226/TCP 2d
msb-consul NodePort 10.43.27.77 <none> 8500:30285/TCP 2d
msb-discovery NodePort 10.43.178.20 <none> 10081:30281/TCP 2d
msb-eag NodePort 10.43.77.235 <none> 80:30282/TCP,443:30284/TCP 2d
msb-iag NodePort 10.43.221.196 <none> 80:30280/TCP,443:30283/TCP 2d
brmsgw NodePort 10.43.21.222 <none> 9989:30216/TCP 2d
nexus NodePort 10.43.159.27 <none> 8081:30236/TCP 2d
drools NodePort 10.43.233.67 <none> 6969:30217/TCP,9696:30221/TCP 2d
policydb ClusterIP None <none> 3306/TCP 2d
pdp ClusterIP None <none> 8081/TCP 2d
pap NodePort 10.43.110.50 <none> 8443:30219/TCP,9091:30218/TCP 2d
robot NodePort 10.43.172.248 <none> 88:30209/TCP 2d
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dev-dmaap-bus-controller 1 1 1 1 2d
dev-message-router-kafka 1 1 1 1 2d
dev-message-router-zookeeper 1 1 1 1 2d
dev-message-router 1 1 1 1 2d
dev-kube2msb 1 1 1 1 2d
dev-msb-consul 1 1 1 1 2d
dev-msb-discovery 1 1 1 1 2d
dev-msb-eag 1 1 1 1 2d
dev-msb-iag 1 1 1 1 2d
dev-brmsgw 1 1 1 1 2d
dev-nexus 1 1 1 1 2d
dev-policydb 1 1 1 1 2d
dev-pap 1 1 1 1 2d
dev-robot 1 1 1 1 2d
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
dev-dbc-pg 2 2 2d
dev-drools 1 1 2d
dev-pdp 2 2 2d
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
dev-message-router-kafka Bound dev-message-router-kafka 2Gi RWX 2d
dev-message-router-zookeeper Bound dev-message-router-zookeeper 2Gi RWX 2d
dev-nexus Bound dev-nexus 2Gi RWX 2d
dev-policydb Bound dev-policydb 2Gi RWX 2d
==> v1/ConfigMap
NAME DATA AGE
dev-dmaap-bus-controller-config 1 2d
dev-message-router-cadi-prop-configmap 1 2d
dev-message-router-msgrtrapi-prop-configmap 1 2d
dev-msb-discovery 1 2d
dev-msb-eag 1 2d
dev-msb-iag 1 2d
dev-brmsgw-pe-configmap 2 2d
dev-drools-configmap 6 2d
dev-drools-log-configmap 1 2d
dev-drools-settings-configmap 1 2d
dev-policydb-configmap 1 2d
dev-pdp-log-configmap 1 2d
dev-pdp-pe-configmap 3 2d
dev-pe-scripts-configmap 1 2d
dev-filebeat-configmap 1 2d
dev-pe-configmap 1 2d
dev-pap-pe-configmap 7 2d
dev-pap-sdk-log-configmap 1 2d
dev-pap-log-configmap 1 2d
dev-robot-resources-configmap 3 2d
dev-robot-lighttpd-authorization-configmap 1 2d
dev-robot-eteshare-configmap 4 2d
==> v1/PersistentVolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
dev-dbc-pg-data0 1Gi RWO Retain Bound onap/dev-dbc-pg-data-dev-dbc-pg-0 dev-dbc-pg-data 2d
dev-dbc-pg-data1 1Gi RWO Retain Bound onap/dev-dbc-pg-data-dev-dbc-pg-1 dev-dbc-pg-data 2d
dev-message-router-kafka 2Gi RWX Retain Bound onap/dev-message-router-kafka 2d
dev-message-router-zookeeper 2Gi RWX Retain Bound onap/dev-message-router-zookeeper 2d
dev-nexus 2Gi RWX Retain Bound onap/dev-nexus 2d
dev-policydb 2Gi RWX Retain Bound onap/dev-policydb 2d
==> v1beta1/ClusterRoleBinding
NAME AGE
onap-binding 2d
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
dev-dmaap-bus-controller-5bd859c7dc-blzdc 1/1 Running 0 2d
dev-message-router-kafka-748cdf7b9c-srv7l 1/1 Running 0 2d
dev-message-router-zookeeper-5b5969f6f-8rk9w 1/1 Running 0 2d
dev-message-router-b5bdc599c-5h56k 1/1 Running 0 2d
dev-kube2msb-579fc77c54-m84qx 1/1 Running 0 2d
dev-msb-consul-7bc4fcc8-94gsc 1/1 Running 0 2d
dev-msb-discovery-768547bcb-2hr7j 2/2 Running 0 2d
dev-msb-eag-5d95686c67-9lkzs 2/2 Running 0 2d
dev-msb-iag-675b649848-pv2gh 2/2 Running 0 2d
dev-brmsgw-5675f5877b-wv68s 1/1 Running 0 2d
dev-nexus-7d96568f5f-m8c4l 1/1 Running 0 2d
dev-policydb-587d55bdff-9gdjv 1/1 Running 0 2d
dev-pap-678b44cd87-wxbww 2/2 Running 0 2d
dev-robot-589c76bb6b-hrrdn 1/1 Running 0 2d
dev-dbc-pg-0 1/1 Running 0 2d
dev-dbc-pg-1 1/1 Running 0 2d
dev-drools-0 1/1 Running 1 2d
dev-pdp-0 2/2 Running 1 2d
dev-pdp-1 0/2 Init:0/1 0 0s
==> v1/Secret
NAME TYPE DATA AGE
dev-dbc-pg Opaque 3 2d
dev-message-router-secret Opaque 1 2d
dev-drools-secret Opaque 2 2d
dev-policydb-secret Opaque 2 2d
onap-docker-registry-key kubernetes.io/dockercfg 1 2d
Check in the Policy Engine UI how the PDP-Xs come up and request policies from the PAP.
Eventually the new PDP-X will be connected and serving policies:
.. image:: srmPdpxScalingPdpMgmt1.png
The new PDP-X should now be ready to serve policies:
.. code-block:: bash
:caption: Check that the PDP-X replicas can serve policies
ubuntu@k8sx:~/oom/kubernetes$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.183.0:8081/pdp/api/getConfig | python -m json.tool
[
{
"config": ..
"matchingConditions": {
"ConfigName": "BRMS_PARAM_RULE",
"ONAPName": "DROOLS"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
"policyType": "BRMS_PARAM",
"policyVersion": "1",
"property": null,
"responseAttributes": {
"controller": "amsterdam"
},
"type": "OTHER"
},
{
"config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
"matchingConditions": {
"ConfigName": "SampleConfigName",
"Location": "SampleServiceLocation",
"ONAPName": "DCAE",
"service": "tca_policy",
"uuid": "test"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_MS_MicroServicevCPE.1.xml",
"policyType": "MicroService",
"policyVersion": "1",
"property": null,
"responseAttributes": {},
"type": "JSON"
}
]
ubuntu@k8sx:~/oom/kubernetes$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.137.241:8081/pdp/api/getConfig | python -m json.tool
[
{
"config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
"matchingConditions": {
"ConfigName": "SampleConfigName",
"Location": "SampleServiceLocation",
"ONAPName": "DCAE",
"service": "tca_policy",
"uuid": "test"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_MS_MicroServicevCPE.1.xml",
"policyType": "MicroService",
"policyVersion": "1",
"property": null,
"responseAttributes": {},
"type": "JSON"
},
{
"config": ...
"ConfigName": "BRMS_PARAM_RULE",
"ONAPName": "DROOLS"
},
"policyConfigMessage": "Config Retrieved! ",
"policyConfigStatus": "CONFIG_RETRIEVED",
"policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
"policyType": "BRMS_PARAM",
"policyVersion": "1",
"property": null,
"responseAttributes": {
"controller": "amsterdam"
},
"type": "OTHER"
}
]
PDP-D Scaling
^^^^^^^^^^^^^
To scale out the PDP-D, set the replica count appropriately. In the scenario below, we scale the PDP-D service to add a new pod (2 active PDP-Ds).
.. code-block:: bash
:caption: Scaling a PDP-D
# Note: we also set the PDP-X pool to 2 instances (matching the previous section)
ubuntu@k8sx:~$ helm upgrade -i dev local/onap --namespace onap --set global.pullPolicy=IfNotPresent --set policy.pdp.replicaCount=2 --set policy.drools.replicaCount=2
Release "dev" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Jun 4 15:52:46 2018
NAMESPACE: onap
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
dev-dmaap-bus-controller-config 1 2d
dev-message-router-cadi-prop-configmap 1 2d
dev-message-router-msgrtrapi-prop-configmap 1 2d
dev-msb-discovery 1 2d
dev-msb-eag 1 2d
dev-msb-iag 1 2d
dev-brmsgw-pe-configmap 2 2d
dev-drools-configmap 6 2d
dev-drools-log-configmap 1 2d
dev-drools-settings-configmap 1 2d
dev-policydb-configmap 1 2d
dev-pdp-pe-configmap 3 2d
dev-pdp-log-configmap 1 2d
dev-pe-scripts-configmap 1 2d
dev-filebeat-configmap 1 2d
dev-pe-configmap 1 2d
dev-pap-pe-configmap 7 2d
dev-pap-log-configmap 1 2d
dev-pap-sdk-log-configmap 1 2d
dev-robot-resources-configmap 3 2d
dev-robot-lighttpd-authorization-configmap 1 2d
dev-robot-eteshare-configmap 4 2d
==> v1/PersistentVolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
dev-dbc-pg-data0 1Gi RWO Retain Bound onap/dev-dbc-pg-data-dev-dbc-pg-0 dev-dbc-pg-data 2d
dev-dbc-pg-data1 1Gi RWO Retain Bound onap/dev-dbc-pg-data-dev-dbc-pg-1 dev-dbc-pg-data 2d
dev-message-router-kafka 2Gi RWX Retain Bound onap/dev-message-router-kafka 2d
dev-message-router-zookeeper 2Gi RWX Retain Bound onap/dev-message-router-zookeeper 2d
dev-nexus 2Gi RWX Retain Bound onap/dev-nexus 2d
dev-policydb 2Gi RWX Retain Bound onap/dev-policydb 2d
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
dev-message-router-kafka Bound dev-message-router-kafka 2Gi RWX 2d
dev-message-router-zookeeper Bound dev-message-router-zookeeper 2Gi RWX 2d
dev-nexus Bound dev-nexus 2Gi RWX 2d
dev-policydb Bound dev-policydb 2Gi RWX 2d
==> v1beta1/ClusterRoleBinding
NAME AGE
onap-binding 2d
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
dev-dmaap-bus-controller 1 1 1 1 2d
dev-message-router-kafka 1 1 1 1 2d
dev-message-router-zookeeper 1 1 1 1 2d
dev-message-router 1 1 1 1 2d
dev-kube2msb 1 1 1 1 2d
dev-msb-consul 1 1 1 1 2d
dev-msb-discovery 1 1 1 1 2d
dev-msb-eag 1 1 1 1 2d
dev-msb-iag 1 1 1 1 2d
dev-brmsgw 1 1 1 1 2d
dev-nexus 1 1 1 1 2d
dev-policydb 1 1 1 1 2d
dev-pap 1 1 1 1 2d
dev-robot 1 1 1 1 2d
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
dev-dmaap-bus-controller-5bd859c7dc-blzdc 1/1 Running 0 2d
dev-message-router-kafka-748cdf7b9c-srv7l 1/1 Running 0 2d
dev-message-router-zookeeper-5b5969f6f-8rk9w 1/1 Running 0 2d
dev-message-router-b5bdc599c-5h56k 1/1 Running 0 2d
dev-kube2msb-579fc77c54-m84qx 1/1 Running 0 2d
dev-msb-consul-7bc4fcc8-94gsc 1/1 Running 0 2d
dev-msb-discovery-768547bcb-2hr7j 2/2 Running 0 2d
dev-msb-eag-5d95686c67-9lkzs 2/2 Running 0 2d
dev-msb-iag-675b649848-pv2gh 2/2 Running 0 2d
dev-brmsgw-5675f5877b-wv68s 1/1 Running 0 2d
dev-nexus-7d96568f5f-m8c4l 1/1 Running 0 2d
dev-policydb-587d55bdff-9gdjv 1/1 Running 0 2d
dev-pap-678b44cd87-wxbww 2/2 Running 0 2d
dev-robot-589c76bb6b-hrrdn 1/1 Running 0 2d
dev-dbc-pg-0 1/1 Running 0 2d
dev-dbc-pg-1 1/1 Running 0 2d
dev-drools-0 1/1 Running 1 2d
dev-drools-1 0/1 Init:0/1 0 1s
dev-pdp-0 2/2 Running 1 2d
dev-pdp-1 2/2 Running 0 33m
==> v1/Secret
NAME TYPE DATA AGE
dev-dbc-pg Opaque 3 2d
dev-message-router-secret Opaque 1 2d
dev-drools-secret Opaque 2 2d
dev-policydb-secret Opaque 2 2d
onap-docker-registry-key kubernetes.io/dockercfg 1 2d
==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
dev-dbc-pg 2 2 2d
dev-drools 2 2 2d
dev-pdp 2 2 2d
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dbc-postgres ClusterIP 10.43.181.134 <none> 5432/TCP 2d
dbc-pg-replica ClusterIP 10.43.202.168 <none> 5432/TCP 2d
dbc-pg-primary ClusterIP 10.43.29.226 <none> 5432/TCP 2d
dmaap-bc NodePort 10.43.254.230 <none> 8080:30241/TCP,8443:30242/TCP 2d
message-router-kafka ClusterIP 10.43.69.159 <none> 9092/TCP 2d
message-router-zookeeper ClusterIP None <none> 2181/TCP 2d
message-router NodePort 10.43.123.102 <none> 3904:30227/TCP,3905:30226/TCP 2d
msb-consul NodePort 10.43.27.77 <none> 8500:30285/TCP 2d
msb-discovery NodePort 10.43.178.20 <none> 10081:30281/TCP 2d
msb-eag NodePort 10.43.77.235 <none> 80:30282/TCP,443:30284/TCP 2d
msb-iag NodePort 10.43.221.196 <none> 80:30280/TCP,443:30283/TCP 2d
brmsgw NodePort 10.43.21.222 <none> 9989:30216/TCP 2d
nexus NodePort 10.43.159.27 <none> 8081:30236/TCP 2d
drools NodePort 10.43.233.67 <none> 6969:30217/TCP,9696:30221/TCP 2d
policydb ClusterIP None <none> 3306/TCP 2d
pdp ClusterIP None <none> 8081/TCP 2d
pap NodePort 10.43.110.50 <none> 8443:30219/TCP,9091:30218/TCP 2d
robot NodePort 10.43.172.248 <none> 88:30209/TCP 2d
Verify that the new PDP-D comes up with the latest policy coordinates:
.. code-block:: bash
:caption: Verify that the new PDP-D replica comes up with policies loaded
ubuntu@k8sx:~$ curl --silent --user '@1b3rt:31nst31n' -X GET http://10.42.172.88:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool
{
"alive": true,
"artifactId": "policy-amsterdam-rules",
"brained": true,
"groupId": "org.onap.policy-engine.drools.amsterdam",
"locked": false,
"modelClassLoaderHash": 1657760388,
"recentSinkEvents": [],
"recentSourceEvents": [],
"sessionCoordinates": [
"org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.5.0:closedloop-amsterdam"
],
"sessions": [
"closedloop-amsterdam"
],
"version": "0.5.0"
}
End of Document
.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Policy+on+OOM
.. SSNote: Old Wiki page ref. https://wiki.onap.org/display/DW/Scalability%2C+Resiliency+and+Manageability