Diffstat (limited to 'docs')
-rw-r--r--docs/apex/APEX-User-Manual.rst30
-rw-r--r--docs/clamp/acm/api-protocol/acm-rest-apis.rst24
-rw-r--r--docs/clamp/acm/api-protocol/swagger/acm-comissioning.json114
-rw-r--r--docs/clamp/acm/api-protocol/swagger/acm-instantiation.json931
-rw-r--r--docs/clamp/acm/api-protocol/swagger/acm-monitoring.json8
-rw-r--r--docs/clamp/acm/api-protocol/swagger/k8sparticipant.json392
-rw-r--r--docs/clamp/acm/api-protocol/swagger/participant-sim.json478
-rw-r--r--docs/clamp/acm/design-impl/participants/k8s-participant.rst25
-rw-r--r--docs/clamp/acm/design-impl/participants/participant-simulator.rst21
-rw-r--r--docs/clamp/acm/design-impl/participants/participants.rst1
-rw-r--r--docs/development/devtools/clamp-sdc.rst2
-rw-r--r--docs/development/devtools/devtools.rst12
-rw-r--r--docs/development/devtools/strimzi-policy.rst700
-rw-r--r--docs/installation/docker.rst116
-rw-r--r--docs/tox.ini6
15 files changed, 1762 insertions, 1098 deletions
diff --git a/docs/apex/APEX-User-Manual.rst b/docs/apex/APEX-User-Manual.rst
index eed350ab..21e9dbcb 100644
--- a/docs/apex/APEX-User-Manual.rst
+++ b/docs/apex/APEX-User-Manual.rst
@@ -2036,11 +2036,37 @@ Context Handlers
APEX provides plugins for each of the main areas.
-Configure AVRO Schema Handler
-#############################
+Configure Context Schema Handler
+################################
.. container:: paragraph
+ There are two choices available for defining schemas: JSON and AVRO.
+ JSON-based schemas are recommended for their flexibility, better tooling, and easier integration.
+
+ The JSON schema handler is added to the configuration as
+ follows:
+
+ .. container:: listingblock
+
+ .. container:: content
+
+ .. code::
+
+ "engineServiceParameters":{
+ "engineParameters":{
+ "contextParameters":{
+ "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
+ "schemaParameters":{
+ "Json":{
+ "parameterClassName" :
+ "org.onap.policy.apex.plugins.context.schema.json.JsonSchemaHelperParameters"
+ }
+ }
+ }
+ }
+ }
+
The AVRO schema handler is added to the configuration as
follows:
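
The AVRO handler block itself falls outside this hunk. For reference, it follows the same pattern as the JSON handler above; a sketch, assuming the standard APEX AVRO plugin class name (verify against your APEX release):

```json
"engineServiceParameters":{
    "engineParameters":{
        "contextParameters":{
            "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
            "schemaParameters":{
                "Avro":{
                    "parameterClassName" :
                        "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
                }
            }
        }
    }
}
```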
diff --git a/docs/clamp/acm/api-protocol/acm-rest-apis.rst b/docs/clamp/acm/api-protocol/acm-rest-apis.rst
index b71dae95..19c2a01a 100644
--- a/docs/clamp/acm/api-protocol/acm-rest-apis.rst
+++ b/docs/clamp/acm/api-protocol/acm-rest-apis.rst
@@ -103,28 +103,4 @@ Composition.
The Swagger for the Pass Through API will appear here.
-Participant Standalone API
-==========================
-
-This API allows a Participant to run in standalone mode and to run standalone Automation
-Composition Elements.
-
-Kubernetes participant can also be deployed as a standalone application and provides REST endpoints
-for onboarding helm charts to its local chart storage, installing and uninstalling of helm charts
-to a Kubernetes cluster. It also allows to configure a remote repository in Kubernetes participant
-for installing helm charts. User can onboard a helm chart along with the overrides YAML file, the
-chart gets stored in the local chart directory of Kubernetes participants. The onboarded charts can
-be installed and uninstalled. The GET API fetches all the available helm charts from the chart
-storage.
-
-.. swaggerv2doc:: swagger/k8sparticipant.json
-
-
-Participant Simulator API
-=========================
-
-This API allows a Participant Simulator to be started and run for test purposes.
-
-.. swaggerv2doc:: swagger/participant-sim.json
-
End of Document
diff --git a/docs/clamp/acm/api-protocol/swagger/acm-comissioning.json b/docs/clamp/acm/api-protocol/swagger/acm-comissioning.json
index ab77bd9e..3ab03bc0 100644
--- a/docs/clamp/acm/api-protocol/swagger/acm-comissioning.json
+++ b/docs/clamp/acm/api-protocol/swagger/acm-comissioning.json
@@ -12,7 +12,7 @@
}
},
"paths": {
- "/onap/automationcomposition/v2/commission": {
+ "/onap/policy/clamp/acm/v2/commission": {
"get": {
"tags": [
"Clamp Automation Composition Commissioning API"
@@ -353,7 +353,7 @@
}
}
},
- "/onap/automationcomposition/v2/commission/elements": {
+ "/onap/policy/clamp/acm/v2/commission/elements": {
"get": {
"tags": [
"Clamp Automation Composition Commissioning API"
@@ -469,7 +469,7 @@
}
}
},
- "/onap/automationcomposition/v2/commission/getCommonOrInstanceProperties": {
+ "/onap/policy/clamp/acm/v2/commission/getCommonOrInstanceProperties": {
"get": {
"tags": [
"Clamp Automation Composition Commissioning API"
@@ -593,113 +593,7 @@
}
}
},
- "/onap/automationcomposition/v2/commission/toscaServiceTemplateSchema": {
- "get": {
- "tags": [
- "Clamp Automation Composition Commissioning API"
- ],
- "summary": "Query details of the requested tosca service template json schema",
- "description": "Queries details of the requested commissioned tosca service template json schema, returning all tosca service template json schema details",
- "operationId": "queryToscaServiceTemplateJsonSchemaUsingGET",
- "produces": [
- "application/json",
- "application/yaml"
- ],
- "parameters": [
- {
- "name": "section",
- "in": "query",
- "description": "Section of Template schema is desired for",
- "required": false,
- "type": "string",
- "default": "all"
- },
- {
- "name": "X-ONAP-RequestID",
- "in": "header",
- "description": "RequestID for http transaction",
- "required": false,
- "type": "string",
- "format": "uuid"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "string"
- }
- },
- "401": {
- "description": "Authentication Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "403": {
- "description": "Authorization Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "404": {
- "description": "Not Found"
- },
- "500": {
- "description": "Internal Server Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- }
- },
- "security": [
- {
- "basicAuth": []
- }
- ],
- "x-interface info": {
- "api-version": "1.0.0",
- "last-mod-release": "Istanbul"
- }
- }
- },
- "/onap/automationcomposition/v2/commission/toscaservicetemplate": {
+ "/onap/policy/clamp/acm/v2/commission/toscaservicetemplate": {
"get": {
"tags": [
"Clamp Automation Composition Commissioning API"
diff --git a/docs/clamp/acm/api-protocol/swagger/acm-instantiation.json b/docs/clamp/acm/api-protocol/swagger/acm-instantiation.json
index cdad2b61..092b6ea6 100644
--- a/docs/clamp/acm/api-protocol/swagger/acm-instantiation.json
+++ b/docs/clamp/acm/api-protocol/swagger/acm-instantiation.json
@@ -12,7 +12,7 @@
}
},
"paths": {
- "/onap/automationcomposition/v2/instantiation": {
+ "/onap/policy/clamp/acm/v2/instantiation": {
"get": {
"tags": [
"Clamp Automation Composition Instantiation API"
@@ -464,7 +464,7 @@
}
}
},
- "/onap/automationcomposition/v2/instantiation/command": {
+ "/onap/policy/clamp/acm/v2/instantiation/command": {
"put": {
"tags": [
"Clamp Automation Composition Instantiation API"
@@ -579,6 +579,933 @@
"last-mod-release": "Istanbul"
}
}
+ },
+ "/onap/policy/clamp/acm/v2/instantiationState":{
+ "get":{
+ "tags":[
+ "Clamp Automation Composition Instantiation API"
+ ],
+ "summary":"Query details of the requested automation compositions",
+ "description":"Queries details of requested automation compositions, returning all automation composition details",
+ "operationId":"getInstantiationOrderStateUsingGET",
+ "produces":[
+ "application/json",
+ "application/yaml"
+ ],
+ "parameters":[
+ {
+ "name":"name",
+ "in":"query",
+ "description":"Automation composition name",
+ "required":false,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"query",
+ "description":"Automation composition version",
+ "required":false,
+ "type":"string"
+ },
+ {
+ "name":"X-ONAP-RequestID",
+ "in":"header",
+ "description":"RequestID for http transaction",
+ "required":false,
+ "type":"string",
+ "format":"uuid"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/AutomationCompositionOrderStateResponse",
+ "originalRef":"AutomationCompositionOrderStateResponse"
+ }
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "responsesObject":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/AutomationCompositionOrderStateResponse",
+ "originalRef":"AutomationCompositionOrderStateResponse"
+ }
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "security":[
+ {
+ "basicAuth":[
+
+ ]
+ }
+ ],
+ "x-interface info":{
+ "api-version":"1.0.0",
+ "last-mod-release":"Istanbul"
+ }
+ }
+ },
+ "/onap/policy/clamp/acm/v2/instanceProperties":{
+ "post":{
+ "tags":[
+ "Clamp Automation Composition Instantiation API"
+ ],
+ "summary":"Saves instance properties",
+ "description":"Saves instance properties, returning the saved instance properties and its version",
+ "operationId":"createInstancePropertiesUsingPOST",
+ "consumes":[
+ "application/json",
+ "application/yaml"
+ ],
+ "produces":[
+ "application/json",
+ "application/yaml"
+ ],
+ "parameters":[
+ {
+ "in":"body",
+ "name":"body",
+ "description":"Body of instance properties",
+ "required":true,
+ "schema":{
+ "$ref":"#/definitions/ToscaServiceTemplateReq",
+ "originalRef":"ToscaServiceTemplateReq"
+ }
+ },
+ {
+ "name":"X-ONAP-RequestID",
+ "in":"header",
+ "description":"RequestID for http transaction",
+ "required":false,
+ "type":"string",
+ "format":"uuid"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstancePropertiesResponse",
+ "originalRef":"InstancePropertiesResponse"
+ }
+ },
+ "201":{
+ "description":"Created"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "responsesObject":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstancePropertiesResponse",
+ "originalRef":"InstancePropertiesResponse"
+ }
+ },
+ "201":{
+ "description":"Created"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "security":[
+ {
+ "basicAuth":[
+
+ ]
+ }
+ ],
+ "x-interface info":{
+ "api-version":"1.0.0",
+ "last-mod-release":"Istanbul"
+ }
+ },
+ "put":{
+ "tags":[
+ "Clamp Automation Composition Instantiation API"
+ ],
+ "summary":"Updates instance properties",
+ "description":"Updates instance properties, returning the saved instance properties and its version",
+ "operationId":"updatesInstancePropertiesUsingPUT",
+ "consumes":[
+ "application/json",
+ "application/yaml"
+ ],
+ "produces":[
+ "application/json",
+ "application/yaml"
+ ],
+ "parameters":[
+ {
+ "in":"body",
+ "name":"body",
+ "description":"Body of instance properties",
+ "required":true,
+ "schema":{
+ "$ref":"#/definitions/ToscaServiceTemplateReq",
+ "originalRef":"ToscaServiceTemplateReq"
+ }
+ },
+ {
+ "name":"name",
+ "in":"query",
+ "description":"Automation composition definition name",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"query",
+ "description":"Automation composition definition version",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"X-ONAP-RequestID",
+ "in":"header",
+ "description":"RequestID for http transaction",
+ "required":false,
+ "type":"string",
+ "format":"uuid"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstancePropertiesResponse",
+ "originalRef":"InstancePropertiesResponse"
+ }
+ },
+ "201":{
+ "description":"Created"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "responsesObject":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstancePropertiesResponse",
+ "originalRef":"InstancePropertiesResponse"
+ }
+ },
+ "201":{
+ "description":"Created"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "security":[
+ {
+ "basicAuth":[
+
+ ]
+ }
+ ],
+ "x-interface info":{
+ "api-version":"1.0.0",
+ "last-mod-release":"Istanbul"
+ }
+ },
+ "delete":{
+ "tags":[
+ "Clamp Automation Composition Instantiation API"
+ ],
+ "summary":"Delete an automation composition and instance properties",
+ "description":"Deletes an automation composition and instance properties, returning optional error details",
+ "operationId":"deleteInstancePropertiesUsingDELETE",
+ "produces":[
+ "application/json",
+ "application/yaml"
+ ],
+ "parameters":[
+ {
+ "name":"name",
+ "in":"query",
+ "description":"Automation composition definition name",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"query",
+ "description":"Automation composition definition version",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"X-ONAP-RequestID",
+ "in":"header",
+ "description":"RequestID for http transaction",
+ "required":false,
+ "type":"string",
+ "format":"uuid"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstantiationResponse",
+ "originalRef":"InstantiationResponse"
+ }
+ },
+ "204":{
+ "description":"No Content"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "responsesObject":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/InstantiationResponse",
+ "originalRef":"InstantiationResponse"
+ }
+ },
+ "204":{
+ "description":"No Content"
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "security":[
+ {
+ "basicAuth":[
+
+ ]
+ }
+ ],
+ "x-interface info":{
+ "api-version":"1.0.0",
+ "last-mod-release":"Istanbul"
+ }
+ }
+ },
+ "/onap/policy/clamp/acm/v2/automationCompositionPriming":{
+ "get":{
+ "tags":[
+ "Clamp Automation Composition Instantiation API"
+ ],
+ "summary":"Query priming details of the requested automation compositions",
+ "description":"Queries priming details of requested automation compositions, returning primed/deprimed compositions",
+ "operationId":"getAutomationCompositionPrimingUsingGET",
+ "produces":[
+ "application/json",
+ "application/yaml"
+ ],
+ "parameters":[
+ {
+ "name":"name",
+ "in":"query",
+ "description":"Automation composition definition name",
+ "required":false,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"query",
+ "description":"Automation composition definition version",
+ "required":false,
+ "type":"string"
+ },
+ {
+ "name":"X-ONAP-RequestID",
+ "in":"header",
+ "description":"RequestID for http transaction",
+ "required":false,
+ "type":"string",
+ "format":"uuid"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/AutomationCompositionPrimedResponse",
+ "originalRef":"AutomationCompositionPrimedResponse"
+ }
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "responsesObject":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "$ref":"#/definitions/AutomationCompositionPrimedResponse",
+ "originalRef":"AutomationCompositionPrimedResponse"
+ }
+ },
+ "401":{
+ "description":"Authentication Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "403":{
+ "description":"Authorization Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ },
+ "404":{
+ "description":"Not Found"
+ },
+ "500":{
+ "description":"Internal Server Error",
+ "headers":{
+ "X-LatestVersion":{
+ "type":"string"
+ },
+ "X-PatchVersion":{
+ "type":"string"
+ },
+ "X-MinorVersion":{
+ "type":"string"
+ },
+ "X-ONAP-RequestID":{
+ "type":"string",
+ "format":"uuid"
+ }
+ }
+ }
+ },
+ "security":[
+ {
+ "basicAuth":[
+
+ ]
+ }
+ ],
+ "x-interface info":{
+ "api-version":"1.0.0",
+ "last-mod-release":"Istanbul"
+ }
+ }
}
}
} \ No newline at end of file
diff --git a/docs/clamp/acm/api-protocol/swagger/acm-monitoring.json b/docs/clamp/acm/api-protocol/swagger/acm-monitoring.json
index 2c177fa9..2c23abec 100644
--- a/docs/clamp/acm/api-protocol/swagger/acm-monitoring.json
+++ b/docs/clamp/acm/api-protocol/swagger/acm-monitoring.json
@@ -12,7 +12,7 @@
}
},
"paths": {
- "/onap/automationcomposition/v2/monitoring/acelement": {
+ "/onap/policy/clamp/acm/v2/monitoring/acelement": {
"get": {
"tags": [
"Clamp Automation Composition Monitoring API"
@@ -155,7 +155,7 @@
}
}
},
- "/onap/automationcomposition/v2/monitoring/acelements/automationcomposition": {
+ "/onap/policy/clamp/acm/v2/monitoring/acelements/automationcomposition": {
"get": {
"tags": [
"Clamp Automation Composition Monitoring API"
@@ -268,7 +268,7 @@
}
}
},
- "/onap/automationcomposition/v2/monitoring/participant": {
+ "/onap/policy/clamp/acm/v2/monitoring/participant": {
"get": {
"tags": [
"Clamp Automation Composition Monitoring API"
@@ -404,7 +404,7 @@
}
}
},
- "/onap/automationcomposition/v2/monitoring/participants/automationcomposition": {
+ "/onap/policy/clamp/acm/v2/monitoring/participants/automationcomposition": {
"get": {
"tags": [
"Clamp Automation Composition Monitoring API"
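
The bulk of this patch renames the ACM REST base path from `/onap/automationcomposition/v2` to `/onap/policy/clamp/acm/v2` across the commissioning, instantiation, and monitoring APIs. Clients holding saved URLs can migrate them mechanically; a minimal sketch (the helper name is ours, not part of any ONAP API):

```python
# Old and new ACM REST base paths, as renamed in this patch.
OLD_BASE = "/onap/automationcomposition/v2"
NEW_BASE = "/onap/policy/clamp/acm/v2"

def migrate_path(path: str) -> str:
    """Rewrite a pre-rename ACM REST path to the renamed base path.

    Paths that do not start with the old base are returned unchanged.
    """
    if path.startswith(OLD_BASE):
        return NEW_BASE + path[len(OLD_BASE):]
    return path
```

For example, `migrate_path("/onap/automationcomposition/v2/commission")` yields `/onap/policy/clamp/acm/v2/commission`, matching the renamed Swagger paths above.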
diff --git a/docs/clamp/acm/api-protocol/swagger/k8sparticipant.json b/docs/clamp/acm/api-protocol/swagger/k8sparticipant.json
deleted file mode 100644
index ae06b06d..00000000
--- a/docs/clamp/acm/api-protocol/swagger/k8sparticipant.json
+++ /dev/null
@@ -1,392 +0,0 @@
-{
- "swagger": "2.0",
- "info": {
- "description": "Api Documentation",
- "version": "1.0",
- "title": "Api Documentation",
- "termsOfService": "urn:tos",
- "contact": {},
- "license": {
- "name": "Apache 2.0",
- "url": "http://www.apache.org/licenses/LICENSE-2.0"
- }
- },
- "paths": {
- "/onap/k8sparticipant/helm/chart/{name}/{version}": {
- "delete": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Delete the chart",
- "operationId": "deleteChartUsingDELETE",
- "produces": [
- "*/*"
- ],
- "parameters": [
- {
- "name": "name",
- "in": "path",
- "description": "name",
- "required": true,
- "type": "string"
- },
- {
- "name": "version",
- "in": "path",
- "description": "version",
- "required": true,
- "type": "string"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "object"
- }
- },
- "204": {
- "description": "Chart Deleted"
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- }
- }
- }
- },
- "/onap/k8sparticipant/helm/charts": {
- "get": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Return all Charts",
- "operationId": "getAllChartsUsingGET",
- "produces": [
- "application/json"
- ],
- "responses": {
- "200": {
- "description": "chart List",
- "schema": {
- "$ref": "#/definitions/ChartList",
- "originalRef": "ChartList"
- }
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- },
- "404": {
- "description": "Not Found"
- }
- }
- }
- },
- "/onap/k8sparticipant/helm/install": {
- "post": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Install the chart",
- "operationId": "installChartUsingPOST",
- "consumes": [
- "application/json"
- ],
- "produces": [
- "application/json"
- ],
- "parameters": [
- {
- "in": "body",
- "name": "info",
- "description": "info",
- "required": true,
- "schema": {
- "$ref": "#/definitions/InstallationInfo",
- "originalRef": "InstallationInfo"
- }
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "object"
- }
- },
- "201": {
- "description": "chart Installed",
- "schema": {
- "type": "object"
- }
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- },
- "404": {
- "description": "Not Found"
- }
- }
- }
- },
- "/onap/k8sparticipant/helm/onboard/chart": {
- "post": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Onboard the Chart",
- "operationId": "onboardChartUsingPOST",
- "consumes": [
- "multipart/form-data"
- ],
- "produces": [
- "application/json"
- ],
- "parameters": [
- {
- "name": "chart",
- "in": "formData",
- "required": false,
- "type": "file"
- },
- {
- "name": "info",
- "in": "formData",
- "required": false,
- "type": "string"
- },
- {
- "in": "body",
- "name": "values",
- "description": "values",
- "required": false,
- "schema": {
- "type": "string",
- "format": "binary"
- }
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "string"
- }
- },
- "201": {
- "description": "Chart Onboarded",
- "schema": {
- "type": "string"
- }
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- },
- "404": {
- "description": "Not Found"
- }
- }
- }
- },
- "/onap/k8sparticipant/helm/repo": {
- "post": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Configure helm repository",
- "operationId": "configureRepoUsingPOST",
- "consumes": [
- "application/json"
- ],
- "produces": [
- "application/json"
- ],
- "parameters": [
- {
- "in": "body",
- "name": "repo",
- "description": "repo",
- "required": true,
- "schema": {
- "type": "string"
- }
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "object"
- }
- },
- "201": {
- "description": "Repository added",
- "schema": {
- "type": "object"
- }
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- },
- "404": {
- "description": "Not Found"
- }
- }
- }
- },
- "/onap/k8sparticipant/helm/uninstall/{name}/{version}": {
- "delete": {
- "tags": [
- "k8s-participant"
- ],
- "summary": "Uninstall the Chart",
- "operationId": "uninstallChartUsingDELETE",
- "produces": [
- "application/json"
- ],
- "parameters": [
- {
- "name": "name",
- "in": "path",
- "description": "name",
- "required": true,
- "type": "string"
- },
- {
- "name": "version",
- "in": "path",
- "description": "version",
- "required": true,
- "type": "string"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "object"
- }
- },
- "201": {
- "description": "chart Uninstalled",
- "schema": {
- "type": "object"
- }
- },
- "204": {
- "description": "No Content"
- },
- "401": {
- "description": "Unauthorized"
- },
- "403": {
- "description": "Forbidden"
- }
- }
- }
- }
- },
- "definitions": {
- "ChartInfo": {
- "type": "object",
- "properties": {
- "chartId": {
- "$ref": "#/definitions/ToscaConceptIdentifier",
- "originalRef": "ToscaConceptIdentifier"
- },
- "namespace": {
- "type": "string"
- },
- "overrideParams": {
- "type": "object",
- "additionalProperties": {
- "type": "string"
- }
- },
- "releaseName": {
- "type": "string"
- },
- "repository": {
- "$ref": "#/definitions/HelmRepository",
- "originalRef": "HelmRepository"
- }
- },
- "title": "ChartInfo"
- },
- "ChartList": {
- "type": "object",
- "properties": {
- "charts": {
- "type": "array",
- "items": {
- "$ref": "#/definitions/ChartInfo",
- "originalRef": "ChartInfo"
- }
- }
- },
- "title": "ChartList"
- },
- "HelmRepository": {
- "type": "object",
- "properties": {
- "address": {
- "type": "string"
- },
- "password": {
- "type": "string"
- },
- "port": {
- "type": "string"
- },
- "protocol": {
- "type": "string"
- },
- "repoName": {
- "type": "string"
- },
- "userName": {
- "type": "string"
- }
- },
- "title": "HelmRepository"
- },
- "InstallationInfo": {
- "type": "object",
- "properties": {
- "name": {
- "type": "string"
- },
- "version": {
- "type": "string"
- }
- },
- "title": "InstallationInfo"
- },
- "ToscaConceptIdentifier": {
- "type": "object",
- "properties": {
- "name": {
- "type": "string"
- },
- "version": {
- "type": "string"
- }
- },
- "title": "ToscaConceptIdentifier"
- }
- }
-} \ No newline at end of file
diff --git a/docs/clamp/acm/api-protocol/swagger/participant-sim.json b/docs/clamp/acm/api-protocol/swagger/participant-sim.json
deleted file mode 100644
index 2111b607..00000000
--- a/docs/clamp/acm/api-protocol/swagger/participant-sim.json
+++ /dev/null
@@ -1,478 +0,0 @@
-{
- "swagger": "2.0",
- "info": {
- "description": "Api Documentation",
- "version": "1.0",
- "title": "Api Documentation",
- "termsOfService": "urn:tos",
- "contact": {},
- "license": {
- "name": "Apache 2.0",
- "url": "http://www.apache.org/licenses/LICENSE-2.0"
- }
- },
- "paths": {
- "/onap/participantsim/v2/elements": {
- "put": {
- "tags": [
- "Clamp Automation Composition Participant Simulator API"
- ],
- "summary": "Updates simulated automation composition elements",
- "description": "Updates simulated automation composition elements, returning the updated automation composition definition IDs",
- "operationId": "updateUsingPUT",
- "consumes": [
- "application/json"
- ],
- "produces": [
- "application/json",
- "application/yaml"
- ],
- "parameters": [
- {
- "in": "body",
- "name": "body",
- "description": "Body of a automation composition element",
- "required": true,
- "schema": {
- "$ref": "#/definitions/AutomationCompositionElementReq",
- "originalRef": "AutomationCompositionElementReq"
- }
- },
- {
- "name": "X-ONAP-RequestID",
- "in": "header",
- "description": "RequestID for http transaction",
- "required": false,
- "type": "string",
- "format": "uuid"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "$ref": "#/definitions/TypedSimpleResponse«AutomationCompositionElement»",
- "originalRef": "TypedSimpleResponse«AutomationCompositionElement»"
- }
- },
- "201": {
- "description": "Created"
- },
- "401": {
- "description": "Authentication Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "403": {
- "description": "Authorization Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "404": {
- "description": "Not Found"
- },
- "500": {
- "description": "Internal Server Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- }
- },
- "security": [
- {
- "basicAuth": []
- }
- ],
- "x-interface info": {
- "api-version": "1.0.0",
- "last-mod-release": "Dublin"
- }
- }
- },
- "/onap/participantsim/v2/elements/{name}/{version}": {
- "get": {
- "tags": [
- "Clamp Automation Composition Participant Simulator API"
- ],
- "summary": "Query details of the requested simulated automation composition elements",
- "description": "Queries details of the requested simulated automation composition elements, returning all automation composition element details",
- "operationId": "elementsUsingGET",
- "produces": [
- "application/json",
- "application/yaml"
- ],
- "parameters": [
- {
- "name": "name",
- "in": "path",
- "description": "Automation composition element name",
- "required": true,
- "type": "string"
- },
- {
- "name": "version",
- "in": "path",
- "description": "Automation composition element version",
- "required": true,
- "type": "string"
- },
- {
- "name": "X-ONAP-RequestID",
- "in": "header",
- "description": "RequestID for http transaction",
- "required": false,
- "type": "string",
- "format": "uuid"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "object",
- "additionalProperties": {
- "$ref": "#/definitions/AutomationCompositionElementRes",
- "originalRef": "AutomationCompositionElementRes"
- }
- }
- },
- "401": {
- "description": "Authentication Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "403": {
- "description": "Authorization Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "404": {
- "description": "Not Found"
- },
- "500": {
- "description": "Internal Server Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- }
- },
- "security": [
- {
- "basicAuth": []
- }
- ],
- "x-interface info": {
- "api-version": "1.0.0",
- "last-mod-release": "Dublin"
- }
- }
- },
- "/onap/participantsim/v2/participants": {
- "put": {
- "tags": [
- "Clamp Automation Composition Participant Simulator API"
- ],
- "summary": "Updates simulated participants",
- "description": "Updates simulated participants, returning the updated automation composition definition IDs",
- "operationId": "updateUsingPUT_1",
- "consumes": [
- "application/json"
- ],
- "produces": [
- "application/json",
- "application/yaml"
- ],
- "parameters": [
- {
- "in": "body",
- "name": "body",
- "description": "Body of a participant",
- "required": true,
- "schema": {
- "$ref": "#/definitions/ParticipantReq",
- "originalRef": "ParticipantReq"
- }
- },
- {
- "name": "X-ONAP-RequestID",
- "in": "header",
- "description": "RequestID for http transaction",
- "required": false,
- "type": "string",
- "format": "uuid"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "$ref": "#/definitions/TypedSimpleResponse«Participant»",
- "originalRef": "TypedSimpleResponse«Participant»"
- }
- },
- "201": {
- "description": "Created"
- },
- "401": {
- "description": "Authentication Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "403": {
- "description": "Authorization Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "404": {
- "description": "Not Found"
- },
- "500": {
- "description": "Internal Server Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- }
- },
- "security": [
- {
- "basicAuth": []
- }
- ],
- "x-interface info": {
- "api-version": "1.0.0",
- "last-mod-release": "Dublin"
- }
- }
- },
- "/onap/participantsim/v2/participants/{name}/{version}": {
- "get": {
- "tags": [
- "Clamp Automation Composition Participant Simulator API"
- ],
- "summary": "Query details of the requested simulated participants",
- "description": "Queries details of the requested simulated participants, returning all participant details",
- "operationId": "participantsUsingGET",
- "produces": [
- "application/json",
- "application/yaml"
- ],
- "parameters": [
- {
- "name": "name",
- "in": "path",
- "description": "Participant name",
- "required": true,
- "type": "string"
- },
- {
- "name": "version",
- "in": "path",
- "description": "Participant version",
- "required": true,
- "type": "string"
- },
- {
- "name": "X-ONAP-RequestID",
- "in": "header",
- "description": "RequestID for http transaction",
- "required": false,
- "type": "string",
- "format": "uuid"
- }
- ],
- "responses": {
- "200": {
- "description": "OK",
- "schema": {
- "type": "array",
- "items": {
- "$ref": "#/definitions/ParticipantRes",
- "originalRef": "ParticipantRes"
- }
- }
- },
- "401": {
- "description": "Authentication Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "403": {
- "description": "Authorization Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- },
- "404": {
- "description": "Not Found"
- },
- "500": {
- "description": "Internal Server Error",
- "headers": {
- "X-LatestVersion": {
- "type": "string"
- },
- "X-PatchVersion": {
- "type": "string"
- },
- "X-MinorVersion": {
- "type": "string"
- },
- "X-ONAP-RequestID": {
- "type": "string",
- "format": "uuid"
- }
- }
- }
- },
- "security": [
- {
- "basicAuth": []
- }
- ],
- "x-interface info": {
- "api-version": "1.0.0",
- "last-mod-release": "Dublin"
- }
- }
- }
- }
-} \ No newline at end of file
diff --git a/docs/clamp/acm/design-impl/participants/k8s-participant.rst b/docs/clamp/acm/design-impl/participants/k8s-participant.rst
index 366c8430..ddce0a3c 100644
--- a/docs/clamp/acm/design-impl/participants/k8s-participant.rst
+++ b/docs/clamp/acm/design-impl/participants/k8s-participant.rst
@@ -13,10 +13,9 @@ resources in the k8s cluster.
The kubernetes participant also exposes REST endpoints for onboarding, installing and uninstalling of helm charts from the
local chart database which facilitates the user to also use this component as a standalone application for helm operations.
-In Istanbul version, the kubernetes participant supports the following methods of installation of helm charts.
+In the Kohn version, the kubernetes participant supports the following method of installing helm charts.
- Installation of helm charts from configured helm repositories and remote repositories passed via TOSCA in CLAMP.
-- Installation of helm charts from the local chart database via the participant's REST Api.
Prerequisites for using Kubernetes participant in Istanbul version:
-------------------------------------------------------------------
@@ -86,15 +85,9 @@ The *repository* type is described in the following table:
* - repoName
- String
- The name of the helm repository that needs to be configured on the helm client
- * - protocol
- - String
- - Specifies http/https protocols to connect with repository url
* - address
- String
- - Specifies the ip address or the host name
- * - port (optional)
- - String
- - Specifies the port where the repository service is running
+      - Specifies the URL of the helm repository
* - userName (optional)
- String
- The username to login the helm repository
@@ -120,20 +113,8 @@ Once the automation composition definitions are available in the runtime databas
When the state of the Automation Composition is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui, the kubernetes participant receives the automation composition state change event from the runtime and
deploys the helm charts associated with each Automation Composition Elements by creating appropriate namespace on the cluster.
If the repository of the helm chart is not passed via TOSCA, the participant looks for the helm chart in the configured helm repositories of helm client.
-It also performs a chart look up on the local chart database where the helm charts are onboarded via the participant's REST Api.
-The participant also monitors the deployed pods for the next 3 minutes until the pods comes to RUNNING state.
+The participant also monitors the deployed pods for the configured time until the pods come to the RUNNING state.
It holds the deployment information of the pods including the current status of the pods after the deployment.
When the state of the Automation Composition is changed from "PASSIVE" to "UNINITIALISED" back, the participant also undeploys the helm charts from the cluster that are part of the Automation Composition Element.
-
-REST APIs on Kubernetes participant
------------------------------------
-
-Kubernetes participant can also be installed as a standalone application which exposes REST endpoints for onboarding,
-installing, uninstalling helm charts from local chart database.
-
-
-.. image:: ../../images/participants/k8s-rest.png
-
-:download:`Download Kubernetes participant API Swagger <swagger/k8s-participant-swagger.json>` \ No newline at end of file
diff --git a/docs/clamp/acm/design-impl/participants/participant-simulator.rst b/docs/clamp/acm/design-impl/participants/participant-simulator.rst
deleted file mode 100644
index a53e9077..00000000
--- a/docs/clamp/acm/design-impl/participants/participant-simulator.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-
-.. _clamp-acm-participant-simulator:
-
-Participant Simulator
-#####################
-
-This can be used for simulation testing purpose when there are no actual frameworks or a full deployment.
-Participant simulator can edit the states of AutomationCompositionElements and Participants for verification of other clamp-acm components
-for early testing.
-All clamp-acm components should be setup, except participant frameworks (for example, no policy framework components
-are needed) and participant simulator acts as respective participant framework, and state changes can be done with following REST APIs
-
-Participant Simulator API
-=========================
-
-This API allows a Participant Simulator to be started and run for test purposes.
-
-:download:`Download Policy Participant Simulator API Swagger <swagger/participant-sim.json>`
-
-.. swaggerv2doc:: swagger/participant-sim.json
diff --git a/docs/clamp/acm/design-impl/participants/participants.rst b/docs/clamp/acm/design-impl/participants/participants.rst
index 9cf38bc7..67c966bd 100644
--- a/docs/clamp/acm/design-impl/participants/participants.rst
+++ b/docs/clamp/acm/design-impl/participants/participants.rst
@@ -36,4 +36,3 @@ The detailed implementation of the CLAMP Participant ecosystem is described on t
http-participant
k8s-participant
policy-framework-participant
- participant-simulator
diff --git a/docs/development/devtools/clamp-sdc.rst b/docs/development/devtools/clamp-sdc.rst
index 07f030a6..c82fb2ce 100644
--- a/docs/development/devtools/clamp-sdc.rst
+++ b/docs/development/devtools/clamp-sdc.rst
@@ -45,6 +45,8 @@ The ONAP components used during the pairwise tests are:
- DMaaP for the communication between Automation Composition runtime and participants.
- Policy Framework components for instantiation and commissioning of automation compositions.
+A helpful instruction page on bringing up SDC and PORTAL on an OOM deployment: https://wiki.onap.org/display/DW/Deploy+OOM+and+SDC+%28or+ONAP%29+on+a+single+VM+with+microk8s+-+Honolulu+Setup
+
Testing procedure
*****************
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index 2c73369e..ab57cd28 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -513,3 +513,15 @@ You may specify a local configuration file instead of *src/test/resources/simPar
}
]
}
+
+Bringing up Strimzi-Kafka Deployment with Policy Framework
+**********************************************************
+
+This page explains how to set up a local Kubernetes cluster and a minimal helm setup to run and deploy Policy Framework on a single host.
+
+This is meant for development purposes only, as we are going to use microk8s.
+
+.. toctree::
+ :maxdepth: 1
+
+ strimzi-policy.rst
diff --git a/docs/development/devtools/strimzi-policy.rst b/docs/development/devtools/strimzi-policy.rst
new file mode 100644
index 00000000..772281e8
--- /dev/null
+++ b/docs/development/devtools/strimzi-policy.rst
@@ -0,0 +1,700 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _strimzi-policy-label:
+
+.. toctree::
+ :maxdepth: 2
+
+Policy Framework with Strimzi-Kafka communication
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This page will explain how to set up a local Kubernetes cluster and minimal helm setup to run and deploy Policy Framework on a single host.
+The rationale for this page is to spin up a development environment quickly and efficiently, without the hassle of setting up the multi-node cluster and network file share that are required in a full deployment.
+
+These instructions are for development purposes only. We are using the lightweight `microk8s <https://microk8s.io/>`_ as our Kubernetes environment.
+
+Troubleshooting tips are included for issues that may arise during installation.
+
+General Setup
+*************
+
+- One VM running Ubuntu 20.04 LTS (should also work on 18.04), with internet access to download charts/containers and the OOM repo
+- Root/sudo privileges
+- Sufficient RAM, depending on how many components you want to deploy
+- Around 20G of RAM allows for a few components; the minimal setup requires AAF, Policy, and Strimzi-Kafka
+
+
+Overall procedure
+*****************
+
+- Install/remove Microk8s with appropriate version
+- Install/remove Helm with appropriate version
+- Tweak Microk8s
+- Download OOM repo
+- Install the required Helm plugins
+- Install ChartMuseum as a local helm repo
+- Build all OOM charts and store them in the chart repo
+- Fine tune deployment based on your VM capacity and component needs
+- Deploy/Undeploy charts
+- Enable communication over Kafka
+- Run testsuites
+
+
+Install/Upgrade Microk8s with appropriate version
+-------------------------------------------------
+
+Microk8s is a bundled lightweight version of kubernetes maintained by Canonical. It has the advantage of being well integrated with snap on Ubuntu, which makes it very easy to manage/upgrade/work with.
+
+More info on : https://microk8s.io/docs
+
+There are 2 things to know about microk8s:
+
+1) It is wrapped by snap, which is nice, but be aware that it is not exactly the same as a proper k8s installation (more info below on some specific commands)
+
+2) It does not use docker as the container runtime; it uses containerd. This is not an issue, just be aware that you won't see containers using classic docker commands
+
+
+If you have a previous version of microk8s, you first need to uninstall it (upgrading is possible, but it is not recommended between major versions; uninstalling is fast and safe)
+
+ .. code-block:: bash
+
+ sudo snap remove microk8s
+
+You need to select the appropriate version to install; to see all possible versions, run:
+
+ .. code-block:: bash
+
+ sudo snap info microk8s
+ sudo snap install microk8s --classic --channel=1.19/stable
+
+You may need to change your firewall configuration to allow pod-to-pod and pod-to-internet communication:
+
+ .. code-block:: bash
+
+ sudo ufw allow in on cni0 && sudo ufw allow out on cni0
+ sudo ufw default allow routed
+ sudo microk8s enable dns storage
+ sudo microk8s enable dns
+
+Install/remove Helm with appropriate version
+--------------------------------------------
+
+Helm is the package manager for k8s. A specific version is required for each ONAP release; it is best to check the OOM guides to see which one is required `<https://helm.sh>`_
+
+For the Honolulu release we need Helm 3. A significant improvement in Helm 3 is that it does not require a specific pod running in the kubernetes cluster (no more Tiller pod)
+
+As Helm is self-contained, it is straightforward to install/upgrade; we can also use snap to install the right version
+
+ .. code-block:: bash
+
+ sudo snap install helm --classic --channel=3.5/stable
+
+Note: You may encounter some log issues when installing helm with snap
+
+Normally the helm logs are available in "~/.local/share/helm/plugins/deploy/cache/onap/logs". If you notice that the log files are all of size 0, you can uninstall helm with snap and reinstall it manually:
+
+ .. code-block:: bash
+
+ wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
+
+ tar xvfz helm-v3.5.4-linux-amd64.tar.gz
+
+ sudo mv linux-amd64/helm /usr/local/bin/helm
+
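+A quick way to check whether you are affected is to look for empty log files. The snippet below is a sketch that exercises the check on a scratch directory standing in for the real log path, so it can be run safely anywhere:
+
+ .. code-block:: bash
+
+    # Scratch directory standing in for ~/.local/share/helm/plugins/deploy/cache/onap/logs
+    LOGDIR=$(mktemp -d)
+    touch "$LOGDIR/api.log"               # an empty (0 byte) log file
+    echo "deploy ok" > "$LOGDIR/pap.log"  # a healthy log file
+    # List log files of size 0; any output means helm logging is broken
+    find "$LOGDIR" -type f -size 0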
+
+Tweak Microk8s
+--------------
+The tweaks below are not strictly necessary, but they make the setup simpler and more flexible.
+
+A) Increase the max number of pods and add privileged config
+
+As ONAP may deploy a significant number of pods, we need to tell kubelet to allow more than the basic configuration (as we plan an all-in-one-box setup). If you only plan to run a limited number of components, this is not needed.
+
+To change the max number of pods, we need to add a parameter to the startup line of kubelet.
+
+1. Edit the file located at :
+
+ .. code-block:: bash
+
+ sudo nano /var/snap/microk8s/current/args/kubelet
+
+Add the following line at the end:
+
+ .. code-block:: bash
+
+    --max-pods=250
+
+Save the file and restart kubelet to apply the change:
+
+ .. code-block:: bash
+
+ sudo service snap.microk8s.daemon-kubelet restart
+
+2. Edit the file located at :
+
+ .. code-block:: bash
+
+ sudo nano /var/snap/microk8s/current/args/kube-apiserver
+
+Add the following line at the end:
+
+ .. code-block:: bash
+
+    --allow-privileged=true
+
+Save the file and restart the API server to apply the change:
+
+ .. code-block:: bash
+
+ sudo service snap.microk8s.daemon-apiserver restart
+
+
+B) Run a local copy of kubectl
+
+Microk8s comes bundled with kubectl; you can interact with it by doing:
+
+ .. code-block:: bash
+
+ sudo microk8s kubectl describe node
+
+To make things simpler, as we will most likely interact a lot with kubectl, let's install a local copy of kubectl so we can use it to interact with the kubernetes cluster in a more straightforward way
+
+We need kubectl 1.19 to match the cluster we have installed; let's again use snap to quickly choose and install the one we need
+
+ .. code-block:: bash
+
+ sudo snap install kubectl --classic --channel=1.19/stable
+
+Now we need to provide our local kubectl client with a proper config file so that it can access the cluster; microk8s makes it very easy to retrieve the cluster config
+
+Simply create a .kube folder in your home directory and dump the config there:
+
+ .. code-block:: bash
+
+ cd
+ mkdir .kube
+ cd .kube
+ sudo microk8s.config > config
+ chmod 700 config
+
+The last line prevents helm from complaining about overly open permissions.
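+
+You can double-check the result with stat; the snippet below is a sketch that uses a temporary file as a stand-in for ~/.kube/config, so it is safe to run anywhere:
+
+ .. code-block:: bash
+
+    KCFG=$(mktemp)        # stand-in for ~/.kube/config
+    chmod 700 "$KCFG"
+    stat -c '%a' "$KCFG"  # prints 700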
+
+You should now have helm and kubectl ready to interact with each other; you can verify this by running:
+
+ .. code-block:: bash
+
+ kubectl version
+
+This should output both the local client and the server version:
+
+ .. code-block:: bash
+
+ Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
+ Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.7-34+02d22c9f4fb254", GitCommit:"02d22c9f4fb2545422b2b28e2152b1788fc27c2f", GitTreeState:"clean", BuildDate:"2021-02-11T20:13:16Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
+
+
+Download OOM repo
+-----------------
+The Policy kubernetes chart is located in the `OOM repository <https://github.com/onap/oom/tree/master/kubernetes/policy>`_.
+This chart includes different policy components referred to as <policy-component-name>.
+
+Please refer to the `OOM documentation <https://docs.onap.org/projects/onap-oom/en/latest/oom_user_guide.html>`_ on how to install and deploy ONAP.
+
+ .. code-block:: bash
+
+ cd
+ git clone "https://gerrit.onap.org/r/oom"
+
+
+Install the needed Helm plugins
+-------------------------------
+ONAP deployments use the deploy and undeploy plugins for helm.
+
+To install them, run:
+
+ .. code-block:: bash
+
+ helm plugin install ./oom/kubernetes/helm/plugins/undeploy/
+ helm plugin install ./oom/kubernetes/helm/plugins/deploy/
+
+ cp -R ~/oom/kubernetes/helm/plugins/ ~/.local/share/helm/plugins
+
+This copies the plugins into the helm plugins folder in your home directory and makes them available as helm commands.
+
+Another plugin we need is the push plugin; with helm3 there is no longer an embedded repo to use:
+
+ .. code-block:: bash
+
+ helm plugin install https://github.com/chartmuseum/helm-push.git --version 0.10.0
+
+Once all plugins are installed, you should see them as available helm commands when doing :
+
+ .. code-block:: bash
+
+ helm --help
+
+Add the helm repo:
+
+ .. code-block:: bash
+
+ helm repo add strimzi https://strimzi.io/charts/
+
+Install the operator:
+
+ .. code-block:: bash
+
+ helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator --namespace strimzi-system --version 0.28.0 --set watchAnyNamespace=true --create-namespace
+
+
+
+Install the chartmuseum repository
+----------------------------------
+Download the chartmuseum binary and run it as a background task:
+
+ .. code-block:: bash
+
+ curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
+ chmod +x ./chartmuseum
+ mv ./chartmuseum /usr/local/bin
+ /usr/local/bin/chartmuseum --port=8080 --storage="local" --storage-local-rootdir="~/chartstorage" &
+
+You should see the chartmuseum repo starting locally; you can press enter to return to your terminal.
+
+You can now inform helm that a local repo is available for use:
+
+ .. code-block:: bash
+
+ # helm repo add local http://localhost:8080
+
+Tip: If there is an error like the one below while adding the local repo, remove the repo, update, and add it again.
+
+Error: repository name (local) already exists, please specify a different name
+
+ .. code-block:: bash
+
+ # helm repo remove local
+
+"local" has been removed from your repositories
+
+ .. code-block:: bash
+
+ # helm repo update
+
+Hang tight while we grab the latest from your chart repositories...
+...Successfully got an update from the "stable" chart repository
+Update Complete. ⎈Happy Helming!⎈
+
+ .. code-block:: bash
+
+ helm repo add local http://localhost:8080
+ 2022-09-24T11:43:29.777+0100 INFO [1] Request served {"path": "/index.yaml", "comment": "", "clientIP": "127.0.0.1", "method": "GET", "statusCode": 200, "latency": "4.107325ms", "reqID": "bd5d6089-b921-4086-a88a-13bd608a4135"}
+ "local" has been added to your repositories
+
+
+Build all OOM charts and store them in the chart repo
+-----------------------------------------------------
+You should now be ready to build all the helm charts; go into the oom/kubernetes folder and run a full make.
+
+Ensure you have "make" installed:
+
+ .. code-block:: bash
+
+ sudo apt install make
+
+Then build OOM
+
+ .. code-block:: bash
+
+ cd ~/oom/kubernetes
+ make all
+
+You can speed up the make by skipping the linting of the charts:
+
+ .. code-block:: bash
+
+    cd ./oom/kubernetes
+    make all -e SKIP_LINT=TRUE; make onap -e SKIP_LINT=TRUE
+
+You'll notice quite a few messages popping up in the terminal running chartmuseum, showing that it accepts and stores the generated charts; that's normal. If you want, just open another terminal to run the helm commands.
+
+Once the build completes, you should be ready to deploy ONAP
+
+
+Fine tune deployment based on your VM capacity and component needs
+------------------------------------------------------------------
+
+ .. code-block:: bash
+
+    cd ./oom/kubernetes
+
+Edit onap/values.yaml to include the components to deploy. For this use case, we set the components below to true:
+
+ .. code-block:: yaml
+
+    aaf:
+      enabled: true
+    policy:
+      enabled: true
+    strimzi:
+      enabled: true
+
+Save the file and we are all set to DEPLOY
+
+Installing or Upgrading Policy Components
+=========================================
+
+The assumption is you have cloned the charts from the OOM repository into a local directory.
+
+**Step 1** Go to the policy charts and edit properties in values.yaml files to make any changes to particular policy component if required.
+
+.. code-block:: bash
+
+ cd oom/kubernetes/policy/components/<policy-component-name>
+
+**Step 2** Build the charts
+
+.. code-block:: bash
+
+ cd oom/kubernetes
+ make SKIP_LINT=TRUE policy
+
+.. note::
+ SKIP_LINT is only to reduce the "make" time
+
+**Step 3** Undeploying already deployed policy components
+
+After undeploying policy components, keep monitoring the policy pods until they go away.
+
+.. code-block:: bash
+
+   helm uninstall <my-helm-release>-<policy-component-name> -n <namespace>
+ kubectl get pods -n <namespace> | grep <policy-component-name>
+
+**Step 4** Make sure there is no orphan database persistent volume or claim.
+
+First, find if there is an orphan database PV or PVC with the following commands:
+
+.. code-block:: bash
+
+ kubectl get pvc -n <namespace> | grep <policy-component-name>
+ kubectl get pv -n <namespace> | grep <policy-component-name>
+
+If there are any orphan resources, delete them with
+
+.. code-block:: bash
+
+ kubectl delete pvc <orphan-policy-pvc-name>
+ kubectl delete pv <orphan-policy-pv-name>
+
+**Step 5** Delete NFS persisted data for policy components
+
+Connect to the machine where the file system is persisted and then execute the below command
+
+.. code-block:: bash
+
+ rm -fr /dockerdata-nfs/<my-helm-release>/<policy-component-name>
+
+**Step 6** Re-Deploy policy pods
+
+First you need to ensure that the onap namespace exists (it now must be created prior to deployment):
+
+ .. code-block:: bash
+
+ kubectl create namespace onap
+
+After deploying policy, keep monitoring the policy pods until they come up.
+
+.. code-block:: bash
+
+   helm deploy dev local/onap -n onap --create-namespace --set global.masterPassword=test -f ./onap/values.yaml --verbose --debug
+ kubectl get pods -n <namespace> | grep <policy-component-name>
+
+You should see all pods starting up, and you should be able to view logs using kubectl, dive into containers, etc.
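+
+The "keep monitoring the policy pods until they come up" step can be sketched as a small shell helper; the pod names and statuses below are illustrative sample data, not real cluster output:
+
+ .. code-block:: bash
+
+    # Count pods (from "kubectl get pods" output) whose STATUS column is not
+    # Running/Completed; 0 means the deployment has settled.
+    not_ready() {
+      tail -n +2 | awk '$3 != "Running" && $3 != "Completed"' | wc -l
+    }
+    SAMPLE='NAME READY STATUS RESTARTS
+    dev-policy-api-1 1/1 Running 0
+    dev-policy-pap-1 0/1 Pending 0'
+    echo "$SAMPLE" | not_ready   # prints 1: one pod is still Pending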
+
+Restarting a faulty component
+=============================
+Each policy component can be restarted independently by issuing the following command:
+
+.. code-block:: bash
+
+ kubectl delete pod <policy-component-pod-name> -n <namespace>
+
+Some handy commands and tips below for troubleshooting:
+
+ .. code-block:: bash
+
+ kubectl get po
+ kubectl get pvc
+ kubectl get pv
+ kubectl get secrets
+ kubectl get cm
+ kubectl get svc
+ kubectl logs dev-policy-api-7bb656d67f-qqmtk
+    kubectl describe pod dev-policy-api-7bb656d67f-qqmtk
+ kubectl exec -it <podname> ifconfig
+ kubectl exec -it <podname> pwd
+ kubectl exec -it <podname> sh
+
+TIP: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
+
+TIP: If only the policy pods are being brought down and brought up:
+
+ .. code-block:: bash
+
+ helm uninstall dev-policy
+ make policy -e SKIP_LINT=TRUE
+ helm install dev-policy local/policy -n onap --set global.masterPassword=test --debug
+
+TIP: If there is an error bringing up "dev-strimzi-entity-operator not found. Retry 60/60":
+
+ .. code-block:: bash
+
+ kubectl -nkube-system get svc/kube-dns
+
+Stop the microk8s cluster with the "microk8s stop" command.
+Edit the kubelet configuration file /var/snap/microk8s/current/args/kubelet and add the following lines:
+
+ .. code-block:: bash
+
+    --resolv-conf=""
+    --cluster-dns=<IPAddress>
+    --cluster-domain=cluster.local
+
+Start the microk8s cluster with the "microk8s start" command.
+Check the status of the microk8s cluster with the "microk8s status" command.
+
+How to undeploy and start fresh
+
+The easiest way is to use kubectl; you can clean up the cluster with the following commands:
+
+ .. code-block:: bash
+
+ kubectl delete namespace onap
+ kubectl delete pv --all
+ helm undeploy dev
+ helm undeploy onap
+ kubectl delete pvc --all;kubectl delete pv --all;kubectl delete cm --all;kubectl delete deploy --all;kubectl delete secret --all;kubectl delete jobs --all;kubectl delete pod --all
+ rm -rvI /dockerdata-nfs/dev/
+ rm -rf ~/.cache/helm/repository/local-*
+ rm -rf ~/.cache/helm/repository/policy-11.0.0.tgz
+ rm -rf ~/.cache/helm/repository/onap-11.0.0.tgz
+ rm -rf /dockerdata-nfs/*
+ helm repo update
+ helm repo remove local
+
+Don't forget to create the namespace again before deploying again (helm won't complain if it is not there, but you'll end up with an empty cluster after it finishes).
+
+Note: you could also reset the k8s cluster by using the microk8s feature: microk8s reset
+
+
+Enable communication over Kafka
+-------------------------------
+To build a custom Kafka cluster, set UseStrimziKafka in policy/values.yaml to false, or do not have any Strimzi-Kafka policy configuration in oom/kubernetes/policy/.
+
+The following commands will create a simple custom kafka cluster. This strimzi cluster is not an ONAP-based Strimzi Kafka cluster; it is established with ready-to-use commands from https://strimzi.io/quickstarts/
+
+ .. code-block:: bash
+
+ kubectl create namespace kafka
+
+After that, we feed Strimzi with a simple Custom Resource, which will then give you a small persistent Apache Kafka Cluster with one node each for Apache Zookeeper and Apache Kafka:
+
+Apply the ``Kafka`` Cluster CR file:
+
+ .. code-block:: bash
+
+ kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
+
+We now need to wait while Kubernetes starts the required pods, services and so on:
+
+
+ .. code-block:: bash
+
+ kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
+
+The above command might time out if you're downloading images over a slow connection. If that happens, you can always run it again.
+
+Once the cluster is running, you can run a simple producer to send messages to a Kafka topic (the topic will be automatically created):
+
+
+ .. code-block:: bash
+
+ kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
+
+And to receive them in a different terminal you can run:
+
+
+ .. code-block:: bash
+
+ kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
+
+NOTE: If targeting an ONAP-based Strimzi Kafka cluster with security certs, set ``UseStrimziKafka`` to true.
+By doing this, a policy-kafka-user and the policy Kafka topics are created in Strimzi Kafka.
+
+In the case of a custom Kafka cluster, topics have to be either created manually with the command below or created programmatically with "allow.auto.create.topics = true" in the consumer config properties. Replace the topic below in the code block and create as many topics as needed for the component.
+
+ .. code-block:: bash
+
+ cat << EOF | kubectl create -n kafka -f -
+ apiVersion: kafka.strimzi.io/v1beta2
+ kind: KafkaTopic
+ metadata:
+ name: policy-acruntime-participant
+ labels:
+ strimzi.io/cluster: "my-cluster"
+ spec:
+ partitions: 3
+ replicas: 1
+ EOF
+
+Policy application properties need to be modified for communication over Kafka.
+Modify the topic properties configuration for the components that need to communicate over Kafka:
+
+ .. code-block:: yaml
+
+ topicSources:
+ -
+ topic: policy-acruntime-participant
+ servers:
+ - dev-strimzi-kafka-bootstrap:9092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ useHttps: true
+ additionalProps:
+ group-id: policy-group
+ key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
+ value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
+ partition.assignment.strategy: org.apache.kafka.clients.consumer.RoundRobinAssignor
+ enable.auto.commit: false
+ auto.offset.reset: earliest
+ security.protocol: SASL_PLAINTEXT
+ properties.sasl.mechanism: SCRAM-SHA-512
+ properties.sasl.jaas.config: ${JAASLOGIN}
+
+ topicSinks:
+ -
+ topic: policy-acruntime-participant
+ servers:
+ - dev-strimzi-kafka-bootstrap:9092
+ topicCommInfrastructure: kafka
+ useHttps: true
+ additionalProps:
+ key.serializer: org.apache.kafka.common.serialization.StringSerializer
+ value.serializer: org.apache.kafka.common.serialization.StringSerializer
+ acks: 1
+ retry.backoff.ms: 150
+ retries: 3
+ security.protocol: SASL_PLAINTEXT
+ properties.sasl.mechanism: SCRAM-SHA-512
+ properties.sasl.jaas.config: ${JAASLOGIN}
+
+Note: security.protocol can simply be PLAINTEXT if targeting a custom Kafka cluster:
+
+ .. code-block:: yaml
+
+ topicSources:
+ -
+ topic: policy-acruntime-participant
+ servers:
+ - my-cluster-kafka-bootstrap.mykafka.svc:9092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ useHttps: true
+ additionalProps:
+ group-id: policy-group
+ key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
+ value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
+ partition.assignment.strategy: org.apache.kafka.clients.consumer.RoundRobinAssignor
+ enable.auto.commit: false
+ auto.offset.reset: earliest
+ security.protocol: PLAINTEXT
+
+ topicSinks:
+ -
+ topic: policy-acruntime-participant
+ servers:
+ - my-cluster-kafka-bootstrap.mykafka.svc:9092
+ topicCommInfrastructure: kafka
+ useHttps: true
+ additionalProps:
+ key.serializer: org.apache.kafka.common.serialization.StringSerializer
+ value.serializer: org.apache.kafka.common.serialization.StringSerializer
+ acks: 1
+ retry.backoff.ms: 150
+ retries: 3
+ security.protocol: PLAINTEXT
+
+Ensure the Strimzi and policy pods are running and the topics are created, using the commands below:
+
+ .. code-block:: bash
+
+ $ kubectl get kafka -n onap
+ NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS READY WARNINGS
+ dev-strimzi 2 2 True True
+
+ $ kubectl get kafkatopics -n onap
+ NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
+ consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a dev-strimzi 50 2 True
+ policy-acruntime-participant dev-strimzi 10 2 True
+ policy-heartbeat dev-strimzi 10 2 True
+ policy-notification dev-strimzi 10 2 True
+ policy-pdp-pap dev-strimzi 10 2 True
+ strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 dev-strimzi 1 2 True
+ strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b dev-strimzi 1 2 True
+
+
+ .. code-block:: bash
+
+ $ kubectl get kafkatopics -n mykafka
+ NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
+ strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 my-cluster 1 1 True
+ strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b my-cluster 1 1 True
+ consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a my-cluster 50 1 True
+ policy-acruntime-participant my-cluster 3 1 True
+ policy-pdp-pap my-cluster 3 1 True
+ policy-heartbeat my-cluster 3 1 True
+ policy-notification my-cluster 3 1 True
+
+
+The following commands run a quick check that the Kafka producer and consumer are working, using the given bootstrap server and topic.
+
+ .. code-block:: bash
+
+ kubectl -n mykafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic policy-acruntime-participant
+
+ kubectl -n mykafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic policy-acruntime-participant
+
+
+The following table lists some properties that can be specified as Helm chart values:
+
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| Property | Description | Default Value |
++=======================================+=========================================================================================================+===============================+
+| config.useStrimziKafka                | If targeting a custom Kafka cluster, ie useStrimziKafka: false                                          | true                          |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| bootstrap-servers | Kafka hostname and port | ``<kafka-bootstrap>:9092`` |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| consumer.client-id | Kafka consumer client id | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| security.protocol | Kafka security protocol. | ``SASL_PLAINTEXT`` |
+| | Some possible values are: | |
+| | | |
+| | * ``PLAINTEXT`` | |
+| | * ``SASL_PLAINTEXT``, for authentication | |
+| | * ``SASL_SSL``, for authentication and encryption | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| sasl.mechanism | Kafka security SASL mechanism. Required for SASL_PLAINTEXT and SASL_SSL protocols. | Not defined |
+| | Some possible values are: | |
+| | | |
+| | * ``PLAIN``, for PLAINTEXT | |
+| | * ``SCRAM-SHA-512``, for SSL | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| sasl.jaas.config | Kafka security SASL JAAS configuration. Required for SASL_PLAINTEXT and SASL_SSL protocols. | Not defined |
+| | Some possible values are: | |
+| | | |
+| | * ``org.apache.kafka.common.security.plain.PlainLoginModule required username="..." password="...";``, | |
+| | for PLAINTEXT | |
+| | * ``org.apache.kafka.common.security.scram.ScramLoginModule required username="..." password="...";``, | |
+| | for SSL | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| ssl.trust-store-type | Kafka security SASL SSL store type. Required for SASL_SSL protocol. | Not defined |
+| | Some possible values are: | |
+| | | |
+| | * ``JKS`` | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| ssl.trust-store-location | Kafka security SASL SSL store file location. Required for SASL_SSL protocol. | Not defined |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| ssl.trust-store-password | Kafka security SASL SSL store password. Required for SASL_SSL protocol. | Not defined |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+| ssl.endpoint.identification.algorithm | Kafka security SASL SSL broker hostname identification verification. Required for SASL_SSL protocol. | Not defined |
+| | Possible value is: | |
+| | | |
+| | * ``""``, empty string to disable | |
++---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
+
+
+Run testsuites
+--------------
+If you have deployed the robot pod or have a local Robot installation, you can perform some tests using the scripts provided in the OOM repo.
+
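For example, assuming a standard OOM checkout with the robot helper scripts (the path and arguments here are assumptions, not verified against your deployment), the health-check suite can be run as:

```shell
# Run the ONAP health-check Robot suite via the OOM helper script;
# "onap" is the deployment namespace and "health" the test tag.
cd oom/kubernetes/robot
./ete-k8s.sh onap health
```

This requires a live deployment, so run it from a host with kubectl access to the cluster.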
+Browse to the test suite you have started, open the folder, and click report.html to see the Robot test results.
+
+
diff --git a/docs/installation/docker.rst b/docs/installation/docker.rst
index d9ddd1a1..7f038934 100644
--- a/docs/installation/docker.rst
+++ b/docs/installation/docker.rst
@@ -10,98 +10,136 @@ Policy Docker Installation
:depth: 2
-Building the ONAP Policy Framework Docker Images
+Starting the ONAP Policy Framework Docker Images
************************************************
-The instructions here are based on the instructions in the file *~/git/onap/policy/docker/README.md*.
+To start the containers, you can use *docker-compose*. This uses the *docker-compose-all.yml* file to bring up the ONAP Policy Framework; the file is located in the policy/docker repository. The csit folder contains scripts to *automatically* bring up components in Docker, without the need to build all the images locally.
-**Step 1:** Build the Policy API Docker image
+Clone the read-only version of policy/docker repo from gerrit:
.. code-block:: bash
- cd ~/git/onap/policy/api/packages
- mvn clean install -P docker
+ git clone "https://gerrit.onap.org/r/policy/docker"
-**Step 2:** Build the Policy PAP Docker image
+
+Start the containers automatically
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. note:: The start-all.sh script in policy/docker/csit will bring up all the Policy Framework components and give the local IP for the GUI. The latest images will be downloaded from Nexus.
.. code-block:: bash
- cd ~/git/onap/policy/pap/packages
- mvn clean install -P docker
+ export CONTAINER_LOCATION=nexus3.onap.org:10001/
+ export PROJECT=pap
+ ./start-all.sh
-**Step 3:** Build the Drools PDP docker image.
-This image is a standalone vanilla Drools engine, which does not contain any pre-built drools rules or applications.
+To stop them, use the stop-all.sh script:
.. code-block:: bash
- cd ~/git/onap/policy/drools-pdp/
- mvn clean install -P docker
+ ./stop-all.sh
-**Step 4:** Build the Drools Application Control Loop image.
-This image has the drools use case application and the supporting software built together with the Drools PDP engine. It is recommended to use this image if you are first working with ONAP Policy and wish to test or learn how the use cases work.
+Start the containers manually
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. code-block:: bash
+**Step 1:** Set the containers location and project.
- cd ~/git/onap/policy/drools-applications
- mvn clean install -P docker
+For *local* images, set CONTAINER_LOCATION="" (or don't set it at all).
+*You will need to build all the images locally, using the steps in the next chapter.*
-**Step 5:** Build the Apex PDP docker image:
+For *remote* images set CONTAINER_LOCATION="nexus3.onap.org:10001/"
.. code-block:: bash
- cd ~/git/onap/policy/apex-pdp
- mvn clean install -P docker
+ export CONTAINER_LOCATION=nexus3.onap.org:10001/
+ export PROJECT=pap
-**Step 6:** Build the XACML PDP docker image:
+
+**Step 2:** Set gerrit branch
+
+Set GERRIT_BRANCH="master"
+
+Or use the get-branch.sh script:
.. code-block:: bash
- cd ~/git/onap/policy/xacml-pdp/packages
- mvn clean install -P docker
+ source ./get-branch.sh
-**Step 7:** Build the policy engine docker image (If working with the legacy Policy Architecture/API):
+
+**Step 3:** Get all the image versions
+
+Use the get-versions.sh script:
.. code-block:: bash
- cd ~/git/onap/policy/engine/
- mvn clean install -P docker
+ source ./get-versions.sh
-**Step 8:** Build the Policy SDC Distribution docker image:
+
+**Step 4:** Run the system using docker-compose
.. code-block:: bash
- cd ~/git/onap/policy/distribution/packages
- mvn clean install -P docker
+ docker-compose -f docker-compose-all.yml up <image> <image>
-Starting the ONAP Policy Framework Docker Images
+**You now have a full standalone ONAP Policy framework up and running!**
+
+
+Building the ONAP Policy Framework Docker Images
************************************************
+If you want to use your own local images, you can build them following these instructions:
+
+**Step 1:** Build the Policy API Docker image
+
+.. code-block:: bash
-In order to run the containers, you can use *docker-compose*. This uses the *docker-compose.yml* yaml file to bring up the ONAP Policy Framework. This file is located in the policy/docker repository.
+ cd ~/git/onap/policy/api/packages
+ mvn clean install -P docker
-**Step 1:** Set the environment variable *MTU* to be a suitable MTU size for the application.
+**Step 2:** Build the Policy PAP Docker image
.. code-block:: bash
- export MTU=9126
+ cd ~/git/onap/policy/pap/packages
+ mvn clean install -P docker
+**Step 3:** Build the Drools PDP docker image.
-**Step 2:** Determine if you want the legacy Policy Engine to have policies pre-loaded or not. By default, all the configuration and operational policies will be pre-loaded by the docker compose script. If you do not wish for that to happen, then export this variable:
+This image is a standalone vanilla Drools engine, which does not contain any pre-built drools rules or applications.
+
+.. code-block:: bash
-.. note:: This applies ONLY to the legacy Engine and not the Policy Lifecycle polices
+ cd ~/git/onap/policy/drools-pdp/
+ mvn clean install -P docker
+
+**Step 4:** Build the Drools Application Control Loop image.
+
+This image has the drools use case application and the supporting software built together with the Drools PDP engine. It is recommended to use this image if you are first working with ONAP Policy and wish to test or learn how the use cases work.
.. code-block:: bash
- export PRELOAD_POLICIES=false
+ cd ~/git/onap/policy/drools-applications
+ mvn clean install -P docker
+**Step 5:** Build the Apex PDP docker image:
+
+.. code-block:: bash
+
+ cd ~/git/onap/policy/apex-pdp
+ mvn clean install -P docker
-**Step 3:** Run the system using *docker-compose*. Note that on some systems you may have to run the *docker-compose* command as root or using *sudo*. Note that this command takes a number of minutes to execute on a laptop or desktop computer.
+**Step 6:** Build the XACML PDP docker image:
.. code-block:: bash
- docker-compose up -d
+ cd ~/git/onap/policy/xacml-pdp/packages
+ mvn clean install -P docker
+**Step 7:** Build the Policy SDC Distribution docker image:
-**You now have a full standalone ONAP Policy framework up and running!**
+.. code-block:: bash
+
+ cd ~/git/onap/policy/distribution/packages
+ mvn clean install -P docker
diff --git a/docs/tox.ini b/docs/tox.ini
index 42ffa687..49bbe010 100644
--- a/docs/tox.ini
+++ b/docs/tox.ini
@@ -4,10 +4,10 @@ envlist = docs,
skipsdist = true
[testenv:docs]
-basepython = python3
+basepython = python3.8
deps =
-r{toxinidir}/requirements-docs.txt
- -chttps://git.onap.org/doc/plain/etc/upper-constraints.os.txt
+ -chttps://raw.githubusercontent.com/openstack/requirements/stable/yoga/upper-constraints.txt
-chttps://git.onap.org/doc/plain/etc/upper-constraints.onap.txt
commands =
sphinx-build -b html -n -d {envtmpdir}/doctrees ./ {toxinidir}/_build/html
@@ -18,7 +18,7 @@ whitelist_externals =
sh
[testenv:docs-linkcheck]
-basepython = python3
+basepython = python3.8
#deps = -r{toxinidir}/requirements-docs.txt
commands = echo "Link Checking not enforced"
#commands = sphinx-build -b linkcheck -d {envtmpdir}/doctrees ./ {toxinidir}/_build/linkcheck