Diffstat (limited to 'docs')
24 files changed, 258 insertions, 20 deletions
diff --git a/docs/sections/services/datalake-handler/images/adminui-dbs-edit.png b/docs/sections/services/datalake-handler/images/adminui-dbs-edit.png Binary files differnew file mode 100644 index 00000000..017aaa7a --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-dbs-edit.png diff --git a/docs/sections/services/datalake-handler/images/adminui-dbs.png b/docs/sections/services/datalake-handler/images/adminui-dbs.png Binary files differnew file mode 100644 index 00000000..6c411a8f --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-dbs.png diff --git a/docs/sections/services/datalake-handler/images/adminui-design-edit.png b/docs/sections/services/datalake-handler/images/adminui-design-edit.png Binary files differnew file mode 100644 index 00000000..26019845 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-design-edit.png diff --git a/docs/sections/services/datalake-handler/images/adminui-design.png b/docs/sections/services/datalake-handler/images/adminui-design.png Binary files differnew file mode 100644 index 00000000..b0904479 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-design.png diff --git a/docs/sections/services/datalake-handler/images/adminui-feeder.png b/docs/sections/services/datalake-handler/images/adminui-feeder.png Binary files differnew file mode 100644 index 00000000..26adab2a --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-feeder.png diff --git a/docs/sections/services/datalake-handler/images/adminui-kafka-edit.png b/docs/sections/services/datalake-handler/images/adminui-kafka-edit.png Binary files differnew file mode 100644 index 00000000..b48e5128 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-kafka-edit.png diff --git a/docs/sections/services/datalake-handler/images/adminui-kafka.png b/docs/sections/services/datalake-handler/images/adminui-kafka.png Binary files differnew file mode 100644 
index 00000000..743fb1e6 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-kafka.png diff --git a/docs/sections/services/datalake-handler/images/adminui-tools.png b/docs/sections/services/datalake-handler/images/adminui-tools.png Binary files differnew file mode 100644 index 00000000..18a1e18b --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-tools.png diff --git a/docs/sections/services/datalake-handler/images/adminui-topic-config.png b/docs/sections/services/datalake-handler/images/adminui-topic-config.png Binary files differnew file mode 100644 index 00000000..d15f075c --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topic-config.png diff --git a/docs/sections/services/datalake-handler/images/adminui-topic-edit1.png b/docs/sections/services/datalake-handler/images/adminui-topic-edit1.png Binary files differnew file mode 100644 index 00000000..81212670 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topic-edit1.png diff --git a/docs/sections/services/datalake-handler/images/adminui-topic-edit2.png b/docs/sections/services/datalake-handler/images/adminui-topic-edit2.png Binary files differnew file mode 100644 index 00000000..145b86d6 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topic-edit2.png diff --git a/docs/sections/services/datalake-handler/images/adminui-topic-edit3.png b/docs/sections/services/datalake-handler/images/adminui-topic-edit3.png Binary files differnew file mode 100644 index 00000000..08cd6be5 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topic-edit3.png diff --git a/docs/sections/services/datalake-handler/images/adminui-topic-new.png b/docs/sections/services/datalake-handler/images/adminui-topic-new.png Binary files differnew file mode 100644 index 00000000..12f8948f --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topic-new.png diff --git 
a/docs/sections/services/datalake-handler/images/adminui-topics.png b/docs/sections/services/datalake-handler/images/adminui-topics.png Binary files differnew file mode 100644 index 00000000..adb5696c --- /dev/null +++ b/docs/sections/services/datalake-handler/images/adminui-topics.png diff --git a/docs/sections/services/datalake-handler/images/blueprint-list.png b/docs/sections/services/datalake-handler/images/blueprint-list.png Binary files differnew file mode 100644 index 00000000..e934205b --- /dev/null +++ b/docs/sections/services/datalake-handler/images/blueprint-list.png diff --git a/docs/sections/services/datalake-handler/images/bootstrap-pod.png b/docs/sections/services/datalake-handler/images/bootstrap-pod.png Binary files differnew file mode 100644 index 00000000..d0e275ec --- /dev/null +++ b/docs/sections/services/datalake-handler/images/bootstrap-pod.png diff --git a/docs/sections/services/datalake-handler/images/feeder-log.png b/docs/sections/services/datalake-handler/images/feeder-log.png Binary files differnew file mode 100644 index 00000000..15b23777 --- /dev/null +++ b/docs/sections/services/datalake-handler/images/feeder-log.png diff --git a/docs/sections/services/datalake-handler/installation.rst b/docs/sections/services/datalake-handler/installation.rst index 2235198a..5d8b3341 100644 --- a/docs/sections/services/datalake-handler/installation.rst +++ b/docs/sections/services/datalake-handler/installation.rst @@ -1,4 +1,100 @@ -Installation -============ +Deployment Steps +################ +DL-handler consists of two pods: the feeder and the admin UI. It can be deployed through the DCAE Cloudify Manager using a Cloudify blueprint. The following steps guide you through launching DataLake via Cloudify Manager. -DataLake handler microservice can be deployed using ... (TODO)
\ No newline at end of file +Pre-requisite +---------------- +- Make sure mariadb-galera from OOM is properly deployed and functional. +- An external database, such as Elasticsearch or MongoDB, is deployed. + +After DataLake is deployed, the admin UI can be used to configure the sink database address and credentials. + +Log in to the DCAE Bootstrap Pod +--------------------------------------------------- + +First, find the bootstrap pod name with the following command and make sure that the DCAE Cloudify Manager is properly deployed. + .. image :: ./images/bootstrap-pod.png + +Log in to the DCAE bootstrap pod with the following command. + .. code-block :: bash + + #kubectl exec -it <DCAE bootstrap pod> /bin/bash -n onap + +Validate Blueprint +------------------- +Before uploading the blueprints to Cloudify Manager, validate them with the following commands. + .. code-block :: bash + + #cfy blueprint validate /blueprints/k8s-datalake-feeder.yaml + #cfy blueprint validate /blueprints/k8s-datalake-admin-ui.yaml + +Upload the Blueprint to Cloudify Manager +----------------------------------------- +After validation, upload the blueprints. + .. code-block :: bash + + #cfy blueprint upload -b datalake-feeder /blueprints/k8s-datalake-feeder.yaml + #cfy blueprint upload -b datalake-admin-ui /blueprints/k8s-datalake-admin-ui.yaml + +Verify Uploaded Blueprints +-------------------------- +Use "cfy blueprint list" to verify your work. + .. code-block :: bash + + #cfy blueprint list + +The following returned message shows that the blueprints have been correctly uploaded. + .. image :: ./images/blueprint-list.png + + +Verify Plugin Versions +------------------------------------------------------------------------------ +If the version of the plugin used is different, update the blueprint import to match. + .. 
code-block :: bash + + #cfy plugins list + +Create Deployment +----------------- +Here we create deployments for both the feeder and the admin UI. + .. code-block :: bash + + #cfy deployments create -b datalake-feeder feeder-deploy + #cfy deployments create -b datalake-admin-ui admin-ui-deploy + +Launch Service +--------------- +Next, launch the DataLake services. + .. code-block :: bash + + #cfy executions start -d feeder-deploy install + #cfy executions start -d admin-ui-deploy install + + +Verify the Deployment Result +----------------------------- +The following command can be used to list the DataLake logs. + .. code-block :: bash + + #kubectl logs <datalake-pod> -n onap + +The output should look like this: + .. image :: ./feeder-log.png + +If you find any Java exceptions in the log, make sure that the external database and the DataLake configuration are properly set up. +The Admin UI can be used to update the external database configuration. + + +Uninstall +---------- +Uninstall the running components and delete the deployments. + .. code-block :: bash + + #cfy uninstall feeder-deploy + #cfy uninstall admin-ui-deploy + +Delete Blueprint +------------------ + .. code-block :: bash + + #cfy blueprints delete datalake-feeder + #cfy blueprints delete datalake-admin-ui diff --git a/docs/sections/services/datalake-handler/userguide.rst b/docs/sections/services/datalake-handler/userguide.rst index 4a0957f5..b3be9491 100644 --- a/docs/sections/services/datalake-handler/userguide.rst +++ b/docs/sections/services/datalake-handler/userguide.rst @@ -1,4 +1,81 @@ Admin UI User Guide --------------------- - -To be filled. + +Introduction +~~~~~~~~~~~~ +DataLake Admin UI aims to provide a user-friendly dashboard to easily monitor and +manage DataLake configurations for the involved components, ONAP topics, databases, +and 3rd-party tools. The Admin UI portal is accessible +at http://datalake-admin-ui:30479 + + +DataLake Feeder Management +************************** +.. 
image:: ./images/adminui-feeder.png +Click "DataLake Feeder" on the menu bar, and the dashboard will show +an overview of the DataLake Feeder, such as the number of topics. +You can also enable or disable the DataLake Feeder backend process +by using the toggle switch. + + +Kafka Management +****************** +.. image:: ./images/adminui-kafka.png +Click "Kafka" on the menu bar to manage the Kafka resources; +the page lets you add, modify and delete them as needed. + +.. image:: ./images/adminui-kafka-edit.png +You can modify a Kafka resource by clicking its card, +or click the plus button to add a new Kafka resource. +You will then need to fill in the required information, such as an identifying name, +the message router address, and the ZooKeeper address. + + +Topics Management +****************** +.. image:: ./images/adminui-topics.png +.. image:: ./images/adminui-topic-edit1.png +.. image:: ./images/adminui-topic-edit2.png +.. image:: ./images/adminui-topic-edit3.png +The Topics page lists all the topics that have been configured +through topic management. You can edit a topic's settings by double-clicking its row. +The settings include the DataLake Feeder status (whether or not to catch the topic), +the data format, and the time to live for the topic. +You must also choose one or more Kafka items as the topic source +and define the databases in which the topic data will be stored. + +.. image:: ./images/adminui-topic-config.png +For the default configuration of topics, click the "Default configurations" button. +When you add a new topic, these configurations will be filled into the form automatically. + +.. image:: ./images/adminui-topic-new.png +To add a new topic for the DataLake Feeder and catch its data +into the 3rd-party database, click the plus button. +Please note that only topics that already exist in Kafka can be added. 
+ + +Database Management +******************* +.. image:: ./images/adminui-dbs.png +.. image:: ./images/adminui-dbs-edit.png +The Database Management page allows you to add, modify and delete the database resources +where the messages from topics will be stored. +DataLake supports several databases, including Couchbase DB, Apache Druid, Elasticsearch, HDFS, and MongoDB. + + +3rd-Party Tools Management +************************** +.. image:: ./images/adminui-tools.png +The Tools page allows you to manage the resources of 3rd-party tools for data visualization. +Currently, DataLake supports two tools: Kibana and Apache Superset. + + +3rd-Party Design Tools Management +********************************* +.. image:: ./images/adminui-design.png +.. image:: ./images/adminui-design-edit.png +After setting up the 3rd-party tools, you can import templates in JSON, YAML or other formats +for data exploration, data visualization and dashboarding. DataLake supports Kibana dashboards, +Kibana searches, Kibana visualizations, Elasticsearch field mapping templates, +and the Apache Druid Kafka indexing service. + diff --git a/docs/sections/services/ves-hv/authorization.rst b/docs/sections/services/ves-hv/authorization.rst index 054f7b33..9cbd789a 100644 --- a/docs/sections/services/ves-hv/authorization.rst +++ b/docs/sections/services/ves-hv/authorization.rst @@ -1,11 +1,12 @@ - **WARNING: SSL/TLS authorization is a part of an experimental feature for ONAP Casablanca release and thus should be treated as unstable and subject to change in future releases.** +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 .. _ssl_tls_authorization: SSL/TLS authorization ===================== -HV-VES can be configured to require usage of SSL/TLS on every TCP connection. This can be done only during deployment of application container. For reference about exact commands, see :ref:`deployment`. 
+HV-VES requires SSL/TLS on every TCP connection. TLS can be configured only during deployment of the application container. For reference about the exact commands, see :ref:`deployment`. General steps for configuring TLS for HV-VES collector: @@ -19,7 +20,7 @@ General steps for configuring TLS for HV-VES collector: -HV-VES uses OpenJDK (version 8u181) implementation of TLS ciphers. For reference, see https://docs.oracle.com/javase/8/docs/technotes/guides/security/overview/jsoverview.html. +HV-VES uses OpenJDK (version 11.0.6) implementation of TLS ciphers. For reference, see https://docs.oracle.com/en/java/javase/11/security/java-security-overview1.html. If SSL/TLS is enabled for HV-VES container then service turns on also client authentication. HV-VES requires clients to provide their certificates on connection. In addition, HV-VES provides its certificate to every client during SSL/TLS-handshake to enable two-way authorization. diff --git a/docs/sections/services/ves-hv/deployment.rst b/docs/sections/services/ves-hv/deployment.rst index caad3978..e764a9aa 100644 --- a/docs/sections/services/ves-hv/deployment.rst +++ b/docs/sections/services/ves-hv/deployment.rst @@ -1,7 +1,6 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 - .. 
_deployment: Deployment diff --git a/docs/sections/services/ves-hv/resources/blueprint-snippet.yaml b/docs/sections/services/ves-hv/resources/blueprint-snippet.yaml index 912c0c5a..7ed36684 100644 --- a/docs/sections/services/ves-hv/resources/blueprint-snippet.yaml +++ b/docs/sections/services/ves-hv/resources/blueprint-snippet.yaml @@ -22,3 +22,6 @@ node_templates: kafka_info: bootstrap_servers: "message-router-kafka:9092" topic_name: "HV_VES_HEARTBEAT" + tls_info: + cert_directory: "/etc/ves-hv/ssl" + use_tls: true diff --git a/docs/sections/services/ves-hv/troubleshooting.rst b/docs/sections/services/ves-hv/troubleshooting.rst index 6b9ec8b6..d6cf9f1e 100644 --- a/docs/sections/services/ves-hv/troubleshooting.rst +++ b/docs/sections/services/ves-hv/troubleshooting.rst @@ -198,20 +198,21 @@ For more information, see the :ref:`hv_ves_behaviors` section. Authorization related errors ---------------------------- -**WARNING: SSL/TLS authorization is a part of an experimental feature for ONAP Dublin release and should be treated as unstable and subject to change in future releases.** **For more information, see** :ref:`ssl_tls_authorization`. **Key or trust store missing** :: - | org.onap.dcae.collectors.veshv.main | ERROR | Failed to start a server | java.io.FileNotFoundException: /etc/ves-hv/server.p12 + | org.onap.dcae.collectors.veshv.main | ERROR | Failed to start a server | java.nio.file.NoSuchFileException: /etc/ves-hv/server.p12 The above error is logged when key store is not provided. Similarly, when trust store is not provided, **/etc/ves-hv/trust.p12** file missing is logged. They can be changed by specifying ``security.keys.trustStore`` or ``security.keys.keyStore`` file configuration entries. +For testing purposes it is possible to use the plain TCP protocol. To do this, navigate with your browser to the consul-ui service and then pick the KEY/VALUE tab. Select dcae-hv-ves-collector and change ``security.sslDisable`` to true. 
Updating the configuration should allow the TCP server to start without SSL/TLS configured. + ==== **Invalid credentials** diff --git a/docs/sections/tls_enablement.rst b/docs/sections/tls_enablement.rst index 0e469b84..c42c4761 100644 --- a/docs/sections/tls_enablement.rst +++ b/docs/sections/tls_enablement.rst @@ -4,41 +4,66 @@ TLS Support =========== -To comply with ONAP security requirement, all services exposing external API required TLS support using AAF generated certificates. DCAE Platform was updated in R3 to enable certificate distribution mechanism for services needing TLS support. +To comply with ONAP security requirements, all services exposing an external API require TLS support using AAF-generated certificates. The DCAE Platform was updated in R3 to enable a certificate distribution mechanism for services needing TLS support. For R6, we have moved from generating certificates manually to retrieving certificates from AAF at deployment time. Solution overview ----------------- -1. Certificate generation: - This step is done manually currently using Test AAF instance in POD25. Required namespace, DCAE identity (dcae@dcae.onap.org), roles and Subject Alternative Names for all components are preset. Using the procedure desribed by AAF (using ``agent.sh``), the certificates are generated. Using the Java keystore file (``.jks``) generated from AAF, create the .pem files and load them into tls-init-container under dcaegen2/deployment repository. The image has a script that runs when the image is deployed. The script copies the certificate artifacts into a Kubernetes volume. The container is used as an "init-container" included in the Kubernetes pod for a component that needs to use TLS. +1. Certificate setup: + + Certificate details must be set up manually in AAF before a certificate is generated. + This step is currently done using a test AAF instance in POD25. 
+ Required namespace, DCAE identity (dcae@dcae.onap.org), roles and Subject Alternative Names for all components are set in the test instance. + We use a single certificate for all DCAE components, with a long list of Subject Alternative Names (SANs). Current SAN listing:: bbs-event-processor, bbs-event-processor.onap, bbs-event-processor.onap.svc.cluster.local, config-binding-service, config-binding-service.onap, config-binding-service.onap.svc.cluster.local, dcae-cloudify-manager, dcae-cloudify-manager.onap, dcae-cloudify-manager.onap.svc.cluster.local, dcae-datafile-collector, dcae-datafile-collector.onap, dcae-datafile-collector.onap.svc.cluster.local, dcae-hv-ves-collector, dcae-hv-ves-collector.onap, dcae-hv-ves-collector.onap.svc.cluster.local, dcae-pm-mapper, dcae-pm-mapper.onap, dcae-pm-mapper.onap.svc.cluster.local, dcae-prh, dcae-prh.onap, dcae-prh.onap.svc.cluster.local, dcae-tca-analytics, dcae-tca-analytics.onap, dcae-tca-analytics.onap.svc.cluster.local, dcae-ves-collector, dcae-ves-collector.onap, dcae-ves-collector.onap.svc.cluster.local, deployment-handler, deployment-handler.onap, deployment-handler.onap.svc.cluster.local, holmes-engine-mgmt, holmes-engine-mgmt.onap, holmes-engine-mgmt.onap.svc.cluster.local, holmes-rule-mgmt, holmes-rules-mgmt.onap, holmes-rules-mgmt.onap.svc.cluster.local, inventory, inventory.onap, inventory.onap.svc.cluster.local, policy-handler, policy-handler.onap, policy-handler.onap.svc.cluster.local -2. Plugin and Blueprint: - Update blueprint to include new (optional) node property (tls_info) to the type definitions for the Kubernetes component types. The property is a dictionary with two elements: +2. Certificate generation and retrieval: + + When a DCAE component that needs a TLS certificate is launched, a Kubernetes init container runs before the main + component container is launched. The init container contacts the AAF certificate manager server. 
The AAF certificate + management server generates a certificate based on the information previously set up in step 1 above and sends the certificate + (in several formats) along with keys and passwords to the init container. The init container renames the files to conform to + DCAE naming conventions and creates some additional formats. It stores the results into a volume that's shared with + the main component container. + + DCAE platform components are deployed via ONAP OOM. The Helm chart for each deployment includes the init container + and sets up the shared volume. + + DCAE service components (sometimes called "microservices") are deployed via Cloudify using blueprints. This is described + in more detail in the next section. + +3. Plugin and Blueprint: + The blueprint for a component that needs a TLS certificate must include the node property called "tls_info" in + the node properties for the component. The property is a dictionary with two elements: * A boolean (``use_tls``) that indicates whether the component uses TLS. * A string (``cert_directory``) that indicates where the component expects to find certificate artifacts. Example + .. code-block:: yaml tls_info: cert_directory: '/opt/app/dh/etc/cert' use_tls: true -(Note that the ``cert_directory`` value does not include a trailing ``/``.) -For this example the certificates are mounted into /opt/app/dh/etc/cert directory within the conainer. +(Note that the ``cert_directory`` value does not include a trailing ``/``.) +For this example the certificates are mounted into the ``/opt/app/dh/etc/cert`` directory within the container. 
During deployment Kubernetes plugin (referenced in blueprint) will check if the ``tls_info`` property is set and ``use_tls`` is set to true, then the plugin will add some elements to the Kubernetes Deployment for the component: * A Kubernetes volume (``tls-info``) that will hold the certificate artifacts * A Kubernetes initContainer (``tls-init``) - * A Kubernetes volumeMount for the initContainer that mounts the ``tls-info`` volume at ``/opt/tls/shared``. + * A Kubernetes volumeMount for the initContainer that mounts the ``tls-info`` volume at ``/opt/app/osaaf``. + * A Kubernetes volumeMount for the main container that mounts the ``tls-info`` volume at the mount point specified in the ``cert_directory`` property. -3. Certificate Artifacts +Service components that act as HTTPS clients only need access to the root CA certificate used by AAF. For R6, such +components should set up a tls_info property as described above. See below for a note about an alternative approach +that is available in R6 but is not currently being used. + +4. Certificate artifacts The certificate directory mounted on the container will include the following files: * ``cert.jks``: A Java keystore containing the DCAE certificate. @@ -50,3 +75,39 @@ For this example the certificates are mounted into /opt/app/dh/etc/cert director * ``cert.pem``: The DCAE certificate concatenated with the intermediate CA certificate from AAF, in PEM form. * ``key.pem``: The private key for the DCAE certificate. The key is not encrypted. * ``cacert.pem``: The AAF CA certificate, in PEM form. (Needed by clients that access TLS-protected servers.) + +5. Alternative for getting CA certificate only + + The certificates generated by AAF are signed by AAF, not by a recognized certificate authority (CA). If a component acts + as a client and makes an HTTPS request to another component, it will not be able to validate the other component's + server certificate because it will not recognize the CA. 
Most HTTPS client library software will raise an error + and drop the connection. To prevent this, the client component needs to have a copy of the AAF CA certificate. + As noted above, one way to do this is to set up the tls_info property as described in section 3. + + There are alternatives. In R6, two versions of the DCAE k8splugin are available: version 1.7.2 and version 2.0.0. + They behave differently with respect to setting up the CA certs. + + * k8splugin version 1.7.2 will automatically mount the CA certificate, in PEM format, at ``/opt/dcae/cacert/cacert.pem``. + It is not necessary to add anything to the blueprint. To get the CA certificate in PEM format in a different directory, + add a ``tls_info`` property to the blueprint, set ``use_tls`` to ``false``, and set ``cert_directory`` to the directory + where the CA cert is needed. For example: + + .. code-block:: yaml + + tls_info: + cert_directory: '/opt/app/certs' + use_tls: false + + For this example, the CA certificate would be mounted at ``/opt/app/certs/cacert.pem``. + + k8splugin version 1.7.2 uses a configmap, rather than an init container, to supply the CA certificate. + + * k8splugin version 2.0.0 will automatically mount the CA certificate, in PEM and JKS formats, in the directory ``/opt/dcae/cacert``. + It is not necessary to add anything to the blueprint. To get the CA certificates in a different directory, add a ``tls_info`` property to the blueprint, set ``use_tls`` to ``false``, and set ``cert_directory`` to the directory + where the CA certs are needed. Whatever directory is used, the following files will be available: + + * ``trust.jks``: A Java truststore containing the AAF CA certificate. (Needed by clients that access TLS-protected servers.) + * ``trust.pass``: A text file with a single line that contains the password for the ``trust.jks`` keystore. + * ``cacert.pem``: The AAF CA certificate, in PEM form. (Needed by clients that access TLS-protected servers.) 
+ + k8splugin version 2.0.0 uses an init container to supply the CA certificates.
\ No newline at end of file
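As a quick illustration of the CA-only layout that the tls_enablement changes describe, here is a minimal shell sketch of what a client-only component's startup script might do. It is a sketch, not part of the patch: the ``/opt/app/certs`` directory mirrors the hypothetical ``cert_directory`` example from the text, and the ``curl`` target shown in the comment is an assumed component URL.

```shell
# Sketch: locate the AAF CA bundle the way a client-only component's
# entrypoint might, assuming the blueprint set cert_directory to
# /opt/app/certs (the hypothetical example value) and use_tls to false.
CERT_DIR="${CERT_DIR:-/opt/app/certs}"
CA_PEM="${CERT_DIR}/cacert.pem"

if [ -f "$CA_PEM" ]; then
    # Point an HTTPS client at the AAF CA certificate, e.g.:
    #   curl --cacert "$CA_PEM" https://dcae-ves-collector.onap:8443/
    echo "using AAF CA bundle at $CA_PEM"
else
    echo "CA bundle not found at $CA_PEM"
fi
```

The same path convention applies to the JKS variant: with k8splugin 2.0.0, the script could instead point a Java client at ``${CERT_DIR}/trust.jks`` with the password read from ``${CERT_DIR}/trust.pass``.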