ONAP is too big to be deployed using helm install, so we need to use a custom Helm plugin, helm deploy. This plugin deploys ONAP component by component instead of deploying everything at once. Unfortunately, it also modifies the Helm release name by appending the component name to it.
As a result of this behavior, our objects end up named, for example:
onap-mariadb-galera-mariadb-galera-0
instead of just:
onap-mariadb-galera-0
This patch simplifies the naming convention by replacing all direct usages of .Release.Name with the common.release macro, which strips the component-specific part from the release name.
Issue-ID: OOM-2275
Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com>
Change-Id: Ia8cead50d305adb00eef666d0a1ace74479b5183
|
OOM now has templates to create the needed PVCs, using:
* a PV with a specific class when using a common NFS mount path shared between nodes (same as what is used today) --> this is the default behavior today
* or a storage class if we want to use dynamic PVs.
In that case, we use (in order of priority, as sketched below):
- persistence.storageClassOverride if set on the chart
- global.persistence.storageClass if set globally
- persistence.storageClass if set on the chart
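A minimal sketch of that priority order as a named template (the helper name and exact value paths are illustrative; the real common chart may structure this differently):

    {{/* pick the storage class: chart override, then global, then chart default */}}
    {{- define "common.storageClass" -}}
    {{- if .Values.persistence.storageClassOverride -}}
    {{- .Values.persistence.storageClassOverride -}}
    {{- else if .Values.global.persistence.storageClass -}}
    {{- .Values.global.persistence.storageClass -}}
    {{- else -}}
    {{- .Values.persistence.storageClass -}}
    {{- end -}}
    {{- end -}}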
I've also used a "range" for the PV creation of redis so that only the needed number of PVs is created (see the sketch below).
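A sketch of that "range" loop (the replica count value name and the hostPath backend are assumptions made only for this example):

    {{- range $i := until (int .Values.replicaCount) }}
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: {{ include "common.fullname" $ }}-data-{{ $i }}
    spec:
      capacity:
        storage: {{ $.Values.persistence.size }}
      accessModes:
        - ReadWriteOnce
      storageClassName: "{{ include "common.fullname" $ }}-data"
      hostPath:
        path: {{ $.Values.persistence.mountPath }}/{{ $i }}
    {{- end }}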
Change-Id: I6bb326f8aaece11bcf503e9300e5c39a87214f81
Issue-ID: OOM-1227
Signed-off-by: Sylvain Desbureaux <sylvain.desbureaux@orange.com>
|
The Helm value override file now supports component-specific settings:
dcae-bootstrap:
  enabled: true
dcae-cloudify-manager:
  enabled: true
dcae-config-binding-service:
  enabled: true
dcae-healthcheck:
  enabled: true
dcae-redis:
  enabled: true
dcae-servicechange-handler:
  enabled: true
dcae-inventory-api:
  enabled: true
dcae-deployment-handler:
  enabled: true
dcae-policy-handler:
  enabled: true
dcae-dashboard:
  enabled: true
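As a usage sketch, such an override file could be passed to the helm deploy plugin mentioned earlier (release name, repository alias, and file name here are illustrative):

    helm deploy dev local/onap --namespace onap -f dcae-overrides.yaml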
Issue-ID: OOM-1574
Signed-off-by: Ubuntu <dgl@research.att.com>
Change-Id: I85e0fe6ae19e176d954611549ec954a5fe662307
|