Diffstat (limited to 'pgaas')
-rw-r--r--  pgaas/LICENSE.txt                 17
-rw-r--r--  pgaas/MANIFEST.in                  1
-rw-r--r--  pgaas/README.md                   79
-rw-r--r--  pgaas/pgaas/__init__.py           13
-rw-r--r--  pgaas/pgaas/logginginterface.py   53
-rw-r--r--  pgaas/pgaas/pgaas_plugin.py      779
-rw-r--r--  pgaas/pgaas_types.yaml            67
-rw-r--r--  pgaas/pom.xml                    327
-rw-r--r--  pgaas/requirements.txt             2
-rw-r--r--  pgaas/setup.py                    36
-rw-r--r--  pgaas/tests/psycopg2.py           70
-rw-r--r--  pgaas/tests/test_plugin.py       291
-rw-r--r--  pgaas/tox.ini                     54
13 files changed, 1789 insertions, 0 deletions
diff --git a/pgaas/LICENSE.txt b/pgaas/LICENSE.txt
new file mode 100644
index 0000000..df9e931
--- /dev/null
+++ b/pgaas/LICENSE.txt
@@ -0,0 +1,17 @@
+org.onap.dcaegen2
+============LICENSE_START=======================================================
+================================================================================
+Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+================================================================================
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+============LICENSE_END=========================================================
diff --git a/pgaas/MANIFEST.in b/pgaas/MANIFEST.in
new file mode 100644
index 0000000..eb3cd9c
--- /dev/null
+++ b/pgaas/MANIFEST.in
@@ -0,0 +1 @@
+exclude *~
diff --git a/pgaas/README.md b/pgaas/README.md
new file mode 100644
index 0000000..61f1b90
--- /dev/null
+++ b/pgaas/README.md
@@ -0,0 +1,79 @@
+# PGaaS Plugin
+Cloudify PGaaS plugin description and configuration
+# Description
+The PGaaS plugin allows users to deploy PostgreSQL application databases, and retrieve access credentials for such databases, as part of a Cloudify blueprint.
+# Plugin Requirements
+* Python versions
+ * 2.7.x
+* System dependencies
+ * psycopg2
+
+Note: These requirements apply to the VM where Cloudify Manager itself runs.
+
+Note: The psycopg2 requirement is met by running `yum install python-psycopg2` on the Cloudify Manager VM.
+
+Note: Cloudify Manager itself requires Python 2.7.x (and CentOS 7).
+
+# Types
+## dcae.nodes.pgaas.cluster
+**Derived From:** cloudify.nodes.Root
+
+**Properties:**
+
+* `writerfqdn` (required string) The FQDN used for read-write access to the
+cluster containing the postgres database instance. This is used to identify
+and access a particular database instance and to record information about
+that instance on Cloudify Manager.
+* `use_existing` (optional boolean default=false) This is used to reference
+a database instance, in one blueprint, that was deployed in a different one.
+If it is `true`, then the `readerfqdn` property must not be set and this node
+must not have any `dcae.relationships.pgaas_cluster_uses_sshkeypair`
+relationships. If it is `false`, then this node must have exactly one
+`dcae.relationships.pgaas_cluster_uses_sshkeypair` relationship.
+* `readerfqdn` (optional string default=value of `writerfqdn`) The FQDN used for read-only access to the cluster containing the postgres database instance, if different from the FQDN used for read-write access. This will be used by viewer roles.
+
+**Mapped Operations:**
+
+* `cloudify.interfaces.lifecycle.create` validates and records information about the cluster on the Cloudify Manager server in /opt/manager/resources/pgaas/`writerfqdn`.
+* `cloudify.interfaces.lifecycle.delete` deletes previously recorded information from the Cloudify Manager server.
+
+Note: When `use_existing` is `true`, the create operation validates but does not record, and delete does nothing. Delete also does nothing when validation has failed.
+
+**Attributes:**
+
+This type has no runtime attributes.
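+
+For example, a minimal blueprint fragment that deploys a new cluster might look like the following sketch, which mirrors the examples in the pgaas_plugin.py docstring (the input name is illustrative):
+
+```yaml
+# assumes the sshkeyshare and pgaas type files have been imported
+node_templates:
+  sharedsshkey_pgrs:
+    type: dcae.nodes.ssh.keypair
+  pgaas_cluster:
+    type: dcae.nodes.pgaas.cluster
+    properties:
+      writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+      readerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+    relationships:
+      - type: dcae.relationships.pgaas_cluster_uses_sshkeypair
+        target: sharedsshkey_pgrs
+```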
+
+## dcae.nodes.pgaas.database
+**Derived From:** cloudify.nodes.Root
+
+**Properties:**
+* `name` (required string) The name of the application database, in postgres. This name is also used to create the names of the roles used to access the database, and the schema made available to users of the database.
+* `use_existing` (optional boolean default=false) This is used to reference an application database, in one blueprint, that was deployed in a different one. If `true`, and this node has a `dcae.relationships.database_runson_pgaas_cluster` relationship, the `dcae.nodes.pgaas.cluster` node that is the target of that relationship must also have its `use_existing` property set to `true`.
+* `writerfqdn` (optional string) This can be used as an alternative to specifying the cluster, for the application database, with a `dcae.relationships.database_runson_pgaas_cluster` relationship to a `dcae.nodes.pgaas.cluster` node. Exactly one of the two options must be used. The relationship method must be used if this blueprint deploys both the cluster and the application database on the cluster.
+
+**Mapped Operations:**
+
+* `cloudify.interfaces.lifecycle.create` creates the application database and various roles for admin/user/viewer access to it.
+* `cloudify.interfaces.lifecycle.delete` deletes the application database and roles.
+
+Note: When `use_existing` is true, create and delete do not create or delete the application database or associated roles. Create still sets runtime attributes (see below).
+
+**Attributes:**
+
+* `admin` a dict containing access information for administrative access to the application database.
+* `user` a dict containing access information for user access to the application database.
+* `viewer` a dict containing access information for read-only access to the application database.
+
+The keys in the access information dicts are as follows:
+
+* `database` the name of the application database.
+* `host` the appropriate FQDN for accessing the application database (`writerfqdn` or `readerfqdn`, depending on the type of access).
+* `user` the user role for accessing the database.
+* `password` the password corresponding to the user role.
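+
+For example, a blueprint fragment that creates a database on an already-recorded cluster and exposes its admin credentials as outputs might look like this sketch, following the examples in the pgaas_plugin.py docstring (input, node, and output names are illustrative):
+
+```yaml
+node_templates:
+  pgaasdbtest:
+    type: dcae.nodes.pgaas.database
+    properties:
+      writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+      name: { get_input: database_name }
+outputs:
+  testdb_admin_host:
+    value: { get_attribute: [ pgaasdbtest, admin, host ] }
+  testdb_admin_user:
+    value: { get_attribute: [ pgaasdbtest, admin, user ] }
+  testdb_admin_password:
+    value: { get_attribute: [ pgaasdbtest, admin, password ] }
+```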
+
+# Relationships
+## dcae.relationships.pgaas_cluster_uses_sshkeypair
+**Description:** A relationship for binding a `dcae.nodes.pgaas.cluster` node to the `dcae.nodes.ssh.keypair` used by the cluster to initialize the database access password for the postgres role. The password for the postgres role is expected to be the hex representation of the MD5 hash of 'postgres' and the contents of the id_rsa (private key) file for the ssh keypair. A `dcae.nodes.pgaas.cluster` node must have such a relationship if and only if its `use_existing` property is false.
+## dcae.relationships.database_runson_pgaas_cluster
+**Description:** A relationship for binding a `dcae.nodes.pgaas.database` node to the `dcae.nodes.pgaas.cluster` node that contains the application database. A `dcae.nodes.pgaas.database` node must have either such a relationship or a `writerfqdn` property. The `writerfqdn` property cannot be used if the cluster is created in the same blueprint as the application database.
+## dcae.relationships.application_uses_pgaas_database
+**Description:** A relationship for binding a node that needs application database access information to the dcae.nodes.pgaas.database node for that application database.
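+
+For example, an application node might bind to the database node from the sketch above like this (the application type shown here is hypothetical):
+
+```yaml
+  my_application:
+    type: dcae.nodes.SomeApplication    # hypothetical application type
+    relationships:
+      - type: dcae.relationships.application_uses_pgaas_database
+        target: pgaasdbtest
+```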
diff --git a/pgaas/pgaas/__init__.py b/pgaas/pgaas/__init__.py
new file mode 100644
index 0000000..4f8c969
--- /dev/null
+++ b/pgaas/pgaas/__init__.py
@@ -0,0 +1,13 @@
+"""
+PostgreSQL plugin to manage passwords
+"""
+import logging
+
+def get_module_logger(mod_name):
+ """
+ Create a DEBUG-level logger, for the named module, that writes to stderr.
+ """
+ logger = logging.getLogger(mod_name)
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter('%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+ logger.setLevel(logging.DEBUG)
+ return logger
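+
+# Example usage (illustrative):
+# LOGGER = get_module_logger(__name__)
+# LOGGER.info("module loaded")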
diff --git a/pgaas/pgaas/logginginterface.py b/pgaas/pgaas/logginginterface.py
new file mode 100644
index 0000000..44ddce9
--- /dev/null
+++ b/pgaas/pgaas/logginginterface.py
@@ -0,0 +1,53 @@
+# org.onap.dcaegen2
+# ============LICENSE_START====================================================
+# =============================================================================
+# Copyright (c) 2018-2020 AT&T Intellectual Property. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+"""
+PostgreSQL plugin to manage passwords
+"""
+
+from cloudify import ctx
+
+# pragma pylint: disable=bad-indentation
+
+def debug(msg):
+ """
+ Print a debugging message.
+ This is a handy endpoint to add other extended debugging calls.
+ """
+ ctx.logger.debug(msg)
+
+def warn(msg):
+ """
+ Print a warning message.
+ This is a handy endpoint to add other extended warning calls.
+ """
+ ctx.logger.warn(msg)
+
+def error(msg):
+ """
+ Print an error message.
+ This is a handy endpoint to add other extended error calls.
+ """
+ ctx.logger.error(msg)
+
+def info(msg):
+ """
+ Print an info message.
+ This is a handy endpoint to add other extended info calls.
+ """
+ ctx.logger.info(msg)
diff --git a/pgaas/pgaas/pgaas_plugin.py b/pgaas/pgaas/pgaas_plugin.py
new file mode 100644
index 0000000..f437bd9
--- /dev/null
+++ b/pgaas/pgaas/pgaas_plugin.py
@@ -0,0 +1,779 @@
+# org.onap.dcaegen2
+# ============LICENSE_START====================================================
+# =============================================================================
+# Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+# Copyright (c) 2020 Pantheon.tech. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+"""
+PostgreSQL plugin to manage passwords
+"""
+
+from __future__ import print_function
+import sys
+import os
+import re
+import json
+import hashlib
+import socket
+import traceback
+import base64
+import binascii
+import collections
+try:
+ from urllib.parse import quote
+except ImportError:
+ from urllib import quote
+
+from cloudify import ctx
+from cloudify.decorators import operation
+from cloudify.exceptions import NonRecoverableError
+from cloudify.exceptions import RecoverableError
+
+try:
+ import psycopg2
+except ImportError:
+ # FIXME: any users of this plugin installing its dependencies in nonstandard
+ # directories should set up PYTHONPATH accordingly, outside the program code
+ SYSPATH = sys.path
+ sys.path = list(SYSPATH)
+ sys.path.append('/usr/lib64/python2.7/site-packages')
+ import psycopg2
+ sys.path = SYSPATH
+
+from pgaas.logginginterface import debug, info, warn, error
+
+
+"""
+ To set up a cluster:
+ - https://$NEXUS/repository/raw/type_files/sshkeyshare/sshkey_types.yaml
+ - https://$NEXUS/repository/raw/type_files/pgaas_types.yaml
+ sharedsshkey_pgrs:
+ type: dcae.nodes.ssh.keypair
+ pgaas_cluster:
+ type: dcae.nodes.pgaas.cluster
+ properties:
+ writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ readerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ # OR:
+ # writerfqdn: { concat: [ { get_input: location_prefix }, '-', { get_input: pgaas_cluster_name }, '-write.', { get_input: location_domain } ] }
+ # readerfqdn: { concat: [ { get_input: location_prefix }, '-', { get_input: pgaas_cluster_name }, '.', { get_input: location_domain } ] }
+ relationships:
+ - type: dcae.relationships.pgaas_cluster_uses_sshkeypair
+ target: sharedsshkey_pgrs
+
+ To reference an existing cluster:
+ - https://$NEXUS/repository/raw/type_files/pgaas_types.yaml
+ pgaas_cluster:
+ type: dcae.nodes.pgaas.cluster
+ properties:
+ writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ # OR: writerfqdn: { concat: [ { get_input: location_prefix }, '-',
+ # { get_input: pgaas_cluster_name }, '-write.',
+ # { get_input: location_domain } ] }
+ # OR: writerfqdn: { get_property: [ dns_pgrs_rw, fqdn ] }
+ use_existing: true
+
+ To initialize an existing server to be managed by pgaas_plugin:
+ - https://$NEXUS/repository/raw/type_files/sshkeyshare/sshkey_types.yaml
+ - https://$NEXUS/repository/raw/type_files/pgaas_types.yaml
+ pgaas_cluster:
+ type: dcae.nodes.pgaas.cluster
+ properties:
+ writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ readerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ # OR:
+ # writerfqdn: { concat: [ { get_input: location_prefix }, '-',
+ # { get_input: pgaas_cluster_name }, '-write.',
+ # { get_input: location_domain } ] }
+ # readerfqdn: { concat: [ { get_input: location_prefix }, '-',
+ # { get_input: pgaas_cluster_name }, '.',
+ # { get_input: location_domain } ] }
+ initialpassword: { get_input: currentpassword }
+ relationships:
+ - type: dcae.relationships.pgaas_cluster_uses_sshkeypair
+ target: sharedsshkey_pgrs
+
+ - { get_attribute: [ pgaas_cluster, public ] }
+ - { get_attribute: [ pgaas_cluster, base64private ] }
+ # - { get_attribute: [ pgaas_cluster, postgrespswd ] }
+
+
+ To set up a database:
+ - http://$NEXUS/raw/type_files/pgaas_types.yaml
+ pgaasdbtest:
+ type: dcae.nodes.pgaas.database
+ properties:
+ writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ # OR: writerfqdn: { concat: [ { get_input: location_prefix }, '-',
+ # { get_input: pgaas_cluster_name }, '-write.',
+ # { get_input: location_domain } ] }
+ # OR: writerfqdn: { get_property: [ dns_pgrs_rw, fqdn ] }
+ name: { get_input: database_name }
+
+ To reference an existing database:
+ - http://$NEXUS/raw/type_files/pgaas_types.yaml
+ $CLUSTER_$DBNAME:
+ type: dcae.nodes.pgaas.database
+ properties:
+ writerfqdn: { get_input: k8s_pgaas_instance_fqdn }
+ # OR: writerfqdn: { concat: [ { get_input: location_prefix }, '-',
+ # { get_input: pgaas_cluster_name }, '-write.',
+ # { get_input: location_domain } ] }
+ # OR: writerfqdn: { get_property: [ dns_pgrs_rw, fqdn ] }
+ name: { get_input: database_name }
+ use_existing: true
+
+ $CLUSTER_$DBNAME_admin_host:
+ description: Hostname for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, admin, host ] }
+ $CLUSTER_$DBNAME_admin_user:
+ description: Admin Username for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, admin, user ] }
+ $CLUSTER_$DBNAME_admin_password:
+ description: Admin Password for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, admin, password ] }
+ $CLUSTER_$DBNAME_user_host:
+ description: Hostname for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, user, host ] }
+ $CLUSTER_$DBNAME_user_user:
+ description: User Username for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, user, user ] }
+ $CLUSTER_$DBNAME_user_password:
+ description: User Password for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, user, password ] }
+ $CLUSTER_$DBNAME_viewer_host:
+ description: Hostname for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, viewer, host ] }
+ $CLUSTER_$DBNAME_viewer_user:
+ description: Viewer Username for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, viewer, user ] }
+ $CLUSTER_$DBNAME_viewer_password:
+ description: Viewer Password for $CLUSTER $DBNAME database
+ value: { get_attribute: [ $CLUSTER_$DBNAME, viewer, password ] }
+
+"""
+
+OPT_MANAGER_RESOURCES_PGAAS = "/opt/manager/resources/pgaas"
+
+# pylint: disable=invalid-name
+def setOptManagerResources(o):
+ """
+ Override the default location of /opt/manager/resources
+ """
+ # pylint: disable=global-statement
+ global OPT_MANAGER_RESOURCES_PGAAS
+ OPT_MANAGER_RESOURCES_PGAAS = "{}/pgaas".format(o)
+
+def safestr(s):
+ """
+ returns a safely printable version of the string
+ """
+ return quote(str(s), '')
+
+def raiseRecoverableError(msg):
+ """
+ Print a warning message and raise a RecoverableError exception.
+ This is a handy endpoint to add other extended debugging calls.
+ """
+ warn(msg)
+ raise RecoverableError(msg)
+
+def raiseNonRecoverableError(msg):
+ """
+ Print an error message and raise a NonRecoverableError exception.
+ This is a handy endpoint to add other extended debugging calls.
+ """
+ error(msg)
+ raise NonRecoverableError(msg)
+
+def dbexecute(crx, cmd, args=None):
+ """
+ Execute an SQL statement, logging the entire command for debugging purposes.
+ """
+ debug("executing {}".format(cmd))
+ crx.execute(cmd, args)
+
+
+def dbexecute_trunc_print(crx, cmd, args=None):
+ """
+ Execute an SQL statement, logging only the first 30 characters of the command.
+ Use this function when the SQL command contains a password.
+ """
+ debug("executing {}".format(cmd[:30]))
+ crx.execute(cmd, args)
+
+
+def waithp(host, port):
+ """
+ do a test connection to a host and port
+ """
+ debug("waithp({0},{1})".format(safestr(host), safestr(port)))
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ try:
+ sock.connect((host, int(port)))
+ except: # pylint: disable=bare-except
+ a, b, c = sys.exc_info()
+ traceback.print_exception(a, b, c)
+ sock.close()
+ raiseRecoverableError('Server at {0}:{1} is not ready'.format(safestr(host), safestr(port)))
+ sock.close()
+
+def doconn(desc):
+ """
+ open an SQL connection to the PG server
+ """
+ debug("doconn({},{},{})".format(desc['host'], desc['user'], desc['database']))
+ # debug("doconn({},{},{},{})".format(desc['host'], desc['user'], desc['database'], desc['password']))
+ ret = psycopg2.connect(**desc)
+ ret.autocommit = True
+ return ret
+
+def hostportion(hostport):
+ """
+ return the host portion of a fqdn:port or IPv4:port or [IPv6]:port
+ """
+ ipv4re = re.match(r"^([^:]+)(:(\d+))?", hostport)
+ ipv6re = re.match(r"^[[]([^]]+)[]](:(\d+))?", hostport)
+ # check the bracketed IPv6 form first so '[' is not mistaken for part of a hostname
+ if ipv6re:
+ return ipv6re.group(1)
+ if ipv4re:
+ return ipv4re.group(1)
+ raiseNonRecoverableError("invalid hostport: {}".format(hostport))
+
+def portportion(hostport):
+ """
+ Return the port portion of a fqdn:port or IPv4:port or [IPv6]:port.
+ If port is not present, return 5432.
+ """
+ ipv6re = re.match(r"^[[]([^]]+)[]](:(\d+))?", hostport)
+ ipv4re = re.match(r"^([^:]+)(:(\d+))?", hostport)
+ # check the bracketed IPv6 form first so the port is parsed after the brackets
+ if ipv6re:
+ return ipv6re.group(3) if ipv6re.group(3) else '5432'
+ if ipv4re:
+ return ipv4re.group(3) if ipv4re.group(3) else '5432'
+ raiseNonRecoverableError("invalid hostport: {}".format(hostport))
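+
+# Illustrative behavior of the two helpers above:
+# hostportion('db.example.org:5433') -> 'db.example.org'
+# portportion('db.example.org') -> '5432'
+# hostportion('[2001:db8::1]:5433') -> '2001:db8::1'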
+
+def rootdesc(data, dbname, initialpassword=None):
+ """
+ return the postgres connection information
+ """
+ debug("rootdesc(..data..,{0})".format(safestr(dbname)))
+ # pylint: disable=bad-continuation
+ return {
+ 'database': dbname,
+ 'host': hostportion(data['rw']),
+ 'port': portportion(data['rw']),
+ 'user': 'postgres',
+ 'password': initialpassword if initialpassword else getpass(data, 'postgres', data['rw'], 'postgres')
+ }
+
+def rootconn(data, dbname='postgres', initialpassword=None):
+ """
+ connect to a given server as postgres,
+ connecting to the specified database
+ """
+ debug("rootconn(..data..,{0})".format(safestr(dbname)))
+ return doconn(rootdesc(data, dbname, initialpassword))
+
+def onedesc(data, dbname, role, access):
+ """
+ return the connection information for a given user and dbname on a cluster
+ """
+ user = '{0}_{1}'.format(dbname, role)
+ # pylint: disable=bad-continuation
+ return {
+ 'database': dbname,
+ 'host': hostportion(data[access]),
+ 'port': portportion(data[access]),
+ 'user': user,
+ 'password': getpass(data, user, data['rw'], dbname)
+ }
+
+def dbdescs(data, dbname):
+ """
+ return the entire set of information for a specific server/database
+ """
+ # pylint: disable=bad-continuation
+ return {
+ 'admin': onedesc(data, dbname, 'admin', 'rw'),
+ 'user': onedesc(data, dbname, 'user', 'rw'),
+ 'viewer': onedesc(data, dbname, 'viewer', 'ro')
+ }
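+
+# Illustrative shape of the value returned by dbdescs(data, 'mydb'):
+# {'admin': {'database': 'mydb', 'host': ..., 'port': ..., 'user': 'mydb_admin', 'password': ...},
+# 'user': {... 'user': 'mydb_user' ...},
+# 'viewer': {... 'user': 'mydb_viewer' ...}}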
+
+def getpass(data, ident, hostport, dbname):
+ """
+ generate the password for a given user on a specific server
+ """
+ m = hashlib.sha256()
+ m.update(ident.encode())
+
+ # mix in the seed (the last line) for that database, if one exists
+ hostport = hostport.lower()
+ dbname = dbname.lower()
+ hostPortDbname = '{0}/{1}:{2}'.format(OPT_MANAGER_RESOURCES_PGAAS, hostport, dbname)
+ try:
+ lastLine = ''
+ with open(hostPortDbname, "r") as fp:
+ for line in fp:
+ lastLine = line
+ m.update(lastLine.encode())
+ except IOError:
+ pass
+
+ m.update(base64.b64decode(data['data']))
+ return m.hexdigest()
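+
+# Illustrative sketch of the derivation above: the password for user 'mydb_admin'
+# on cluster 'pg.example.org' (names here are hypothetical) is the hex digest of
+# sha256(b'mydb_admin' + <last seed line for pg.example.org:mydb, if any>
+# + base64-decoded cluster keypair data).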
+
+def find_related_nodes(reltype, inst=None):
+ """
+ extract the related_nodes information from the context
+ for a specific relationship
+ """
+ if inst is None:
+ inst = ctx.instance
+ ret = []
+ for rel in inst.relationships:
+ if reltype in rel.type_hierarchy:
+ ret.append(rel.target)
+ return ret
+
+def chkfqdn(fqdn):
+ """
+ verify that a FQDN is valid
+ """
+ if fqdn is None:
+ return False
+ hp = hostportion(fqdn)
+ # not needed right now: pp = portportion(fqdn)
+ # TODO need to augment this for IPv6 addresses
+ return re.match('^[a-zA-Z0-9_-]+(\\.[a-zA-Z0-9_-]+)+$', hp) is not None
+
+def chkdbname(dbname):
+ """
+ verify that a database name is valid
+ """
+ ret = re.match('^[a-zA-Z][a-zA-Z0-9]{0,43}$', dbname) is not None and dbname != 'postgres'
+ if not ret:
+ warn("Invalid dbname: {0}".format(safestr(dbname)))
+ return ret
+
+def get_valid_domains():
+ """
+ Return a list of the valid names, suitable for inclusion in an error message.
+ """
+ msg = ''
+ import glob
+ validDomains = []
+ for f in glob.glob('{}/*'.format(OPT_MANAGER_RESOURCES_PGAAS)):
+ try:
+ with open(f, "r") as fp:
+ try:
+ tmpdata = json.load(fp)
+ if 'pubkey' in tmpdata:
+ validDomains.append(os.path.basename(f))
+ except: # pylint: disable=bare-except
+ pass
+ except: # pylint: disable=bare-except
+ pass
+ if len(validDomains) == 0:
+ msg += '\nNo valid PostgreSQL cluster information was found'
+ else:
+ msg += '\nThese are the valid PostgreSQL cluster domains found on this manager:'
+ for v in validDomains:
+ msg += '\n\t"{}"'.format(v)
+ return msg
+
+def get_existing_clusterinfo(wfqdn, rfqdn, related):
+ """
+ Retrieve all of the information specific to an existing cluster.
+ """
+ if rfqdn != '':
+ raiseNonRecoverableError('Read-only FQDN must not be specified when using an existing cluster, fqdn={0}'.format(safestr(rfqdn)))
+ if len(related) != 0:
+ raiseNonRecoverableError('Cluster SSH keypair must not be specified when using an existing cluster')
+ try:
+ fn = '{0}/{1}'.format(OPT_MANAGER_RESOURCES_PGAAS, wfqdn.lower())
+ with open(fn, 'r') as f:
+ data = json.load(f)
+ data['rw'] = wfqdn
+ return data
+ except Exception as e: # pylint: disable=broad-except
+ warn("Error: {0}".format(e))
+ msg = 'Cluster must be deployed when using an existing cluster.\nCheck your domain name: fqdn={0}\nerr={1}'.format(safestr(wfqdn), e)
+ if not os.path.isdir(OPT_MANAGER_RESOURCES_PGAAS):
+ msg += '\nThe directory {} does not exist. No PostgreSQL clusters have been deployed on this manager.'.format(OPT_MANAGER_RESOURCES_PGAAS)
+ else:
+ msg += get_valid_domains()
+ # warn("Stack: {0}".format(traceback.format_exc()))
+ raiseNonRecoverableError(msg)
+
+def getclusterinfo(wfqdn, reuse, rfqdn, initialpassword, related):
+ """
+ Retrieve all of the information specific to a cluster.
+ if reuse, retrieve it
+ else create and store it
+ """
+ # debug("getclusterinfo({}, {}, {}, {}, ..related..)".format(safestr(wfqdn), safestr(reuse), safestr(rfqdn), safestr(initialpassword)))
+ debug("getclusterinfo({}, {}, {}, ..related..)".format(safestr(wfqdn), safestr(reuse), safestr(rfqdn)))
+ if not chkfqdn(wfqdn):
+ raiseNonRecoverableError('Invalid FQDN specified for admin/read-write access, fqdn={0}'.format(safestr(wfqdn)))
+ if reuse:
+ return get_existing_clusterinfo(wfqdn, rfqdn, related)
+
+ if rfqdn == '':
+ rfqdn = wfqdn
+ elif not chkfqdn(rfqdn):
+ raiseNonRecoverableError('Invalid FQDN specified for read-only access, fqdn={0}'.format(safestr(rfqdn)))
+ if len(related) != 1:
+ raiseNonRecoverableError('Cluster SSH keypair must be specified using a dcae.relationships.pgaas_cluster_uses_sshkeypair ' +
+ 'relationship to a dcae.nodes.ssh.keypair node')
+ data = {'ro': rfqdn, 'pubkey': related[0].instance.runtime_properties['public'],
+ 'data': related[0].instance.runtime_properties['base64private'], 'hash': 'sha256'}
+ os.umask(0o77)
+ try:
+ os.makedirs('{0}'.format(OPT_MANAGER_RESOURCES_PGAAS))
+ except: # pylint: disable=bare-except
+ pass
+ try:
+ with open('{0}/{1}'.format(OPT_MANAGER_RESOURCES_PGAAS, wfqdn.lower()), 'w') as f:
+ f.write(json.dumps(data))
+ except Exception as e: # pylint: disable=broad-except
+ warn("Error: {0}".format(e))
+ warn("Stack: {0}".format(traceback.format_exc()))
+ raiseNonRecoverableError('Cannot write cluster information to {0}: fqdn={1}, err={2}'.format(OPT_MANAGER_RESOURCES_PGAAS, safestr(wfqdn), e))
+ data['rw'] = wfqdn
+ if initialpassword:
+ with rootconn(data, initialpassword=initialpassword) as conn:
+ crr = conn.cursor()
+ dbexecute_trunc_print(crr, "ALTER USER postgres WITH PASSWORD %s", (getpass(data, 'postgres', wfqdn, 'postgres'),))
+ crr.close()
+ return data
+
+@operation
+def add_pgaas_cluster(**kwargs): # pylint: disable=unused-argument
+ """
+ dcae.nodes.pgaas.cluster:
+ Record key generation data for cluster
+ """
+ try:
+ warn("add_pgaas_cluster() invoked")
+ data = getclusterinfo(ctx.node.properties['writerfqdn'],
+ ctx.node.properties['use_existing'],
+ ctx.node.properties['readerfqdn'],
+ ctx.node.properties['initialpassword'],
+ find_related_nodes('dcae.relationships.pgaas_cluster_uses_sshkeypair'))
+ ctx.instance.runtime_properties['public'] = data['pubkey']
+ ctx.instance.runtime_properties['base64private'] = data['data']
+ ctx.instance.runtime_properties['postgrespswd'] = getpass(data, 'postgres', ctx.node.properties['writerfqdn'], 'postgres')
+ warn('All done')
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
+
+@operation
+def rm_pgaas_cluster(**kwargs): # pylint: disable=unused-argument
+ """
+ dcae.nodes.pgaas.cluster:
+ Remove key generation data for cluster
+ """
+ try:
+ warn("rm_pgaas_cluster()")
+ wfqdn = ctx.node.properties['writerfqdn']
+ if chkfqdn(wfqdn) and not ctx.node.properties['use_existing']:
+ os.remove('{0}/{1}'.format(OPT_MANAGER_RESOURCES_PGAAS, wfqdn.lower()))
+ warn('All done')
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
+
+def dbgetinfo(refctx):
+ """
+ Get the data associated with a database.
+ Make sure the connection exists.
+ """
+ wfqdn = refctx.node.properties['writerfqdn']
+ related = find_related_nodes('dcae.relationships.database_runson_pgaas_cluster', refctx.instance)
+ if wfqdn == '':
+ if len(related) != 1:
+ raiseNonRecoverableError('Database Cluster must be specified using exactly one dcae.relationships.database_runson_pgaas_cluster relationship ' +
+ 'to a dcae.nodes.pgaas.cluster node when writerfqdn is not specified')
+ wfqdn = related[0].node.properties['writerfqdn']
+ return dbgetinfo_for_update(wfqdn)
+
+def dbgetinfo_for_update(wfqdn):
+ """
+ Get the data associated with a database.
+ Make sure the connection exists.
+ """
+
+ if not chkfqdn(wfqdn):
+ raiseNonRecoverableError('Invalid FQDN specified for admin/read-write access, fqdn={0}'.format(safestr(wfqdn)))
+ ret = getclusterinfo(wfqdn, True, '', '', [])
+ waithp(hostportion(wfqdn), portportion(wfqdn))
+ return ret
+
+@operation
+def create_database(**kwargs):
+ """
+ dcae.nodes.pgaas.database:
+ Create a database on a cluster
+ """
+ try:
+ debug("create_database() invoked")
+ dbname = ctx.node.properties['name']
+ warn("create_database({0})".format(safestr(dbname)))
+ if not chkdbname(dbname):
+ raiseNonRecoverableError('Unacceptable or missing database name: {0}'.format(safestr(dbname)))
+ debug('create_database(): dbname checked out')
+ dbinfo = dbgetinfo(ctx)
+ debug('Got db server info')
+ descs = dbdescs(dbinfo, dbname)
+ ctx.instance.runtime_properties['admin'] = descs['admin']
+ ctx.instance.runtime_properties['user'] = descs['user']
+ ctx.instance.runtime_properties['viewer'] = descs['viewer']
+ with rootconn(dbinfo) as conn:
+ crx = conn.cursor()
+ dbexecute(crx, 'SELECT datname FROM pg_database WHERE datistemplate = false')
+ existingdbs = [x[0] for x in crx]
+ if ctx.node.properties['use_existing']:
+ if dbname not in existingdbs:
+ raiseNonRecoverableError('use_existing specified but database does not exist, dbname={0}'.format(safestr(dbname)))
+ return
+ dbexecute(crx, 'SELECT rolname FROM pg_roles')
+ existingroles = [x[0] for x in crx]
+ admu = descs['admin']['user']
+ usru = descs['user']['user']
+ vwru = descs['viewer']['user']
+ cusr = '{0}_common_user_role'.format(dbname)
+ cvwr = '{0}_common_viewer_role'.format(dbname)
+ schm = '{0}_db_common'.format(dbname)
+ if admu not in existingroles:
+ dbexecute_trunc_print(crx, 'CREATE USER {0} WITH PASSWORD %s'.format(admu), (descs['admin']['password'],))
+ if usru not in existingroles:
+ dbexecute_trunc_print(crx, 'CREATE USER {0} WITH PASSWORD %s'.format(usru), (descs['user']['password'],))
+ if vwru not in existingroles:
+ dbexecute_trunc_print(crx, 'CREATE USER {0} WITH PASSWORD %s'.format(vwru), (descs['viewer']['password'],))
+ if cusr not in existingroles:
+ dbexecute(crx, 'CREATE ROLE {0}'.format(cusr))
+ if cvwr not in existingroles:
+ dbexecute(crx, 'CREATE ROLE {0}'.format(cvwr))
+ if dbname not in existingdbs:
+ dbexecute(crx, 'CREATE DATABASE {0} WITH OWNER {1}'.format(dbname, admu))
+ crx.close()
+ with rootconn(dbinfo, dbname) as dbconn:
+ crz = dbconn.cursor()
+ for r in [cusr, cvwr, usru, vwru]:
+ dbexecute(crz, 'REVOKE ALL ON DATABASE {0} FROM {1}'.format(dbname, r))
+ dbexecute(crz, 'GRANT {0} TO {1}'.format(cvwr, cusr))
+ dbexecute(crz, 'GRANT {0} TO {1}'.format(cusr, admu))
+ dbexecute(crz, 'GRANT CONNECT ON DATABASE {0} TO {1}'.format(dbname, cvwr))
+ dbexecute(crz, 'CREATE SCHEMA IF NOT EXISTS {0} AUTHORIZATION {1}'.format(schm, admu))
+ for r in [admu, cusr, cvwr, usru, vwru]:
+ dbexecute(crz, 'ALTER ROLE {0} IN DATABASE {1} SET search_path = public, {2}'.format(r, dbname, schm))
+ dbexecute(crz, 'GRANT USAGE ON SCHEMA {0} to {1}'.format(schm, cvwr))
+ dbexecute(crz, 'GRANT CREATE ON SCHEMA {0} to {1}'.format(schm, admu))
+ dbexecute(crz, 'ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT SELECT ON TABLES TO {1}'.format(admu, cvwr))
+ dbexecute(crz, 'ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT INSERT, UPDATE, DELETE, TRUNCATE ON TABLES TO {1}'.format(admu, cusr))
+ dbexecute(crz, 'ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO {1}'.format(admu, cusr))
+ dbexecute(crz, 'GRANT TEMP ON DATABASE {0} TO {1}'.format(dbname, cusr))
+ dbexecute(crz, 'GRANT {0} to {1}'.format(cusr, usru))
+ dbexecute(crz, 'GRANT {0} to {1}'.format(cvwr, vwru))
+ crz.close()
+ warn('All done')
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
+
+@operation
+def delete_database(**kwargs): # pylint: disable=unused-argument
+ """
+ dcae.nodes.pgaas.database:
+ Delete a database from a cluster
+ """
+ try:
+ debug("delete_database() invoked")
+ dbname = ctx.node.properties['name']
+ warn("delete_database({0})".format(safestr(dbname)))
+ if not chkdbname(dbname):
+ return
+ debug('delete_database(): dbname checked out')
+ if ctx.node.properties['use_existing']:
+ return
+ debug('delete_database(): !use_existing')
+ dbinfo = dbgetinfo(ctx)
+ debug('Got db server info')
+ with rootconn(dbinfo) as conn:
+ crx = conn.cursor()
+ admu = ctx.instance.runtime_properties['admin']['user']
+ usru = ctx.instance.runtime_properties['user']['user']
+ vwru = ctx.instance.runtime_properties['viewer']['user']
+ cusr = '{0}_common_user_role'.format(dbname)
+ cvwr = '{0}_common_viewer_role'.format(dbname)
+ dbexecute(crx, 'DROP DATABASE IF EXISTS {0}'.format(dbname))
+ for r in [usru, vwru, admu, cusr, cvwr]:
+ dbexecute(crx, 'DROP ROLE IF EXISTS {0}'.format(r))
+ warn('All gone')
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
+
+#############################################################
+# function: update_database #
+# Purpose: Called as a workflow to change the database #
+# passwords for all the users #
+# #
+# Invoked via: #
+# cfy executions start -d <deployment-id> update_db_passwd #
+# #
+# Assumptions: #
+# 1) pgaas_types.yaml must define a work flow e.g. #
+# workflows: #
+# update_db_passwd : #
+# mapping : pgaas.pgaas.pgaas_plugin.update_database #
+# 2) DB Blueprint: node_template must have properties: #
+# writerfqdn & name (of DB) #
+#############################################################
+# pylint: disable=unused-argument
+@operation
+def update_database(refctx, **kwargs):
+ """
+ dcae.nodes.pgaas.database:
+ Update the password for a database from a cluster
+ refctx is auto injected into the function when called as a workflow
+ """
+ try:
+ debug("update_database() invoked")
+
+ ################################################
+ # Verify refctx contains the <nodes> attribute. #
+ # The workflow context might not be consistent #
+ # across different cloudify versions #
+ ################################################
+ if not hasattr(refctx, 'nodes'):
+ raiseNonRecoverableError('workflow context does not contain attribute=<nodes>. dir(refctx)={}'.format(dir(refctx)))
+
+ ############################################
+ # Verify that refctx.nodes is iterable #
+ ############################################
+ if not isinstance(refctx.nodes, collections.Iterable):
+ raiseNonRecoverableError("refctx.nodes is not an iterable. Type={}".format(type(refctx.nodes)))
+
+ ctx_node = None
+ ##############################################
+ # Iterate through the nodes until we find #
+ # one with the properties we are looking for #
+ ##############################################
+ for i in refctx.nodes:
+
+ ############################################
+ # Safeguard: If a given node doesn't have #
+ # properties then skip it. #
+ # Don't cause an exception since the nodes #
+ # entry we are searching might still exist #
+ ############################################
+ if not hasattr(i, 'properties'):
+ warn('Encountered a ctx node that does not have attr=<properties>. dir={}'.format(dir(i)))
+ continue
+
+ debug("ctx node has the following Properties: {}".format(list(i.properties.keys())))
+
+ if ('name' in i.properties) and ('writerfqdn' in i.properties):
+ ctx_node = i
+ break
+
+
+ ###############################################
+ # If none of the nodes have properties: #
+ # <name> and <writerfqdn> then fatal error #
+ ###############################################
+ if not ctx_node:
+ raiseNonRecoverableError('No node with both <name> and <writerfqdn> properties was found in refctx.nodes.')
+
+ debug("name is {}".format(ctx_node.properties['name']))
+ debug("host is {}".format(ctx_node.properties['writerfqdn']))
+
+ dbname = ctx_node.properties['name']
+ debug("update_database({0})".format(safestr(dbname)))
+
+ ###########################
+ # dbname must be valid #
+ ###########################
+ if not chkdbname(dbname):
+ raiseNonRecoverableError('Invalid or missing database name: {0}'.format(safestr(dbname)))
+
+
+ hostport = ctx_node.properties['writerfqdn']
+ debug('update_database(): wfqdn={}'.format(hostport))
+ dbinfo = dbgetinfo_for_update(hostport)
+
+ #debug('Got db server info={}'.format(dbinfo))
+
+ hostPortDbname = '{0}/{1}:{2}'.format(OPT_MANAGER_RESOURCES_PGAAS, hostport.lower(), dbname.lower())
+
+ debug('update_database(): hostPortDbname={}'.format(hostPortDbname))
+ try:
+ appended = False
+ with open(hostPortDbname, "a") as fp:
+ with open("/dev/urandom", "rb") as rp:
+ b = rp.read(16)
+ print(binascii.hexlify(b).decode('utf-8'), file=fp)
+ appended = True
+ if not appended:
+ ctx.logger.warn("Error: the password for {} {} was not successfully changed".format(hostport, dbname))
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
+
+ descs = dbdescs(dbinfo, dbname)
+
+ ##########################################
+ # Verify we have expected keys #
+ # <admin>, <user>, and <viewer> as well #
+ # as "sub-key" <user> #
+ ##########################################
+
+ if not isinstance(descs, dict):
+ raiseNonRecoverableError('db descs has unexpected type=<{}> was expected type dict'.format(type(descs)))
+
+ for key in ("admin", "user", "viewer"):
+ if key not in descs:
+ raiseNonRecoverableError('db descs does not contain key=<{}>. Keys found for descs are: {}'.format(key, list(descs.keys())))
+ if 'user' not in descs[key]:
+ raiseNonRecoverableError('db descs[{}] does not contain key=<user>. Keys found for descs[{}] are: {}'.format(key, key, list(descs[key].keys())))
+
+
+ with rootconn(dbinfo) as conn:
+ crx = conn.cursor()
+
+ admu = descs['admin']['user']
+ usru = descs['user']['user']
+ vwru = descs['viewer']['user']
+
+ for r in [usru, vwru, admu]:
+ dbexecute_trunc_print(crx, "ALTER USER {} WITH PASSWORD '{}'".format(r, getpass(dbinfo, r, hostport, dbname)))
+ #debug("user={} password={}".format(r, getpass(dbinfo, r, hostport, dbname)))
+
+ warn('All users updated for database {}'.format(dbname))
+ except Exception as e: # pylint: disable=broad-except
+ ctx.logger.warn("Error: {0}".format(e))
+ ctx.logger.warn("Stack: {0}".format(traceback.format_exc()))
+ raise e
diff --git a/pgaas/pgaas_types.yaml b/pgaas/pgaas_types.yaml
new file mode 100644
index 0000000..951fbd5
--- /dev/null
+++ b/pgaas/pgaas_types.yaml
@@ -0,0 +1,67 @@
+# -*- indent-tabs-mode: nil -*- # vi: set expandtab:
+tosca_definitions_version: cloudify_dsl_1_3
+
+plugins:
+ pgaas:
+ executor: central_deployment_agent
+ package_name: pgaas
+ package_version: 1.2.0
+
+node_types:
+ dcae.nodes.pgaas.cluster:
+ derived_from: cloudify.nodes.Root
+ properties:
+ writerfqdn:
+ description: 'FQDN used for admin/read-write access to the cluster'
+ type: string
+ use_existing:
+ type: boolean
+ default: false
+ description: 'If set to true, the cluster exists and is being referenced'
+ readerfqdn:
+ description: 'FQDN used for read-only access to the cluster (default - same as writerfqdn)'
+ type: string
+ default: ''
+ port:
+ description: 'Port used for access to the cluster'
+ type: string
+ default: '5432'
+ initialpassword:
+ description: 'Password of existing PG instance to take control of'
+ type: string
+ default: ''
+ interfaces:
+ cloudify.interfaces.lifecycle:
+ create: pgaas.pgaas.pgaas_plugin.add_pgaas_cluster
+ delete: pgaas.pgaas.pgaas_plugin.rm_pgaas_cluster
+
+ dcae.nodes.pgaas.database:
+ derived_from: cloudify.nodes.Root
+ properties:
+ name:
+ type: string
+ description: 'Name of database (max 44 alphanumeric)'
+ use_existing:
+ type: boolean
+ default: false
+ description: 'If set to true, the database exists and is being referenced'
+ writerfqdn:
+ type: string
+ default: ''
+ description: 'Alternative to connecting this node to a pgaas.cluster node (with use_existing=true) via a database_runson_pgaas_cluster relationship'
+ interfaces:
+ cloudify.interfaces.lifecycle:
+ create: pgaas.pgaas.pgaas_plugin.create_database
+ delete: pgaas.pgaas.pgaas_plugin.delete_database
+
+relationships:
+ dcae.relationships.pgaas_cluster_uses_sshkeypair:
+ derived_from: cloudify.relationships.connected_to
+ dcae.relationships.database_runson_pgaas_cluster:
+ derived_from: cloudify.relationships.contained_in
+ dcae.relationships.application_uses_pgaas_database:
+ derived_from: cloudify.relationships.connected_to
+
+workflows:
+ update_db_passwd:
+ mapping: pgaas.pgaas.pgaas_plugin.update_database
diff --git a/pgaas/pom.xml b/pgaas/pom.xml
new file mode 100644
index 0000000..7e7e0ed
--- /dev/null
+++ b/pgaas/pom.xml
@@ -0,0 +1,327 @@
+<?xml version="1.0"?>
+<!--
+============LICENSE_START=======================================================
+================================================================================
+Copyright (c) 2017,2020 AT&T Intellectual Property. All rights reserved.
+================================================================================
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+============LICENSE_END=========================================================
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <parent>
+ <groupId>org.onap.dcaegen2.platform</groupId>
+ <artifactId>plugins</artifactId>
+ <version>1.2.0-SNAPSHOT</version>
+ </parent>
+
+ <!-- CHANGE THE FOLLOWING 3 OBJECTS for your own repo -->
+ <groupId>org.onap.dcaegen2.platform.plugins</groupId>
+ <artifactId>pgaas</artifactId>
+ <name>pgaas</name>
+
+ <version>1.3.0-SNAPSHOT</version>
+ <url>http://maven.apache.org</url>
+ <properties>
+ <!-- vvvvvvvvvvvvvvvv not in relationships -->
+ <!-- name from the setup.py file -->
+ <plugin.name>pgaas</plugin.name>
+ <!-- path to directory containing the setup.py relative to this file -->
+ <plugin.subdir>.</plugin.subdir>
+ <!-- path of types file itself relative to this file -->
+ <typefile.source>pgaas_types.yaml</typefile.source>
+ <!-- path, in repo, to store type file -->
+ <typefile.dest>type_files/pgaas/1.1.0/pgaas_types.yaml</typefile.dest>
+ <!-- ^^^^^^^^^^^^^^^^ -->
+ <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+ <sonar.sources>.</sonar.sources>
+ <sonar.junit.reportsPath>xunit-results.xml</sonar.junit.reportsPath>
+ <sonar.python.coverage.reportPaths>coverage.xml</sonar.python.coverage.reportPaths>
+ <sonar.language>py</sonar.language>
+ <sonar.pluginName>Python</sonar.pluginName>
+ <sonar.inclusions>**/*.py</sonar.inclusions>
+ <sonar.exclusions>tests/*,setup.py</sonar.exclusions>
+ </properties>
+
+ <build>
+ <finalName>${project.artifactId}-${project.version}</finalName>
+ <pluginManagement>
+ <plugins>
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>sonar-maven-plugin</artifactId>
+ <version>2.7.1</version>
+ </plugin>
+
+ <!-- nexus-staging-maven-plugin is called during deploy phase by default behavior.
+ we do not need it -->
+ <plugin>
+ <groupId>org.sonatype.plugins</groupId>
+ <artifactId>nexus-staging-maven-plugin</artifactId>
+ <version>1.6.7</version>
+ <configuration>
+ <skipNexusStagingDeployMojo>true</skipNexusStagingDeployMojo>
+ </configuration>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-deploy-plugin</artifactId>
+ <version>2.8</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+ </plugins>
+ </pluginManagement>
+
+ <plugins>
+
+ <!-- first disable the default Java plugins at various stages -->
+ <!-- maven-resources-plugin is called during "*resource" phases by default behavior. it prepares the resources
+ dir. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-resources-plugin</artifactId>
+ <version>2.6</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-compiler-plugin is called during "compile" phases by default behavior. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.1</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-jar-plugin is called during "compile" phase by default behavior. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-jar-plugin</artifactId>
+ <version>2.4</version>
+ <executions>
+ <execution>
+ <id>default-jar</id>
+ <phase/>
+ </execution>
+ </executions>
+ </plugin>
+
+ <!-- maven-install-plugin is called during "install" phase by default behavior. it tries to copy stuff under
+ target dir to ~/.m2. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-install-plugin</artifactId>
+ <version>2.4</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-surefire-plugin is called during "test" phase by default behavior. it triggers junit test.
+ we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <version>2.12.4</version>
+ <configuration>
+ <skipTests>true</skipTests>
+ </configuration>
+ </plugin>
+
+ <!-- now we configure custom action (calling a script) at various lifecycle phases -->
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>exec-maven-plugin</artifactId>
+ <version>1.2.1</version>
+ <executions>
+ <execution>
+ <id>clean phase script</id>
+ <phase>clean</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>${session.executionRootDirectory}/mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>clean</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>generate-sources script</id>
+ <phase>generate-sources</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>generate-sources</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>compile script</id>
+ <phase>compile</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>compile</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>package script</id>
+ <phase>package</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>package</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>test script</id>
+ <phase>test</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>test</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>install script</id>
+ <phase>install</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>install</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>deploy script</id>
+ <phase>deploy</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>${session.executionRootDirectory}/mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>deploy</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_RAWREPO_BASEURL_UPLOAD>${onap.nexus.rawrepo.baseurl.upload}</MVN_RAWREPO_BASEURL_UPLOAD>
+ <MVN_RAWREPO_BASEURL_DOWNLOAD>${onap.nexus.rawrepo.baseurl.download}</MVN_RAWREPO_BASEURL_DOWNLOAD>
+ <MVN_RAWREPO_SERVERID>${onap.nexus.rawrepo.serverid}</MVN_RAWREPO_SERVERID>
+ <MVN_SERVER_ID>${project.distributionManagement.snapshotRepository.id}</MVN_SERVER_ID>
+ <TYPE_FILE_SOURCE>${typefile.source}</TYPE_FILE_SOURCE>
+ <TYPE_FILE_DEST>${typefile.dest}</TYPE_FILE_DEST>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+ </executions>
+ </plugin>
+ </plugins>
+ </build>
+</project>
diff --git a/pgaas/requirements.txt b/pgaas/requirements.txt
new file mode 100644
index 0000000..83a931a
--- /dev/null
+++ b/pgaas/requirements.txt
@@ -0,0 +1,2 @@
+psycopg2-binary
+cloudify-common>=5.0.5
diff --git a/pgaas/setup.py b/pgaas/setup.py
new file mode 100644
index 0000000..8e6ace7
--- /dev/null
+++ b/pgaas/setup.py
@@ -0,0 +1,36 @@
+# org.onap.dcaegen2
+# ============LICENSE_START====================================================
+# =============================================================================
+# Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+# Copyright (c) 2020 Pantheon.tech. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+from setuptools import setup, find_packages
+
+setup(
+ name="pgaas",
+ version="1.3.0",
+ packages=find_packages(),
+ author="AT&T",
+ description=("Cloudify plugin for pgaas/pgaas."),
+ license="http://www.apache.org/licenses/LICENSE-2.0",
+ keywords="",
+ url="https://onap.org",
+ zip_safe=False,
+ install_requires=[
+ 'psycopg2-binary',
+ 'cloudify-common>=5.0.5',
+ ],
+)
diff --git a/pgaas/tests/psycopg2.py b/pgaas/tests/psycopg2.py
new file mode 100644
index 0000000..ba8aadd
--- /dev/null
+++ b/pgaas/tests/psycopg2.py
@@ -0,0 +1,70 @@
+# ============LICENSE_START====================================================
+# org.onap.dcaegen2
+# =============================================================================
+# Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+"""
+
+This is a mock psycopg2 module.
+
+"""
+
+class MockCursor(object):
+ """
+ mocked cursor
+ """
+ def __init__(self, **kwargs):
+ pass
+
+    def execute(self, cmd, args=None):
+        """
+        mock SQL execution; mirrors the shape of psycopg2's
+        cursor.execute(query, vars)
+        """
+        pass
+
+ def close(self):
+ """
+ mock SQL close
+ """
+ pass
+
+ def __iter__(self):
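+        # iterating a cursor yields result rows; the mock always yields none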
+ return iter([])
+
+class MockConn(object): # pylint: disable=too-few-public-methods
+ """
+ mock SQL connection
+ """
+ def __init__(self, **kwargs):
+ pass
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ pass
+
+ def cursor(self): # pylint: disable=no-self-use
+ """
+ mock return a cursor
+ """
+ return MockCursor()
+
+def connect(**kwargs): # pylint: disable=unused-argument
+ """
+ mock get-a-connection
+ """
+ return MockConn()
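+
+# A minimal usage sketch of the mock (illustrative only, not part of the
+# test suite):
+#
+#   conn = connect(host="db.example.com", user="admin", password="secret")
+#   with conn:
+#       cur = conn.cursor()
+#       cur.execute("SELECT 1")
+#       assert list(cur) == []  # the mock cursor always yields no rows
+#       cur.close()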
diff --git a/pgaas/tests/test_plugin.py b/pgaas/tests/test_plugin.py
new file mode 100644
index 0000000..70ce6e9
--- /dev/null
+++ b/pgaas/tests/test_plugin.py
@@ -0,0 +1,291 @@
+# ============LICENSE_START====================================================
+# org.onap.dcaegen2
+# =============================================================================
+# Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+"""
+unit tests for PostgreSQL password plugin
+"""
+
+from __future__ import print_function
+# pylint: disable=import-error,unused-import,wrong-import-order
+import pytest
+import socket
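+# NOTE: with the tests/ directory on sys.path, the import below resolves to
+# the local mock in tests/psycopg2.py rather than the real psycopg2.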
+import psycopg2
+import pgaas.pgaas_plugin
+from cloudify.mocks import MockCloudifyContext
+from cloudify.mocks import MockNodeContext
+from cloudify.mocks import MockNodeInstanceContext
+from cloudify.mocks import MockRelationshipSubjectContext
+from cloudify.state import current_ctx
+from cloudify.exceptions import NonRecoverableError
+from cloudify import ctx
+
+import sys
+import os
+sys.path.append(os.path.realpath(os.path.dirname(__file__)))
+import traceback
+
+TMPNAME = "/tmp/pgaas_plugin_tests_{}".format(
+    os.environ.get("USER") or os.environ.get("LOGNAME") or str(os.getuid()))
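+# setOptManagerResources() (called from set_mock_context below) redirects the
+# plugin's persisted state to TMPNAME, in place of the default /opt/manager
+# location implied by its name, so the tests can run unprivileged.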
+
+class MockKeyPair(object):
+ """
+ mock keypair for cloudify contexts
+ """
+ def __init__(self, type_hierarchy=None, target=None):
+ self._type_hierarchy = type_hierarchy
+ self._target = target
+
+ @property
+ def type_hierarchy(self):
+ """
+ return the type hierarchy
+ """
+ return self._type_hierarchy
+
+ @property
+ def target(self):
+ """
+ return the target
+ """
+ return self._target
+
+class MockInstance(object): # pylint: disable=too-few-public-methods
+ """
+ mock instance for cloudify contexts
+ """
+ def __init__(self, instance=None):
+ self._instance = instance
+
+ @property
+ def instance(self):
+ """
+ return the instance
+ """
+ return self._instance
+
+class MockRuntimeProperties(object): # pylint: disable=too-few-public-methods
+ """
+ mock runtime properties for cloudify contexts
+ """
+ def __init__(self, runtime_properties=None):
+ self._runtime_properties = runtime_properties
+
+ @property
+ def runtime_properties(self):
+ """
+ return the properties
+ """
+ return self._runtime_properties
+
+class MockSocket(object):
+ """
+ mock socket interface
+ """
+ def __init__(self):
+ pass
+ def connect(self, host=None, port=None):
+ """
+ mock socket connection
+ """
+ pass
+ def close(self):
+ """
+ mock socket close
+ """
+ pass
+
+
+def _connect(host, port): # pylint: disable=unused-argument
+    """
+    mock connection; when monkeypatched over socket.socket.connect,
+    "host" receives the socket instance and "port" the address tuple
+    """
+    return {}
+
+def set_mock_context(msg, monkeypatch, writerfqdn='test.bar.example.com'):
+ """
+ establish the mock context for our testing
+ """
+ print("================ %s ================" % msg)
+ # pylint: disable=bad-continuation
+ props = {
+ 'writerfqdn': writerfqdn,
+ 'use_existing': False,
+ 'readerfqdn': 'test-ro.bar.example.com',
+ 'name': 'testdb',
+ 'port': '5432',
+ 'initialpassword': 'test'
+ }
+
+ sshkeyprops = {
+ 'public': "testpub",
+ 'base64private': "testpriv"
+ }
+
+    mock_ctx = MockCloudifyContext(
+        node_id='test_node_id',
+        node_name='test_node_name',
+        properties=props,
+        relationships=[
+            MockKeyPair(
+                type_hierarchy=["dcae.relationships.pgaas_cluster_uses_sshkeypair"],
+                target=MockInstance(MockRuntimeProperties(sshkeyprops)))
+        ],
+        runtime_properties={
+            "admin": {"user": "admin_user"},
+            "user": {"user": "user_user"},
+            "viewer": {"user": "viewer_user"}
+        })
+ current_ctx.set(mock_ctx)
+ monkeypatch.setattr(socket.socket, 'connect', _connect)
+ # monkeypatch.setattr(psycopg2, 'connect', _connect)
+ pgaas.pgaas_plugin.setOptManagerResources(TMPNAME)
+ return mock_ctx
+
+
+@pytest.mark.dependency()
+def test_start(monkeypatch): # pylint: disable=unused-argument
+ """
+ put anything in here that needs to be done
+ PRIOR to the tests
+ """
+ pass
+
+@pytest.mark.dependency(depends=['test_start'])
+def test_add_pgaas_cluster(monkeypatch):
+ """
+ test add_pgaas_cluster()
+ """
+ try:
+ set_mock_context('test_add_pgaas_cluster', monkeypatch)
+ pgaas.pgaas_plugin.add_pgaas_cluster(args={})
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
+
+@pytest.mark.dependency(depends=['test_add_pgaas_cluster'])
+def test_add_database(monkeypatch):
+ """
+ test add_database()
+ """
+ try:
+ set_mock_context('test_add_database', monkeypatch)
+ pgaas.pgaas_plugin.create_database(args={})
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
+
+@pytest.mark.dependency(depends=['test_add_pgaas_cluster'])
+def test_bad_add_database(monkeypatch):
+ """
+ test bad_add_database()
+ """
+ try:
+        set_mock_context('test_bad_add_database', monkeypatch, writerfqdn="bad.bar.example.com")
+ with pytest.raises(NonRecoverableError):
+ pgaas.pgaas_plugin.create_database(args={})
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
+
+@pytest.mark.dependency(depends=['test_add_database'])
+def test_update_database(monkeypatch):
+ """
+ test update_database()
+ """
+ try:
+        # Subtle test implications regarding update_database:
+        # 1) update_database is a workflow, and the context passed to it
+        #    has a 'nodes' attribute that is not included in
+        #    MockCloudifyContext.
+        # 2) The 'nodes' attribute is a list of contexts, so we have to
+        #    create a sub-context.
+        # 3) update_database iterates through the node contexts looking
+        #    for the correct one.
+        # 4) To identify the correct sub-context, it first checks each
+        #    sub-context for the existence of a 'properties' attribute.
+        # 5) MockCloudifyContext saves properties internally as
+        #    _properties and exposes 'properties' only as a @property,
+        #    which that existence check does not recognize; this would
+        #    make update_database fail, so we explicitly create a
+        #    'properties' attribute on the sub-context.
+
+ ####################
+ # Main context #
+ ####################
+ myctx = set_mock_context('test_update_database', monkeypatch)
+        ############################################################
+        # Create a sub-context, give it an explicit 'properties'   #
+        # attribute, and attach it to the main context via 'nodes' #
+        ############################################################
+ mynode = set_mock_context('test_update_database_node', monkeypatch)
+ # pylint: disable=protected-access
+ mynode.properties = mynode._properties
+ myctx.nodes = [mynode]
+ pgaas.pgaas_plugin.update_database(refctx=myctx)
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
+
+@pytest.mark.dependency(depends=['test_update_database'])
+def test_delete_database(monkeypatch):
+ """
+ test delete_database()
+ """
+ try:
+ set_mock_context('test_delete_database', monkeypatch)
+ pgaas.pgaas_plugin.delete_database(args={})
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
+
+@pytest.mark.dependency(depends=['test_delete_database'])
+def test_rm_pgaas_cluster(monkeypatch):
+ """
+ test rm_pgaas_cluster()
+ """
+ try:
+ set_mock_context('test_rm_pgaas_cluster', monkeypatch)
+ pgaas.pgaas_plugin.rm_pgaas_cluster(args={})
+ except Exception as e:
+ print("Error: {0}".format(e))
+ print("Stack: {0}".format(traceback.format_exc()))
+ raise
+ finally:
+ current_ctx.clear()
diff --git a/pgaas/tox.ini b/pgaas/tox.ini
new file mode 100644
index 0000000..967f664
--- /dev/null
+++ b/pgaas/tox.ini
@@ -0,0 +1,54 @@
+# ============LICENSE_START====================================================
+# org.onap.dcaegen2
+# =============================================================================
+# Copyright (c) 2017-2020 AT&T Intellectual Property. All rights reserved.
+# Copyright (c) 2020 Pantheon.tech. All rights reserved.
+# =============================================================================
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============LICENSE_END======================================================
+
+[tox]
+envlist = py27,py36,py37,py38,cov
+skip_missing_interpreters = true
+
+[testenv]
+# coverage can only find modules if pythonpath is set
+setenv=
+ PYTHONPATH={toxinidir}
+ COVERAGE_FILE=.coverage.{envname}
+deps=
+ -rrequirements.txt
+ pytest
+ coverage
+ pytest-cov
+whitelist_externals=
+ /bin/mkdir
+commands=
+ mkdir -p logs
+ coverage erase
+ pytest --junitxml xunit-results.{envname}.xml --cov pgaas
+
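+# The cov env merges the per-interpreter .coverage.{envname} data files
+# produced by the envs above into a single combined report.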
+[testenv:cov]
+skip_install = true
+deps=
+ coverage
+setenv=
+ COVERAGE_FILE=.coverage
+commands=
+ coverage combine
+ coverage xml
+ coverage report
+ coverage html
+
+[pytest]
+junit_family = xunit2