m---------  pgaas                          0
-rw-r--r--  pgaas/LICENSE.txt             17
-rw-r--r--  pgaas/README.md               79
-rw-r--r--  pgaas/pgaas/__init__.py       10
-rw-r--r--  pgaas/pgaas/pgaas_plugin.py  237
-rw-r--r--  pgaas/pgaas_types.yaml        55
-rw-r--r--  pgaas/pom.xml                304
-rw-r--r--  pgaas/requirements.txt         0
-rw-r--r--  pgaas/setup.py                15
-rw-r--r--  pom.xml                        1
10 files changed, 718 insertions, 0 deletions
diff --git a/pgaas b/pgaas
deleted file mode 160000
-Subproject a17567e7e5c8f53f1ece0e613493371123f8817
diff --git a/pgaas/LICENSE.txt b/pgaas/LICENSE.txt
new file mode 100644
index 0000000..f90f8f1
--- /dev/null
+++ b/pgaas/LICENSE.txt
@@ -0,0 +1,17 @@
+============LICENSE_START=======================================================
+org.onap.ccsdk
+================================================================================
+Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
+================================================================================
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+============LICENSE_END=========================================================
diff --git a/pgaas/README.md b/pgaas/README.md
new file mode 100644
index 0000000..61f1b90
--- /dev/null
+++ b/pgaas/README.md
@@ -0,0 +1,79 @@
+# PGaaS Plugin
+Cloudify PGaaS plugin description and configuration
+# Description
+The PGaaS plugin allows users to deploy PostgreSQL application databases, and retrieve access credentials for such databases, as part of a Cloudify blueprint.
+# Plugin Requirements
+* Python versions
+ * 2.7.x
+* System dependencies
+ * psycopg2
+
+Note: These requirements apply to the VM where Cloudify Manager itself runs.
+
+Note: The psycopg2 requirement is met by running `yum install python-psycopg2` on the Cloudify Manager VM.
+
+Note: Cloudify Manager itself requires Python 2.7.x (and CentOS 7).
+
+# Types
+## dcae.nodes.pgaas.cluster
+**Derived From:** cloudify.nodes.Root
+
+**Properties:**
+
+* `writerfqdn` (required string) The FQDN used for read-write access to the
+cluster containing the postgres database instance. This is used to identify
+and access a particular database instance and to record information about
+that instance on Cloudify Manager.
+* `use_existing` (optional boolean default=false) This is used to reference
+a database instance, in one blueprint, that was deployed in a different one.
+If it is `true`, then the `readerfqdn` property must not be set and this node
+must not have any `dcae.relationships.pgaas_cluster_uses_sshkeypair`
+relationships. If it is `false`, then this node must have exactly one
+`dcae.relationships.pgaas_cluster_uses_sshkeypair` relationship.
+* `readerfqdn` (optional string default=value of `writerfqdn`) The FQDN used for read-only access to the cluster containing the postgres database instance, if different than the FQDN used for read-write access. This will be used by viewer roles.
+
+**Mapped Operations:**
+
+* `cloudify.interfaces.lifecycle.create` validates and records information about the cluster on the Cloudify Manager server in /opt/manager/resources/pgaas/`writerfqdn`.
+* `cloudify.interfaces.lifecycle.delete` deletes previously recorded information from the Cloudify Manager server.
+
+Note: When `use_existing` is `true`, the create operation validates but does not record, and delete does nothing. Delete also does nothing when validation has failed.
+
+**Attributes:**
+This type has no runtime attributes
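+
+A minimal blueprint sketch for deploying a new cluster reference is shown below. Node names and FQDNs are illustrative assumptions, not fixed values, and the keypair node type follows the `dcae.nodes.ssh.keypair` reference in the Relationships section (the exact type name depends on the keypair plugin in use):
+
+```yaml
+node_templates:
+  sshkey:
+    type: dcae.nodes.ssh.keypair          # assumed keypair type; provides 'public' and 'base64private'
+  pgaas_cluster:
+    type: dcae.nodes.pgaas.cluster
+    properties:
+      writerfqdn: pg-rw.example.com       # example read-write FQDN
+      readerfqdn: pg-ro.example.com       # optional; defaults to writerfqdn
+    relationships:
+      - type: dcae.relationships.pgaas_cluster_uses_sshkeypair
+        target: sshkey
+```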
+
+## dcae.nodes.pgaas.database
+**Derived From:** cloudify.nodes.Root
+
+**Properties:**
+* `name` (required string) The name of the application database, in postgres. This name is also used to create the names of the roles used to access the database, and the schema made available to users of the database.
+* `use_existing` (optional boolean default=false) This is used to reference an application database, in one blueprint, that was deployed in a different one. If `true`, and this node has a `dcae.relationships.database_runson_pgaas_cluster` relationship, the `dcae.nodes.pgaas.cluster` node that is the target of that relationship must also have its `use_existing` property set to `true`.
+* `writerfqdn` (optional string) This can be used as an alternative to specifying the cluster for the application database with a `dcae.relationships.database_runson_pgaas_cluster` relationship to a `dcae.nodes.pgaas.cluster` node; exactly one of the two options must be used (see the sketch below). The relationship method must be used if this blueprint deploys both the cluster and the application database on that cluster.
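+
+For example, a separate blueprint can reference an already-deployed application database via the `writerfqdn` shortcut instead of a relationship (a sketch only; the node name, database name, and FQDN are illustrative):
+
+```yaml
+node_templates:
+  existing_appdb:
+    type: dcae.nodes.pgaas.database
+    properties:
+      name: inventory                     # must match the name used when the database was created
+      use_existing: true
+      writerfqdn: pg-rw.example.com       # identifies the cluster; no runson_pgaas_cluster relationship needed
+```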
+
+**Mapped Operations:**
+
+* `cloudify.interfaces.lifecycle.create` creates the application database, and various roles for admin/user/viewer access to it.
+* `cloudify.interfaces.lifecycle.delete` deletes the application database and roles
+
+Note: When `use_existing` is true, create and delete do not create or delete the application database or associated roles. Create still sets runtime attributes (see below).
+
+**Attributes:**
+
+* `admin` a dict containing access information for administrative access to the application database.
+* `user` a dict containing access information for user access to the application database.
+* `viewer` a dict containing access information for read-only access to the application database.
+
+The keys in the access information dicts are as follows:
+
+* `database` the name of the application database.
+* `host` the appropriate FQDN for accessing the application database (`writerfqdn` or `readerfqdn`, depending on the type of access).
+* `user` the user role for accessing the database.
+* `password` the password corresponding to the user role.
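+
+The sketch below continues the cluster fragment above and shows how these attributes can be consumed; node names are illustrative, the consumer node is shown as `cloudify.nodes.Root` only for brevity, and `get_attribute` is the standard Cloudify intrinsic for reading runtime attributes:
+
+```yaml
+  appdb:
+    type: dcae.nodes.pgaas.database
+    properties:
+      name: inventory
+    relationships:
+      - type: dcae.relationships.database_runson_pgaas_cluster
+        target: pgaas_cluster
+  some_application:                       # hypothetical consumer node
+    type: cloudify.nodes.Root
+    relationships:
+      - type: dcae.relationships.application_uses_pgaas_database
+        target: appdb
+
+outputs:
+  inventory_db_admin:
+    description: admin access information (database/host/user/password)
+    value: { get_attribute: [ appdb, admin ] }
+```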
+
+# Relationships
+## dcae.relationships.pgaas_cluster_uses_sshkeypair
+**Description:** A relationship for binding a dcae.nodes.pgaas.cluster node to the dcae.nodes.ssh.keypair used by the cluster to initialize the database access password for the postgres role. The password for the postgres role is expected to be the hex representation of the MD5 hash of 'postgres' and the contents of the id_rsa (private key) file for the ssh keypair. A dcae.nodes.pgaas.cluster node must have such a relationship if and only if its `use_existing` property is `false`.
+## dcae.relationships.database_runson_pgaas_cluster
+**Description:** A relationship for binding a dcae.nodes.pgaas.database node to the dcae.nodes.pgaas.cluster node that contains the application database. A dcae.nodes.pgaas.database node must have either such a relationship or a writerfqdn property. The writerfqdn property cannot be used if the cluster is created in the same blueprint as the application database.
+## dcae.relationships.application_uses_pgaas_database
+**Description:** A relationship for binding a node that needs application database access information to the dcae.nodes.pgaas.database node for that application database.
diff --git a/pgaas/pgaas/__init__.py b/pgaas/pgaas/__init__.py
new file mode 100644
index 0000000..e3c966c
--- /dev/null
+++ b/pgaas/pgaas/__init__.py
@@ -0,0 +1,10 @@
+import logging
+
+def get_module_logger(mod_name):
+    logger = logging.getLogger(mod_name)
+    handler = logging.StreamHandler()
+    formatter = logging.Formatter('%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
+    handler.setFormatter(formatter)
+    logger.addHandler(handler)
+    logger.setLevel(logging.DEBUG)
+    return logger
diff --git a/pgaas/pgaas/pgaas_plugin.py b/pgaas/pgaas/pgaas_plugin.py
new file mode 100644
index 0000000..287b1be
--- /dev/null
+++ b/pgaas/pgaas/pgaas_plugin.py
@@ -0,0 +1,237 @@
+from cloudify import ctx
+from cloudify.decorators import operation
+from cloudify.exceptions import NonRecoverableError
+from cloudify.exceptions import RecoverableError
+
+import os
+import re
+import json
+import hashlib
+import socket
+import sys
+import traceback
+import base64
+
+# psycopg2 comes from the system python-psycopg2 package (see README), which lives
+# outside the plugin's virtualenv, so temporarily extend sys.path to import it
+opath = sys.path
+sys.path = list(opath)
+sys.path.append('/usr/lib64/python2.7/site-packages')
+import psycopg2
+sys.path = opath
+
+def waithp(host, port):
+    # raise a RecoverableError (retried by Cloudify) until the server accepts connections
+    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+    try:
+        sock.connect((host, port))
+    except:
+        a, b, c = sys.exc_info()
+        traceback.print_exception(a, b, c)
+        sock.close()
+        raise RecoverableError('Server at {0}:{1} is not ready'.format(host, port))
+    sock.close()
+
+def doconn(desc):
+    ret = psycopg2.connect(**desc)
+    ret.autocommit = True
+    return ret
+
+def rootdesc(data, dbname):
+    # connection description for the 'postgres' superuser role on the read-write host
+    return {
+        'database': dbname,
+        'host': data['rw'],
+        'user': 'postgres',
+        'password': getpass(data, 'postgres')
+    }
+
+def rootconn(data, dbname='postgres'):
+    return doconn(rootdesc(data, dbname))
+
+def onedesc(data, dbname, role, access):
+    # connection description for one access role (admin/user/viewer) of an application database
+    user = '{0}_{1}'.format(dbname, role)
+    return {
+        'database': dbname,
+        'host': data[access],
+        'user': user,
+        'password': getpass(data, user)
+    }
+
+def dbdescs(data, dbname):
+    return {
+        'admin': onedesc(data, dbname, 'admin', 'rw'),
+        'user': onedesc(data, dbname, 'user', 'rw'),
+        'viewer': onedesc(data, dbname, 'viewer', 'ro')
+    }
+
+def getpass(data, ident):
+    # passwords are derived, not stored: hex MD5 of the role name followed by
+    # the cluster's private key material (base64-decoded 'data')
+    m = hashlib.md5()
+    m.update(ident)
+    m.update(base64.b64decode(data['data']))
+    return m.hexdigest()
+
+def find_related_nodes(reltype, inst=None):
+    if inst is None:
+        inst = ctx.instance
+    ret = []
+    for rel in inst.relationships:
+        if reltype in rel.type_hierarchy:
+            ret.append(rel.target)
+    return ret
+
+def chkfqdn(fqdn):
+    return re.match('^[a-zA-Z0-9_-]+(\\.[a-zA-Z0-9_-]+)+$', fqdn) is not None
+
+def chkdbname(dbname):
+    # anchor the pattern so the entire name (not just a leading prefix) must match
+    return re.match('^[a-zA-Z][a-zA-Z0-9]{0,43}$', dbname) is not None and dbname != 'postgres'
+
+def getclusterinfo(wfqdn, reuse, rfqdn, related):
+    if not chkfqdn(wfqdn):
+        raise NonRecoverableError('Invalid FQDN specified for admin/read-write access')
+    if reuse:
+        if rfqdn != '':
+            raise NonRecoverableError('Read-only FQDN must not be specified when using an existing cluster')
+        if len(related) != 0:
+            raise NonRecoverableError('Cluster SSH keypair must not be specified when using an existing cluster')
+        try:
+            with open('/opt/manager/resources/pgaas/{0}'.format(wfqdn).lower(), 'r') as f:
+                data = json.load(f)
+            data['rw'] = wfqdn
+            return data
+        except:
+            raise NonRecoverableError('Cluster must be deployed when using an existing cluster')
+    if rfqdn == '':
+        rfqdn = wfqdn
+    elif not chkfqdn(rfqdn):
+        raise NonRecoverableError('Invalid FQDN specified for read-only access')
+    if len(related) != 1:
+        raise NonRecoverableError('Cluster SSH keypair must be specified using a dcae.relationships.pgaas_cluster_uses_sshkeypair relationship to a dcae.nodes.sshkeypair node')
+    data = {'ro': rfqdn, 'pubkey': related[0].instance.runtime_properties['public'], 'data': related[0].instance.runtime_properties['base64private']}
+    try:
+        os.makedirs('/opt/manager/resources/pgaas')
+    except:
+        pass
+    os.umask(077)
+    # record the cluster information (read-only FQDN, public key, private key material)
+    # under /opt/manager/resources/pgaas/<writerfqdn> on the Cloudify Manager host
+    with open('/opt/manager/resources/pgaas/{0}'.format(wfqdn).lower(), 'w') as f:
+        f.write(json.dumps(data))
+    data['rw'] = wfqdn
+    return data
+
+
+@operation
+def add_pgaas_cluster(**kwargs):
+    """
+    Record key generation data for cluster
+    """
+    data = getclusterinfo(ctx.node.properties['writerfqdn'], ctx.node.properties['use_existing'], ctx.node.properties['readerfqdn'], find_related_nodes('dcae.relationships.pgaas_cluster_uses_sshkeypair'))
+    ctx.instance.runtime_properties['public'] = data['pubkey']
+    ctx.instance.runtime_properties['base64private'] = data['data']
+
+
+@operation
+def rm_pgaas_cluster(**kwargs):
+    """
+    Remove key generation data for cluster
+    """
+    wfqdn = ctx.node.properties['writerfqdn']
+    if chkfqdn(wfqdn) and not ctx.node.properties['use_existing']:
+        # the file is written with a lowercased name in getclusterinfo(), so remove it the same way
+        os.remove('/opt/manager/resources/pgaas/{0}'.format(wfqdn).lower())
+
+def dbgetinfo(refctx):
+    wfqdn = refctx.node.properties['writerfqdn']
+    related = find_related_nodes('dcae.relationships.database_runson_pgaas_cluster', refctx.instance)
+    if wfqdn == '':
+        if len(related) != 1:
+            raise NonRecoverableError('Database Cluster must be specified using exactly one dcae.relationships.database_runson_pgaas_cluster relationship to a dcae.nodes.pgaas.cluster node when writerfqdn is not specified')
+        wfqdn = related[0].node.properties['writerfqdn']
+    if not chkfqdn(wfqdn):
+        raise NonRecoverableError('Invalid FQDN specified for admin/read-write access')
+    ret = getclusterinfo(wfqdn, True, '', [])
+    waithp(wfqdn, 5432)
+    return ret
+
+@operation
+def create_database(**kwargs):
+    """
+    Create a database on a cluster
+    """
+    dbname = ctx.node.properties['name']
+    if not chkdbname(dbname):
+        raise NonRecoverableError('Unacceptable or missing database name')
+    ctx.logger.warn('In create_database')
+    info = dbgetinfo(ctx)
+    ctx.logger.warn('Got db server info')
+    descs = dbdescs(info, dbname)
+    ctx.instance.runtime_properties['admin'] = descs['admin']
+    ctx.instance.runtime_properties['user'] = descs['user']
+    ctx.instance.runtime_properties['viewer'] = descs['viewer']
+    with rootconn(info) as conn:
+        crx = conn.cursor()
+        crx.execute('SELECT datname FROM pg_database WHERE datistemplate = false')
+        existingdbs = [x[0] for x in crx]
+        if ctx.node.properties['use_existing']:
+            if dbname not in existingdbs:
+                raise NonRecoverableError('use_existing specified but database does not exist')
+            return
+        crx.execute('SELECT rolname FROM pg_roles')
+        existingroles = [x[0] for x in crx]
+        # login roles are <dbname>_admin/_user/_viewer; the common_* group roles and the
+        # <dbname>_db_common schema carry the actual grants
+        admu = descs['admin']['user']
+        usru = descs['user']['user']
+        vwru = descs['viewer']['user']
+        cusr = '{0}_common_user_role'.format(dbname)
+        cvwr = '{0}_common_viewer_role'.format(dbname)
+        schm = '{0}_db_common'.format(dbname)
+        if admu not in existingroles:
+            crx.execute('CREATE USER {0} WITH PASSWORD %s'.format(admu), (descs['admin']['password'],))
+        if usru not in existingroles:
+            crx.execute('CREATE USER {0} WITH PASSWORD %s'.format(usru), (descs['user']['password'],))
+        if vwru not in existingroles:
+            crx.execute('CREATE USER {0} WITH PASSWORD %s'.format(vwru), (descs['viewer']['password'],))
+        if cusr not in existingroles:
+            crx.execute('CREATE ROLE {0}'.format(cusr))
+        if cvwr not in existingroles:
+            crx.execute('CREATE ROLE {0}'.format(cvwr))
+        if dbname not in existingdbs:
+            crx.execute('CREATE DATABASE {0} WITH OWNER {1}'.format(dbname, admu))
+        crx.close()
+    with rootconn(info, dbname) as dbconn:
+        crz = dbconn.cursor()
+        for r in [cusr, cvwr, usru, vwru]:
+            crz.execute('REVOKE ALL ON DATABASE {0} FROM {1}'.format(dbname, r))
+        crz.execute('GRANT {0} TO {1}'.format(cvwr, cusr))
+        crz.execute('GRANT {0} TO {1}'.format(cusr, admu))
+        crz.execute('GRANT CONNECT ON DATABASE {0} TO {1}'.format(dbname, cvwr))
+        crz.execute('CREATE SCHEMA IF NOT EXISTS {0} AUTHORIZATION {1}'.format(schm, admu))
+        for r in [admu, cusr, cvwr, usru, vwru]:
+            crz.execute('ALTER ROLE {0} IN DATABASE {1} SET search_path = public, {2}'.format(r, dbname, schm))
+        crz.execute('GRANT USAGE ON SCHEMA {0} to {1}'.format(schm, cvwr))
+        crz.execute('GRANT CREATE ON SCHEMA {0} to {1}'.format(schm, admu))
+        crz.execute('ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT SELECT ON TABLES TO {1}'.format(admu, cvwr))
+        crz.execute('ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT INSERT, UPDATE, DELETE, TRUNCATE ON TABLES TO {1}'.format(admu, cusr))
+        crz.execute('ALTER DEFAULT PRIVILEGES FOR ROLE {0} GRANT USAGE, SELECT, UPDATE ON SEQUENCES TO {1}'.format(admu, cusr))
+        crz.execute('GRANT TEMP ON DATABASE {0} TO {1}'.format(dbname, cusr))
+        crz.execute('GRANT {0} to {1}'.format(cusr, usru))
+        crz.execute('GRANT {0} to {1}'.format(cvwr, vwru))
+        crz.close()
+    ctx.logger.warn('All done')
+
+@operation
+def delete_database(**kwargs):
+    """
+    Delete a database from a cluster
+    """
+    dbname = ctx.node.properties['name']
+    if not chkdbname(dbname):
+        return
+    if ctx.node.properties['use_existing']:
+        return
+    info = dbgetinfo(ctx)
+    ctx.logger.warn('Got db server info')
+    with rootconn(info) as conn:
+        crx = conn.cursor()
+        admu = ctx.instance.runtime_properties['admin']['user']
+        usru = ctx.instance.runtime_properties['user']['user']
+        vwru = ctx.instance.runtime_properties['viewer']['user']
+        cusr = '{0}_common_user_role'.format(dbname)
+        cvwr = '{0}_common_viewer_role'.format(dbname)
+        crx.execute('DROP DATABASE IF EXISTS {0}'.format(dbname))
+        for r in [usru, vwru, admu, cusr, cvwr]:
+            crx.execute('DROP ROLE IF EXISTS {0}'.format(r))
+    ctx.logger.warn('All gone')
diff --git a/pgaas/pgaas_types.yaml b/pgaas/pgaas_types.yaml
new file mode 100644
index 0000000..2554df3
--- /dev/null
+++ b/pgaas/pgaas_types.yaml
@@ -0,0 +1,55 @@
+tosca_definitions_version: cloudify_dsl_1_3
+
+imports:
+  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
+
+plugins:
+  pgaas:
+    executor: central_deployment_agent
+    package_name: pgaas
+    package_version: 0.1.0
+
+node_types:
+  dcae.nodes.pgaas.cluster:
+    derived_from: cloudify.nodes.Root
+    properties:
+      writerfqdn:
+        description: 'FQDN used for admin/read-write access to the cluster'
+        type: string
+      use_existing:
+        type: boolean
+        default: false
+        description: 'If set to true, the cluster exists and is being referenced'
+      readerfqdn:
+        description: 'FQDN used for read-only access to the cluster (default - same as writerfqdn)'
+        type: string
+        default: ''
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create: pgaas.pgaas.pgaas_plugin.add_pgaas_cluster
+        delete: pgaas.pgaas.pgaas_plugin.rm_pgaas_cluster
+  dcae.nodes.pgaas.database:
+    derived_from: cloudify.nodes.Root
+    properties:
+      name:
+        type: string
+        description: 'Name of database (max 44 alphanumeric characters)'
+      use_existing:
+        type: boolean
+        default: false
+        description: 'If set to true, the database exists and is being referenced'
+      writerfqdn:
+        type: string
+        default: ''
+        description: 'FQDN of the cluster holding the database; an alternative to a database_runson_pgaas_cluster relationship when the cluster is deployed in another blueprint'
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create: pgaas.pgaas.pgaas_plugin.create_database
+        delete: pgaas.pgaas.pgaas_plugin.delete_database
+
+relationships:
+  dcae.relationships.pgaas_cluster_uses_sshkeypair:
+    derived_from: cloudify.relationships.connected_to
+  dcae.relationships.database_runson_pgaas_cluster:
+    derived_from: cloudify.relationships.contained_in
+  dcae.relationships.application_uses_pgaas_database:
+    derived_from: cloudify.relationships.connected_to
diff --git a/pgaas/pom.xml b/pgaas/pom.xml
new file mode 100644
index 0000000..8a934ac
--- /dev/null
+++ b/pgaas/pom.xml
@@ -0,0 +1,304 @@
+<?xml version="1.0"?>
+<!--
+============LICENSE_START=======================================================
+org.onap.ccsdk
+================================================================================
+Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
+================================================================================
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+============LICENSE_END=========================================================
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <parent>
+ <groupId>org.onap.ccsdk.platform</groupId>
+ <artifactId>plugins</artifactId>
+ <version>1.0.0-SNAPSHOT</version>
+ </parent>
+
+ <!--- CHANGE THE FOLLOWING 3 OBJECTS for your own repo -->
+ <groupId>org.onap.ccsdk.platform.plugins</groupId>
+ <artifactId>pgaas</artifactId>
+ <name>pgaas</name>
+
+ <version>1.0.0-SNAPSHOT</version>
+ <url>http://maven.apache.org</url>
+ <properties>
+ <!-- name from the setup.py file -->
+ <plugin.name>pgaas</plugin.name>
+ <!-- path to directory containing the setup.py relative to this file -->
+ <plugin.subdir>.</plugin.subdir>
+ <!-- path of types file itself relative to this file -->
+ <typefile.source>pgaas_types.yaml</typefile.source>
+ <!-- path, in repo, to store type file -->
+ <typefile.dest>type_files/pgaas/pgaas_types.yaml</typefile.dest>
+ <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+ <sonar.sources>.</sonar.sources>
+ <!-- customize the SONARQUBE URL -->
+ <sonar.host.url>http://localhost:9000</sonar.host.url>
+ <!-- below are language dependent -->
+ <!-- for Python -->
+ <sonar.language>py</sonar.language>
+ <sonar.pluginName>Python</sonar.pluginName>
+ <sonar.inclusions>**/*.py</sonar.inclusions>
+ <!-- for JavaScript -->
+ <!--
+ <sonar.language>js</sonar.language>
+ <sonar.pluginName>JS</sonar.pluginName>
+ <sonar.inclusions>**/*.js</sonar.inclusions>
+ -->
+ </properties>
+
+ <build>
+ <finalName>${project.artifactId}-${project.version}</finalName>
+ <pluginManagement>
+ <plugins>
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>sonar-maven-plugin</artifactId>
+ <version>2.7.1</version>
+ </plugin>
+
+ <!-- nexus-staging-maven-plugin is called during deploy phase by default behavior.
+ we do not need it -->
+ <plugin>
+ <groupId>org.sonatype.plugins</groupId>
+ <artifactId>nexus-staging-maven-plugin</artifactId>
+ <version>1.6.7</version>
+ <configuration>
+ <skipNexusStagingDeployMojo>true</skipNexusStagingDeployMojo>
+ </configuration>
+ </plugin>
+ </plugins>
+ </pluginManagement>
+
+ <plugins>
+
+ <!-- first disable the default Java plugins at various stages -->
+ <!-- maven-resources-plugin is called during "*resource" phases by default behavior. it prepares the resources
+ dir. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-resources-plugin</artifactId>
+ <version>2.6</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-compiler-plugin is called during "compile" phases by default behavior. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.1</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-jar-plugin is called during "compile" phase by default behavior. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-jar-plugin</artifactId>
+ <version>2.4</version>
+ <executions>
+ <execution>
+ <id>default-jar</id>
+ <phase/>
+ </execution>
+ </executions>
+ </plugin>
+
+ <!-- maven-install-plugin is called during "install" phase by default behavior. it tries to copy stuff under
+ target dir to ~/.m2. we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-install-plugin</artifactId>
+ <version>2.4</version>
+ <configuration>
+ <skip>true</skip>
+ </configuration>
+ </plugin>
+
+ <!-- maven-surefire-plugin is called during "test" phase by default behavior. it triggers junit test.
+ we do not need it -->
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <version>2.12.4</version>
+ <configuration>
+ <skipTests>true</skipTests>
+ </configuration>
+ </plugin>
+
+ <!-- now we configure custom action (calling a script) at various lifecycle phases -->
+ <plugin>
+ <groupId>org.codehaus.mojo</groupId>
+ <artifactId>exec-maven-plugin</artifactId>
+ <version>1.2.1</version>
+ <executions>
+ <execution>
+ <id>clean phase script</id>
+ <phase>clean</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>${session.executionRootDirectory}/mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>clean</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>generate-sources script</id>
+ <phase>generate-sources</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>generate-sources</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>compile script</id>
+ <phase>compile</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>compile</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>package script</id>
+ <phase>package</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>package</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>test script</id>
+ <phase>test</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>test</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>install script</id>
+ <phase>install</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>install</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ </environmentVariables>
+ </configuration>
+ </execution>
+
+ <execution>
+ <id>deploy script</id>
+ <phase>deploy</phase>
+ <goals><goal>exec</goal></goals>
+ <configuration>
+ <executable>${session.executionRootDirectory}/mvn-phase-script.sh</executable>
+ <arguments>
+ <argument>${project.artifactId}</argument>
+ <argument>deploy</argument>
+ </arguments>
+ <environmentVariables>
+ <!-- make mvn properties as env for our script -->
+ <MVN_PROJECT_GROUPID>${project.groupId}</MVN_PROJECT_GROUPID>
+ <MVN_PROJECT_ARTIFACTID>${project.artifactId}</MVN_PROJECT_ARTIFACTID>
+ <MVN_PROJECT_VERSION>${project.version}</MVN_PROJECT_VERSION>
+ <MVN_NEXUSPROXY>${onap.nexus.url}</MVN_NEXUSPROXY>
+ <MVN_SERVER_ID>${project.distributionManagement.snapshotRepository.id}</MVN_SERVER_ID>
+ <TYPE_FILE_SOURCE>${typefile.source}</TYPE_FILE_SOURCE>
+ <TYPE_FILE_DEST>${typefile.dest}</TYPE_FILE_DEST>
+ <PLUGIN_NAME>${plugin.name}</PLUGIN_NAME>
+ <PLUGIN_SUBDIR>${plugin.subdir}</PLUGIN_SUBDIR>
+ </environmentVariables>
+ </configuration>
+ </execution>
+ </executions>
+ </plugin>
+ </plugins>
+ </build>
+</project>
diff --git a/pgaas/requirements.txt b/pgaas/requirements.txt
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/pgaas/requirements.txt
diff --git a/pgaas/setup.py b/pgaas/setup.py
new file mode 100644
index 0000000..6e6dad1
--- /dev/null
+++ b/pgaas/setup.py
@@ -0,0 +1,15 @@
+from setuptools import setup, find_packages
+
+setup(
+    name="pgaas",
+    version="0.1.1",
+    packages=find_packages(),
+    author="AT&T",
+    description=("Cloudify plugin for pgaas/pgaas."),
+    license="",
+    keywords="",
+    url="https://nowhere.bogus.com",
+    zip_safe=False,
+    install_requires=[
+    ]
+)
diff --git a/pom.xml b/pom.xml
index d36297d..1acd221 100644
--- a/pom.xml
+++ b/pom.xml
@@ -33,6 +33,7 @@ limitations under the License.
<packaging>pom</packaging>
<modules>
<module>sshkeyshare</module>
+ <module>pgaas</module>
</modules>
</project>