Diffstat (limited to 'docs/sections/blueprints')
-rw-r--r--  docs/sections/blueprints/DockerHost.rst            |  23
-rw-r--r--  docs/sections/blueprints/PGaaS.rst                 | 166
-rw-r--r--  docs/sections/blueprints/cbs.rst                   |  23
-rw-r--r--  docs/sections/blueprints/cdap.rst                  | 130
-rw-r--r--  docs/sections/blueprints/cdapbroker.rst            |  23
-rw-r--r--  docs/sections/blueprints/centos_vm.rst             | 145
-rw-r--r--  docs/sections/blueprints/consul.rst                |  23
-rw-r--r--  docs/sections/blueprints/deploymenthandler.rst     |  23
-rw-r--r--  docs/sections/blueprints/holmes.rst                |  23
-rw-r--r--  docs/sections/blueprints/inventoryapi.rst          |  23
-rw-r--r--  docs/sections/blueprints/policyhandler.rst         |  23
-rw-r--r--  docs/sections/blueprints/servicechangehandler.rst  |  23
-rw-r--r--  docs/sections/blueprints/tca.rst                   |  23
-rw-r--r--  docs/sections/blueprints/ves.rst                   |  23
14 files changed, 0 insertions(+), 694 deletions(-)
diff --git a/docs/sections/blueprints/DockerHost.rst b/docs/sections/blueprints/DockerHost.rst
deleted file mode 100644
index 25a96904..00000000
--- a/docs/sections/blueprints/DockerHost.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-DCAE Docker Host
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/PGaaS.rst b/docs/sections/blueprints/PGaaS.rst
deleted file mode 100644
index eedcfe56..00000000
--- a/docs/sections/blueprints/PGaaS.rst
+++ /dev/null
@@ -1,166 +0,0 @@
-PostgreSQL as a Service
-=======================
-
-PostgreSQL as a Service (PGaaS) comes in three flavors: an all-in-one blueprint,
-a single-VM blueprint, and separate disk/cluster/database blueprints that split
-the management of the lifetimes of those constituent parts. All three are provided for use.
-
-Why Three Flavors?
-------------------
-
-The reason there are three flavors of blueprints lies in the differences in
-lifetime management of the constituent parts and in the number of VMs created.
-
-For example, a database usually needs to have persistent storage, which
-in these blueprints comes from Cinder storage volumes. The primitives
-used in these blueprints assume that the lifetime of the Cinder storage
-volumes matches the lifetime of the blueprint deployment. So when the
-blueprint goes away, any Cinder storage volume allocated in the
-blueprint also goes away.
-
-Similarly, a database's lifetime may be the same as an application's
-lifetime. When the application is undeployed, the associated database should
-be undeployed too. Alternatively, the database may need a lifetime beyond the scope
-of the applications that write to it or read from it.
-
-Blueprint Files
----------------
-
-The Blueprints for PG Services and Cinder
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The all-in-one blueprint ``pgaas.yaml`` assumes that the PG servers and Cinder volumes can be allocated and
-deallocated together. The ``pgaas.yaml`` blueprint creates a cluster of two VMs named "``pstg``" by default.
-
-The ``pgaas-onevm.yaml`` blueprint creates a single-VM instance named "``pgvm``" by default.
-
-Alternatively, you can split them apart into separate steps, using ``pgaas-disk.yaml`` to allocate the
-Cinder volume, and ``pgaas-cluster.yaml`` to allocate a PG cluster. Create the Cinder volume first using
-``pgaas-disk.yaml``, and then use ``pgaas-cluster.yaml`` to create the cluster. The PG cluster can be
-redeployed without affecting the data on the Cinder volumes.
-
-The Blueprints for Databases
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``pgaas-database.yaml`` blueprint shows how a database can be created separately from any application
-that uses it. That database will remain present until the ``pgaas-database.yaml`` blueprint is
-undeployed. The ``pgaas-getdbinfo.yaml`` file demonstrates how an application would access the credentials
-needed to access a given database on a given PostgreSQL cluster.
-
-If the lifetime of your database is tied to the lifetime of your application, use a block similar to what
-is in ``pgaas-database.yaml`` to allocate the database, and use the attributes as shown in ``pgaas-getdbinfo.yaml``
-to access the credentials.
-
-Both of these blueprints use the ``dcae.nodes.pgaas.database`` plugin reference, but ``pgaas-getdbinfo.yaml``
-adds the ``use_existing: true`` property.
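-
-As an illustration only, a database node of this kind might look like the sketch
-below. The node name, the input names, and the ``writerfqdn`` property are
-assumptions made for the sake of the example; the authoritative definitions are in
-the blueprint files themselves.
-
-::
-
-  sample_database:
-    type: dcae.nodes.pgaas.database
-    properties:
-      writerfqdn: { get_input: database_writerfqdn }
-      name: { get_input: database_name }
-      use_existing: false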
-
-
-What is Created by the Blueprints
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Each PostgreSQL cluster has a name, represented below as ``${CLUSTER}`` or ``CLNAME``. Each cluster is created
-with two VMs, one VM used for the writable master and the other as a cascaded read-only secondary.
-
-
-There are two DNS A records added, ``${LOCATIONPREFIX}${CLUSTER}00.${LOCATIONDOMAIN}`` and
-``${LOCATIONPREFIX}${CLUSTER}01.${LOCATIONDOMAIN}``. In addition,
-there are two CNAME entries added:
-``${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN}``
-and
-``${LOCATIONPREFIX}-${CLUSTER}.${LOCATIONDOMAIN}``. The CNAME
-``${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN}`` will be used by further
-blueprints to create and attach to databases.
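-
-For example, with a location prefix of ``jupiter``, a cluster named ``pstg``, and a
-location domain of ``dcae.example.com`` (values chosen purely for illustration), the
-DNS entries would be:
-
-::
-
-  jupiterpstg00.dcae.example.com         (A record)
-  jupiterpstg01.dcae.example.com         (A record)
-  jupiter-pstg-write.dcae.example.com    (CNAME, used by the database blueprints)
-  jupiter-pstg.dcae.example.com          (CNAME)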
-
-
-Parameters
-------------
-
-The blueprints are designed to run using the standard inputs file used for all of the blueprints,
-plus several additional parameters that are given reasonable defaults.
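-
-The additional parameters used in this document are ``pgaas_cluster_name``,
-``database_name``, and ``cinder_volume_size``. They can be supplied with extra
-``-i`` options, as shown below, or collected in a small YAML file of their own;
-for example (the values shown are illustrative only):
-
-::
-
-  pgaas_cluster_name: pstg
-  database_name: sample
-  cinder_volume_size: 1000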
-
-How to Run
-------------
-
-
-
-Installing PostgreSQL as a Service
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Installing the all-in-one blueprint is straightforward:
-
-::
-
- cfy install -p pgaas.yaml -i inputs.yaml
-
-By default, the all-in-one blueprint creates a cluster by the name ``pstg``.
-
-You can override that name using another ``-i`` option.
-(When overriding the defaults, it is also best to explicitly
-set the ``-b`` and ``-d`` names.)
-
-::
-
- cfy install -p pgaas.yaml -b pgaas-CLNAME -d pgaas-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
-
-
-Separating out the disk allocation from the service creation requires using two blueprints:
-
-::
-
- cfy install -p pgaas-disk.yaml -i inputs.yaml
- cfy install -p pgaas-cluster.yaml -i inputs.yaml
-
-By default, these blueprints create a cluster named ``pgcl``, which can be overridden the same
-way as shown above:
-
-::
-
- cfy install -p pgaas-disk.yaml -b pgaas-disk-CLNAME -d pgaas-disk-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
- cfy install -p pgaas-cluster.yaml -b pgaas-disk-CLNAME -d pgaas-disk-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
-
-
-You must use the same ``pgaas_cluster_name`` for the two blueprints to work together.
-
-For the disk, you can also specify a ``cinder_volume_size``, as in ``-i cinder_volume_size=1000``
-for a 1 TiB volume. (There is no need to override the ``-b`` and ``-d`` names when changing the
-volume size.)
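-
-For example, to allocate a 1 TiB volume for the default ``pgcl`` cluster:
-
-::
-
- cfy install -p pgaas-disk.yaml -i inputs.yaml -i cinder_volume_size=1000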
-
-
-You can verify that the cluster is up and running by connecting to the PostgreSQL service
-on port 5432. To verify that all of the DNS names were created properly and that PostgreSQL is
-answering on port 5432, you can use something like this:
-
-::
-
- sleep 1 | nc -v ${LOCATIONPREFIX}${CLUSTER}00.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}${CLUSTER}01.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}-${CLUSTER}.${LOCATIONDOMAIN} 5432
-
-
-Once you have the cluster created, you can then allocate databases. An application that
-wants a persistent database not tied to the lifetime of the application blueprint can
-use the ``pgaas-database.yaml`` blueprint to create the database:
-
-::
-
- cfy install -p pgaas-database.yaml -i inputs.yaml
-
-By default, the ``pgaas-database.yaml`` blueprint creates a database named ``sample``; the name
-can be overridden using ``database_name``, and the target cluster can be selected using ``pgaas_cluster_name``:
-
-
-::
-
- cfy install -p pgaas-database.yaml -b pgaas-database-DBNAME -d pgaas-database-DBNAME -i inputs.yaml -i database_name=DBNAME
- cfy install -p pgaas-database.yaml -b pgaas-database-CLNAME-DBNAME -d pgaas-database-CLNAME-DBNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME -i database_name=DBNAME
-
-
-The ``pgaas-getdbinfo.yaml`` blueprint shows how an application can attach to an existing
-database and access its attributes:
-
-::
-
- cfy install -p pgaas-getdbinfo.yaml -d pgaas-getdbinfo -b pgaas-getdbinfo -i inputs.yaml
- cfy deployments outputs -d pgaas-getdbinfo
- cfy uninstall -d pgaas-getdbinfo
diff --git a/docs/sections/blueprints/cbs.rst b/docs/sections/blueprints/cbs.rst
deleted file mode 100644
index 79136d2e..00000000
--- a/docs/sections/blueprints/cbs.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Config Binding Service
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/cdap.rst b/docs/sections/blueprints/cdap.rst
deleted file mode 100644
index cff25617..00000000
--- a/docs/sections/blueprints/cdap.rst
+++ /dev/null
@@ -1,130 +0,0 @@
-CDAP
-======================
-
-Note: This blueprint is intended to be deployed, automatically, as part of the
-DCAE bootstrap process, and is not normally invoked manually.
-
-The ONAP DCAEGEN2 CDAP blueprint deploys a 7-node Cask Data Application
-Platform (CDAP) cluster (version 4.1.x), for running data analysis
-applications. The template for the blueprint is at
-``blueprints/cdapbp7.yaml-template`` in the ONAP
-``dcaegen2.platform.blueprints`` project. The ``02`` VM in the cluster
-will be the CDAP master.
-
-Blueprint Input Parameters
---------------------------
-
-This blueprint has the following required input parameters:
-
-* ``ubuntu1604image_id``
-
- This is the OpenStack image ID of the Ubuntu 16.04 VM image that will be
- used to launch the 7 VMs making up the cluster.
-
-* ``flavor_id``
-
- This is the OpenStack flavor ID specifying the amount of memory, disk, and
- CPU available to each VM in the cluster. While the required values will be
- largely application dependent, a minimum of 32 Gigabytes of memory is
- strongly recommended.
-
-* ``security_group``
-
- This is the OpenStack security group specifying permitted inbound and
- outbound IP connectivity to the VMs in the cluster.
-
-* ``public_net``
-
- This is the name of the OpenStack network from which floating IP addresses
- for the VMs in the cluster will be allocated.
-
-* ``private_net``
-
- This is the name of the OpenStack network from which fixed IP addresses for
- the VMs in the cluster will be allocated.
-
-* ``openstack``
-
- This is the JSON object / YAML associative array providing values necessary
- for accessing OpenStack. The keys are:
-
- * ``auth_url``
-
- The URL for accessing the OpenStack Identity V2 API. (The version of
- Cloudify currently being used, and the associated OpenStack plugin do
- not currently support Identity V3).
-
- * ``tenant_name``
-
- The name of the OpenStack tenant/project where the VMs will be launched.
-
- * ``region``
-
- The name of the OpenStack region within the deployment. In smaller
- OpenStack deployments, where there is only one region, the region is
- often named ``RegionOne``.
-
- * ``username``
-
- The name of the OpenStack user used as a credential for accessing
- OpenStack.
-
- * ``password``
-
- The password of the OpenStack user. (The version of Cloudify currently
- being used does not provide a mechanism for encrypting this value).
-
-* ``keypair``
-
- The name of the ssh "key pair", within OpenStack, that will be given access,
- via the ubuntu login, to the VMs. Note: OpenStack actually stores only the
- public key.
-
-* ``key_filename``
-
- The full file path, on the Cloudify Manager VM used to deploy this blueprint,
- of the ssh private key file corresponding to the ``keypair`` input parameter.
-
-* ``location_domain``
-
- The DNS domain/zone for DNS entries associated with the VMs in the cluster.
- If, for example, location_domain is ``dcae.example.com`` then the FQDN for
- a VM with hostname ``abcd`` would be ``abcd.dcae.example.com`` and a DNS
-  lookup of that FQDN would lead to an A (or AAAA) record giving the floating
- IP address assigned to that VM.
-
-* ``location_prefix``
-
- The hostname prefix for hostnames of VMs in the cluster. The hostnames
- assigned to the VMs are created by concatenating this prefix with a suffix
- identifying the individual VMs in the cluster (``cdap00``, ``cdap01``, ...,
- ``cdap06``). If the location prefix is ``jupiter`` then the hostname of
- the CDAP master in the cluster would be ``jupitercdap02``.
-
-* ``codesource_url`` and ``codesource_version``
-
- ``codesource_url`` is the base URL for downloading DCAE specific project
- installation scripts. The intent is that this URL may be environment
- dependent, (for example it may, for security reasons, point to an internal
- mirror). This is used in combination with the ``codesource_version`` input
-  parameter to determine the URL for downloading the scripts. There are two
-  scripts used by this blueprint, ``cdap-init.sh`` and
-  ``instconsulagentub16.sh``. These scripts are part of the
- dcaegen2.deployments ONAP project. This blueprint assumes that curl/wget
- can find these scripts at
- *codesource_url/codesource_version*\ ``/cloud_init/cdap-init.sh`` and
- *codesource_url/codesource_version*\ ``/cloud_init/instconsulagentub16.sh``
- respectively. For example, if codesource_url is
- ``https://mymirror.example.com`` and codesource_version is ``rel1.0``,
- then the installation scripts would be expected to be stored under
-  ``https://mymirror.example.com/rel1.0/cloud_init/``.
-
-This blueprint has the following optional inputs:
-
-* ``location_id`` (default ``solutioning-central``)
-
- The name of the Consul cluster to register this CDAP cluster with.
-
-* ``cdap_cluster_name`` (default ``cdap``)
-
- The name of the service to register this cluster as, in Consul.
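-
-Taken together, the required and optional inputs above might be captured in an
-inputs file along the lines of the following sketch. Every value shown is
-illustrative only and must be replaced with values from the target OpenStack
-environment:
-
-::
-
-  ubuntu1604image_id: UUID-of-ubuntu-16.04-image
-  flavor_id: UUID-of-suitable-flavor
-  security_group: dcae-security-group
-  public_net: public-floating-net
-  private_net: private-fixed-net
-  openstack:
-    auth_url: https://openstack.example.com:5000/v2.0
-    tenant_name: dcae-tenant
-    region: RegionOne
-    username: dcae-user
-    password: example-password
-  keypair: dcae-keypair
-  key_filename: /opt/app/installer/dcae-keypair.pem
-  location_domain: dcae.example.com
-  location_prefix: jupiter
-  codesource_url: https://mymirror.example.com
-  codesource_version: rel1.0
-  location_id: solutioning-central
-  cdap_cluster_name: cdap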
diff --git a/docs/sections/blueprints/cdapbroker.rst b/docs/sections/blueprints/cdapbroker.rst
deleted file mode 100644
index 59ed5d37..00000000
--- a/docs/sections/blueprints/cdapbroker.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-CDAP Broker
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/centos_vm.rst b/docs/sections/blueprints/centos_vm.rst
deleted file mode 100644
index cd2660e4..00000000
--- a/docs/sections/blueprints/centos_vm.rst
+++ /dev/null
@@ -1,145 +0,0 @@
-CentOS VM
-======================
-
-Note: This blueprint is intended to be deployed, automatically, as part of the
-DCAE bootstrap process, and is not normally invoked manually.
-
-This blueprint controls the deployment of a VM running the CentOS 7 operating system, used to
-run an instance of the Cloudify Manager orchestration engine.
-
-This blueprint is used to bootstrap an installation of Cloudify Manager. All other DCAE
-components are launched using Cloudify Manager. The Cloudify Manager VM and the Cloudify Manager
-software are launched using the Cloudify command line software in its local mode.
-
-Blueprint files
-----------------------
-
-The blueprint file is stored under source control in the ONAP ``dcaegen2.platform.blueprints`` project, in the ``blueprints``
-subdirectory of the project, as a template named ``centos_vm.yaml-template``. The build process expands
-the template to fill in certain environment-specific values. In the ONAP integration environment, the build process
-uploads the expanded template, using the name ``centos_vm.yaml``, to a well known-location in a Nexus artifact repository.
-
-Parameters
----------------------
-
-This blueprint has the following required input parameters:
-
-* ``centos7image_id``
-
- This is the OpenStack image ID of the Centos7 VM image that will be
- used to launch the Cloudify Manager VM.
-
-* ``ubuntu1604image_id``
-
- This is not used by the blueprint but is specified here so that the blueprint
- can use the same common inputs file as other DCAE VMs (which use an Ubuntu 16.04 image).
-
-* ``flavor_id``
-
- This is the OpenStack flavor ID specifying the amount of memory, disk, and
- CPU available to the Cloudify Manager VM. While the required values will be
- largely application dependent, a minimum of 16 Gigabytes of memory is
- strongly recommended.
-
-* ``security_group``
-
- This is the OpenStack security group specifying permitted inbound and
- outbound IP connectivity to the VM.
-
-* ``public_net``
-
- This is the name of the OpenStack network from which a floating IP address
- for the VM will be allocated.
-
-* ``private_net``
-
- This is the name of the OpenStack network from which fixed IP addresses for
- the VM will be allocated.
-
-* ``openstack``
-
- This is the JSON object / YAML associative array providing values necessary
- for accessing OpenStack. The keys are:
-
- * ``auth_url``
-
- The URL for accessing the OpenStack Identity V2 API. (The version of
- Cloudify currently being used, and the associated OpenStack plugin do
- not currently support Identity V3).
-
- * ``tenant_name``
-
- The name of the OpenStack tenant/project where the VM will be launched.
-
- * ``region``
-
- The name of the OpenStack region within the deployment. In smaller
- OpenStack deployments, where there is only one region, the region is
- often named ``RegionOne``.
-
- * ``username``
-
- The name of the OpenStack user used as a credential for accessing
- OpenStack.
-
- * ``password``
-
- The password of the OpenStack user. (The version of Cloudify currently
- being used does not provide a mechanism for encrypting this value).
-
-* ``keypair``
-
- The name of the ssh "key pair", within OpenStack, that will be given access,
- via the ubuntu login, to the VMs. Note: OpenStack actually stores only the
- public key.
-
-* ``key_filename``
-
- The full file path, on the Cloudify Manager VM,
- of the ssh private key file corresponding to the ``keypair`` input parameter.
-
-* ``location_domain``
-
- The DNS domain/zone for DNS entries associated with the VM.
- If, for example, location_domain is ``dcae.example.com`` then the FQDN for
- a VM with hostname ``abcd`` would be ``abcd.dcae.example.com`` and a DNS
-  lookup of that FQDN would lead to an A (or AAAA) record giving the floating
- IP address assigned to that VM.
-
-* ``location_prefix``
-
-  The hostname prefix for the hostname of the VM. The hostname
- assigned to the VM is created by concatenating this prefix with a suffix
- identifying the Cloudify Manager VM (``orcl00``). If the location prefix is ``jupiter`` then the hostname of
- the Cloudify Manager VM would be ``jupiterorcl00``.
-
-* ``codesource_url`` and ``codesource_version``
-
- This is not used by the blueprint but is specified here so that the blueprint
- can use the same common inputs file as other DCAE VMs. Some of the other VMs use
-  a combination of ``codesource_url`` and ``codesource_version`` to locate scripts
-  that are used at installation time.
-
-* ``datacenter``
-
- The datacenter name that is used by the DCAE Consul installation. This is needed so that the Consul agent
- installed on the Cloudify Manager VM can be configured to register itself to the Consul service discovery system.
-
-This blueprint has the following optional inputs:
-
-* ``cname`` (default ``dcae-orcl``)
-
- A DNS alias name for the Cloudify Manager VM. In addition to creating a DNS A record for the Cloudify Manager VM,
- the installation process also creates a CNAME record, using ``dcae-orcl`` by default as the alias.
- For example, if the ``location_domain`` input is ``dcae.example.com``, the ``location_prefix`` input is ``jupiter``,
- and the ``cname`` input is the default ``dcae-orcl``, then the installation process will create an A record for
- ``jupiterorcl00.dcae.example.com`` and a CNAME record for ``dcae-orcl.dcae.example.com`` that points to
- ``jupiterorcl00.dcae.example.com``.
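-
-This blueprint reads the same common inputs file as the other DCAE VM blueprints, so
-only the keys specific to this blueprint are sketched below; the values are
-illustrative only:
-
-::
-
-  centos7image_id: UUID-of-centos-7-image
-  datacenter: dcae-consul-datacenter
-  cname: dcae-orcl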
-
-
-How To Run
----------------------
-
-This blueprint is run as part of the bootstrapping process. (See the ``dcaegen2.deployments`` project.)
-Running it manually requires setting up a Cloudify 3.4 command line environment--something that's handled
-automatically by the bootstrap process.
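-
-A minimal sketch of a manual run, assuming a working Cloudify 3.4 CLI operating in
-local mode and a local copy of the expanded blueprint and inputs files (the exact
-steps used during bootstrap are in the ``dcaegen2.deployments`` project):
-
-::
-
-  cfy local init -p centos_vm.yaml -i inputs.yaml
-  cfy local execute -w install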
-
-
diff --git a/docs/sections/blueprints/consul.rst b/docs/sections/blueprints/consul.rst
deleted file mode 100644
index f036b345..00000000
--- a/docs/sections/blueprints/consul.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Consul Cluster
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/deploymenthandler.rst b/docs/sections/blueprints/deploymenthandler.rst
deleted file mode 100644
index 427182c5..00000000
--- a/docs/sections/blueprints/deploymenthandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Deployment Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/holmes.rst b/docs/sections/blueprints/holmes.rst
deleted file mode 100644
index 94ca80fc..00000000
--- a/docs/sections/blueprints/holmes.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Holmes Correlation Analytics
-============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/inventoryapi.rst b/docs/sections/blueprints/inventoryapi.rst
deleted file mode 100644
index ab998b2d..00000000
--- a/docs/sections/blueprints/inventoryapi.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Inventory API
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/policyhandler.rst b/docs/sections/blueprints/policyhandler.rst
deleted file mode 100644
index 99637204..00000000
--- a/docs/sections/blueprints/policyhandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Policy Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/servicechangehandler.rst b/docs/sections/blueprints/servicechangehandler.rst
deleted file mode 100644
index 979948ba..00000000
--- a/docs/sections/blueprints/servicechangehandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Service Change Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/tca.rst b/docs/sections/blueprints/tca.rst
deleted file mode 100644
index 85fe70fb..00000000
--- a/docs/sections/blueprints/tca.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Threshold Crossing Analytics
-============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint
diff --git a/docs/sections/blueprints/ves.rst b/docs/sections/blueprints/ves.rst
deleted file mode 100644
index 1df74253..00000000
--- a/docs/sections/blueprints/ves.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-VNF Event Streaming Collector
-=============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
----------------
-
-List where we can find the blueprints
-
-Parameters
-----------
-
-The input parameters needed for running the blueprint
-
-How To Run
-----------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that helps other people understand and use your blueprint