.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2019 Samsung Electronics Co., Ltd.

OOM ONAP Offline Installer Package Build Guide
=============================================================

This document describes the procedure for building offline installer packages. It is supposed to be executed on a server with internet connectivity and will download all artifacts required for ONAP deployment based on our static lists. The server used for the procedure in this guide should preferably be a separate build server.

The procedure was fully tested on RHEL 7.4, as it is the tested target platform; however, with small adaptations it should be applicable to other platforms as well.

Part 1. Preparations
--------------------

We assume that the procedure is executed on a RHEL 7.4 server with ~300 GB of disk space, 16+ GB of RAM and internet connectivity.

Moreover, the following software packages have to be installed:

* for the Preparation (Part 1), the Download artifacts for offline installer (Part 2) and the application helm charts preparation and patching (Part 4)
    -  git
    -  wget

* for the Download artifacts for offline installer (Part 2) only
    -  createrepo
    -  dpkg-dev
    -  python2-pip

* for the Download artifacts for offline installer (Part 2) and the Populate local nexus (Part 3)
    -  nodejs
    -  jq
    -  docker (exact version docker-ce-17.03.2)

* for the Download artifacts for offline installer (Part 2) and for the Application helm charts preparation and patching (Part 4)
    -  patch

* for the Populate local nexus (Part 3)
    -  twine

This can be achieved by the following commands:

::

    # Register server
    subscription-manager register --username <rhel licence name> --password <password> --auto-attach

    # enable epel for npm and jq
    rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    # enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo

    # install following packages
    yum install -y expect nodejs git wget createrepo python2-pip jq patch

    pip install twine

    # install docker
    curl https://releases.rancher.com/install-docker/17.03.sh | sh

Then it is necessary to clone all installer- and build-related repositories and prepare the directory structure.

::

    # prepare the onap build directory structure
    cd /tmp
    git clone https://gerrit.onap.org/r/oom/offline-installer onap-offline
    cd onap-offline

Part 2. Download artifacts for offline installer
------------------------------------------------

**Note: Skip this part if you already have all the necessary resources and continue with Part 3. Populate local nexus**

All artifacts should be downloaded by running the download script as follows:

::

    ./build/download_offline_data_by_lists.sh <project>

For example:

``$ ./build/download_offline_data_by_lists.sh onap_3.0.0``

The download is only as reliable as the network connectivity to the internet; it is highly recommended to run it in screen and to save a log file from the script execution, so you can check whether all artifacts were successfully collected. Each start and end of a script call should print a timestamp to the console output. Downloading consists of 10 steps, which should be checked one-by-one at the end.
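To verify completeness afterwards, the saved log can be scanned for the ten step banners. The sketch below is a self-contained illustration: it generates a stand-in log first, since the real log file (here `/tmp/download.log` is a hypothetical name) depends on how you captured the output (e.g. `screen -L` or piping through `tee`).

```shell
# Stand-in log so the sketch is runnable on its own; in practice,
# LOG points at the file captured from the download script run.
LOG=/tmp/download.log
for i in 1 2 3 4 5 6 7 8 9 10; do
    echo "[Step $i/10 some step description]"
done > "$LOG"

# Count the "[Step N/10 ...]" banners found in the log.
STEPS=$(grep -c '^\[Step [0-9]*/10' "$LOG")
if [ "$STEPS" -eq 10 ]; then
    echo "all 10 steps present in log"
else
    echo "only $STEPS steps present, check the log"
fi
```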

**Verify:** *Please take a look at the following comments on the respective
parts of the download script.*

[Step 1/10 Download collected docker images]

=> the image download step is quite reliable and contains retry logic

E.g.

::

    == pkg #143 of 163 ==
    rancher/etc-host-updater:v0.0.3
    digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
    Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get
    https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
    WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed.
    Attempt: 2/5
    INFO: info waiting 10s for another try...
    v0.0.3: Pulling from rancher/etc-host-updater
    b3e1c725a85f: Already exists
    6a710864a9fc: Already exists
    d0ac3b234321: Already exists
    87f567b5cf58: Already exists
    16914729cfd3: Already exists
    83c2da5790af: Pulling fs layer
    83c2da5790af: Verifying Checksum
    83c2da5790af: Download complete
    83c2da5790af: Pull complete

[Step 2/10 Build own nginx image]

=> there is no hardening in this step; if it fails, it needs to be
retriggered. It should end with **Successfully built <id>**

[Step 3/10 Save docker images from docker cache to tarfiles]

=> quite reliable, retry logic in place

[Step 4/10 move infra related images to infra folder]

=> should be safe; the precondition is that step (3) did not fail

[Step 5/10 Download git repos]

=> potentially unsafe, no hardening in place. If it does not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the load script and run it again.

E.g.

::

    Cloning into bare repository
    'github.com/rancher/community-catalog.git'...
    error: RPC failed; result=28, HTTP code = 0
    fatal: The remote end hung up unexpectedly
    Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
    Cloning into bare repository
    'gerrit.onap.org/r/testsuite/properties.git'...
    Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
    Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
    Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
    Cloning into bare repository
    'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
    Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...

[Step 6/10 Download http files]

[Step 7/10 Download npm pkgs]

[Step 8/10 Download bin tools]

=> these steps work quite reliably. If not all artifacts are downloaded, the easiest way is probably to comment out the other steps in the load script and run it again.

[Step 9/10 Download rhel pkgs]

=> this is the step which will work on RHEL only; for other platforms, different packages have to be downloaded.

The following is considered a successful run of this part:

::

      Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-32.el7
      Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
        net-snmp-devel = 1:5.7.2-33.el7_5.2
    Dependency resolution failed, some packages will not be downloaded.
    No Presto metadata available for rhel-7-server-rpms
    https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
    [Errno 12\] Timeout on
    https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm:
    (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes
    received')
    Trying other mirror.
    Spawning worker 0 with 230 pkgs
    Spawning worker 1 with 230 pkgs
    Spawning worker 2 with 230 pkgs
    Spawning worker 3 with 230 pkgs
    Spawning worker 4 with 229 pkgs
    Spawning worker 5 with 229 pkgs
    Spawning worker 6 with 229 pkgs
    Spawning worker 7 with 229 pkgs
    Workers Finished
    Saving Primary metadata
    Saving file lists metadata
    Saving other metadata
    Generating sqlite DBs
    Sqlite DBs complete

[Step 10/10 Download sdnc-ansible-server packages]

=> there is again no retry logic in this part; it collects packages for sdnc-ansible-server in exactly the same way as that container does. However, there is a bug upstream: the image in place will not work with these packages, as the old ones are no longer available and the newer ones are not compatible with other components inside that image.

Part 3. Populate local nexus
----------------------------

Prerequisites:

- All data lists and resources to be pushed to the local Nexus repository are available
- The following ports are not occupied by another service: 80, 8081, 8082, 10001
- There is no docker container called "nexus"
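The port check from the prerequisites can be sketched as follows (a minimal sketch assuming the `ss` tool from iproute2 is available, as it is on RHEL 7):

```shell
# Report whether each port required by the Nexus setup is already in use.
for PORT in 80 8081 8082 10001; do
    if ss -tln "( sport = :$PORT )" | grep -q LISTEN; then
        echo "port $PORT is occupied"
    else
        echo "port $PORT is free"
    fi
done
```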

**Note: In case you skipped Part 2 (the artifacts download), please
ensure that a copy of the resources data is untarred in
./install/onap-offline/resources/**

The whole Nexus blob data tarball will be created by running the script
build\_nexus\_blob.sh. It loads the listed docker images, runs the
Nexus, and configures it as an npm and docker repository. Then it pushes
all listed npm packages and docker images to the repositories. After all
is done, the repository container is stopped and a tarball is created
from the nexus-data directory.

The following mandatory parameters need to be set in the configuration file:

+------------------------------+------------------------------------------------------------------------------------------+
| Parameter                    | Description                                                                              |
+==============================+==========================================================================================+
| NXS\_SRC\_DOCKER\_IMG\_DIR   | resource directory of docker images                                                      |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_SRC\_NPM\_DIR           | resource directory of npm packages                                                       |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_SRC\_PYPI\_DIR          | resource directory of pypi packages                                                      |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_IMG\_LIST       | list of docker images to be pushed to Nexus repository                                   |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_DOCKER\_WO\_LIST        | list of docker images which uses default repository                                      |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_NPM\_LIST               | list of npm packages to be published to Nexus repository                                 |
+------------------------------+------------------------------------------------------------------------------------------+
| NXS\_PYPI\_LIST              | list of pypi packages to be published to Nexus repository                                |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_TAR             | target tarball of Nexus data path/name                                                   |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_DATA\_DIR             | directory used for the Nexus blob build                                                  |
+------------------------------+------------------------------------------------------------------------------------------+
| NEXUS\_IMAGE                 | Sonatype/Nexus3 docker image which will be used for data blob creation for this script   |
+------------------------------+------------------------------------------------------------------------------------------+

Some of the docker images using the default registry require special
treatment (e.g. they use different ports or an SSL connection). Therefore
there is the list NXS\_DOCKER\_WO\_LIST, according to which those images
are retagged so that they can be pushed to our Nexus repository.
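To illustrate, the retag for one such image might look like the sketch below. The registry address and image name are assumptions for illustration only; the actual logic lives in build\_nexus\_blob.sh, and the commands are echoed so the sketch runs without a docker daemon.

```shell
# Hypothetical illustration of retagging an image from NXS_DOCKER_WO_LIST
# towards the local Nexus docker repository (port 8082 is one of the ports
# listed in the prerequisites; exact endpoint is an assumption here).
NEXUS_DOCKER_REPO=localhost:8082
IMG="rancher/etc-host-updater:v0.0.3"   # example image from the lists
echo "docker tag $IMG $NEXUS_DOCKER_REPO/$IMG"
echo "docker push $NEXUS_DOCKER_REPO/$IMG"
```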

**Note: It's recommended to use absolute paths in the configuration file
for the current script**

Example of the configuration file:

::

    NXS_SRC_DOCKER_IMG_DIR="/tmp/onap-offline/resources/offline_data/docker_images_for_nexus"
    NXS_SRC_NPM_DIR="/tmp/onap-offline/resources/offline_data/npm_tar"
    NXS_SRC_PYPI_DIR="/tmp/onap-offline/resources/offline_data/pypi"
    NXS_DOCKER_IMG_LIST="/tmp/onap-me-data_lists/docker_img.list"
    NXS_DOCKER_WO_LIST="/tmp/onap-me-data_lists/docker_no_registry.list"
    NXS_NPM_LIST="/tmp/onap-offline/bash/tools/data_list/onap_3.0.0-npm.list"
    NEXUS_DATA_TAR="/root/nexus_data.tar"
    NEXUS_DATA_DIR="/tmp/onap-offline/resources/nexus_data"
    NEXUS_IMAGE="/tmp/onap-offline/resources/offline_data/docker_images_infra/sonatype_nexus3_latest.tar"

Once everything is ready, you can run the script as in the following example:

``$ ./install/onap-offline/build_nexus_blob.sh /root/nexus_build.conf``

where nexus\_build.conf is the configuration file and
/root/nexus\_data.tar is the destination tarball.

**Note: Move, link or mount the NEXUS\_DATA\_DIR to the resources
directory if a different directory was specified in the configuration,
or use the resulting nexus\_data.tar for transfer between machines.**
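For instance, linking a Nexus data directory that was built elsewhere into the resources tree could look like the sketch below (both paths are hypothetical examples, not produced by the scripts):

```shell
BUILT_DATA=/tmp/nexus_build/nexus_data   # assumed location of the built blob
RESOURCES=/tmp/onap-offline/resources    # resources directory used in this guide
mkdir -p "$BUILT_DATA" "$RESOURCES"
# -sfn replaces any previous link, so the command is idempotent
ln -sfn "$BUILT_DATA" "$RESOURCES/nexus_data"
ls -ld "$RESOURCES/nexus_data"
```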

Once the Nexus data blob is created, the docker images and npm packages
can be deleted to reduce the package size, as they won't be needed at
installation time:

E.g.

::

    rm -f /tmp/onap-offline/resources/offline_data/docker_images_for_nexus/*
    rm -rf /tmp/onap-offline/resources/offline_data/npm_tar

Part 4. Application helm charts preparation and patching
--------------------------------------------------------

This part is about cloning the oom repository and patching it so that it
can be used offline. Use the following command:

::

    ./build/fetch_and_patch_charts.sh <helm charts repo> <commit/tag/branch> <patchfile> <target_dir>

For example:

``$ ./build/fetch_and_patch_charts.sh https://gerrit.onap.org/r/oom 3.0.0-ONAP /tmp/offline-installer/patches/casablanca.patch /tmp/oom-clone``

Part 5. Creating offline installation package
---------------------------------------------

For the packaging itself it's necessary to prepare a configuration. You
can use ./build/package.conf as a template or modify it directly.

There are some parameters that need to be set in the configuration file.
The example values below are set up according to the steps done in this guide to package ONAP.

+---------------------------------------+------------------------------------------------------------------------------+
| Parameter                             | Description                                                                  |
+=======================================+==============================================================================+
| HELM\_CHARTS\_DIR                     | directory with Helm charts for the application                               |
|                                       |                                                                              |
|                                       | Example: /tmp/oom-clone/kubernetes                                           |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_CONFIGURATION                    | application install configuration (application_configuration.yml) for        |
|                                       | ansible installer and custom ansible role code directories if any.           |
|                                       |                                                                              |
|                                       | Example::                                                                    |
|                                       |                                                                              |
|                                       |  APP_CONFIGURATION=(                                                         |
|                                       |     /tmp/offline-installer/config/application_configuration.yml              |
|                                       |     /tmp/offline-installer/patches/onap-casablanca-patch-role                |
|                                       |  )                                                                           |
|                                       |                                                                              |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_BINARY\_RESOURCES\_DIR           | directory with all (binary) resources for offline infra and application      |
|                                       |                                                                              |
|                                       | Example: /tmp/onap-offline/resources                                         |
+---------------------------------------+------------------------------------------------------------------------------+
| APP\_AUX\_BINARIES                    | additional binaries such as docker images loaded during runtime   [optional] |
+---------------------------------------+------------------------------------------------------------------------------+

Offline installer packages are created with prepopulated data via the
following command, run from the offline-installer directory:

::

    ./build/package.sh <project> <version> <packaging target directory>

E.g.

``$ ./build/package.sh onap 1.0.1 /tmp/package``


In the target directory you should find the following tar files:

::

    offline-<PROJECT_NAME>-<PROJECT_VERSION>-sw.tar
    offline-<PROJECT_NAME>-<PROJECT_VERSION>-resources.tar
    offline-<PROJECT_NAME>-<PROJECT_VERSION>-aux-resources.tar