

Access to Auto Lab at UNH

If you'd like access to the Auto Lab at UNH, contact Parker Berberian to get VPN access.  Just mention that you are requesting access to the Auto Lab.

You will get OpenVPN credentials to access the lab.

There are two main resources: an x86 server (called Auto Pod 1) and an ARM server pod (Auto ARM Pod).

Lab Tasks and Resources Overview

Domain | Task | Hardware Resource | Who is using | Dependencies | Status | Notes
ONAP | ONAP on K8S on x86 | LaaS x86 Server | | | In transition | See Reservation Info
ONAP | ONAP on K8S on ARM | Auto ARM Pod | | Building ONAP for arm64 | Blocked | k8s/helm cluster running on Auto ARM pod; ONAP itself is not yet available on ARM.
OPNFV | OpenStack for ONAP VMs (e.g. DCAE) | Auto ARM Pod | Joe Kidder | | Done |
OPNFV | ARM VIM as an ONAP target | Auto ARM Pod | Joe Kidder | | Done |
OPNFV | x86 VIM as ONAP target | LaaS x86 Server | Joe Kidder | | In Progress | See Reservation Info

LaaS Resource Reservation Info

There are a number of LaaS servers booked at any given time for activities related to Auto.  

To see the reservations, view http://labs.opnfv.org and look for either "Auto" in the "Purpose" column or auto team members in the "Booked by" column.

Note: The reservation info table has been removed because it was difficult to maintain/update. Auto team members now reserve machines as needed, so there is no need to coordinate between a reserver and a user.



The Auto project is using LaaS to work on integration and test of the ONAP/OPNFV integration.  

Environment Deployment Recipes for LaaS

OPNFV/Openstack

After reserving a server, log into it with the appropriate credentials.

The following installs OPNFV using the MCP/Fuel installer on the Master branch.

1. mkdir /opt/fuel
2. cd /opt/fuel
3. git clone https://git.opnfv.org/fuel
4. cd fuel
5. vi /opt/fuel/fuel/mcp/config/scenario/os-nosdn-nofeature-noha.yaml

...add these lines to give your openstack virtual pod more resources...

   gtw01:
     ram: 2048
+  cmp01:
+    vcpus: 32
+    ram: 196608
+  cmp02:
+    vcpus: 32
+    ram: 196608
6. sed -i mcp/scripts/lib.sh -e 's/\(qemu-img create.*\) 100G/\1 350G/g'

...this change provides more disk space to the VMs. The default is 100G per cmp0x node; this gives 350G each (700G total).

7. Then deploy OpenStack. It should take between 30 and 45 minutes:
 ci/deploy.sh -l local -p virtual1 -s os-nosdn-nofeature-noha -D |& tee deploy.log

Lastly, to get access to the extra RAM and vCPUs, you need to adjust the quotas (this is done on the controller at 172.16.10.36):

openstack quota set --cores 64 admin
openstack quota set --ram 393216 admin

ONAP on Kubernetes

Test environment

Lab | POD | OS | CPU | Memory | Storage
UNH LaaS Lab | 10.10.30.157 | CentOS 7.3.1611 | Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz | 117G | 802G

Deployment guide:

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes

vFW blitz daily at 1200EDT until Kubecon - https://wiki.onap.org/display/DW/Vetted+vFirewall+Demo+-+Full+draft+how-to+for+F2F+and+ReadTheDocs

(official) https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom

OOM discussion list (post any issues here to notify OOM personnel and get responses, as in this thread) - https://lists.onap.org/pipermail/onap-discuss/2017-September/004616.html

In general, most of the sync issues are caused by the long lead time for pulling docker images from the ONAP Nexus3 registry. This can be fixed by prewarming your own docker repo or by running the deployment a second time. There is a JIRA item about running a script across the yamls to extract the docker image names and load them before bringing up the containers; see https://jira.onap.org/browse/OOM-328 and the sketch below.
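A minimal pre-pull sketch of that idea, assuming the OOM charts are checked out under oom/kubernetes and that image references appear as "image: <name>" lines in the yamls (both are assumptions; the prepull_docker.sh script attached to OOM-328 is the maintained approach):

 grep -rhoE 'image: *[^ ]+' oom/kubernetes | awk '{print $2}' | tr -d '"' | sort -u > images.txt
 while read -r img; do docker pull "$img" || echo "failed to pull $img"; done < images.txt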


Deploy procedures (For Amsterdam):

1  git clone https://gerrit.onap.org/r/oom 
2  cd oom && git checkout remotes/origin/amsterdam
3  git pull https://gerrit.onap.org/r/oom refs/changes/19/32019/6
4  cd install/rancher
5  ./oom_rancher_setup.sh -b master -s <your external ip> -e onap
6  cd oom/kubernetes/config
7  modify onap-parameters.yaml for VIM connection (manual; see the illustrative sketch after this list)
8  ./createConfig.sh -n onap
9  cd ../oneclick
10 ./createAll.bash -n onap
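For step 7, the VIM connection values in onap-parameters.yaml come from your OpenStack environment. The field names below are purely illustrative assumptions; use the keys that actually appear in oom/kubernetes/config/onap-parameters.yaml in your checkout and fill them from your OpenStack RC file:

  OPENSTACK_KEYSTONE_URL: "http://<keystone-ip>:5000"
  OPENSTACK_USERNAME: "<user>"
  OPENSTACK_API_KEY: "<password>"
  OPENSTACK_TENANT_NAME: "<project>"
  OPENSTACK_REGION: "RegionOne"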


Test | Status | Note
Full deploy | in progress | (see notes below)

Clear iptables rules to deploy the rancher server and agent on one machine (solved: CentOS issue).

Rancher can't bring up kubernetes; all pods in kube-system stay in the pending state (blocker).

Fix:

https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

OOM-328 - Preload docker images script before createAll.sh will allow 7 min startup IN PROGRESS

Partial deploy | in progress | (component status below)

aaf: (Successful)

aai: (Successful)

appc: (Successful)

clamp: (Successful)

cli: (Successful)

consul: (Successful)

dcaegen2: (Successful)

esr: (Successful)

kube2msb: (Successful)

log: (Successful)

message-router: (Successful)

msb: (Successful)

mso: (Successful)

multicloud: (Successful)

policy: (Successful)

portal: (Successful)

robot: (Successful)

sdc: (Successful)

sdnc: (Successful)

uui: (Successful)

vfc: (Successful)

vid: (Successful)

vnfsdk: (Successful)

DCAE installation:

Status: Waiting for connection to OpenStack

Currently at:

Issue:


ONAP on OpenStack

Test environment (Huawei x86)

Lab | POD | OS | CPU | Memory | Storage
Huawei Shanghai Lab | huawei-pod4 | ubuntu | jumphost + host1-host5: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (48) each | 256G each | 4T each


Deployment guide:

1. Set up OpenStack

Deploy the os-nosdn-nofeature scenario using Euphrates Compass4nfv (Containerized Compass).

2. ONAP Installation in Vanilla OpenStack

https://wiki.onap.org/display/DW/ONAP+Installation+in+Vanilla+OpenStack


Test | Status | Note
Full deploy | in progress | (details below)

Heat Template and env parameters: https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/ONAP/1.1.0-SNAPSHOT/

VMs:

onap-aai-inst1 (Active)

onap-aai-inst2 (Active)

onap-appc (Active)

onap-clamp (Active)

onap-dcae-bootstrap (Active)

onap-dns-server (Active)

onap-message-router (Active)

onap-multi-service (Active)

onap-policy (Active)

onap-portal (Active)

onap-robot (Active)

onap-sdc (Active)

onap-sdnc (Active)

onap-so (Active)

onap-vid (Active)


DCAE installation:

Status: DCAE VMs created

+--------------------------------------+------------+--------+-----------------------------------------+---------------------+
| ID                                   | Name       | Status | Networks                                | Image Name          |
+--------------------------------------+------------+--------+-----------------------------------------+---------------------+
| abc90017-6484-4932-86ab-1a1e30940532 | dcaecdap02 | ACTIVE | oam_onap_IZRw=10.0.0.3                  | Ubuntu_16.04_xenial |
| 1d073e93-8c71-4ae5-8f9f-93e207835219 | dcaecdap03 | ACTIVE | oam_onap_IZRw=10.0.0.16                 | Ubuntu_16.04_xenial |
| 74a5049a-1cfa-47d9-bde2-2b58930154a8 | dcaecdap04 | ACTIVE | oam_onap_IZRw=10.0.0.22                 | Ubuntu_16.04_xenial |
| d7cb3fb8-421a-41f5-b72d-ae2fd6159ac2 | dcaecdap01 | ACTIVE | oam_onap_IZRw=10.0.0.17                 | Ubuntu_16.04_xenial |
| 728f5566-ac4a-4452-9ef3-f6979fc8fed4 | dcaecdap00 | ACTIVE | oam_onap_IZRw=10.0.0.15                 | Ubuntu_16.04_xenial |
| 4e9c8a03-940a-4cbc-a9fe-4a02dd98ec60 | dcaedoks00 | ACTIVE | oam_onap_IZRw=10.0.0.5, 192.168.22.129  | Ubuntu_16.04_xenial |
| 9abdb8d3-e58c-4bef-bb23-0c82bac62fb6 | dcaedokp00 | ACTIVE | oam_onap_IZRw=10.0.0.6, 192.168.22.125  | Ubuntu_16.04_xenial |
| bef610dd-5c7a-4c8d-899f-4cbd056b11fa | dcaecnsl00 | ACTIVE | oam_onap_IZRw=10.0.0.11, 192.168.22.115 | Ubuntu_16.04_xenial |
| 80e67529-1670-4f2a-af67-467ad7202805 | dcaecnsl02 | ACTIVE | oam_onap_IZRw=10.0.0.12, 192.168.22.128 | Ubuntu_16.04_xenial |
| 78e3c05e-108a-44ae-9345-9dae6b854478 | dcaecnsl01 | ACTIVE | oam_onap_IZRw=10.0.0.9, 192.168.22.126  | Ubuntu_16.04_xenial |
| 65f4d6e9-5a01-4d6f-85f3-079ca69f52e0 | dcaeorcl00 | ACTIVE | oam_onap_IZRw=10.0.0.8, 192.168.22.123  | Centos_7            |
+--------------------------------------+------------+--------+-----------------------------------------+---------------------+

Currently at:

Waiting for CDAP cluster to register

+ curl -Ss http://192.168.22.115:8500/v1/catalog/service/cdap

+ echo -n .

+ sleep 30

Issues:

Issue | Status | Log
Require functional Designate | solved |
ONAP doesn't support https | solved |
Several deployment timeouts | open | (see the "Timed out..." messages below)

Timed out waiting for workflow 'install' of deployment 'DockerPlatform' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.

Timed out waiting for workflow 'install' of deployment 'DockerComponent' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.

Timed out waiting for workflow 'install' of deployment 'config_binding_service' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.

Timed out waiting for workflow 'install' of deployment 'PlatformServicesInventory' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.

Timed out waiting for workflow 'install' of deployment 'DeploymentHandler' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.

Timed out waiting for workflow 'install' of deployment 'policy_handler' to end. The execution may still be running properly; however, the command-line utility was instructed to wait up to 900 seconds for its completion.



Test environment (UNH ARM)

Lab | POD | OS | Node | CPU | Memory | Storage
UNH Auto Lab | arm-pod | ubuntu | jumphost | Cavium(R) ThunderX(TM) 2.0GHz (48) | 64G | 450G
UNH Auto Lab | arm-pod | ubuntu | host1 | Cavium(R) ThunderX(TM) 2.0GHz (96) | 128G | 450G
UNH Auto Lab | arm-pod | ubuntu | host2 | Cavium(R) ThunderX(TM) 2.0GHz (96) | 128G | 450G
UNH Auto Lab | arm-pod | ubuntu | host3 | Cavium(R) ThunderX(TM) 2.0GHz (48) | 64G | 450G
UNH Auto Lab | arm-pod | ubuntu | host4 | Cavium(R) ThunderX(TM) 2.0GHz (48) | 64G | 450G
UNH Auto Lab | arm-pod | ubuntu | host5 | Cavium(R) ThunderX(TM) 2.0GHz (48) | 64G | 450G

Deployment guide:

1. Set up OpenStack

On the jumphost, do the following:

mkdir -p /armband

cd /armband

git clone -b stable/euphrates http://github.com/opnfv/armband

cd /armband/armband

        ci/deploy.sh -b file:///home/ubuntu -l arm -p pod-auto -s os-nosdn-nofeature-ha -B admin1_br0,mgmt1_br0,,,

        Note: the PDF and IDF files referenced by the "arm" and "pod-auto" arguments to -l and -p are listed below (see the Network Topology section).

2. Follow steps described above for Huawei lab.

Current status: bringing up the heat template, onap_openstack.yaml.

  • Working through a combination of issues, including:
    • Timeouts for some neutron service requests related to floating IP addresses.
      • These timeouts don't appear to have much effect, as the VMs are all ping-able/ssh-able after adding the necessary rules (see below).
      • stack_status_reason | Resource CREATE failed: NeutronClientException: resources.dcae_c_floating_ip: 504 Gateway Time-out. The server didn't respond in time.
    • Volume create of vol1-sdc-data is failing. It's a 100G volume; the complaint is "not enough space".
      • FUEL-330: there appears to be a problem with cinder-volume. It's a loop device (maybe a problem, maybe not), but it's also set to 20G.
      • So the problem is either that cinder is using the loop device or that the loop device is set to 20G.
      • The workaround for now is to change the size of the 100G volume to 10G in the heat template file "onap_openstack.yaml" (a sed sketch of this appears after this list).
  • Due to the above timeouts, the stack creation usually fails, although most/all of the VMs are created.
    • In numerous "stack create" runs of the heat template, the stack has created successfully (as far as heat is concerned) on two occasions. It has failed probably 10-15 times.
  • The user_data script that is run by cloud-init is not succeeding:
    • onap_portal
      • The installation of the docker components fails for ARM:
        • The docker and docker-engine packages are installed via "apt" from the dockerproject.org repo, but this repo has no support for arm64.
        • The docker-compose component is installed via "curl" from the dockerproject.org repo, which is also not supported for arm64.
      • I modified the "portal_install.sh" script that is retrieved from the ONAP nexus via wget, put the modified copy on the jump server, and modified the user_data script accordingly.
      • The ubuntu 16.04 image (14.04 is requested by the heat template, but docker is not supported for ARM on 14.04) only has enough storage defined in the image for /dev/vda1 to have 2G, which is overrun when various docker images are pulled, causing the docker pulls to fail with "not enough space on device."
        • I updated the "large" flavor to have 10G of root disk, and this is no longer an issue (about 3G of space appears to be needed).
      • The portal_install.sh gets further now, but still encounters issues:
        • Most notably pulling nexus3.onap.org:10001/onap/portal-app
        • Error: image onap/portal-app:v1.3.0 not found
    • onap_vid
      • Same as above. This hasn't been modified the way onap_portal was.
    • onap-dcae-bootstrap
      • Looks similar to the above.
    • Suspect all of the xxx_install.sh scripts have the same issue.
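A minimal sketch of the volume-size workaround mentioned above, assuming the volume is declared with a plain "size: 100" property in the template (verify the exact property and that no other resource uses the same value before editing):

 sed -i 's/^\( *size: *\)100$/\110/' onap_openstack.yaml    # shrink the 100G vol1-sdc-data volume to 10G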

Troubleshooting Environment Bring-Up after Server Relocation

Notes for this episode (1/18/18 thru 1/25/18) are archived here: ARM Pod Deployment: Debugging after Server Relocation.

Current Deployment: 2/2/18

The environment was re-deployed on Friday, 2/2/18, with an "os-nosdn-nofeature-ha" scenario.

ONAP on arm64

ONAP deployment on arm64 is currently being pursued on a kubernetes cluster running on top of the OPNFV OpenStack environment.

A heat template, k8sstack.<yml,env>, launches 4 VMs and configures one as a k8s master and the other three as k8s nodes to make a k8s cluster. The helm server (tiller) is then built and launched. When the cluster finishes coming up, it will have k8s and helm running. To see whether the setup is complete, log into the k8s-master node and run "tail -f /var/log/cloud-init-output.log", looking for "Cloud-init v. 17.1 finished at ...".
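A quick check sketch from the k8s-master node (assumes kubectl and helm were configured on the master by the template's setup scripts):

 tail -f /var/log/cloud-init-output.log   # wait for "Cloud-init v. 17.1 finished at ..."
 kubectl get nodes                        # the master plus the three nodes should be Ready
 helm version                             # both client and server (tiller) versions should be reported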

The floating IP addresses are currently handed out by OpenStack, so you have to run "openstack server list" on the OpenStack controller at 172.16.10.10, which is reached via the jump server (10.10.50.12) using: "ssh -i /var/lib/opnfv/mcp.rsa ubuntu@172.16.10.10".
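For example, from the jump server:

 ssh -i /var/lib/opnfv/mcp.rsa ubuntu@172.16.10.10
 source keystonercv3        # credentials file; see the note in the Network Topology section (may be in /root or /home/ubuntu)
 openstack server list      # the Networks column shows the addresses of the k8s VMs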

The strategy is to deploy ONAP Beijing release, following the instructions here: 

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-ExampleEndtoEndKubernetesbasedONAPinstallanddeployment

The current status is that the k8s/helm cluster appears ready to go, and the next step is deploying ONAP. This is meant to be the equivalent of what is achieved with "oom_rancher_setup.sh" in the description above. The next step, installing ONAP using "cd.sh", will not work because the ONAP containers that are pulled are all built for arch=amd64 (x86_64).

Building the ONAP containers for this section is the current challenge.  

Build ONAP components on the arm64 platform

Instructions for deploying an arm64 ONAP build environment.

Some prerequisites according to instructions here:
https://wiki.onap.org/pages/viewpage.action?pageId=6590586

* No less than 8 GB of RAM
* 40 GB of hard disk space
* 4 vCPUs should suffice

The referenced procedure uses the Eclipse IDE, but I didn't install it, since I don't build locally; I build on an arm64 server in the UNH lab.

  1. ONAP projects are built using Maven:
    1. $ sudo apt-get -y install openjdk-8-jdk maven git-review
  2. Set up git information:
    1. $ git config --global user.email your_LF_account@email
    2. $ git config --global --add gitreview.username your_LF_user_name
  3. Generate an HTTP Password in order to clone the necessary git repos:
    1. Go to https://gerrit.onap.org/r/#/settings/http-password
    2. In the Gerrit UI, go to Settings --> HTTP Password --> Generate Password.
  4. In a clean folder (your preferred path), clone the ONAP APP-C git repositories that we will test with (NOTE: use the previously generated HTTP Password to authenticate):
    1. $ git clone http://<LF_USER_ID>@gerrit.onap.org/r/a/appc
    2. $ git clone http://<LF_USER_ID>@gerrit.onap.org/r/a/appc/deployment
  5. Install docker (Fastest way is using the repository) https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-repository
    1. $ sudo apt-get update
    2. $ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
    3. $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    4. $ sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    5. $ sudo apt-get update
    6. $ sudo apt-get install docker-ce
  6. One thing you should do as soon as possible is allow Docker to run without root permissions:
    1. $ sudo groupadd docker
    2. $ sudo usermod -aG docker $USER
    3. $ sudo shutdown now -r
  7. Setting up Nexus 2 and 3 OSS repositories. The tutorial instructs you to run the Nexus 2 and 3 repositories locally; however, I have not seen Nexus 2 and 3 builds for arm64, so I compromised by using an external x86_64 server to run the Nexus 2 and 3 repos. I don't think this is a problem; it may even be better, as the official ONAP nexus repositories are likely running on x86 servers anyway. On an accessible x86_64 server, install Nexus OSS 2 (to upload ONAP components' Maven artifacts locally) and Nexus OSS 3 (to upload ONAP components' docker images locally); for both, grab the bundle.tar.gz version from https://www.sonatype.com/download-oss-sonatype. Then, in https://wiki.onap.org/pages/viewpage.action?pageId=6590586, go to "Setting up Nexus 2 and 3 OSS Repositories" and follow the instructions there (you will need Java installed on the server running nexus as well, e.g. openjdk-8-jdk).
  8. Copy the settings.xml file from https://wiki.onap.org/download/attachments/15997820/settings.xml to ~/.m2/:
    1. $ wget -P ~/.m2/ https://wiki.onap.org/download/attachments/15997820/settings.xml
  9. Add the following snippet

<server>
<id>openecomp-release</id>
<username>deployment</username>
<password>deployment123</password>
</server>
<server>
<id>openecomp-snapshot</id>
<username>deployment</username>
<password>deployment123</password>
</server>

to the <servers> section of settings.xml, to set the credentials for the local nexus server.

I skip the Eclipse part and, to compensate, add the following snippet to the configuration properties section of the pom.xml file to make up for the settings that would otherwise be done in Eclipse:


<skipTests>true</skipTests>
<maven.wagon.http.ssl.insecure>true</maven.wagon.http.ssl.insecure>
<maven.wagon.http.ssl.allowall>true</maven.wagon.http.ssl.allowall>
<MaxPermSize>1024</MaxPermSize>

Alternatively, you can pass them as arguments, e.g. "mvn -DskipTests=true clean install".
In the deployment project you will have to add the following snippet to the properties part of pom.xml:

<skipTests>true</skipTests>
<maven.wagon.http.ssl.insecure>true</maven.wagon.http.ssl.insecure>
<maven.wagon.http.ssl.allowall>true</maven.wagon.http.ssl.allowall>
<MaxPermSize>1024</MaxPermSize>
<docker.push.registry>10.10.100.17:8082</docker.push.registry>
<docker.pull.registry>nexus3.onap.org:10001</docker.pull.registry>
<altDeploymentRepository>openecomp-snapshot::default::http://<nexus_server_ip>:8081/nexus/content/repositories/snapshots</altDeploymentRepository>
<docker.push.username>admin</docker.push.username>
<docker.push.password>admin123</docker.push.password>
<docker.pull.username>anonymous</docker.pull.username>
<docker.pull.password>anonymous</docker.pull.password>

After all these instructions you should be able to build appc (core) with "mvn clean install" and "mvn clean deploy", and the deployment project with "mvn clean install" and "mvn clean deploy -P docker -B", as in the instructions.

These instructions are for the appc (core) and (deployment) projects, but once the setup is complete you can build any ONAP project the same way.
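Summarized as commands (a sketch; the directory names assume the default clone locations from step 4 above, i.e. appc and deployment):

 cd appc && mvn clean install && mvn clean deploy
 cd ../deployment && mvn clean install && mvn clean deploy -P docker -B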

List of aarch64 missing dependencies.


This is a list of identified missing aarch64 dependencies.

The ONAP project is fairly architecture-agnostic (most of the code is written in Java); however, performance enhancements or other third-party tools sometimes make use of native code.

The situation varies: some third-party projects have native code for x86_64 only, while others have native code for multiple architectures but are missing arm64.

* jnr (https://github.com/jnr/)
  Extend jnr-ffi (https://github.com/jnr/jnr-ffi.git) to support the aarch64 compiler (we need someone knowledgeable in hardcore aarch64 assembler, plus some Java).

* jline (https://github.com/jline/jline2.git)
  Extend the jansi project (https://github.com/fusesource/jansi); this seems easily achievable (I didn't get help from their community, but I was able to build it and store it locally in a maven nexus repo).

* jna (https://github.com/java-native-access/jna.git)
  This project has arm (32-bit, I presume) support in the version used, 3.4.0. There is a newer version, 4.2.0, that also has arm64 (our architecture) support. To do: check that upgrading the project doesn't break anything.

* jruby (https://github.com/jruby/jruby.git)
  Extend jruby-complete, which contains the native shared object libjffi-1.2.so; it is also buildable for aarch64 (we can do it in-house as well).

 .........


Network Topology

The 6 ARM servers and their roles and IPMI addresses are shown in the following table:

Nickname | Server Description | IPMI Address | Public IP Address | Standard Work Assignment
Big Cavium 1 | 96 core, 128G | 10.10.52.10 | | Compute Node 1, or 'cmp001'
Big Cavium 2 | 96 core, 128G | 10.10.52.11 | | Compute Node 2, or 'cmp002'
Small Cavium 1 (unh-pod1-jump) | 48 core, 64G | 10.10.52.12 | 10.10.50.12 | Jump Host
Small Cavium 2 | 48 core, 64G | 10.10.52.13 | | KVM Host for Controller VMs, or 'kvm01'
Small Cavium 3 | 48 core, 64G | 10.10.52.14 | | KVM Host for Controller VMs, or 'kvm02'
Small Cavium 4 | 48 core, 64G | 10.10.52.15 | | KVM Host for Controller VMs, or 'kvm03'

The jump host is running Ubuntu 16.04.3 LTS, user/passwd is "ubuntu"/"ubuntu".

We use the Euphrates/Stable MCP (new name for Fuel) installer, which is described here: 

http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html

We use the following PDF file: Auto Lab ARM Pod Description File (PDF)

We use the following IDF file: Auto Lab ARM Pod IDF File

The above files are listed here for information. They will likely change soon for schema validation and eventually be checked into the Pharos git repo.

A quick tour of how to collect information about the OPNFV installation on the ARM Pod is at: Tour of ARM Pod Installation.

Note: If you want to skip the above tour, the floating controller address is 172.16.10.10, accessible from the jump server for user ubuntu using the ssh key in /var/lib/opnfv/mcp.rsa.  The credentials are in the files /root/keystonerc and /root/keystonercv3.  It's possible that a previous user has copied these credentials to the /home/ubuntu directory.

  1. ssh -i /var/lib/opnfv/mcp.rsa ubuntu@172.16.10.10
  2. source keystonercv3
  3. openstack ...

Remote Management

Remote access is required for …

    • Developers to access deploy/test environments (credentials to be issued per POD / user) at 100Mbps upload and download speed

OpenVPN is generally used for remote access; however, community-hosted labs may vary due to company security rules. Please refer to the individual lab documentation/wiki page, as each company may have different access rules and policies.

Basic requirements:

    • SSH sessions to be established (initially on the jump server)
    • Packages to be installed on a system by pulling from an external repo.

Firewall rules accommodate:

    • SSH sessions

Lights-out management network requirements:

    • Out-of-band management for power on/off/reset and bare-metal provisioning
    • Access to server is through a lights-out-management tool and/or a serial console
    • Refer to the applicable lights-out management information from the server manufacturer







5 Comments

  1. For the pending state: this is a result of parallel loading of docker images causing startup contention. The fix is to pre-pull all docker images into your local repo; see the script on the kubernetes page.

    https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-QuickstartInstallation

    https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

    OOM-328 - Preload docker images script before createAll.sh will allow 7 min startup IN PROGRESS

     

    For the search line limits: this is a known pre-Rancher-2.0 bug. It is a red herring in that it does not affect functionality; it is actually a warning about more than 5 DNS search terms, and you can ignore it.

    https://github.com/rancher/rancher/issues/9303

     

    Also, everything after "cli" in setenv.sh is still a WIP as the merge from OPEN-O into ONAP occurs. R1 RC0 was last Thursday, and the Integration team is addressing issues first in HEAT, with each team adjusting the OOM config to match.

    /michael

    1. How about running a Registry as a pull-through cache https://docs.docker.com/registry/recipes/mirror/#how-does-it-work ?

      Here is an interesting thread about use cases (https://github.com/docker/distribution/issues/1431), as well as a suggestion to use https://github.com/virtuald/docker-registry-cache.
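      A minimal sketch of that suggestion on a docker host (the mirror URL is a placeholder for wherever the pull-through cache would run; note that the daemon-level mirror setting applies to Docker Hub pulls):

       # /etc/docker/daemon.json
       {
         "registry-mirrors": ["http://<mirror-host>:5000"]
       }

       sudo systemctl restart docker   # restart the daemon so it picks up the mirror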

       

      pros:

      • to me it seems we just need to update the docker daemon with the mirror address 
      • minimal traffic hitting the original registries

      cons:

      • yet another component to take care of
      1. Good idea,

           Some of the images are built daily, hence the daily pull refresh. Yes, a docker registry on premises would be better; some of the teams, like Bell, do this, and you can even set one up directly in its own container. Anyway, nexus3 got a double allocation a week ago, so the pull time for all of ONAP is down to 20 min, with a 7 min startup (without DCAE), which requires OpenStack or OpenStack on K8s.

        /michael