

ONAP on Kubernetes

Test environment

Lab           POD           OS               CPU                                         Memory  Storage
UNH laas Lab  10.10.30.157  CentOS 7.3.1611  Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz   117G    802G

Deployment guide:

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes

vFW blitz daily at 1200 EDT until KubeCon - https://wiki.onap.org/display/DW/Vetted+vFirewall+Demo+-+Full+draft+how-to+for+F2F+and+ReadTheDocs

(official) https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/OOM%20User%20Guide/oom_user_guide.html?highlight=oom

OOM discussion list (post any issues here to notify and get responses from OOM personnel) - https://lists.onap.org/pipermail/onap-discuss/2017-September/004616.html

In general, most of the sync issues are caused by the long lead time for pulling docker images from ONAP Nexus3. This can be fixed by prewarming your own docker repo or by running the deployment a second time. There is a JIRA about running a script across the yamls to extract the docker images and pull them before bringing up the containers - see https://jira.onap.org/browse/OOM-328
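The OOM-328 idea can be sketched as a small shell helper that scrapes image references out of a tree of Kubernetes yaml files so they can be pulled ahead of createAll.sh (the directory path in the usage line is an example, not the script's actual interface):

```shell
# Sketch: list every unique docker image referenced in a tree of
# Kubernetes yaml files, so they can be pre-pulled into the local repo.
extract_images() {
  grep -rh 'image:' "$1" \
    | sed -e 's/.*image:[[:space:]]*//' -e 's/"//g' \
    | sort -u
}

# Usage (path is an example; pipe into docker pull to warm the local repo):
#   extract_images oom/kubernetes | while read -r img; do docker pull "$img"; done
```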

Test: Full deploy
Status: in progress
Note:

Had to clear iptables rules to deploy the Rancher server and agent on one machine (solved: CentOS issue)
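That workaround can be sketched as a function (the exact rule set is an assumption; CentOS 7 specifics, must run as root, and nothing executes until the function is called):

```shell
# Sketch: reset host firewall state so the Rancher agent can reach the
# server when both run on one machine (CentOS 7 assumption; needs root).
reset_firewall() {
  systemctl stop firewalld       # CentOS 7 enables firewalld by default
  systemctl disable firewalld
  iptables -F                    # flush filter-table rules
  iptables -t nat -F             # flush NAT rules (Docker re-adds its own)
  iptables -X                    # drop user-defined chains
}
```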

Rancher can't bring up Kubernetes: all pods in kube-system stay in the Pending state (blocker)
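A triage sketch for pods stuck in Pending (the pod-name argument is a placeholder; defined as a function so nothing runs without a cluster):

```shell
# Sketch: inspect why kube-system pods never leave Pending.
triage_pending() {
  kubectl get pods --namespace kube-system            # confirm the Pending list
  kubectl get nodes                                   # is any node Ready at all?
  # The Events section of describe usually names the scheduling blocker:
  kubectl describe pod "$1" --namespace kube-system
}
```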

Fix:

https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

OOM-328 (IN PROGRESS) - Preload docker images script before createAll.sh will allow a 7 min startup

Test: Partial deploy
Status: in progress
Note:

aaf: (Successful)

aai: (Successful)

appc: (Successful)

clamp: (CrashLoopBackOff)

cli: (Successful)

consul: (Successful)

kube2msb: (CrashLoopBackOff)

log: (Successful)

message-router: (Successful)

msb: (Successful)

mso: (Successful)

multicloud: (Successful)

policy: (Successful)

portal: (CrashLoopBackOff)

robot: (Successful)

sdc: (Successful)

sdnc: (Successful)

vid: (Successful)

vnfsdk: (Successful)

dcae: (TODO)
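For the CrashLoopBackOff entries above (clamp, kube2msb, portal), the previous container's log is usually the fastest clue. A sketch, assuming the OOM per-component namespace convention (e.g. onap-portal):

```shell
# Sketch: fetch the log of the last crashed container in a pod.
crashloop_logs() {
  # $1 = namespace (e.g. onap-portal), $2 = pod name
  kubectl logs "$2" --namespace "$1" --previous
}

# Find the crash-looping pods first:
#   kubectl get pods --all-namespaces | grep CrashLoopBackOff
```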

 

ONAP on OpenStack

Test environment

Lab                  POD          OS
Huawei Shanghai Lab  huawei-pod4  ubuntu

Node      CPU                                         Cores  Memory  Storage
jumphost  Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T
host1     Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T
host2     Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T
host3     Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T
host4     Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T
host5     Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz   48     256G    4T

 

Deployment guide:

1. Set up OpenStack

Deploy the os-nosdn-nofeature scenario using Euphrates Compass4nfv (Containerized Compass)

2. ONAP Installation in Vanilla OpenStack

https://wiki.onap.org/display/DW/ONAP+Installation+in+Vanilla+OpenStack

 

Test: Full deploy
Status: in progress
Note:

Heat Template and env parameters: https://nexus.onap.org/content/sites/raw/org.onap.demo/heat/ONAP/1.1.0-SNAPSHOT/

Issue:

Instance is active but ONAP services are not running, likely due to incorrect settings in onap_openstack.env
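A sketch of the checks for this state (the stack name "onap" is an assumption; since the Heat template drives the install through boot scripts, cloud-init's log on the VM is the place to look):

```shell
# Sketch: verify the Heat stack and the parameters it actually resolved.
check_stack() {
  openstack stack show "$1" -f value -c stack_status   # expect CREATE_COMPLETE
  openstack stack environment show "$1"                # values taken from onap_openstack.env
}

# On a VM that is ACTIVE but idle, check whether the boot scripts ran:
#   ssh ubuntu@<vm-ip> 'sudo tail -n 50 /var/log/cloud-init-output.log'
```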


Network Topology



Remote Management

Remote access is required for …

    • Developers to access deploy/test environments (credentials to be issued per POD / user) at 100Mbps upload and download speed

OpenVPN is generally used for remote access; however, community-hosted labs may vary due to company security rules. Please refer to the individual lab documentation/wiki page, as each company may have different access rules and policies.

Basic requirements:

    • SSH sessions to be established (initially on the jump server)
    • Packages to be installed on a system by pulling from an external repo.

Firewall rules accommodate:

    • SSH sessions

Lights-out management network requirements:

    • Out-of-band management for power on/off/reset and bare-metal provisioning
    • Access to server is through a lights-out-management tool and/or a serial console
    • Refer to applicable lights-out management information from the server manufacturer


3 Comments

  1. For the pending state: this is a result of parallel loading of docker images causing startup contention. The fix is to pre-pull all docker images into your local repo - see the script on the Kubernetes page.

    https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-QuickstartInstallation

    https://jira.onap.org/secure/attachment/10501/prepull_docker.sh

    OOM-328 (IN PROGRESS) - Preload docker images script before createAll.sh will allow a 7 min startup

     

    For the search line limits: this is a known pre-Rancher-2.0 bug and a red herring, in that it does not affect functionality. It is actually a warning on more than 5 DNS search terms - you can ignore it.

    https://github.com/rancher/rancher/issues/9303
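Counting the search domains is enough to confirm the warning is cosmetic; a minimal sketch:

```shell
# Sketch: print the number of DNS search domains in a resolv.conf;
# the warning fires when this exceeds 5.
count_search_domains() {
  awk '/^search/ {print NF - 1; exit}' "$1"
}
```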

     

    Also, everything after "cli" in setenv.sh is still a WIP as the merge from OPEN-O into ONAP occurs. R1 RC0 was last Thursday, and the Integration team is addressing issues first in HEAT, with each team adjusting its OOM config to match.

    /michael

    1. How about running a Registry as a pull-through cache? https://docs.docker.com/registry/recipes/mirror/#how-does-it-work

      Here is an interesting thread about use cases (https://github.com/docker/distribution/issues/1431), as well as a suggestion to use https://github.com/virtuald/docker-registry-cache

       

      pros:

      • To me it seems we just need to update the docker daemon with the mirror address
      • Minimal traffic hitting the original registries

      cons:

      • yet another component to take care of
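A sketch of that setup, assuming the stock registry:2 image and the default daemon config path (note the daemon's registry-mirrors setting only applies to Docker Hub pulls, not to nexus3.onap.org):

```shell
# 1) Run a registry in pull-through-cache mode (commented out: needs Docker):
#    docker run -d -p 5000:5000 \
#      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
#      --restart=always --name registry-mirror registry:2

# 2) Point each docker daemon at the mirror (the file normally lives at
#    /etc/docker/daemon.json; written to the current dir here as a sketch):
cat <<'EOF' > daemon.json
{
  "registry-mirrors": ["http://localhost:5000"]
}
EOF
```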
      1. Good idea,

           Some of the images are built daily, hence the daily pull refresh. Yes, a docker registry on premises would be better - some of the teams like Bell do this - you can even set one up directly in its own container. Anyway, nexus3 got a double allocation a week ago, so pull time for all of ONAP is down to 20 min, with a 7 min startup (without DCAE), which requires OpenStack or OpenStack on K8s

        /michael