
Purpose

This wiki page captures the Auto Lab requirements and recommends specific Physical and/or Virtual Labs for executing the Auto test cases.

Physical Lab

UNH IoL in US

NXP Lab in India

CTTL Lab in China

Huawei Lab in China, with remote access to UNH IoL in the US

Orange Lab in France

Lab-as-a-Service

Examine LaaS to determine whether it meets the Auto Lab requirements.

Hardware

Servers

One Lab Admin server, shared across all PODs, for managing POD server host OS deployment/configuration. It can also be used as the POD-access VPN server.

Typically 6 servers per POD (jumphost + 5 platform nodes), though smaller configurations (e.g. 3 servers per POD) may apply as noted below.

Processor:

    • 8 cores minimum

    • Arm ThunderX

    • Intel Xeon

Firmware:

    • xx

Local Storage:

    • Disks: at least 1 x 1TB HDD

Memory:

    • at least 128GB RAM

Power Supply

    • Single power supply is acceptable (redundant power is not required, but nice to have); a sketch consolidating the per-server minimums above follows.
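
The per-server minimums above (8 cores, 128GB RAM, 1 x 1TB HDD) can be checked programmatically when qualifying lab hardware. The following is a minimal Python sketch; the ServerSpec structure, field names, and example values are illustrative assumptions, not part of any defined Auto Lab tooling.

```python
# Hypothetical sketch: check a candidate server against the per-server minimums
# listed above (8 cores, 128 GB RAM, 1 x 1 TB HDD). Names and example values
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ServerSpec:
    cores: int
    ram_gb: int
    hdd_tb: float

MIN_SPEC = ServerSpec(cores=8, ram_gb=128, hdd_tb=1.0)

def meets_minimum(server: ServerSpec) -> bool:
    """Return True if the server meets the minimum Auto Lab server spec."""
    return (server.cores >= MIN_SPEC.cores
            and server.ram_gb >= MIN_SPEC.ram_gb
            and server.hdd_tb >= MIN_SPEC.hdd_tb)

# Example: a ThunderX/Xeon-class node with 16 cores, 256 GB RAM, 2 TB HDD
print(meets_minimum(ServerSpec(cores=16, ram_gb=256, hdd_tb=2.0)))  # True
```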

Network Hardware

  • TOR Switch

  • Router

  • Others

Networking

    • 48 Port TOR Switch

    • NICs - Combination of 1GE and 10GE based on network topology options

    • Connectivity for each data/control network is through a separate NIC port or a shared port. A separate port simplifies switch management, but requires more NICs on the server and more switch ports.

    • BMC (Baseboard Management Controller) for lights-out management network using IPMI (Intelligent Platform Management Interface)

    • 2 x 1G Control, 1 x 10G Data (redundancy nice to have), 1 x 10G Control or Storage (if needed; redundancy nice to have), 48-port switch; see the per-node NIC plan sketch after this list

      • Data NIC used for VNF traffic

      • 1 x 1G for IPMI management

      • 1 x 1G for Admin/PXE boot

      • 1 x 10G for control-plane connectivity/storage (if needed, control plane and storage are segmented through VLANs)

      • 1 x 10G for data network
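
The per-node NIC plan above can also be expressed as data, e.g. as input for generating host or switch configuration. The following is a minimal Python sketch; the role names, VLAN IDs, and helper function are illustrative assumptions, not a defined Auto Lab layout.

```python
# Hypothetical per-node NIC/VLAN plan mirroring the bullets above. Role names,
# VLAN IDs and the helper below are illustrative assumptions only.
NODE_NIC_PLAN = [
    # (role,             speed, count, notes)
    ("ipmi",             "1G",  1, "lights-out management via IPMI/BMC"),
    ("admin_pxe",        "1G",  1, "host OS deployment / PXE boot"),
    ("control_storage",  "10G", 1, "control plane and storage, separated by VLANs"),
    ("data",             "10G", 1, "VNF traffic"),
]

# Example: VLANs carried on the shared control/storage 10G port (assumed IDs)
CONTROL_STORAGE_VLANS = {"control": 100, "storage": 200}

def switch_ports_per_node(plan=NODE_NIC_PLAN):
    """Total TOR switch ports consumed per node if every network uses its own port."""
    return sum(count for _, _, count, _ in plan)

print(switch_ports_per_node())  # 4 ports per node on the 48-port TOR switch
```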

POD

In the following table, we define three types of POD based on assumed resource usage; the recommended configuration for each POD type is described below (a small sizing-check sketch follows the table). Please note that in both lab and real deployment scenarios, resources can be oversubscribed depending on workload. We also assume that the ONAP platform will be deployed in a separate POD from the VNFs.

 

Type of POD | Total Memory (GB) for Compute Nodes | Total vCPU for Compute Nodes | Total Storage for Compute Nodes | Number of Control Nodes | Number of Compute Nodes
Large       | 600                                 | 120                          | 4 TB                            | 3                       | >=2
Medium      | 200                                 | 80                           | 2 TB                            | 3                       | >=2
Small       | 40                                  | 24                           | 1 TB                            | 1                       | >=1
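
A minimal Python sketch of a sizing check against the table above: the per-type thresholds are taken from the table, while the function name and example node values are illustrative assumptions.

```python
# Hypothetical sizing check against the POD table above. Thresholds come from
# the table; the helper and example values are illustrative assumptions only.
POD_TYPES = {
    #          (memory GB, vCPU, storage TB, control nodes, min compute nodes)
    "Large":  (600, 120, 4, 3, 2),
    "Medium": (200,  80, 2, 3, 2),
    "Small":  ( 40,  24, 1, 1, 1),
}

def fits_pod_type(pod_type, compute_nodes, control_nodes):
    """compute_nodes: list of (memory_gb, vcpu, storage_tb) per compute node."""
    mem, vcpu, stor, ctrl, min_compute = POD_TYPES[pod_type]
    total_mem  = sum(n[0] for n in compute_nodes)
    total_vcpu = sum(n[1] for n in compute_nodes)
    total_stor = sum(n[2] for n in compute_nodes)
    return (total_mem >= mem and total_vcpu >= vcpu and total_stor >= stor
            and control_nodes == ctrl and len(compute_nodes) >= min_compute)

# Example: two compute nodes of 384 GB / 64 vCPU / 2 TB each, plus 3 control nodes
print(fits_pod_type("Large", [(384, 64, 2), (384, 64, 2)], control_nodes=3))  # True
```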

 

As an example, taking the large POD above as the requirement, we can build a hypothetical POD with the servers listed in the following table:

 

Hostname | CPU | Memory | Storage | IPMI | Admin/PXE | Private | Public | Storage | 10GbE: NIC#, IP, MAC, VLAN, Network
(rows left blank, to be filled in per server)

 

Software

ONAP on Kubernetes
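
As a minimal health-check sketch for an ONAP-on-Kubernetes deployment (assuming ONAP runs in an "onap" namespace, and that the official Kubernetes Python client and a valid kubeconfig are available on the jumphost), the following lists any pods that are not Running; the install procedure itself is covered by the wiki link in the Links section below.

```python
# Hypothetical health check for ONAP on Kubernetes: list pods in the (assumed)
# "onap" namespace and report any that are not in the Running phase.
from kubernetes import client, config

def onap_pods_not_running(namespace="onap"):
    config.load_kube_config()            # use the local kubeconfig
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace=namespace)
    return [(p.metadata.name, p.status.phase)
            for p in pods.items if p.status.phase != "Running"]

if __name__ == "__main__":
    for name, phase in onap_pods_not_running():
        print(f"{name}: {phase}")
```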

 

Links

ONAP on Kubernetes:

https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Delete/Rerunconfig-initcontainerfor/dockerdata-nfsrefresh

 

Interface between VNF and VNFM:

http://www.etsi.org/deliver/etsi_gs/NFV-IFA/001_099/008/02.01.01_60/gs_NFV-IFA008v020101p.pdf

 


2 Comments

  1. Note: Config procedure has changed as of 20170909 - please rerun your config section when bringing up ONAP on K8S - https://wiki.onap.org/display/DW/ONAP+on+Kubernetes#ONAPonKubernetes-Delete/Rerunconfig-initcontainerfor/dockerdata-nfsrefresh

  2. LoL was a typo, LoL (big grin)