
Target System State

This wiki defines the target system state created by a successful execution of the BGS. The target system state should be independent of the installer approach taken.

OPNFV Target System Definition

The OPNFV Target System is currently defined as OpenStack High Availability (HA) + OpenDaylight Neutron integration, across 3 Controller nodes and 2 Compute nodes. The Controller nodes run all OpenStack services outlined in this wiki except for nova-compute. HA is defined as having at least MySQL and RabbitMQ, along with all other dependencies (Corosync, Pacemaker), working in an Active/Active or Active/Passive state.
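As a rough illustration of what a working HA stack looks like on a Controller node, the sketch below checks the cluster services named above. This is a minimal sketch, not part of the target system definition: the systemd unit names (corosync, pacemaker, rabbitmq-server, mariadb) and the use of the pcs tool are assumptions that may differ per installer.

```python
#!/usr/bin/env python3
"""Sketch: sanity-check the HA stack on a Controller node.

The unit names below are assumptions (CentOS 7 ships MariaDB for MySQL);
installers may manage these services differently, e.g. via Pacemaker
resources rather than plain systemd units.
"""
import subprocess

# Units assumed present on an HA Controller; adjust per installer.
HA_UNITS = ["corosync", "pacemaker", "rabbitmq-server", "mariadb"]

def unit_active(unit: str) -> bool:
    """Return True if systemd reports the unit as active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", unit])
    return result.returncode == 0

def main() -> None:
    for unit in HA_UNITS:
        state = "active" if unit_active(unit) else "NOT active"
        print(f"{unit}: {state}")
    # 'pcs status' (from the pcs package) prints a summary of the
    # Pacemaker cluster; inspect its output for failed resources.
    subprocess.run(["pcs", "status"], check=False)

if __name__ == "__main__":
    main()
```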

The full hardware specification is outlined by the Pharos project below:

Key Software Components and associated versions

| Component Type | Flavor | Version | Notes |
|---|---|---|---|
| Base OS | CentOS | 7 | Base OS may vary per Installer. CentOS 7 is the current OPNFV standard. |
| SDN Controller | OpenDaylight | Helium SR2 | With Open vSwitch |
| Infrastructure controller | OpenStack | Juno | |

Target System Operating System and Installed Packages

Only core components are installed on all target system nodes; additional dependencies will be included when specific packages are added. The table below lists, for each Installer, the base OS per Controller/Compute node and the extra packages installed on those nodes.

| Installer | Version | Jumphost Package List | Node Package List | OpenStack Services List |
|---|---|---|---|---|
| Foreman/QuickStack | CentOS 7 | Foreman Jumphost Package List | Foreman/QuickStack Package List | Foreman OpenStack Service List |
| Fuel | | | | |
| OpenSteak | | | | |

OpenStack Juno Components

| Component | Required? | Version | Notes |
|---|---|---|---|
| Nova | Yes | Juno | |
| Glance | Yes | Juno | |
| Neutron | Yes | Juno | |
| Keystone | Yes | Juno | |
| MySQL | Yes | Juno | Must be HA |
| RabbitMQ | Yes | Juno | Must be HA |
| Pacemaker cluster stack | Yes | Juno | Required for HA |
| Corosync | Yes | Juno | Required for HA |
| Ceilometer | No | Juno | |
| Horizon | Yes | Juno | |
| Heat | No | Juno | |
| Swift | No | Juno | |
| Cinder | Yes | Juno | Required to use Ceph Storage as Cinder backend |

OPNFV Storage Requirements

Current requirements are that Cinder will be used for block storage, backed by Ceph. There is currently no requirement for external, dedicated storage. Storage is implemented as a Ceph storage pool of multiple OSDs for HA, along with several Ceph Monitors. The standard implementation is 3 Ceph OSDs and 3 Ceph Monitors, 1 OSD/Mon per Controller. Each Controller node's internal hard drives are used for redundant storage.
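As a hedged illustration (not an installer-specific procedure), the sketch below renders the cinder.conf options that a Ceph-backed Cinder deployment of this era typically needs. The option names follow the standard Cinder RBD driver; the pool name, Ceph client user, and libvirt secret UUID are placeholder assumptions, not values mandated by this wiki.

```python
#!/usr/bin/env python3
"""Sketch: render a cinder.conf fragment for a Ceph (RBD) backend.

Pool, user, and secret values below are placeholders; installers will
set their own.
"""
import configparser
import sys

cfg = configparser.ConfigParser()
cfg["DEFAULT"] = {
    "enabled_backends": "ceph",  # point Cinder at the [ceph] backend section
}
cfg["ceph"] = {
    "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
    "volume_backend_name": "ceph",
    "rbd_pool": "volumes",                 # placeholder Ceph pool
    "rbd_ceph_conf": "/etc/ceph/ceph.conf",
    "rbd_user": "cinder",                  # placeholder Ceph client user
    "rbd_secret_uuid": "REPLACE-WITH-LIBVIRT-SECRET-UUID",
}

# Print the generated fragment; in practice this would be merged into
# the node's /etc/cinder/cinder.conf by the installer's config tooling.
cfg.write(sys.stdout)
```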

OpenDaylight Helium SR2

OpenDaylight

| Component | Sub-Component | Version | Notes |
|---|---|---|---|
| | odl-dlux-all | 0.1.2-Helium-SR2 | |
| | odl-config-persister-all | 0.2.6-Helium-SR2 | OpenDaylight :: Config Persister :: All |
| | odl-aaa-all | 0.1.2-Helium-SR2 | OpenDaylight :: AAA :: Authentication :: All Features |
| | odl-ovsdb-all | 1.0.2-Helium-SR2 | OpenDaylight :: OVSDB :: all |
| | odl-ttp-all | 0.0.3-Helium-SR2 | OpenDaylight :: ttp :: All |
| | odl-openflowplugin-all | 0.0.5-Helium-SR2 | OpenDaylight :: Openflow Plugin :: All |
| | odl-adsal-compatibility-all | 1.4.4-Helium-SR2 | OpenDaylight :: controller :: All |
| | odl-tcpmd5-all | 1.0.2-Helium-SR2 | |
| | odl-adsal-all | 0.8.3-Helium-SR2 | OpenDaylight AD-SAL All Features |
| | odl-config-all | 0.2.7-Helium-SR2 | OpenDaylight :: Config :: All |
| | odl-netconf-all | 0.2.7-Helium-SR2 | OpenDaylight :: Netconf :: All |
| | odl-base-all | 1.4.4-Helium-SR2 | OpenDaylight Controller |
| | odl-mdsal-all | 1.1.2-Helium-SR2 | OpenDaylight :: MDSAL :: All |
| | odl-yangtools-all | 0.6.4-Helium-SR2 | OpenDaylight Yangtools All |
| | odl-restconf-all | 1.1.2-Helium-SR2 | OpenDaylight :: Restconf :: All |
| | odl-integration-compatible-with-all | 0.2.2-Helium-SR2 | |
| | odl-netconf-connector-all | 1.1.2-Helium-SR2 | OpenDaylight :: Netconf Connector :: All |
| | odl-akka-all | 1.4.4-Helium-SR2 | OpenDaylight :: Akka :: All |
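For orientation only: in a stock Helium SR2 Karaf distribution, features such as those listed above are installed from the Karaf console (feature:install ...) or pre-listed in featuresBoot in etc/org.apache.karaf.features.cfg. The sketch below appends a chosen subset to that file; the installation path and the feature subset are illustrative assumptions, and installers may drive feature installation differently.

```python
#!/usr/bin/env python3
"""Sketch: pre-list OpenDaylight Karaf features in featuresBoot.

The distribution path and feature subset are assumptions for
illustration, not installer requirements.
"""
import re
from pathlib import Path

KARAF_CFG = Path("/opt/opendaylight/etc/org.apache.karaf.features.cfg")  # assumed path
EXTRA_FEATURES = ["odl-ovsdb-all", "odl-restconf-all", "odl-dlux-all"]    # illustrative subset

def add_boot_features(cfg_path: Path, features: list) -> str:
    """Return the cfg text with the given features merged into featuresBoot."""
    text = cfg_path.read_text()
    match = re.search(r"^featuresBoot\s*=\s*(.*)$", text, flags=re.MULTILINE)
    if not match:
        raise ValueError("featuresBoot line not found")
    current = [f.strip() for f in match.group(1).split(",") if f.strip()]
    merged = current + [f for f in features if f not in current]
    new_line = "featuresBoot = " + ",".join(merged)
    return text[:match.start()] + new_line + text[match.end():]

if __name__ == "__main__":
    KARAF_CFG.write_text(add_boot_features(KARAF_CFG, EXTRA_FEATURES))
    print("featuresBoot updated; restart Karaf for the change to take effect.")
```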

Additional Components and Software

| Component | Package | Version | Notes |
|---|---|---|---|
| Hypervisor: KVM | | | |
| Forwarder: OVS | | 2.3.0 | |
| Node config: Puppet | | | |
| Example VNF1: Linux | CentOS | 7 | |
| Example VNF2: OpenWRT | | 14.07 (Barrier Breaker) | |
| Container Provider: Docker | docker.io (lxc-docker) | latest | FUEL delivery of ODL |

Network setup

_Describe which L2 segments are configured (i.e. for management, control, use by client VNFs, etc.), how these segments are realized (e.g. VXLAN between OVSs), and which segment numbering (e.g. VLAN IDs, VXLAN IDs) is used. Describe which IP addresses are used, which DNS entries (if any) are configured, default gateways, etc. Describe if/how segments are interconnected._

List and purpose of the subnets used, as defined here (see the sketch after these lists):
Network addressing and topology blueprint - FOREMAN

  • Admin (Management) - 192.168.0.0/24 - Admin network for PXE boot and node configuration via Puppet
  • Private (Control) - 192.168.11.0/24 - API traffic and internal tenant communication
  • Storage - 192.168.12.0/24 - separate VLAN for storage
  • Public (Traffic) - management IP of OpenStack/ODL + traffic

Network addressing and topology blueprint - FUEL

  • Admin (PXE) - 10.20.0.0/16 - Fuel Admin network (PXE boot, Cobbler/Nailgun/MCollective traffic)
  • Public (Tagged VLAN) - subnet depends on the user's network - used for external communication of the Control nodes as well as for L3 NAT (configurable subnet range)
  • Storage (Tagged VLAN) - 192.168.1.0/24 (default)
  • MGMT (Tagged VLAN) - 192.168.0.0/24 (default) - used for OpenStack communication
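A minimal sketch, using only the fixed Foreman subnets listed above (the Public range and the FUEL defaults are site-specific and omitted), that encodes the subnet plan with the Python standard library and flags accidental overlaps:

```python
#!/usr/bin/env python3
"""Sketch: encode the Foreman subnet plan and check for overlaps.

Only the fixed subnets from this wiki are included; site-specific
ranges are intentionally left out.
"""
import ipaddress
from itertools import combinations

SUBNETS = {
    "Admin (Management)": ipaddress.ip_network("192.168.0.0/24"),
    "Private (Control)": ipaddress.ip_network("192.168.11.0/24"),
    "Storage": ipaddress.ip_network("192.168.12.0/24"),
}

def check_overlaps(subnets):
    """Yield pairs of named networks whose address ranges overlap."""
    for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
        if net_a.overlaps(net_b):
            yield name_a, name_b

if __name__ == "__main__":
    for name, net in SUBNETS.items():
        print(f"{name}: {net} ({net.num_addresses} addresses)")
    clashes = list(check_overlaps(SUBNETS))
    if clashes:
        for a, b in clashes:
            print(f"WARNING: {a} overlaps {b}")
    else:
        print("No overlapping subnets.")
```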

Currently there are two approaches to VLAN tagging:

  • Fuel - tagging/untagging is done on the Linux hosts; the switch should be configured to pass tagged traffic.
  • Foreman - VLANs are configured on the switch, and packets arrive at and leave the Linux hosts untagged.

It was agreed not to use VLAN tagging unless the target hardware lacks the appropriate number of interfaces. Tagging remains a viable option, however, for users who want to implement the target system in a restricted hardware environment.

The following picture shows how ODL connects to Neutron through the ML2 plugin and to nova-compute through an OVS bridge. (Not yet finished: Ceph storage will be added, and the approach with ODL in a Docker container will be added.)
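For orientation only: with the Juno-era ML2 OpenDaylight mechanism driver, the Neutron side of this integration is typically a small ml2_conf.ini fragment. The sketch below renders one; the controller address and the admin/admin credentials are placeholder assumptions, not values fixed by this wiki.

```python
#!/usr/bin/env python3
"""Sketch: render an ml2_conf.ini fragment for the OpenDaylight driver.

Option names follow the Juno-era ML2/ODL integration; the controller
IP and credentials are placeholders.
"""
import configparser
import sys

ODL_IP = "192.0.2.10"  # placeholder ODL controller address (TEST-NET)

cfg = configparser.ConfigParser()
cfg["ml2"] = {
    "type_drivers": "vxlan",
    "tenant_network_types": "vxlan",
    "mechanism_drivers": "opendaylight",
}
cfg["ml2_odl"] = {
    "url": f"http://{ODL_IP}:8080/controller/nb/v2/neutron",
    "username": "admin",   # placeholder credentials
    "password": "admin",
}

# Print the fragment; an installer would merge it into the node's
# /etc/neutron/plugins/ml2/ml2_conf.ini.
cfg.write(sys.stdout)
```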

Additional Environment Requirements (for operation)

  1. Access to a valid NTP server
  2. Access to a valid DNS server (or relay)
  3. A web browser with access to the ADMIN network for HTTP-based access (a smoke-test sketch for these prerequisites follows below)
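These prerequisites can be smoke-tested from any node. The sketch below checks DNS resolution and TCP reachability of an HTTP endpoint on the ADMIN network; the hostname and the ADMIN-network address are placeholders, not values defined by this wiki.

```python
#!/usr/bin/env python3
"""Sketch: smoke-test the operational prerequisites listed above.

The test hostname and ADMIN-network endpoint are placeholders.
"""
import socket

DNS_TEST_NAMES = ["example.org"]            # placeholder name to resolve
ADMIN_HTTP_ENDPOINT = ("192.168.0.2", 80)   # placeholder ADMIN-network host:port

def dns_ok(name: str) -> bool:
    """Return True if the configured DNS server (or relay) resolves the name."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

def tcp_ok(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name in DNS_TEST_NAMES:
        print(f"DNS {name}: {'ok' if dns_ok(name) else 'FAILED'}")
    host, port = ADMIN_HTTP_ENDPOINT
    print(f"HTTP {host}:{port}: {'reachable' if tcp_ok(host, port) else 'UNREACHABLE'}")
```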

NTP setup

Multiple labs will eventually be working together across geographic boundaries, so time configuration must be consistent (a verification sketch follows the list below).

  • All systems should use multiple NTP Servers
  • Timezone should be set to UTC for all systems
  • Centralized logging should be configured with UTC timezone
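A minimal verification sketch, assuming the third-party ntplib package is available (it is not part of the target system definition); the NTP server names are placeholders rather than an OPNFV-mandated set:

```python
#!/usr/bin/env python3
"""Sketch: verify NTP reachability and that the system clock is on UTC.

Requires the third-party 'ntplib' package; the server list is a
placeholder.
"""
import time
import ntplib

NTP_SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org"]  # placeholders

def report_offsets(servers):
    """Query each server and print the local clock offset it reports."""
    client = ntplib.NTPClient()
    for server in servers:
        try:
            response = client.request(server, version=3, timeout=5)
            print(f"{server}: offset {response.offset:+.3f}s")
        except Exception as exc:  # network errors, NTP timeouts, etc.
            print(f"{server}: unreachable ({exc})")

def check_utc():
    # time.timezone is the offset (in seconds) of local standard time
    # from UTC; zero with no DST means the system is effectively on UTC.
    if time.timezone == 0 and time.daylight == 0:
        print("System timezone appears to be UTC.")
    else:
        print("WARNING: system timezone is not UTC.")

if __name__ == "__main__":
    report_offsets(NTP_SERVERS)
    check_utc()
```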