
Overview

The OpenStack-ODL-VPP integration seeks to provide the capability to realize a set of use-cases relevant to the deployment of NFV nodes, instantiated by means of an OpenStack orchestration system on VPP-enabled compute nodes. The role of the OpenDaylight controller in this integration is twofold: it provides network device configuration and topology abstraction via the OpenStack Neutron interface, while also providing the capability to realize more complex network policies by means of Group Based Policies. Furthermore, it provides the capability to monitor and visualize the operation of the virtual network devices and their topologies.

In support of the general use-case of instantiating an NFV instance, two specific types of network transport use cases are realized:

UC1: NFV instance with VPP data-plane forwarding using a VLAN provider network

UC2: NFV instances with VPP data-plane forwarding using a VxLAN overlay transport network
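
For illustration, the sketch below shows how a UC1-style VLAN provider network could be requested from Neutron over its HTTP API. The endpoint, token handling and the physical network name (physnet1) are assumptions made for the sketch, not part of this design.

    # Minimal sketch: requesting a VLAN provider network (UC1) via the Neutron v2.0 API.
    # Endpoint, token and physical network name are placeholder assumptions.
    import requests

    NEUTRON_URL = "http://controller:9696/v2.0"   # assumed Neutron endpoint
    TOKEN = "<keystone-token>"                    # obtained from Keystone beforehand

    payload = {
        "network": {
            "name": "nfv-provider-net",
            "provider:network_type": "vlan",
            "provider:physical_network": "physnet1",
            "provider:segmentation_id": 1000,
            "shared": True,
        }
    }

    resp = requests.post(
        f"{NEUTRON_URL}/networks",
        json=payload,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    print(resp.json()["network"]["id"])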

High Level Design and Software Functional Overview

The overall deployment is depicted in the figure below (see also APEX Network Topology).

Logically a set of NFVs will be orchestrated, as virtual machines, to perform a given role in a virtual customer networking context, realized across multiple compute nodes interconnected by means of a VLAN/VxLAN transport. This is exemplified in the figure below:

 

 

Each compute node comprises at least the following elements:

  • An OpenStack Nova enabled compute platform
  • A VPP L2/L3 data plane forwarder with a NETCONF-capable Honeycomb agent.
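
As a minimal sketch of what the second element implies, the Honeycomb agent on a compute node can be reached over NETCONF, e.g. with ncclient. The hostname, port 2831 and admin/admin credentials are assumptions based on common Honeycomb defaults.

    # Minimal sketch: verifying that a compute node's Honeycomb agent is reachable
    # over NETCONF. Host, port and credentials are assumptions (Honeycomb defaults).
    from ncclient import manager

    with manager.connect(
        host="compute-0",          # assumed compute node hostname
        port=2831,                 # assumed Honeycomb NETCONF SSH port
        username="admin",
        password="admin",
        hostkey_verify=False,
    ) as m:
        # Pull the running configuration exposed by Honeycomb (VPP interfaces, etc.)
        running = m.get_config(source="running")
        print(running)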

 

 

Two models for external network connectivity

In both models below, the overlay, i.e. the tenant Neutron network bridge, sits "on" the VxLAN tunnel interface.

 

Model 1 - BVI model

PRO:
  • The tunnel source is decoupled from the physical interface. VBD is only concerned with finding one tunnel source interface, not with how to configure the physical interface.
  • Supports multiple physical interfaces attached to the bridge.

CON:

  • Extra configuration is required.

 

 

 

Model 2 - Physical interface model

 

PRO:

  • Less configuration is required.

CON:

  • VBD is concerned with the configuration of the physical interface.
  • Does not support multiple physical interfaces in the bridge.

 

 

 

 

The functional components used to realize the architecture and their roles are defined as follows:

 

Openstack Neutron ML2 ODL Plugin

Handles Neutron database synchronization and interaction with the southbound OpenDaylight controller using HTTP.

Neutron Northbound & Neutron MD-SAL Entry Store

Presents a Neutron (v2) extended HTTP API servlet for interaction with OpenStack Neutron. It validates the received Neutron data and stores it in the MD-SAL data store against the Neutron YANG model.
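
As a sanity-check sketch, the data stored by the northbound servlet can be read back over HTTP; the controller address and the default admin/admin credentials are assumptions.

    # Minimal sketch: reading back the Neutron networks that the northbound servlet
    # has stored, to confirm synchronization. Address and credentials are assumed.
    import requests

    ODL = "http://odl-controller:8181"
    resp = requests.get(
        f"{ODL}/controller/nb/v2/neutron/networks",
        auth=("admin", "admin"),
    )
    resp.raise_for_status()
    for net in resp.json().get("networks", []):
        print(net["id"], net.get("name"))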

Neutron Mapper

The Neutron Mapper listens to Neutron data change events and is responsible for using Neutron data to create Group Based Policy (GBP) data objects, e.g. GBP End-Points and Flood-Domains. A GBP End-Point represents a specific NFV/VM port and its identity as derived from a Neutron port. The mapped data is stored using the GBP End-Point YANG model, and an association between the GBP End-Point and its Neutron object is maintained in the Neutron-GBP map.

GBP Entities store

Stores the GBP data artifacts against the GBP YANG schemas.

Neutron Group Based Policy Map store

Stores the bilateral relation between an End-Point and its corresponding Neutron object. The Neutron-GBP map, keyed by Neutron object type (e.g. port or network) and Neutron UUID, yields the corresponding GBP object (End-Point or Flood domain, respectively). The GBP-Neutron map, keyed by GBP object type (e.g. end-point) and GBP identifier, yields the corresponding Neutron object.
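
The following is an illustrative sketch (with hypothetical identifiers) of the shape of these two maps, simply to make the keying explicit; it is not the actual store layout.

    # Illustrative only: the shape of the bilateral Neutron <-> GBP mapping,
    # keyed by (object type, identifier). All identifiers are hypothetical.
    neutron_to_gbp = {
        ("port", "neutron-port-uuid-1"): ("end-point", "gbp-endpoint-key-1"),
        ("network", "neutron-network-uuid-1"): ("flood-domain", "flood-domain-1"),
    }

    gbp_to_neutron = {
        ("end-point", "gbp-endpoint-key-1"): ("port", "neutron-port-uuid-1"),
        ("flood-domain", "flood-domain-1"): ("network", "neutron-network-uuid-1"),
    }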

Neutron VPP Renderer Mapper

The Neutron VPP Renderer Mapper listens to Neutron Store data change events, and can also access the store directly. It is responsible for converting the Neutron data specifically required to render a VPP node configuration for a given End-Point, e.g. the virtual host interface name assigned to a vhostuser socket. The mapped data is stored in the VPP Info data store.
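
As an illustration of the kind of derivation performed here, a hypothetical naming convention mapping a Neutron port UUID to a vhost-user interface name and socket path might look as follows; the convention shown is an assumption, not the one mandated by the design.

    # Hypothetical illustration: deriving vhost-user names from a Neutron port UUID.
    def vhostuser_names(neutron_port_id: str) -> dict:
        return {
            "interface-name": f"vhost-{neutron_port_id}",
            "socket-path": f"/var/run/vpp/socket_{neutron_port_id}",
        }

    print(vhostuser_names("2c47a369-7233-4f21-9db1-63f2f33a5a36"))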

VPP Info Store

Stores VPP-specific information regarding End-Points, Flood domains with their VLANs, etc.

GBP Renderer Manager

The GBP Renderer Manager is the central point for dispatching data to specific device renderers. It uses the information derived from the GBP End-Point and its topology entries to dispatch the configuration task to a specific device renderer by writing a renderer policy configuration into the registered renderer's policy store.

The Renderer Manager also monitors for errors in the application of a rendered configuration by acting as a data change listener on the VPP Renderer Policy State store.

Renderer Policy Config Store

The store's schema serves as the API between the Renderer Manager and specific renderers, such as the VPP Renderer. The store uses a YANG-modeled schema to represent all end-point and associated GBP policy data.

Topology Entries Store

The YANG model based MD-SAL topology store serves two fundamental roles: 1. It maintains a topological representation of the GBP End-Points in the context of customer networks. 2. It maintains an association of each (VPP) compute node's physical interfaces to their Neutron provider network (e.g. the association between an Ethernet interface and a Neutron provider network). TODO: we still need to find a way to describe external endpoints and gateways for forwarding in GBP.

VPP Renderer

The VPP Renderer registers an instance for VPP nodes with the Renderer Manager by means of inserting operational data into the Renderer Policy config store.

It acts as a listener on the Renderer Policy store and consumes, via the GBP Policy API, the policy data together with the specific VPP End-Point data, in order to drive the configuration of VPP devices using the NETCONF Services.

More specifically, the renderer generates:

i) vhost user port configuration that corresponds to the VM port configuration

ii) VPP bridge instances corresponding to the GBP flood domain

iii) port or traffic filtering configuration, in accordance with the GBP policy.
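
For item i), a sketch of the kind of vhost-user interface configuration pushed towards Honeycomb is shown below as a RESTCONF PUT. The Honeycomb RESTCONF port (8183), the credentials and the exact v3po payload layout are assumptions and may differ between releases.

    # Sketch only: a vhost-user interface configuration expressed as a RESTCONF PUT
    # towards Honeycomb. Port, credentials and payload layout are assumptions.
    import requests

    HC = "http://compute-0:8183/restconf/config"
    if_name = "vhost-2c47a369"  # hypothetical interface name for a VM port

    payload = {
        "interface": [{
            "name": if_name,
            "type": "v3po:vhost-user",
            "enabled": True,
            "v3po:vhost-user": {
                "socket": f"/var/run/vpp/socket_{if_name}",
                "role": "server",
            },
        }]
    }

    resp = requests.put(
        f"{HC}/ietf-interfaces:interfaces/interface/{if_name}",
        json=payload,
        auth=("admin", "admin"),
    )
    resp.raise_for_status()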

The VPP Renderer also interacts with the Virtual Bridge Domain Service, by means of the VBD store, in order to establish connectivity between VPP nodes in a bridge domain. For this it uses the VPP device name, and the flood domain data derived from the VPP Info and End-Point data respectively.

For the executed configuration operations, it updates the state in the Renderer Policy State store.

Virtual Bridge Domain Store and Manager

The Virtual Bridge Domain Manager is responsible for configuring the VxLAN overlay tunnel infrastructure to arrive at the desired bridged topology between multiple (VPP) compute nodes.
The virtual bridge domain service is also referred to as the "VBD manager" (virtual bridge domain manager); see https://gerrit.fd.io/r/gitweb?p=honeycomb.git;a=tree;f=vbd;hb=HEAD for the current preliminary implementation and https://gerrit.fd.io/r/gitweb?p=honeycomb.git;a=blob_plain;f=vbd/impl/vbridge-workflow.txt;hb=HEAD for an overview.
VBD is to support the following modes of operation:

  • Pre-defined topology: The network topology is created independently of VBD, i.e. the connectivity between virtual forwarders (such as VXLAN tunnels) is created by some means independent of VBD, and it is equally assumed that all required bridge instances are already present on a particular forwarder. This approach is not going to be followed for FDS.
  • Automated topology: VBD actively manages the topology, i.e. when a new endpoint is added to an endpoint group which is not yet present on a particular forwarder, VBD also configures the necessary tunnels to fully connect the new node into the existing network. This mode will be used for FDS and implemented by VBD (see the sketch below).
  • Hybrid topology: Bridge domains and user interfaces are created independently of VBD, and VBD creates only the tunnels between bridge domains.
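
The sketch below illustrates, in very rough form, the kind of RESTCONF request that asks VBD to manage a bridge domain as a topology instance in the automated mode. The module and attribute names, the port and the credentials are assumptions; the vbridge-workflow.txt link above is the authoritative reference.

    # Illustrative only: asking VBD (via ODL RESTCONF) to manage a bridge domain as
    # a topology instance. Module/attribute names and credentials are assumptions.
    import requests

    ODL = "http://odl-controller:8181/restconf/config"
    bd_name = "tenant-net-1"  # hypothetical bridge-domain / topology id

    payload = {
        "topology": [{
            "topology-id": bd_name,
            "topology-types": {"vbridge-topology:vbridge-topology": {}},
            # Assumed tunnel parameters for the automated VXLAN full mesh:
            "vbridge-topology:tunnel-type": "vbridge-topology:tunnel-type-vxlan",
            "vbridge-topology:flood": True,
            "vbridge-topology:forward": True,
        }]
    }

    resp = requests.put(
        f"{ODL}/network-topology:network-topology/topology/{bd_name}",
        json=payload,
        auth=("admin", "admin"),
    )
    resp.raise_for_status()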

Note on loop avoidance for virtual bridge domains: it is assumed that VXLAN tunnels are always configured as a full mesh, with split-horizon group forwarding applied on any domain-facing tunnel interface (i.e. the forwarding behavior will be that used for VPLS). In later phases, FDS will also explore sparse connectivity between forwarders with loop avoidance, rather than always assuming a complete graph.

NETCONF Mount Point Service & Connector

Collectively referred to as the NETCONF Services, these provide the NETCONF interface for accessing the VPP configuration and operational data stores, which are represented as NETCONF mounts.
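
A minimal sketch of how a compute node's Honeycomb agent is registered as a NETCONF mount point on the controller is shown below; the addresses, ports and credentials are assumptions.

    # Minimal sketch: registering a Honeycomb agent as a NETCONF mount point on ODL.
    # Addresses, ports and credentials are assumptions.
    import requests

    ODL = "http://odl-controller:8181/restconf/config"
    node_id = "compute-0"

    payload = {
        "node": [{
            "node-id": node_id,
            "netconf-node-topology:host": "192.0.2.10",   # compute node address (assumed)
            "netconf-node-topology:port": 2831,           # Honeycomb NETCONF SSH port (assumed)
            "netconf-node-topology:username": "admin",
            "netconf-node-topology:password": "admin",
            "netconf-node-topology:tcp-only": False,
        }]
    }

    resp = requests.put(
        f"{ODL}/network-topology:network-topology/topology/topology-netconf/node/{node_id}",
        json=payload,
        auth=("admin", "admin"),
    )
    resp.raise_for_status()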

Vector Packet Processor (VPP) and Honeycomb server

The VPP is the accelerated data plane forwarding engine, relying on vhost user interfaces towards the Virtual Machines created by the Nova Agent. The Honeycomb NETCONF configuration server is responsible for driving the configuration of the VPP and for collecting its operational data.

Rendered Policy State Store

Stores data regarding the execution of operations performed by a given renderer.

Nova Agent

The Nova Agent, a sub-component of the overall OpenStack architecture, is responsible for interacting with the compute node host's Libvirt API to drive the life-cycle of Virtual Machines. It, along with the compute node software, is assumed to be capable of supporting vhost user interfaces.
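
For illustration, the libvirt domain XML fragment that a vhost-user capable deployment ends up requesting for a VM port looks roughly like the output of the sketch below; the socket path and MAC address are hypothetical placeholders.

    # Illustrative only: rendering a libvirt <interface type='vhostuser'> fragment.
    import xml.etree.ElementTree as ET

    def vhostuser_interface_xml(socket_path: str, mac: str) -> str:
        iface = ET.Element("interface", type="vhostuser")
        ET.SubElement(iface, "mac", address=mac)
        ET.SubElement(iface, "source", type="unix", path=socket_path, mode="client")
        ET.SubElement(iface, "model", type="virtio")
        return ET.tostring(iface, encoding="unicode")

    print(vhostuser_interface_xml("/var/run/vpp/socket_vhost-2c47a369", "fa:16:3e:12:34:56"))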

Call Flows

 

End-to-end call flow for creating a Neutron vhostuser port on a VPP node using the GBP renderer

 

 

 

 

 

Review Meeting Recording

May 4, 2016

https://cisco.webex.com/ciscosales/lsr.php?RCID=ef76bba14e394d969317dde41fdaa181

Password: QN9Y3Qtk

L2 connectivity design

This architecture uses the QRouter to provide external connectivity for the L2 scenario and should not affect the proposed L3 architecture.

NOTES:
For this scenario to work, a TAP port for the QRouter needs to be created on VPP, with proper naming, for each bridge domain (the purple vpp-tap ports). This can be done after receiving the notification that the TAP port (green qr-tap) was created on the QRouter, and will be handled by GBP-HC-VPP. Once both ports are created, the "L3 agent" then wires these ports together using bridges (see the sketch below for the VPP-side TAP port).
The QRouter then handles all routing, e.g. external routing, cross bridge domain routing and also NAT.
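
A rough sketch of creating the per-bridge-domain TAP port on VPP through Honeycomb RESTCONF; the port, credentials, the v3po tap attribute names and the tap naming convention are all assumptions.

    # Sketch only: creating the per-bridge-domain "vpp-tap" port via Honeycomb RESTCONF.
    # Port, credentials, attribute names and naming convention are assumptions.
    import requests

    HC = "http://compute-0:8183/restconf/config"
    tap_name = "tap-qrouter-bd1"  # hypothetical name tied to one bridge domain

    payload = {
        "interface": [{
            "name": tap_name,
            "type": "v3po:tap",
            "enabled": True,
            "v3po:tap": {"tap-name": tap_name},
        }]
    }

    requests.put(
        f"{HC}/ietf-interfaces:interfaces/interface/{tap_name}",
        json=payload,
        auth=("admin", "admin"),
    ).raise_for_status()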

Scenario os-odl_l2-fdio-noha (Colorado 1.0)

Scenario os-odl_l2-fdio-ha (Colorado 2.0)

Adds:

  • OpenStack HA
  • OpenDaylight HA (Cluster deployment)
  • East-West security groups on VPP

 

Network configuration in case of HA

Network configuration with NAT and East-West Security Groups

L3 connectivity design

Scenario os-odl_l3-fdio-ha (Colorado 3.0)

Adds:

  • VPP as a replacement for qrouter/br-ext, incl. NAT
  • VRFs for tenant isolation - with v6 and v4 support

Base L3 scenario

VPP implements NAT and takes the role of the qrouter.

Base L3 scenario - with multiple networks per tenant supported

Base L3 scenario - with multiple networks per tenant and security groups supported

VPP implements east-west security groups (on L2 interfaces) and north-south security groups (on L3 interfaces).

 

Scenario without the use of NAT (e.g. with IPv6)

Requires route leaking from VRFs.

 

Mapping Neutron-GBP-VPP

DVR example

Detailed information about the L3 scenario with DVR can be found here.
