
Apex Developer Build & Integration Guide

Apex is an installation tool that deploys according to OPNFV requirements using TripleO from the RDO Project. This document describes the process to build Apex and to integrate new features into it.

To get a good overview of the terms that describe an Apex deployment and the target architecture of what is deployed, please review the Apex Installation Instructions.

Building and Deploying Apex for Development

These instructions are valid for the master branch.

This section describes how to build and deploy Apex directly out of the code repository in order to iterate quickly on building and deploying. If you are interested in deploying OPNFV Apex on a system to evaluate or use it, please use the Installation Documentation released with our official builds of Apex.

CentOS Setup

Below are the required setup steps for preparing a CentOS server to use Apex.

# First setup the build environment (CentOS ONLY - as root user)
yum groupinstall 'Development Tools'
# The line below installs RDO Ocata for the Euphrates release (use Pike for master/Fraser release)
yum install -y
# We also need the epel repository
yum install -y epel-release
yum update -y
cd /etc/yum.repos.d && wget #(Use apex/master/opnfv-apex.repo for master/Fraser)
cd ~
git clone ~/

Building and Installing the Apex RPM

This section is only applicable if you want to build/test the Apex RPM. It is not required for deployment. Perform the steps below as root.

cd apex/build
make clean
mkdir -p ~/apex-cache
cd ../apex
python3 -c ~/apex-cache -r dev1
cd ../.build/noarch
yum -y install python34-opnfv-apex-dev1.rpm

Setting up Apex for a Development Environment

The following steps install the requirements for Apex and allow executing Apex out of the git workspace. This is the preferred method of deploying Apex for a developer, as it allows quick iteration on the Apex code.

# For CentOS use yum instead of dnf
sudo dnf install python3-pip python34 ansible openvswitch libvirt-devel
git clone ~/
cd ~/apex
sudo pip3 install -r requirements.txt

# Note if you want to also install Apex via PIP and not execute from the git workspace simply do
cd ~/apex && sudo pip3 install .

Apex Standard Deployment (TripleO based - CentOS ONLY)

This section covers a full TripleO-based deployment, which will run for 1-2 hours. Before deploying, note the following:
The latest RDO will be downloaded and modified according to the settings provided in the deploy settings file.
The deploy settings file can include upstream OpenStack Gerrit patches, along with specifying versions of ODL/OpenStack.
See for an example:
# As root user
cd ~/apex/apex
python3 -v -n ../config/network/network_settings.yaml -d ../config/deploy/os-odl-nofeature-ha.yaml --deploy-dir ../build --lib-dir ../lib --image-dir ../.build --debug

### Execute an overcloud deployment from an existing undercloud (Standard Deployment only)
# connect to undercloud
opnfv-util undercloud
cd /home/stack
source stackrc
bash deploy_command

Apex Snapshot Deployment (CentOS or Fedora)

This section goes over how to deploy Apex via snapshots, which typically takes less than 10 minutes. The deployment can be executed as a non-root user, but requires sudo access. Before continuing with this section, read the following:

### Apex Snapshot deployment (CentOS or Fedora)
# Read
# Read
# Read
cd ~/apex/apex
sudo PYTHONPATH=~/apex:$PYTHONPATH python3
# Example below deploys All-in-one + Queens + ODL Oxygen
sudo PYTHONPATH=~/apex:$PYTHONPATH python3 -v -d ../config/deploy/os-odl-queens-noha.yaml --deploy-dir ../build --lib-dir ../lib --image-dir ../.build --debug --snapshot --snap-cache ~/snap_cache --virtual-computes 0

Upgrading OpenDaylight in a Snapshot Deployment

It is possible to re-install OpenDaylight with another distribution after a Snapshot Deployment. This functionality will eventually be built into Apex itself, but for now it is part of the SDNVPN project in OPNFV. Replacing OpenDaylight on a snapshot deployment will currently bring down the docker opendaylight_api service on the Controller and bring up a systemd version of OpenDaylight on the host. In the future this will be changed to be a container.

# Locate the opendaylight distribution you want to install and ensure it is in tar.gz format
# To convert zip to tar.gz can do:
ODL_ZIP=<your zip file>
UNZIPPED_DIR=`dirname $(unzip -qql ${ODL_ZIP} | head -n1 | tr -s ' ' | cut -d' ' -f5-)`
unzip ${ODL_ZIP}
tar czf /tmp/new_odl_distro.tar.gz ${UNZIPPED_DIR}

# Locate the snap cache used for your Snapshot Deployment: <snap_cache>/<os_version>/<topology>
# For example, the previous Snapshot Deployment command would have the cache in ~/snap_cache/queens/noha-allinone/
SNAP_CACHE=~/snap_cache/queens/noha-allinone

# Clone SDNVPN from OPNFV
git clone
pushd sdnvpn/odl-pipeline/lib
./ --pod-config ${SNAP_CACHE}/node.yaml --odl-artifact /tmp/new_odl_distro.tar.gz --ssh-key-file ${SNAP_CACHE}/id_rsa

Overview of the Apex Commands


    • Run this before each deploy; it ensures that the undercloud machine and network settings on the jump host are ready for a new deployment.


    • -v means virtual

    • -d and -n are settings files that are discussed more in the installation instructions documentation.

    • {{ deploy_settings }} can be replaced by one of the files in apex/config/deploy/

The network_settings.yaml file can be left as it is for a virtual deployment; modifications according to the installation documentation are also an option.
Look through the catalog of files in apex/config/deploy to choose the flavor of deployment you would like. There is a collection of SDN controllers and features that can be enabled or swapped out. You can also just edit the stock deploy_settings.yaml if you would like.

Building Apex from a Gerrit Review

git fetch refs/changes/43/14743/8 && git checkout FETCH_HEAD

If you would like to build from a specific review in Gerrit that has not been merged yet, you will need to run a command like the one above in your git repo right after cloning it. Once you have checked out the review, cd into build and start with the make commands to rebuild the images.

This command can be copied from gerrit to your clipboard. To do this:

  1. Open the Review in Gerrit
  2. Click the download menu button in the upper right hand corner of the review's page
  3. Click the clipboard icon to the right of the "Checkout" git command
  4. Paste from your clipboard into your terminal in the apex git repo directory and you will have a command like the example above for the review you would like to build.


Building Apex to produce Packages

Building Apex to produce packages will produce a set of RPMs that should be installed following the Installation Instructions distributed with OPNFV Apex.

git clone
cd apex/ci
./ -r {user-defined-release-version} -c file:///{cache-storage-location} [ --rpms | --iso ]

--iso will build the RPMs and the ISO; --rpms will build just the RPMs and not the ISO.

RPMs will be put in the .build/noarch directory.

{user-defined-release-version} is an identifier compatible with RPM release numbering. It is used in both the RPM release value and to name the ISO. The Apex-defined version will be used as the primary version, and this user-defined release version is a secondary version that can be used for your own purposes. If you don't have a need for it, just set it to 1.

{cache-storage-location} is where the 7-10G cache file will be stored between builds.

Build Process and Development Details

There is a collection of files executed during the build that are helpful to understand. Not all files used in the build are listed here; these are the primary files.


A boilerplate entry point for the build. This file most likely does not need to be edited during integration.


Deployment environment clean up script that deletes libvirt resources and ovs bridges


The entry point into Apex-specific build steps. It is responsible for orchestrating the build process, the RPM build, and the ISO build.


The undercloud disk image build script. This is where the undercloud's disk image is customized and prepared.


The overcloud disk image build scripts. This is where the overcloud disk images are customized and prepared.

Apex is built in a few stages and tasks summarized here:

  • download RDO Project undercloud and overcloud disk images
  • modify downloaded disk images
  • build RPM with configuration, scripts and disk images
  • build CentOS ISO for offline deployment

RPM packaging and ISO build in the Makefile

Once the image build has run and dumped to disk all the files needed for the build, git archive is run, which generates the initial tarball the RPM will build from. During the build, files are generated that need to be included in the tarball so that the RPM has access to them for packaging. The tarball is appended with the necessary files for final packaging and deployment. rpmbuild is then executed to roll the RPM, which includes the configuration files, deployment scripts, and disk images necessary for deployment.

Once the RPM is built, the CentOS installation DVD is downloaded and unpacked, the Apex RPM is added, the installation configuration files for the OS are updated, and the ISO is rebuilt with the updated contents.

Apex Code Overview

During the Euphrates release the Apex code base was migrated from bash to Python3 and Ansible to perform the TripleO deployment.  This allows us to provide better code stability with Python unit testing and better error traceability, makes the code less prone to syntax errors, and Ansible provides a more robust way to connect to nodes and configure them.  The Python code for Apex is stored as a Python package under the Apex repo in the 'apex' subdirectory.  The '' file there is used to execute the deployment, and when the package is installed (via `pip3 install .`, `python3 install`, or installing the python34-opnfv-apex RPM) the file is installed into /bin as 'opnfv-deploy' using setuptools.

Settings parsing, validation, and deployment logic are handled by the Apex Python code.  Once the Python logic has finished parsing information for the deployment and configuring host dependencies (such as creating a libvirt VM for the undercloud, firewall rules, and IPMI configuration), it runs Ansible playbooks located in apex/lib/ansible/playbooks.  Each playbook is labeled with a different purpose; for example, configure_undercloud.yml is responsible for setting up and installing the undercloud VM.  If an Ansible task fails during deployment, a Python traceback will be displayed and, above it, a detailed log of the Ansible run.  From that Ansible output it should be possible to determine which play failed.

When contributing new code to Apex to add a feature, please try to add unit tests in the apex/tests/ directory to cover your code.  Also, keep in mind that the goal should be to use Ansible only as a configuration tool.  Deployment logic and parsing should be done in Python, not in Jinja2 with Ansible.  When writing Ansible, try to use Ansible modules that already exist rather than calling shell to do configuration.  For example, instead of calling "systemctl start openvswitch", use the systemd Ansible module instead.
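As an illustration of the kind of test that belongs under apex/tests/, here is a minimal pytest-style sketch. The helper function and setting names are hypothetical, not actual Apex APIs:

```python
# Hypothetical parsing helper and its unit test; real Apex tests
# exercise the actual modules under the apex package.

def parse_sdn_controller(deploy_settings):
    """Return the configured SDN controller name, or None if unset."""
    options = deploy_settings.get('deploy_options', {})
    controller = options.get('sdn_controller')
    return controller if controller else None


def test_parse_sdn_controller():
    settings = {'deploy_options': {'sdn_controller': 'opendaylight'}}
    assert parse_sdn_controller(settings) == 'opendaylight'
    assert parse_sdn_controller({}) is None
```

Tests written in this style are picked up automatically when running tox as shown below.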

Before uploading your patch to the Apex gerrit, please execute unit tests.  To do this:

tox -e pep8

tox -e py34

Apex Feature Integration

The RDO Project uses TripleO as the OpenStack installation platform that Apex is built on. Patches submitted must meet one of two criteria in order to be submitted for merge:

  1. update the Apex- and/or OPNFV-specific contents of the project
  2. intent to update an upstream project used by Apex
    1. patches for upstream projects can be carried temporarily in the Apex code base as long as there is an active effort to merge the patch into the upstream project.

TripleO Code Integration and Patches

In order to integrate your feature into Apex, you must create a patch for TripleO and/or RDO code. TripleO uses heat templates with puppet configuration providers as its method to configure OpenStack. The TripleO sub-project for heat templates is called tripleo-heat-templates (THT), while another repo, puppet-tripleo, is used to hold the puppet manifests driven by THT.  The RDO project is used to hold the packages of the different OpenStack components as RPMs.  To see a full list of currently built packages in RDO, go to: and click on the repo for the latest build.

OPNFV Releases are based on stable OpenStack versions that are already feature-frozen when they are integrated into OPNFV. To expose features to OPNFV we must maintain an incubation fork of THT and puppet-tripleo while working to get the features upstream. These forks are maintained at and  Each fork has a branch corresponding to the current OPNFV release (such as stable/euphrates).  Note: previous forks held in github are deprecated and no longer used.  apex-tripleo-heat-templates is inserted into the undercloud image at build time via apex/build/, while apex-puppet-tripleo is written to the overcloud image itself using apex/build/.  In order to carry RDO (packaging) related changes which do not exist in the released version of OpenStack that OPNFV is using, the necessary packages can be added to the Apex build process.  The RPM packages may be built and hosted on a site, and then pulled at build time by Apex.  These packages are then virt-customized (uploaded) into the undercloud or overcloud image at build time.  An example of this can be seen here:

When creating a commit to Apex which requires changes in apex-tripleo-heat-templates or apex-puppet-tripleo, it is required to post patches in OPNFV gerrit for each project to the appropriate branch.  Neither of these repos has verification jobs; however, verification is accomplished when you post a patch to the Apex repo. To do this, submit an Apex patch with the following text in the commit message:

apex-tripleo-heat-templates: (Change ID of patch)

apex-puppet-tripleo: (Change ID of patch)

If your patch has no content for Apex repo itself, only for one of the other repos, then add a line to ci/PR_revision.log so that the Apex patch is not empty and is accepted by gerrit.

As previously mentioned, we can accommodate applying that patch via the Apex project until the code is merged upstream. At the time of this writing we do this for the FDIO/VPP feature. An example commit for tripleo-heat-templates providing FDIO installation, which makes use of the special tags in the commit message, can be found here. The key files to modify include the yaml-based heat templates, as well as the puppet modules.

TripleO Deployment Overview

TripleO deployment can be broken down into multiple pieces.  At its most basic, TripleO consists of an Undercloud and an Overcloud.  The Undercloud runs as a VM in Apex and is used to install the Overcloud.  The Overcloud is the target OpenStack system you want to deploy, such as an OPNFV setup.  The TripleO deployment is initiated from the Undercloud by the command 'openstack overcloud deploy', which takes a number of arguments.  The most common argument used here is '-e', which adds an additional yaml file to dictate what to deploy.  An example:

openstack overcloud deploy --templates  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight-l3-dpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e network-environment.yaml --compute-scale 2 --ntp-server --control-flavor control --compute-flavor compute -e virtual-environment.yaml -e opnfv-environment.yaml --timeout 90

When this command is issued, the following functionality is triggered:

  1. Heat parameters (think of them as deployment variables) are parsed and stored to be written to each Overcloud node
  2. Roles (such as Compute/Controller) are parsed, along with the composable services that are going to run on them (like Neutron Server, Nova Scheduler, etc.)
  3. Heat asks Nova to create the servers to be used for Overcloud nodes
  4. Nova uses nodes defined in Ironic (virtual or baremetal) and initiates bootloading those nodes (using pxeboot).  During this phase the configured overcloud-full.qcow2 image on the Undercloud is copied to each node as an iSCSI target.
  5. Nova runs cloud-init, and then declares the nodes as 'active' to Heat.
  6. Heat waits for the node to send a request for configuration, which Heat will distribute to the node.  This tool that is running on the nodes to request config is called os-collect-config.  A full description of how os-collect-config and its counterparts work can be found here:
  7. After receiving config, the node will run os-net-config, which will configure each of its network interfaces
  8. Next, the node will write any of the previously parsed Heat Parameters to /etc/puppet/hieradata/<file>.yaml.  Note these variables are stored in 'key: value' yaml format and rely on Puppet's built-in method of resolving variables via hiera lookups when applying Puppet configuration.  For more info on how hiera works with puppet:
  9. A hiera variable called 'step' will be assigned a value (starting with 1) which controls which parts of the deployment are configured.
  10. The node will execute a puppet manifest provided by the Undercloud.  This puppet manifest makes calls to puppet-tripleo (or apex-puppet-tripleo in Apex's case), which will execute puppet configuration limited to only the configuration allowed for the appropriate 'step' as assigned in item 9.
  11. Items 9 and 10 are repeated, with 'step' incremented each time, until step 5 is complete.  Note: step 4 is where the majority of services are configured.
  12. Post node validation is done, and 'overcloudrc' is created in the Undercloud /home/stack/ directory which can be used as the credentials for accessing the Overcloud.
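The stepwise configuration loop (items 9-11 above) can be modeled with a short sketch. The service names and the steps they configure at are illustrative, not the real THT step assignments:

```python
# Toy model of the step loop: each service declares the step at which
# it configures, and the same puppet run is repeated with the hiera
# 'step' value incremented from 1 to 5. Step assignments are made up
# for illustration.
SERVICES = {
    'keepalived': 1,
    'rabbitmq': 2,
    'keystone': 3,
    'neutron-server': 4,   # most services configure at step 4
    'fencing': 5,
}


def run_deployment(max_step=5):
    configured = []
    for step in range(1, max_step + 1):      # hiera 'step' value
        for service, min_step in SERVICES.items():
            if step == min_step:             # gate on the current step
                configured.append((step, service))
    return configured
```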

TripleO Code and Key Integration Areas

The key drivers of the deployment are setting Heat Parameters and Enabling Services, resulting in those services being configured during the deployment.  This methodology starts at THT and then is driven through puppet-tripleo and other puppet repos.  The environment file (specified with the '-e' argument) is used to control which Heat Parameters and Services shall be enabled on a per deployment basis.

Defining a Service in THT means creating a heat template file under tripleo-heat-templates/puppet/services.  Examine the services already present in  Looking at any of these files will reveal three key sections: 'parameters', 'resources', and 'outputs'.  'parameters' is where Heat Parameters are defined to control service settings; for example, the default tenant network type in Neutron ML2.  'resources' defines heat resources to be created.  'outputs' defines what this heat template will output when it is created; in our case the relevant part is 'role_data'.  The 'role_data' consists of 2 key pieces: 'config_settings' and 'step_config'.  This is how puppet-related deployment configuration is indicated to THT.  'config_settings' is used to declare hiera variables and assign them values.  Note: you must indicate the full puppet scope of the variable.  For example:

neutron::keystone::authtoken::password: {get_param: NeutronPassword}

The above indicates that the puppet-neutron 'password' variable located in manifests/keystone/authtoken.pp should be set to the value given by the Heat Parameter NeutronPassword in the puppet/services/neutron-api.yaml THT template.  'step_config' indicates which puppet manifest should be applied when deploying with this heat template.  For example:

step_config: |

        include tripleo::profile::base::neutron::server

The above indicates that the puppet-tripleo profile/base/neutron/server.pp manifest should be used to configure this Service.  This is how puppet-tripleo is triggered by THT.
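A small sketch of the hiera mechanism described above: config_settings end up as flat 'key: value' pairs on the node, and puppet resolves a class parameter by its fully-scoped key. The values below are illustrative:

```python
# Minimal model of a hiera lookup resolving a fully-scoped puppet
# class parameter; values are made up for illustration.
hieradata = {
    'step': 4,
    'neutron::keystone::authtoken::password': 'secret',
}


def hiera(key, default=None):
    """Return the value for a hiera key, or a default if unset."""
    return hieradata.get(key, default)


# puppet-neutron's manifests/keystone/authtoken.pp would receive its
# 'password' parameter via a lookup of the fully-scoped key:
password = hiera('neutron::keystone::authtoken::password')
```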

The key puppet-tripleo manifests are found under

These puppet manifests are used as an intermediate puppet manifest to control when configuration will be applied based on the 'step' of the deployment.  For example:

class tripleo::profile::base::neutron::opendaylight (
  $step         = hiera('step'),
  $primary_node = hiera('bootstrap_nodeid', undef),
) {
  if $step >= 1 {
    # Configure ODL only on first node of the role where this service is
    # applied
    if $primary_node == downcase($::hostname) {
      include ::opendaylight
    }
  }
}

The above manifest in puppet-tripleo is used to control when the opendaylight manifest should be executed, in this case during or after step 1.  Notice that the '$step' parameter to this class is set to the hiera lookup value for 'step'.  Note: puppet is idempotent, so running it multiple times should always resolve to the same state of the host.  Usually each puppet-tripleo manifest will trigger a call to another puppet manifest specific to each service.  For example, puppet-tripleo based calls for neutron will typically end up calling the puppet-neutron project.  Therefore when adding a new THT service and a corresponding puppet-tripleo change with Apex, you will more than likely need to either update an existing puppet-<service> repo, or create a new puppet-<service> repo to do configuration of your new service.  In Apex, we can also include patch files to update puppet-<service> code that is not upstream, or even add puppet-<service> repos that do not yet exist upstream if needed during our build process.

Key Code Areas when adding a new Service

In addition, when adding a new service a few other files need to be modified.  In THT, the service needs to be registered in the resource registry and defaulted to None:

Also, the service should be mapped by default to the Controller or Compute role as necessary:

In Apex, we override the default roles with OPNFV specific services per Controller or Compute role, so it also needs to be changed here:

Furthermore, the Service needs to be mapped by default to the correct THT network (usually internal_api) as done here:

Enabling a new Service in a Deployment

As previously mentioned, enabling new services or configuration on a per deployment basis is done via the '-e' argument to the deploy command.  This argument can be specified more than once, and indicates a yaml environment file to be used with the deployment.  Example environment files can be found under:

These files will typically have 'resource_registry', 'parameters', or 'parameter_defaults' sections.  For example:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None
  OS::TripleO::Services::OpenDaylightApi: ../puppet/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: 'odl-router_v2,trunk'

'resource_registry' is used to enable or disable Services.  You can see in the above example that NeutronL3Agent is being disabled, while OpenDaylight (with overlapping functionality) is being enabled.  The 'parameter_defaults' section is where you can define values for Heat Parameters.  Note that 'parameter_defaults' will override the default in any THT template that matches that variable name.  The 'parameters' section (missing from this example) is used to pass parameter values (which do not change parameter defaults) for specific Heat Parameters that are inherited by each Service heat template via the parent overcloud.j2.yaml file.  These parameters are limited, and usually you should not add new parameters to overcloud.yaml.
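The merge semantics of multiple '-e' files can be sketched as follows: environments are applied in order, later files override earlier ones, and mapping a service to OS::Heat::None disables it. This is a simplified model, not Heat's actual implementation:

```python
# Simplified model of how '-e' environment files combine; real Heat
# environment processing has more features than this.
def merge_environments(env_files):
    """Merge environment files in order; later files win."""
    registry, params = {}, {}
    for env in env_files:
        registry.update(env.get('resource_registry', {}))
        params.update(env.get('parameter_defaults', {}))
    return registry, params


base = {'resource_registry': {
    'OS::TripleO::Services::NeutronL3Agent':
        'puppet/services/neutron-l3.yaml'}}
odl = {'resource_registry': {
    'OS::TripleO::Services::NeutronL3Agent': 'OS::Heat::None',
    'OS::TripleO::Services::OpenDaylightApi':
        'puppet/services/opendaylight-api.yaml'},
    'parameter_defaults': {'NeutronMechanismDrivers': 'opendaylight_v2'}}

registry, params = merge_environments([base, odl])
# A service mapped to OS::Heat::None is disabled
enabled = [s for s, t in registry.items() if t != 'OS::Heat::None']
```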

In Apex, we use configuration "deploy settings files" which contain a simple yaml format to trigger certain environment files to be applied to a deployment.  For example:

This file indicates that opendaylight is the SDN controller and DVR (sdn_l3) should be used.  When Apex parses these settings, it decides which environment files to apply using this library:

Therefore, if you add a new service to Apex, it is usually necessary to introduce a new setting to trigger your service to be deployed.  Note that settings are first parsed by a Python library in Apex, where you will also need to add your setting:
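To illustrate the idea (not the actual Apex library), a deploy-settings-to-environment-files mapping might look like:

```python
# Hypothetical sketch of mapping deploy settings to '-e' environment
# files; the real logic lives in the Apex python library referenced
# above, and these file names are illustrative.
def env_files_for(deploy_options):
    envs = []
    if deploy_options.get('sdn_controller') == 'opendaylight':
        if deploy_options.get('sdn_l3'):
            envs.append('environments/neutron-opendaylight-l3.yaml')
        else:
            envs.append('environments/neutron-opendaylight.yaml')
    return envs
```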

Upstream RDO Integration

To get your package upstream in RDO, there are several steps that need to be followed.  Please reference the RDO guidelines:
