
Apex Developer Build & Integration Guide

Apex is an installation tool that deploys according to OPNFV requirements using TripleO from the RDO Project. This document describes the process of building Apex and integrating new features into Apex.

To get a good overview of the terms that describe an Apex deployment and the target architecture of what is deployed, please review the Apex Installation Instructions.

Building and Deploying Apex for Development

These instructions are valid for the master branch.

Building and Deploying Apex for development describes how to build and deploy directly out of the code repository so that you can iterate quickly on building and deploying. If you are interested in deploying OPNFV Apex on a system to evaluate or use, please follow the Installation Documentation released with our official builds, or skip to the section "Building Apex to produce Packages" to use installable packages.

# Do this all as root
 
# First setup the build environment
 
yum groupinstall 'Development Tools'
yum groupinstall 'Virtualization Host'
# The line below installs RDO Ocata for the Euphrates release
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-2.noarch.rpm
yum install -y python2-virtualbmc
# We also need the epel repository
yum install -y epel-release
yum install -y python-devel python-setuptools libguestfs-tools python2-oslo-config python2-debtcollector python-pip openssl-devel bsdtar createrepo
# Needed for overcloud-opendaylight build:
yum install -y libxml2-devel libxslt-devel python34-devel python34-pip
pip3 install gitpython
pip3 install pygerrit2
yum update -y
reboot
wget http://artifacts.opnfv.org/apex/dependencies/python34-markupsafe-0.23-9.el7.centos.x86_64.rpm
wget http://artifacts.opnfv.org/apex/dependencies/python3-jinja2-2.8-5.el7.centos.noarch.rpm
wget http://artifacts.opnfv.org/apex/dependencies/python3-ipmi-0.3.0-1.noarch.rpm
yum install -y python34-markupsafe-*.rpm python3-jinja2-*.rpm python3-ipmi-*.rpm
cd ~
git clone https://gerrit.opnfv.org/gerrit/p/apex.git

# Next Build the disk images
cd apex/build
make clean
make undercloud
make overcloud-opendaylight # just makes vanilla ODL image
# use `make images` for all images or look at the Make targets if you need a specific target
 
# Now execute a deployment
 
export BASE=../build
export LIB=../lib
export IMAGES=../.build/
cd ../ci
./dev_dep_check.sh
./clean.sh
./deploy.sh -v -n ../config/network/network_settings.yaml -d ../config/deploy/{{deploy_settings}}.yaml 

 

The previous commands assume root user privileges. If you build as a non-privileged user, you need to run the last three commands with sudo and pass the environment variables to the commands like this:

 

sudo LIB=../lib ./clean.sh
sudo ./dev_dep_check.sh
sudo BASE=../build IMAGES=../.build LIB=../lib PYTHONPATH=$PYTHONPATH:../lib/python ./deploy.sh -v -n ~/apex/config/network/network_settings.yaml -d ~/apex/config/deploy/{{deploy_settings}}.yaml 
  • exports

    • By default Apex looks in system directories for the resources it needs to deploy; BASE, IMAGES and LIB point Apex at the git tree to get them instead.
    • There is a Python 3 apex module that needs to be importable for deploy.sh.
    • Note that since deploy.sh is executed with sudo, PYTHONPATH, BASE, IMAGES and LIB must be included in the env_keep variable in the sudoers file. If deploy is executed as root without sudo this is not necessary.
    • An alternative to exporting is to pass the variables between sudo and deploy.sh: "sudo BASE=value IMAGES=value LIB=value PYTHONPATH=value ./deploy.sh -a val -b val". This does not require the env_keep variable in the sudoers file.
  • clean.sh:

    • Run this before each deploy; it ensures that the undercloud machine and the network settings on the jump host are ready for a new deployment.

  • dev_dep_check.sh

    • This script ensures that the deploy dependencies are installed; it only needs to be run once. These dependencies are usually installed via RPM packaging; use this script if you are deploying directly from the git repo.
  • deploy.sh:

    • -v means virtual

    • -d and -n are settings files that are discussed more in the installation instructions documentation.

    • {{ deploy_settings }} can be replaced by one of the files in apex/config/deploy/

network_settings.yaml can be left as it is for a virtual deployment; modifications according to the installation documentation are also an option.
Look through the catalog of files in apex/config/deploy to choose the flavor of deployment you would like. There is a collection of SDN controllers and features that can be enabled or swapped out. You can also just edit the stock deploy_settings.yaml if you would like.
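
For example, from the ci directory you could list the available flavors and then run a basic virtual OpenDaylight deployment (the exact file names vary between releases, so check the directory first):

ls ../config/deploy/
./deploy.sh -v -n ../config/network/network_settings.yaml -d ../config/deploy/os-odl_l3-nofeature-noha.yaml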

Building Apex from a Gerrit Review

git fetch https://gerrit.opnfv.org/gerrit/apex refs/changes/43/14743/8 && git checkout FETCH_HEAD

If you would like to build from a specific review in Gerrit that has not been merged yet, you will need to run a command like this in your git repo right after cloning it. Once you have checked out the review, cd into build and run the make commands to rebuild the images.

This command can be copied from gerrit to your clipboard. To do this:

  1. Open the Review in Gerrit
  2. Click the download menu button in the upper right hand corner of the review's page
  3. Click the clipboard icon to the right of the "Checkout" git command
  4. Paste from your clipboard into your terminal in the apex git repo directory and you will have a command like the example above for the review you would like to build.

 

Building Apex to produce Packages

Building Apex to produce packages will produce a set of RPMs that should be installed following the Installation Instructions distributed with OPNFV Apex.

git clone https://gerrit.opnfv.org/gerrit/p/apex.git
cd apex/ci
./build.sh -r {user-defined-release-version} -c file:///{cache-storage-location} [ --rpms | --iso ]

--iso will build the RPMs and the ISO; --rpms will build just the RPMs and not the ISO.

RPMs will be put in the build/noarch directory.

{user-defined-release-version} is an identifier compatible with RPM release numbering. It is used both in the RPM release value and to name the ISO. The Apex-defined version is the primary version; this user-defined release version is a secondary version that you can use for your own purposes. If you don't have a need for it, just set it to 1.

{cache-storage-location} is where the 7-10 GB cache file will be stored between builds.
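
For example, an RPM-only build that keeps its cache under your home directory could look like this (the cache path is only an example):

cd ~/apex/ci
./build.sh -r 1 -c file:///home/$USER/apex_cache --rpms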

Build process and development details

There is a collection of files executed during build that are helpful to understand. Not all files used in the build are listed here; these are the primary files.

ci/build.sh

A boilerplate entry point for the build. This file most likely does not need to be edited during integration.

ci/clean.sh

Deployment environment clean-up script that deletes libvirt resources and OVS bridges.

build/Makefile

The entry point into the Apex-specific build steps. It is responsible for orchestrating the build process, the RPM build and the ISO build.

build/undercloud.sh

The undercloud disk image build script. This is where the undercloud's disk image is customized and prepared.

build/overcloud*.sh

The overcloud disk image build scripts. This is where the overcloud disk images are customized and prepared.

Apex is built in a few stages and tasks, summarized here:

  • download RDO Project undercloud and overcloud disk images
  • modify downloaded disk images
  • build RPM with configuration, scripts and disk images
  • build CentOS ISO for offline deployment

RPM packaging and ISO build in the Makefile

Once the image build has run and has dumped all the files to disk that are needed to build, git archive is run, which generates the initial tarball the RPM will build from. During the build, files are generated that need to be included in the tarball so that the RPM has access to them for packaging, so the tarball is appended with the necessary files for final packaging and deployment. rpmbuild is then executed to roll the RPM that includes the configuration files, deployment scripts and disk images necessary for deployment.

Once the RPM is built, the CentOS installation DVD is downloaded and extracted, the Apex RPM is added, the OS installation configuration files are updated, and the ISO is rebuilt with the updated contents.
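
The flow above can be sketched roughly as follows; the spec file name, paths and image names are illustrative, and the real targets live in build/Makefile:

# illustrative outline only; the real logic lives in build/Makefile
git archive --format=tar --prefix=apex-1.0/ HEAD > apex.tar              # snapshot the git tree
tar -rf apex.tar .build/undercloud.qcow2 .build/overcloud-full.qcow2     # append build-time artifacts
gzip -f apex.tar                                                         # produces the tarball rpmbuild consumes
rpmbuild -ba apex.spec --define "_sourcedir $(pwd)"                      # roll the RPMs
# ISO: unpack the CentOS DVD tree, drop in the Apex RPMs, then regenerate the ISO
genisoimage -o apex.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table -J -R -V "OPNFV Apex" centos-dvd-tree/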

Apex Feature Integration

Apex is built on TripleO, the OpenStack installation platform from the RDO Project. Patches must fit one of two criteria in order to be submitted for merge:

  1. update the Apex and/or OPNFV-specific contents of the project
  2. intent to update an upstream project used by Apex
    1. patches for upstream projects can be carried temporarily in the Apex code base as long as there is active effort for the patch to be merged into the upstream project.

TripleO Code Integration and Patches

In order to integrate your feature into Apex, you must create a patch for TripleO and/or RDO code. TripleO uses heat templates with puppet configuration providers as its method of configuring OpenStack. The TripleO sub-project for heat templates is called tripleo-heat-templates (THT), while another repo, puppet-tripleo, holds the puppet manifests driven by THT.  The RDO project holds the packages (RPMs) of the different OpenStack components.  To see a full list of currently built packages in RDO, go to https://trunk.rdoproject.org/centos7/report.html and click on the repo for the latest build.

OPNFV Releases are based on stable OpenStack versions that are already feature-frozen when they are integrated into OPNFV. To expose features to OPNFV we must maintain an incubation fork of THT and puppet-tripleo while working to get the features upstream. These forks are maintained at https://github.com/trozet/opnfv-tht and https://github.com/trozet/opnfv-puppet-tripleo.  Each fork has a branch corresponding to the current OPNFV release (such as stable/danube).  opnfv-tht is inserted into the undercloud image at build time via apex/build/undercloud.sh, while opnfv-puppet-tripleo is written to the overcloud image using apex/build/overcloud-full.sh.  In order to carry RDO (packaging) related changes which do not exist in the released version of OpenStack that OPNFV is using, the necessary packages can be added to the Apex build process.  The RPM packages may be built and hosted on a site, and then pulled at build time by Apex.  These packages are then virt-customized (uploaded) into the undercloud or overcloud image at build time.  An example of this can be seen here: https://gerrit.opnfv.org/gerrit/#/c/27785/
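
For example, uploading a locally hosted RPM into the overcloud image during build can be done along these lines; the URL and package name are placeholders, and the real logic lives in build/overcloud-full.sh:

# illustrative only: pull a pre-built RPM and install it into the overcloud image
curl -O http://artifacts.example.org/rpms/python-networking-odl.rpm
LIBGUESTFS_BACKEND=direct virt-customize \
    --upload python-networking-odl.rpm:/root/ \
    --run-command "yum install -y /root/python-networking-odl.rpm" \
    -a overcloud-full.qcow2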

When creating a commit to Apex which requires changes in opnfv-tht or opnfv-puppet-tripleo, you must make a separate Pull Request (PR) in GitHub to the appropriate branch.  Pull requests must also be accompanied by a patch to the Apex gerrit referencing the pull request number, so that CI testing will be triggered. This is done by first creating the pull request, then submitting an Apex patch with the following text in the commit message:

opnfv-tht-pr: (pull request number here)

opnfv-puppet-tripleo-pr: (PR # here)

If your patch has no content for the Apex repo itself, only for opnfv-tht, then add a line to ci/PR_revision.log so that the Apex patch is not empty and is accepted by gerrit.
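
A commit message using these tags might look like the following; the PR numbers here are placeholders for whatever pull requests you opened:

Add feature X support to overcloud deployment

Carries the corresponding opnfv-tht and opnfv-puppet-tripleo changes
so that CI can test them together.

opnfv-tht-pr: 99
opnfv-puppet-tripleo-pr: 12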

As previously mentioned, we can accommodate applying such a patch via the Apex project until the code is merged upstream. At the time of this writing we do this for the FDIO/VPP feature. An example commit for tripleo-heat-templates providing FDIO installation can be found here: https://gerrit.opnfv.org/gerrit/#/c/28335/, which makes use of the special PR tags in the commit message. The key files to modify include the yaml-based heat templates, as well as the puppet modules.

TripleO Deployment Overview

TripleO deployment can be broken down into multiple pieces.  At its most basic, TripleO consists of an Undercloud and an Overcloud.  The Undercloud is run as a VM in Apex, and is used to install the Overcloud.  The Overcloud is the target OpenStack system you want to deploy, such as an OPNFV setup.  The TripleO deployment is then initiated by a command from the Undercloud, 'openstack overcloud deploy', which takes a number of arguments.  The most common argument used here is '-e', which adds an additional yaml file to dictate what to deploy.  An example:

openstack overcloud deploy --templates  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-opendaylight-l3-dpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e network-environment.yaml --compute-scale 2 --ntp-server 0.se.pool.ntp.org --control-flavor control --compute-flavor compute -e virtual-environment.yaml -e opnfv-environment.yaml --timeout 90

When this command is issued, the following functionality is triggered:

  1. Heat parameters (think of them as deployment variables) are parsed and stored, to be written to each Overcloud node
  2. Roles (such as Compute/Controller) are parsed, along with the composable services that are going to run on them (like Neutron Server, Nova Scheduler, etc.)
  3. Heat asks Nova to create the servers to be used for Overcloud nodes
  4. Nova uses nodes defined in Ironic (virtual or baremetal) and initiates booting those nodes (using PXE boot).  During this phase the configured overcloud-full.qcow2 image on the Undercloud is copied to each node as an iSCSI target.
  5. Nova runs cloud-init, and then declares the nodes as 'active' to Heat.
  6. Heat waits for the node to send a request for configuration, which Heat will distribute to the node.  The tool that runs on the nodes to request config is called os-collect-config.  A full description of how os-collect-config and its counterparts work can be found here: https://fatmin.com/2016/02/23/openstack-heat-and-os-collect-config/
  7. After receiving config, the node will run os-net-config, which will configure each of its network interfaces
  8. Next, the node will write the Heat Parameters which were previously parsed as variables to /etc/puppet/hieradata/<file>.yaml.  Note these variables are stored in 'key: value' yaml format and rely on Puppet's built-in method of resolving variables via hiera lookups when applying Puppet configuration (see the hieradata sketch after this list).  For more info on how hiera works with puppet: https://docs.puppet.com/hiera/3.3/puppet.html
  9. A hiera variable called 'step' will be assigned a value (starting with 1) which controls which parts of the deployment are configured.
  10. The node will execute a puppet manifest that was provided by the Undercloud.  This puppet manifest will make calls to puppet-tripleo (or opnfv-puppet-tripleo in the Apex case), which will execute puppet configuration limited to only the configuration allowed for the appropriate 'step' as set in step 9.
  11. Steps 9 and 10 are repeated until deployment step 5 is complete.  Note: step 4 is where the majority of services are configured.
  12. Post-node validation is done, and 'overcloudrc' is created in the Undercloud /home/stack/ directory, which can be used as the credentials for accessing the Overcloud.
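
As a rough illustration of the hieradata format written to the nodes in step 8, a file under /etc/puppet/hieradata/ contains plain 'key: value' pairs like the following (the keys shown are only examples; the actual content depends on the services being deployed):

step: 1
bootstrap_nodeid: 'overcloud-controller-0'
neutron::keystone::authtoken::password: 'example-password'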

TripleO Code and Key Integration Areas

The key drivers of the deployment are setting Heat Parameters and Enabling Services, resulting in those services being configured during the deployment.  This methodology starts at THT and then is driven through puppet-tripleo and other puppet repos.  The environment file (specified with the '-e' argument) is used to control which Heat Parameters and Services shall be enabled on a per deployment basis.

Defining a Service in THT means creating a heat template file under tripleo-heat-templates/puppet/services.  Examine the services already present in https://github.com/openstack/tripleo-heat-templates/tree/master/puppet/services.  Looking at any of these files will reveal three key sections: 'parameters', 'resources', and 'outputs'.  'parameters' is where Heat Parameters are defined to control service settings, for example the default tenant network type in Neutron ML2.  'resources' defines heat resources to be created.  'outputs' defines what this heat template will output when it is created; in our case the relevant part is 'role_data'.  The 'role_data' consists of two key pieces: 'config_settings' and 'step_config'.  This is how puppet-related deployment configuration is indicated to THT.  'config_settings' is used to declare hiera variables and assign them values.  Note: you must indicate the full puppet scope of the variable.  For example:

neutron::keystone::authtoken::password: {get_param: NeutronPassword}

The above indicates that the puppet-neutron 'password' variable located in manifests/keystone/authtoken.pp should be set to the value given by the Heat Parameter NeutronPassword in the puppet/services/neutron-api.yaml THT template.  'step_config' indicates which puppet manifest should be applied when deploying with this heat template.  For example:

step_config: |

        include tripleo::profile::base::neutron::server

The above references that the puppet-tripleo profile/base/neutron/server.pp manifest should be used to configure this Service.  This is how puppet-tripleo is triggered by THT.
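
Putting these pieces together, a minimal service template could look like the sketch below. This is an illustration of the structure only, not a copy of an upstream file, and the parameter and service names are assumed for the example:

heat_template_version: ocata

parameters:
  NeutronPassword:
    description: The password for the neutron service account
    type: string
    hidden: true

outputs:
  role_data:
    description: Role data for the Neutron Server service
    value:
      service_name: neutron_api
      config_settings:
        neutron::keystone::authtoken::password: {get_param: NeutronPassword}
      step_config: |
        include tripleo::profile::base::neutron::server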

The key puppet-tripleo manifests are found under https://github.com/openstack/puppet-tripleo/tree/master/manifests/profile/base

These puppet manifests act as an intermediate layer that controls when configuration will be applied, based on the 'step' of the deployment.  For example:

class tripleo::profile::base::neutron::opendaylight (
  $step         = hiera('step'),
  $primary_node = hiera('bootstrap_nodeid', undef),
) {
  if $step >= 1 {
    # Configure ODL only on first node of the role where this service is
    # applied
    if $primary_node == downcase($::hostname) {
      include ::opendaylight
    }
  }
}

The above manifest in puppet-tripleo is used to control when the opendaylight manifest should be executed, in this case during or after step 1.  Notice the '$step' parameter to this class is being set to the hiera lookup value for 'step'.  Note: puppet is idempotent, so running multiple times should always resolve to the same state of the host.  Usually each puppet-tripleo manifest will trigger a call to another puppet manifest, specific to each service.  For example, puppet-tripleo based calls for neutron will typically end up calling the puppet-neutron project.  Therefore when adding a new THT service, and corresponding puppet-tripleo change with Apex, you will more than likely need to either update an existing puppet-<service> repo, or create a new puppet-<service> repo to do configuration of your new service.  In Apex, we can also include patch files to update puppet-<service> code that is not upstream, or even add puppet-<service> repos that do not exist yet upstream if needed during our build process.
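
For reference, the '::opendaylight' class included above lives in a separate puppet-opendaylight module. A stripped-down sketch of what such a puppet-<service> class can look like is shown below; the parameter names and resources are illustrative of the pattern, not copied from the real module:

# illustrative puppet-<service> class: install and run the service
class opendaylight (
  $odl_rest_port  = '8081',
  $extra_features = [],
) {
  package { 'opendaylight':
    ensure => installed,
  }

  service { 'opendaylight':
    ensure  => running,
    enable  => true,
    require => Package['opendaylight'],
  }
}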

Key Code Areas when adding a new Service

In addition, when adding a new service, a few other files need to be modified (an illustrative sketch of the required entries follows the links below).  In THT, the service needs to be registered in the resource registry and defaulted to None:

https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.j2.yaml

Also, the service should be mapped by default to the Controller or Compute role as necessary:

https://github.com/openstack/tripleo-heat-templates/blob/master/roles_data.yaml

In Apex, we override the default roles with OPNFV specific services per Controller or Compute role, so it also needs to be changed here:

https://github.com/opnfv/apex/blob/master/build/opnfv-environment.yaml

Furthermore, the Service needs to be mapped by default to the correct THT network (usually internal_api) as done here:

https://github.com/openstack/tripleo-heat-templates/blob/master/network/service_net_map.j2.yaml
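
As an illustration, the entries for a hypothetical new service (called 'MyService' here purely for the example) in the files above would look roughly like this:

# overcloud-resource-registry-puppet.j2.yaml: register the service, disabled by default
OS::TripleO::Services::MyService: OS::Heat::None

# roles_data.yaml (and build/opnfv-environment.yaml in Apex): add it to the role's service list
- OS::TripleO::Services::MyService

# network/service_net_map.j2.yaml: map the service to a network
MyServiceNetwork: internal_api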

Enabling a new Service in a Deployment

As previously mentioned, enabling new services or configuration on a per deployment basis is done via the '-e' argument to the deploy command.  This argument can be specified more than once, and indicates a yaml environment file to be used with the deployment.  Example environment files can be found under: https://github.com/openstack/tripleo-heat-templates/tree/master/environments

These files will typically have 'resource_registry', 'parameters', or 'parameter_defaults' sections.  For example:

# A Heat environment that can be used to deploy OpenDaylight with L3 DVR
resource_registry:
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None
  OS::TripleO::Services::OpenDaylightApi: ../puppet/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None

parameter_defaults:
  NeutronEnableForceMetadata: true
  NeutronMechanismDrivers: 'opendaylight_v2'
  NeutronServicePlugins: 'odl-router_v2,trunk'
'resource_registry' is used to enable or disable Services.  You can see in the above example that NeutronL3Agent is being disabled, while OpenDaylight (with overlapping functionality) is being enabled.  The 'parameter_defaults' section is where you can define values for Heat Parameters.  Note that 'parameter_defaults' will override the default of any matching parameter name in any template that exists in THT.  The 'parameters' section (missing from this example) is used to pass values (which do not change parameter defaults) for specific Heat Parameters that are inherited by each Service heat template via the parent overcloud.j2.yaml file.  These parameters are limited, and usually you should not add new parameters to overcloud.yaml.

In Apex, we use "deploy settings" files, which contain a simple yaml format to trigger certain environment files to be applied to a deployment.  For example:

https://github.com/opnfv/apex/blob/master/config/deploy/os-odl_l3-nofeature-noha.yaml
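
A trimmed-down sketch of what such a deploy settings file contains is shown below; the exact keys vary between releases, so treat this as illustrative only:

global_params:
  ha_enabled: false

deploy_options:
  sdn_controller: opendaylight
  sdn_l3: true
  sfc: false
  vpn: false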

This file indicates that opendaylight is the SDN controller and DVR (sdn_l3) should be used.  When Apex parses these settings, it decides which environment files to apply using this library:

https://github.com/opnfv/apex/blob/master/lib/overcloud-deploy-functions.sh
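
Conceptually, the mapping from a deploy setting to an extra '-e' environment file in that script looks something like the simplified bash sketch below (variable and file names are illustrative, not the exact upstream code):

# simplified sketch: turn a parsed deploy option into an extra -e argument
if [ "${deploy_options_array['sdn_controller']}" == 'opendaylight' ]; then
  DEPLOY_OPTIONS+=" -e ${ENV_FILE_DIR}/neutron-opendaylight-l3.yaml"
fi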

Therefore if you add a new service to Apex, it is usually necessary to introduce a new setting to trigger your service to be deployed.  Note settings are first parsed by a python library in Apex, where you will also need to add your setting:

https://github.com/opnfv/apex/blob/master/lib/python/apex/deploy_settings.py
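
A new setting typically needs to be added to the list of recognized deploy options and given a sensible default. The Python sketch below shows the general pattern only; the names are illustrative and not a verbatim copy of deploy_settings.py:

# simplified sketch of how a new deploy option could be recognized and defaulted
REQ_DEPLOY_SETTINGS = ['sdn_controller', 'sdn_l3', 'my_new_feature']


def validate_deploy_options(deploy_options):
    # Fill in a disabled default for any recognized option the settings file omits
    for setting in REQ_DEPLOY_SETTINGS:
        deploy_options.setdefault(setting, False)
    return deploy_options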

Upstream RDO Integration

To get your package upstream in RDO, there are several steps that need to be followed.  Please reference the RDO packaging guidelines: https://www.rdoproject.org/documentation/rdo-packaging/#how-to-add-a-new-package-to-rdo-trunk
