
OPNFV CI overview

There are two types of Jenkins jobs running on OPNFV CI:

  • Daily job
    Runs daily to verify that a branch is working.
    Typical flow: the Jenkins master dispatches a job to a Jenkins slave (a pod) according to a schedule → the latest code is cloned and run on the slave pod → test cases are run to verify that everything works as expected.
    We need to define what the jobs will actually do:
    • install OpenStack using one of the installers
    • install ONAP, either on top of OpenStack or on a separate target linked to the cloud. Right now the first option seems more feasible.
    • run tests that use ONAP to deploy and manage VNFs
    The way to look at the CI integration is to consider ONAP as a feature in an OPNFV scenario.

  • Verify job
    Runs when a patch is committed to Gerrit, to verify that the patch will not cause any failure after it is merged to master.
    This can range from simple code examination up to a daily-job-level verification.
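The two job types differ mainly in their trigger: a timer for the daily job, a Gerrit event for the verify job. A minimal JJB (Jenkins Job Builder) sketch illustrating this follows; the job names, node label, and script paths are hypothetical placeholders, not actual Auto repo content:

```yaml
# Hypothetical JJB fragment: a time-triggered daily job and a
# Gerrit-triggered verify job (all names and paths are illustrative).
- job:
    name: 'auto-daily-master'
    node: 'auto-arm-pod'            # label of the Jenkins slave (pod)
    triggers:
      - timed: 'H 2 * * *'          # cron syntax: run once a day
    builders:
      - shell: './ci/daily.sh'      # placeholder script in the Auto repo

- job:
    name: 'auto-verify-master'
    node: 'auto-arm-pod'
    triggers:
      - gerrit:                     # fire on each new patchset in Gerrit
          trigger-on:
            - patchset-created-event
          projects:
            - project-compare-type: 'ANT'
              project-pattern: 'auto'
              branches:
                - branch-compare-type: 'ANT'
                  branch-pattern: '**/master'
    builders:
      - shell: './ci/verify.sh'     # placeholder verification script
```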

Requirements to build Jenkins job:

  • A dedicated pod (virtual or bare-metal; could reuse an existing community pod or add a new pod to the community)
  • A working Auto workflow with a corresponding test flow (Functest or Yardstick)
  • A Jenkins job for Auto in the releng project (a CI pipeline consisting of both deployment and testing)

After meeting on May 15th 2018 (Cristina, Joe, Gerard), steps to follow:

  1. identify permanent server/pod resources which will host the Jenkins slave: Auto will use the Arm pod at UNH (jump server IP@
  2. identify list of Auto programs, scripts, tests, etc. to be executed by Jenkins jobs;
    (will start with OPNFV Fuel/MCP installation of OpenStack, and Python script configuring ONAP public and private networks in OpenStack instance)
    (later: install ONAP, onboard/deploy VNFs, configure ONAP, install and start traffic generators, install Auto test cases (for all Auto Use Cases), run test cases)
  3. connect Auto to OPNFV Jenkins:
    (1-time operation, but not trivial: 15 steps, back and forth with OPNFV admins).
    This will result in a Jenkins slave being installed on the jump server. The Jenkins slave will then be available to run jobs dispatched by Jenkins master, based on per-project pipelines specified in YAML files stored in the releng repository
  4. prepare labels in the server (to group jobs/slaves)
  5. prepare one or several YAML files describing Jenkins pipelines: specify work to do, triggers/frequency (time of the week/day, ...), environment, OPNFV scenarios, project repos to download (not just Auto), dependencies, etc.
    (this is where Auto programs to be executed are referenced: basically pointers to executable files (.sh, .py, .exe., ...) in the Auto repository)
    examples from Armband: (ci-jobs, verify-jobs)
    example from vswitchperf: 
  6. upload these YAML files to releng repo using Gerrit for releng project (in jjb/auto folder)
  7. verify results on Jenkins GUI; if errors are found, make changes to Auto programs, scripts, tests, etc., and upload to Auto repo (usual Gerrit-based merging process); if file names are not modified, no need to update anything in the YAML files.
  8. ongoing updates: whenever Auto programs, scripts, tests, etc. file names or locations are updated, or if new ones are created, then update corresponding YAML files in releng jjb/auto folder; the next time the Jenkins master dispatches jobs to Jenkins slaves, the Auto Jenkins slave will receive the updated pipelines to execute.
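The steps above can be sketched as a single YAML file in the releng jjb/auto folder; everything below (job name, node label, file names) is a hypothetical placeholder, not actual Auto repo content:

```yaml
# Hypothetical jjb/auto YAML sketch: the builders point at executables
# kept in the Auto repository (placeholder names throughout).
- job:
    name: 'auto-fuel-daily-master'
    node: 'auto-arm-pod'              # label prepared on the UNH Arm pod (step 4)
    triggers:
      - timed: 'H H * * *'            # daily, with a hash-spread start time
    builders:
      - shell: |
          # step 2a: OPNFV Fuel/MCP installation of OpenStack (placeholder)
          ./ci/deploy-openstack-fuel.sh
      - shell: |
          # step 2b: configure ONAP public/private networks (placeholder)
          python ./setup/onap_networks.py
    publishers:
      - archive:
          artifacts: 'results/**'     # keep results visible in the Jenkins GUI (step 7)
```

Because the YAML only references executables by name, step 8 follows naturally: the YAML needs updating only when file names or locations change, not when file contents change.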

Changeset references:

IDF and PDF files for Auto pod in UNH lab: Pharos changeset

Auto CI jobs setup: RELENG changeset

See also the OPNFV Octopus project and generic Jenkins information.

JJB (Jenkins Job Builder): job definitions in YAML or JSON; supports templates
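Templates are what make JJB scale: one job-template plus a project definition generates one concrete job per listed value. A generic illustration (project and scenario names hypothetical):

```yaml
# Generic JJB template illustration: '{...}' variables are filled in
# from the project definition, producing one job per value.
- job-template:
    name: 'auto-{scenario}-daily-{stream}'
    builders:
      - shell: 'echo "deploy scenario {scenario} on branch {stream}"'

- project:
    name: 'auto-ci'
    stream:
      - 'master'
      - 'gambia'
    scenario:
      - 'os-nosdn-onap-noha'
    jobs:
      - 'auto-{scenario}-daily-{stream}'
```

This sketch would generate two jobs, one per stream, without duplicating the job definition.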

Summary of Jenkins and JJB concepts

More details on JJB YAML file(s):

JJB Builders:

JJB Publishers:

JJB Wrappers:

JJB Triggers:

Linux Foundation Releng Global-JJB: 
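The four JJB concepts listed above each map to a section of a job definition. A combined sketch (values illustrative; exact parameter names for some wrappers vary across JJB versions):

```yaml
# Illustration of where each JJB concept lives in a job definition.
- job:
    name: 'jjb-concepts-example'
    wrappers:
      - timeout:                      # wrapper: wraps the build, e.g. abort runaway builds
          timeout: 120                # minutes
          fail: true
    triggers:
      - timed: '@midnight'            # trigger: decides when the job starts
    builders:
      - shell: 'echo "build step"'    # builder: the work the job performs
    publishers:
      - email:                        # publisher: post-build actions (reports, notifications)
          recipients: 'auto-ci@example.org'
```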

Illustration of Auto parameters

The diagram below illustrates parameters for Auto (different options for hardware, cloud, OPNFV scenarios, pod sizing, traffic volume and type, VNF types, ONAP deployment options, ONAP configurations, individual Auto test case scenarios, etc.).

This will lead to several options for each step of Auto CI, and therefore multiple runs: one for each combination of parameters. If running all combinations proves too much for daily CI, most will be configured to run separately (outside of standard OPNFV CI), and only a few key combinations and unitary tests will remain in daily CI executions.
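In JJB terms, each parameter axis can be a template variable; listing several values per axis makes JJB generate one job per combination, which is why the job count grows multiplicatively. A hypothetical illustration (installer and scenario names are placeholders):

```yaml
# Hypothetical illustration: 2 installers x 2 scenarios = 4 generated jobs.
- project:
    name: 'auto-combinations'
    installer:
      - 'fuel'
      - 'joid'
    scenario:
      - 'os-nosdn-onap-ha'
      - 'os-nosdn-onap-noha'
    jobs:
      - 'auto-{installer}-{scenario}-daily'

- job-template:
    name: 'auto-{installer}-{scenario}-daily'
    builders:
      - shell: 'echo "run {scenario} with {installer}"'
```

Trimming the value lists in the project definition is then the natural way to keep only the key combinations in daily CI.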

Ultimately, with a large number of use cases and test cases (and infrastructure options), a complete Auto run would take a long time to complete, but would enable interesting analysis of results, comparisons, interpretations, fine-tuning, etc. Some execution parallelization should be possible (infrastructure setups in parallel on multiple clusters, with test cases running sequentially within a given infrastructure setup).

Installers used by Auto: status summary

(table of what has been tried so far, not what is supported)

Note: JOID is not in Gambia (link)

| installer | Ubuntu / CentOS | OpenStack / Kubernetes | HA / NOHA | installation completes on blank server? | can uninstall? | OpenStack CLI works? | OpenStack GUI works? | ONAP-prep script works? | ONAP installs on VMs? | ONAP GUI works? | ONAP APIs work? |
| JOID | Ubuntu | OS | noha | N (almost Y) | | | | | | | |
| Apex/TripleO | CentOS | OS | noha | N (almost Y) | Y | Y | | | | | |

(empty cells: not yet tried)

