
Introduction

WORK IN PROGRESS

SOME STUFF ON THIS PAGE WILL END UP IN CHILD PAGES

CI in Short

Continuous Integration (CI) is a software development practice that aims to build, integrate, and test software frequently - several times a day. CI makes it possible to catch faults early and often, and to provide working software to all involved parties frequently.

  • CM101
  • Catch faults early and often - and fix them as quickly as possible: fail fast, fix fast
  • Reproducibility: we must be able to reproduce what we release as part of OPNFV
  • Repeatability: we must be able to repeatedly demonstrate what we do
  • Traceability: we must be able to show what went into a certain artifact
  • Visibility: everything we do must be visible to the community at large and to our users
  • Speed: we must be able to deliver quickly
  • IT WORKS ON MY MACHINE!!!
  • CI vs test relation: why we can't and should not run all the test cases of all the OPNFV test projects in one go

OPNFV CI Overview

 

The check and post-merge activities shown in dashed boxes in the diagram differ between projects based on the nature of the project.

For example, installer projects may have build and virtual deploy steps but no unit tests, while feature projects may have unit tests but no virtual deployment.

CI for Platform vs CI for Projects

Describe:

  • Platform CI: running deployment and testing of different scenarios of the OPNFV platform
  • Project CI: project commit gating, daily build and test jobs, and so on

What Constitutes OPNFV CI

Automation and Jenkins Jobs

Daily job structures and naming are aligned across the installers compass, fuel, and joid. Here is how the jobs are structured and named.

Job Structure for the Jobs Running on CI PODs & Used for Release Purposes

Please see the diagram below for the current Jenkins job structure used by the installers compass, fuel, and joid.

Job Naming Scheme for the Jobs Running on CI PODs & Used for Release Purposes

  • Scenario Parent Jobs (upstream jobs):
    • There is one job per installer/scenario/deployment type/branch, which triggers and controls the downstream deploy, functest, and yardstick jobs.
    • The naming scheme for the scenario parent/upstream jobs is: {installer}-{scenario}-{deployment type}-{loop}-{branch} (example job names are given after this list)
    • Possible values/examples for the fields in the above job naming scheme are
      • {installer}: compass or fuel or joid
      • {scenario}: see the Colorado scenarios from this link and the scenario naming scheme from this link.
      • {deployment type}: baremetal or virtual
      • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
      • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
  • Downstream jobs
    • The downstream deploy, functest, and yardstick jobs are common to all the scenarios for a given installer and branch.
    • Deployment Jobs:
      • The naming scheme for the deploy/downstream jobs is: {installer}-deploy-{deployment type}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {deployment type}: baremetal or virtual
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
    • Functest Jobs
      • The naming scheme for the functest/downstream jobs is: functest-{installer}-{deployment type}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {deployment type}: baremetal or virtual
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
    • Yardstick Jobs
      • The naming scheme for the yardstick/downstream jobs is: yardstick-{installer}-{deployment type}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {deployment type}: baremetal or virtual
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
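
For illustration, assuming the example scenario os-nosdn-nofeature-ha (one scenario name only; see the scenario links above for the full list), the fuel baremetal daily jobs on master would be named:

  • fuel-os-nosdn-nofeature-ha-baremetal-daily-master (scenario parent job)
  • fuel-deploy-baremetal-daily-master (downstream deploy job)
  • functest-fuel-baremetal-daily-master (downstream functest job)
  • yardstick-fuel-baremetal-daily-master (downstream yardstick job)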

Job Structure for the Jobs Running on Non-CI PODs & Used for Development Purposes

Please see the diagram below for the current Jenkins job structure used by the installers compass, fuel, and joid. The structure is the same as for the jobs running on CI PODs.

Job Naming Scheme for the Jobs Running on Non-CI PODs & Used for Development Purposes

The only difference between the jobs running on CI and non-CI PODs is that the jobs running on non-CI PODs have specific non-CI POD names instead of deployment types.

  • Scenario Parent Jobs (upstream jobs):
    • There is one job per installer/scenario/POD/branch, which triggers and controls the downstream deploy, functest, and yardstick jobs.
    • The naming scheme for the scenario parent/upstream jobs is: {installer}-{scenario}-{pod}-{loop}-{branch} (example job names are given after this list)
    • Possible values/examples for the fields in the above job naming scheme are
      • {installer}: compass or fuel or joid
      • {scenario}: see the Colorado scenarios from this link and the scenario naming scheme from this link.
      • {pod}: virtual or baremetal (or one of the non-CI PODs if the jobs are created for non-production purposes)
      • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
      • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
  • Downstream jobs
    • The downstream deploy, functest, and yardstick jobs are common to all the scenarios for a given installer and branch.
    • Deployment Jobs:
      • The naming scheme for the deploy/downstream jobs is: {installer}-deploy-{pod}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {pod}: one of the non-CI PODs
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
    • Functest Jobs
      • The naming scheme for the functest/downstream jobs is: functest-{installer}-{pod}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {pod}: one of the non-CI PODs
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
    • Yardstick Jobs
      • The naming scheme for the yardstick/downstream jobs is: yardstick-{installer}-{pod}-{loop}-{branch}
      • Possible values for the fields in the above job naming scheme are
        • {installer}: compass or fuel or joid
        • {pod}: one of the non-CI PODs
        • {loop}: daily or weekly (weekly will probably become available after Colorado is out)
        • {branch}: master or colorado (colorado will disappear/be replaced by the D-release name)
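
For illustration, assuming a non-CI POD named ericsson-pod2 (a hypothetical POD name used only for illustration) and the same example scenario as above, the fuel jobs would be named:

  • fuel-os-nosdn-nofeature-ha-ericsson-pod2-daily-master (scenario parent job)
  • fuel-deploy-ericsson-pod2-daily-master (downstream deploy job)
  • functest-fuel-ericsson-pod2-daily-master (downstream functest job)
  • yardstick-fuel-ericsson-pod2-daily-master (downstream yardstick job)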

Artifact Repository and Reporting

 

Hardware Resources

 

  • What is a CI resource
    • A CI resource is a resource dedicated to running CI activities 24x7 (project unit tests, builds, platform virtual deployments, bare metal deployments, testing)
  • What requirements OPNFV Infra places on CI resources
    • Fulfills the Pharos Community Lab SLAs - Brahmaputra Specification
    • Dedicated to CI - no project development work is allowed on them
    • Connected to Jenkins using official OPNFV utilities
    • 24x7 availability
    • Strict configuration management
    • No manual interaction except CI troubleshooting
    • Internet access/access to online OPNFV resources
    • 1 week advance notice if the resource needs to be taken into maintenance
  • What are the types of CI resources OPNFV has
  • How can I see the current list of CI resources

How resources can be declared and approved as CI resources

    • Bring the subject to the OPNFV Infra WG with information about the resources, including which project(s) will run on them
    • Once the Infra WG approves, the resource will be included in the CI pool and monitored for 2 weeks to determine whether it meets the requirements.
    • After the 2-week period, the resource will be promoted to a CI resource.

What is the difference between CI and non-CI resources

    • Non-CI resources do not have the same SLA requirements.
    • Manual work is possible on non-CI resources for development purposes.
    • These resources can still be connected to OPNFV Jenkins to automate development work, but none of the CI jobs will run on them.
    • The resources can be taken down without advance notice.
    • The important thing to keep in mind is that if a non-CI resource is connected to Jenkins and is offline for more than 2 weeks, it will be removed from Jenkins in order to keep the resource situation on Jenkins up to date.

Build Servers

OPNFV CI has a number of build servers to support unit test, build, and document generation jobs. These build servers have Ubuntu 16.04 and CentOS 7 installed on them, and the jobs are configured to run on servers with the appropriate operating system.

Setting up CentOS 7 Based Build Servers

These servers are used for ovsnfv builds.

  • Install CentOS 7
  • Create the jenkins user
  • Connect the slave to OPNFV Jenkins using one of the supported methods (if the slave is in the LF lab, use SSH slaves; if not, use JNLP)
  • Install the dependencies
    • sudo yum install -y epel-release rpm-build kernel-headers libpcap-devel zlib-devel numactl-devel doxygen python-sphinx libvirt-devel python-devel openssl-devel python-six net-tools bc
  • Install the "Development Tools" group
    • sudo yum group install -y "Development Tools"
  • Install pip
  • Enable artifact upload by installing gsutil using pip; get the Google Storage credentials from OPNFV Helpdesk (a quick verification example follows this list)
    • sudo pip install gsutil
    • gsutil config
  • Please note that additional configuration is needed to build ovsnfv; this needs to be checked with the project.
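
A quick way to verify the gsutil setup is to copy a scratch file to and from the artifact repository; the bucket path below is illustrative and should be replaced with a location you have access to:

  • echo "gsutil test" > /tmp/gsutil-test.txt
  • gsutil cp /tmp/gsutil-test.txt gs://artifacts.opnfv.org/TMP/gsutil-test.txt
  • gsutil rm gs://artifacts.opnfv.org/TMP/gsutil-test.txt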

Setting up Ubuntu 16.04 Based Build Servers

These servers are used for fuel, compass, parser, yardstick, and kvmfornfv builds/unit tests, and for building the docker images of test projects (functest, yardstick, etc.).

  • Install Ubuntu 16.04
  • Create the jenkins user
  • Install the dependencies
    • sudo apt-get install -y git build-essential curl wget rpm fuseiso createrepo genisoimage libfuse-dev dh-autoreconf pkg-config zlib1g-dev libglib2.0-dev libpixman-1-dev docker python-virtualenv python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev bc qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils monit openjdk-8-jre-headless python-nose dirmngr collectd flex bison libnuma-dev shellcheck python-pip
  • Install docker
  • Configure Docker Hub credentials
  • Make the jenkins user a member of the docker group, log out and back in, start/enable the docker service, and verify the docker installation
    • sudo usermod -aG docker jenkins
    • su jenkins
    • docker run hello-world
  • Install Python packages
    • sudo pip install tox
  • Enable artifact upload by installing gsutil using pip; get the Google Storage credentials from OPNFV Helpdesk
    • sudo pip install gsutil
    • gsutil config
  • Connect the slave to OPNFV Jenkins using one of the supported methods (if the slave is in the LF lab, use SSH slaves; if not, use JNLP as described on this document; see the sketch below)
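
As a sketch of the JNLP method, assuming the OPNFV Jenkins master at build.opnfv.org/ci: download the agent JAR from the master and start it with the node name and secret shown on the node's page in Jenkins (the name and secret below are placeholders):

  • curl -O https://build.opnfv.org/ci/jnlpJars/slave.jar
  • java -jar slave.jar -jnlpUrl https://build.opnfv.org/ci/computer/<slave-name>/slave-agent.jnlp -secret <secret>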

Onboarding New Projects to OPNFV CI

Documentation will include information regarding

  • how to onboard a new feature project to OPNFV CI: the need to get integrated with installers
  • how to onboard a new installer project to OPNFV CI
  • how to onboard a new test project to OPNFV CI

 

Static Check Tools of CI

 Background:

  • Open source code written in differing styles and of uneven quality makes the code harder to read, which in turn hinders the promotion of open source projects.
  • Numerous low-level problems waste committers' time and leave them less capacity for checking logical errors.
  • The current CI commit gate is simple: only some projects run a flake8 check. It would be better to apply more checks to more projects. Contact the Infra team on IRC #opnfv-octopus if you have questions on which tools are installed.

 Tools:

  • flake8 (Python)
    • Introduction: The pep8 script simply checks your code against PEP 8 and warns you about inconsistencies, while the pyflakes script reads your code and warns you about common sources of errors. Flake8 brings pep8 and pyflakes together in one convenient standalone package; you don't even need separate installations of pep8 or pyflakes, since those come baked into Flake8. On top of these scripts, Flake8 also adds:
      • A Mercurial commit hook.
      • A way to exempt files or lines from being checked.
      • An optional cyclomatic complexity checker.
    • Features: PEP8, autopep8
  • pylint (Python)
    • Introduction: Pylint is a tool that checks for errors in Python code, tries to enforce a coding standard, and looks for code smells. It can also look for certain type errors, recommend how particular blocks can be refactored, and offer details about the code's complexity. The default coding style used by Pylint is close to PEP 8.
    • Features: PEP8, normative inspection of variable names
  • shellcheck (shell)
    • Introduction: ShellCheck is a GPLv3 tool that gives warnings and suggestions for bash/sh shell scripts. The goals of ShellCheck are:
      • To point out and clarify typical beginner's syntax issues that cause a shell to give cryptic error messages.
      • To point out and clarify typical intermediate-level semantic problems that cause a shell to behave strangely and counter-intuitively.
      • To point out subtle caveats, corner cases and pitfalls that may cause an advanced user's otherwise working script to fail under future circumstances.
  • yamllint (YAML)
    • Introduction: yamllint does not only check for syntax validity, but also for weirdnesses like key repetition and cosmetic problems such as line length, trailing spaces, indentation, etc.
  • pclint (C/C++)
    • Introduction: PC-lint is a powerful static analysis tool that checks your C/C++ source code and finds bugs, glitches, inconsistencies, non-portable constructs, redundant code, and much more. It looks across multiple modules and can therefore address more issues than a compiler. PC-lint supports the MISRA (Motor Industry Software Reliability Association) standards; MISRA is a collaboration between vehicle manufacturers, component suppliers and engineering consultancies with the goal of promoting best practice in developing safety-related electronic systems in road vehicles.
    • Features (among the many capabilities of PC-lint):
      • Detection of dangling and uninitialised pointers
      • Variable initialisation/value tracking
      • Variable scoping checks
      • Detection of type mismatches and suspicious casts
      • Checking of assignment operator and copy constructor behaviour
      • Detection of potential memory leaks
      • Analysis of thread behaviour (new in PC-lint 9.0)
      • MISRA C/C++ rule validation
  • cppcheck (C/C++)
    • Introduction: Cppcheck is a static analysis tool for C/C++ code. Unlike C/C++ compilers and many other analysis tools, it does not detect syntax errors in the code; Cppcheck primarily detects the types of bugs that compilers normally do not detect. The goal is to detect only real errors in the code (i.e. have zero false positives). Check items include:
      1. Automatic (local) variable checks
      2. Array bounds checks
      3. Class checks
      4. Checks for calls to deprecated or obsolete functions
      5. Checks for abnormal memory usage and release
      6. Memory leak checks, mainly by following memory reference pointers
      7. Checks that operating system resources are released: interrupts, file descriptors, and so on
      8. Checks for exceptional usage of STL functions
      9. Checks for incorrect code formatting and performance factors
    • Note: does not detect syntax errors
  • anteater-fw (Python, C/C++, Java, Perl)
    • Introduction: A security code lint checker and a system to prevent the inclusion of binaries and secrets in OPNFV repos.
    • Features: finds low-level security bugs and prevents check-in of private keys, binaries and blacklisted files.
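
Most of these tools can be run locally before pushing a change. Typical invocations with default options are shown below (the exact flags used by the CI jobs may differ; the paths are placeholders):

  • flake8 path/to/module.py
  • pylint path/to/module.py
  • shellcheck path/to/script.sh
  • yamllint path/to/file.yaml
  • cppcheck --enable=all path/to/src/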