There are several projects dealing with integration and testing (see https://wiki.opnfv.org/).
A global overview can be described as follows:
See the Testing Ecosystem page for details on the elaboration of the figure.
We consider the projects referenced on the wiki main page:
|Project||Scope|
|CPerf||Controller performance testing|
|Bottlenecks||Detect bottlenecks in the OPNFV solution|
|Functest||VIM and NFVI functional testing; umbrella project for functional testing|
|Qtip||Platform performance benchmarking|
|StorPerf||Storage performance testing|
|VSPERF||Data-plane performance testing|
|Yardstick||Verification of the infrastructure compliance when running VNF applications; umbrella project for performance testing|
|Dovetail||Test OPNFV validation criteria for use of OPNFV trademarks|
All the test projects are closely connected to additional projects:
Agreed during the testing meeting on 1 December 2016: http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2016/opnfv-testperf.2016-12-01-15.00.html
- weekly: generic test full
- weekly: Tempest and Rally full (OpenStack)
- Yardstick Tier 2 (should be daily): VSPERF and StorPerf Lite test cases (after they integrate with Yardstick)
- daily: feature tests (Doctor, Promise, bgpvpn, security_scan, ...)
- daily: Yardstick Tier 1 (HA, IPv6, SFC, KVM, ...)
- daily: vPing, Tempest smoke, Rally sanity
- daily: Yardstick Tier 0 (Lite generic test)
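The agreed schedule above can be sketched as plain data, so that a CI trigger script can pick the jobs to launch for a given frequency. This is an illustrative sketch only: the job names and the `jobs_for` helper are assumptions for the example, not actual OPNFV CI job identifiers.

```python
# Hypothetical sketch: the agreed scheduling expressed as data.
# Job names mirror the list above but are illustrative, not real job IDs.
SCHEDULE = {
    "daily": [
        "yardstick-tier0",   # Lite generic test
        "vping", "tempest-smoke", "rally-sanity",
        "yardstick-tier1",   # HA, IPv6, SFC, KVM, ...
        "feature-tests",     # Doctor, Promise, bgpvpn, security_scan, ...
        "yardstick-tier2",   # VSPERF / StorPerf Lite test cases
    ],
    "weekly": [
        "generic-test-full",
        "tempest-full", "rally-full",
    ],
}

def jobs_for(frequency):
    """Return the CI jobs to trigger for the given frequency."""
    return SCHEDULE.get(frequency, [])
```

A daily trigger would then simply iterate over `jobs_for("daily")`; unknown frequencies yield an empty list rather than an error.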
Test cases can be developed by the test projects and/or by the feature projects.
Feature projects are responsible for their own tests as well as the associated troubleshooting.
Test projects are in charge of running their own test cases and of helping feature projects integrate into CI in order to meet the test criteria for the release.
Test coverage is not an easy task. Test projects must focus on the NFVI; however, in order to test the NFVI efficiently, it may be useful to test VNFs (which are out of the OPNFV scope).
We may suggest several views:
In addition to tags and tiers, it is possible to specify a domain in the test case definition.
It would be interesting to agree on the different domains in order to leverage this information for a domain map.
Several domains can be associated with a single test case, e.g. orchestration and vnf for cloudify_ims.
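Such a domain map could be derived by inverting the test case definitions. The sketch below assumes each definition carries `name` and `domains` fields; these field names and the `vping` domains are assumptions for illustration, not a confirmed OPNFV schema (only the cloudify_ims domains come from the text above).

```python
from collections import defaultdict

# Assumed test case definitions; only cloudify_ims's domains are from the text.
TEST_CASES = [
    {"name": "cloudify_ims", "tier": 2, "domains": ["orchestration", "vnf"]},
    {"name": "vping", "tier": 0, "domains": ["compute", "network"]},
]

def build_domain_map(test_cases):
    """Map each domain to the test cases that cover it."""
    domain_map = defaultdict(list)
    for case in test_cases:
        for domain in case.get("domains", []):
            domain_map[domain].append(case["name"])
    return dict(domain_map)
```

Running `build_domain_map(TEST_CASES)` groups test cases per domain, which is the kind of view a domain map would present.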
See the dedicated test coverage page.
Test dashboards shall give a good overview of the different tests on the different PODs. The test dashboards keep evolving along with releases with diverse options and rich enhancements.
For Brahmaputra, 3 dashboards have been created:
In Colorado, the target dashboards are:
Basically, if you are graphing test status, the first option is recommended. If you need to graph results as a function of time for longer-duration tests and to visualize the evolution of bandwidth, latency, etc., InfluxDB/Grafana is recommended.
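As an illustration of the second option, a long-duration result such as latency over time has to reach InfluxDB as individual data points; InfluxDB's line protocol encodes each point as measurement, tags, fields, and a timestamp. The measurement and tag names below are made up for the example, as is the helper function itself.

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one data point using InfluxDB's line protocol:
    measurement,tag1=v1,... field1=v1,... timestamp
    Sorted keys keep the output deterministic."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

point = to_line_protocol(
    "latency",  # hypothetical measurement name
    {"pod": "pod1", "scenario": "os-nosdn-nofeature-ha"},
    {"value": 1.42},
    1480603200000000000,
)
# -> "latency,pod=pod1,scenario=os-nosdn-nofeature-ha value=1.42 1480603200000000000"
```

Grafana can then graph such a series directly as results = f(time), which is exactly the long-duration view mentioned above.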
Contact the test working group for any question.