

Introduction

Testing is still a key challenge for OPNFV.

All projects must manage their own test strategy (unit, functional, security, and performance).

Several specific test projects have been validated by the TSC and already deal with:

  • Defining test cases
  • Performing tests not covered by a single project
  • Creating tooling
  • Studying end-to-end performance

OPNFV Test ecosystem

There are several projects dealing with integration and testing (see https://wiki.opnfv.org/).

Overview

A global overview can be described as follows:

See Testing Ecosystem details for the elaboration of the figure.

Project details

We consider the projects referenced on the wiki main page:

Project name | Scope
CPerf        | Controller performance testing
Bottlenecks  | Detect bottlenecks in the OPNFV solution
Functest     | VIM and NFVI functional testing; umbrella project for functional testing
QTIP         | Platform performance benchmarking
Storperf     | Storage performance testing
VSperf       | Data-plane performance testing
Yardstick    | Verification of infrastructure compliance when running VNF applications; umbrella project for performance testing
Dovetail     | Test OPNFV validation criteria for use of OPNFV trademarks

All the test projects are closely connected to additional projects:

  • Pharos: The Pharos project deals with the creation of a federated NFV test capability that is geographically and technically diverse and hosted by different companies in the OPNFV community. This requires developing a baseline specification for an OPNFV "compliant" test environment along with tools, processes and documentation to support integration, testing and collaborative development projects with needed infrastructure and the tooling.
  • Releng: release engineering, which deals with git/gerrit and Jenkins management

Tiers

Agreed during the testing meeting on 1 December 2016: http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2016/opnfv-testperf.2016-12-01-15.00.html

Level | Category    | Functest                                           | Yardstick                                                                                                             | VSPerf | Storperf | Bottleneck | QTIP
8     | Other       |                                                    |                                                                                                                       |        |          | X          | X
7     | In Service  |                                                    |                                                                                                                       |        |          |            |
6     | Stress      |                                                    |                                                                                                                       |        |          |            |
5     | VNF         | weekly: vIMS                                       |                                                                                                                       |        |          |            |
4     | Performance | N.R.                                               | weekly: generic test full                                                                                             | X      | X        |            |
3     | Components  | weekly: Tempest and Rally full (OpenStack)         | daily: Yardstick Tier 2 (should be daily); VSperf and Storperf Lite test cases (after they integrate with Yardstick)  |        |          |            |
2     | Features    | daily: Doctor, Promise, bgpvpn, security_scan, ... | daily: Yardstick Tier 1 (HA, IPv6, SFC, KVM, ...)                                                                     |        |          |            |
1     | Smoke       | daily: vPing, Tempest smoke, Rally sanity          | daily: Yardstick Tier 0 (Lite generic test)                                                                           |        |          |            |
0     | Healthcheck | gating                                             | N.R.                                                                                                                  |        |          |            |
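
As a purely illustrative sketch of how a CI loop could use these levels, the Python snippet below selects the test cases to run up to a given tier. The TestCase structure, the registry content and the tier assignments are hypothetical; they do not correspond to an actual Functest or Yardstick API.

    # Hypothetical sketch: selecting test cases by tier level (0 = healthcheck,
    # 8 = other, as in the table above). All names below are illustrative only.
    from collections import namedtuple

    TestCase = namedtuple("TestCase", ["name", "project", "tier"])

    REGISTRY = [
        TestCase("healthcheck", "functest", 0),
        TestCase("vping_ssh", "functest", 1),
        TestCase("tempest_smoke_serial", "functest", 1),
        TestCase("doctor-notification", "doctor", 2),
        TestCase("tempest_full_parallel", "functest", 3),
        TestCase("cloudify_ims", "functest", 5),
    ]

    def select(max_tier):
        """Return the test cases a loop would run up to (and including) max_tier."""
        return [tc for tc in REGISTRY if tc.tier <= max_tier]

    # e.g. a daily loop stopping at the Features tier (2) in this sketch
    for tc in select(2):
        print("daily loop would run: %s/%s" % (tc.project, tc.name))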

Troubleshooting

Test cases can be developed by the test projects and/or by the feature projects. 

Feature projects are responsible for their own tests as well as the associated troubleshooting.

Test projects are in charge of running their own test cases and of helping feature projects integrate into CI to meet the test criteria for the release.

Test coverage

Defining the test coverage is not an easy task. Test projects must focus on the NFVI; however, in order to test the NFVI efficiently, it may be useful to test VNFs (which are out of OPNFV scope).
We may suggest several views:

  • per domain
  • per component
  • per ETSI domain

In addition to tags and tiers, it is possible to specify a domain in the test case definition.

It would be interesting to agree on the different domains in order to leverage such information for a domain map.

It is possible to associate several domains with a test case, e.g. orchestration and vnf for cloudify_ims.
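
As a minimal sketch of what such a definition could carry, the dictionary below attaches two domains to the cloudify_ims case. The field names ("domains", "tier", "tags") are assumptions for illustration, not the actual test DB schema.

    # Hypothetical test case declaration with domain tags. The field names
    # ("domains", "tier", "tags") are illustrative, not the actual DB schema.
    import json

    cloudify_ims = {
        "name": "cloudify_ims",
        "project_name": "functest",
        "tier": 5,                            # VNF tier (see table above)
        "tags": ["vnf", "weekly"],
        "domains": ["orchestration", "vnf"],  # several domains per test case
        "description": "vIMS deployed through an orchestrator",
    }

    print(json.dumps(cloudify_ims, indent=2))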

Domain        | Functest | Yardstick | VSPerf | Storperf | Bottleneck | QTIP
compute       |          |           |        |          |            |
orchestration |          |           |        |          |            |
networking    |          |           |        |          |            |
storage       |          |           |        |          |            |
vim           | X        |           |        |          |            |
vnf           | X        |           |        |          |            |

See the dedicated testing coverage page.

Test Dashboards

Test dashboards shall give a good overview of the different tests on the different PODs.

For Brahmaputra, 3 dashboards have been created:

In Colorado, the target dashboards are:

  • ELK (Elasticsearch/Logstash/Kibana), as an evolution of the Functest home-made JavaScript dashboard
  • a Grafana portal using datasets pushed into InfluxDB

Basically, if you are graphing test status, the first option is recommended. If you need to graph results as a function of time for longer-duration tests and to visualize the evolution of bandwidth, latency, etc., InfluxDB/Grafana is recommended.
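
As an example, here is a minimal sketch of pushing one time-series sample into InfluxDB with the influxdb Python client, so that it can later be graphed over time in Grafana. The host, database, measurement and tag names are placeholders, not the actual OPNFV configuration.

    # Minimal sketch: pushing a latency/bandwidth sample into InfluxDB so it
    # can be graphed over time in Grafana. Host, database, measurement and
    # tag values are placeholders, not the actual OPNFV settings.
    from influxdb import InfluxDBClient  # pip install influxdb

    client = InfluxDBClient(host="influxdb.example.org", port=8086,
                            database="opnfv_results")

    points = [{
        "measurement": "storperf_latency",
        "tags": {"pod": "example-pod", "scenario": "os-nosdn-nofeature-ha"},
        "fields": {"latency_ms": 2.3, "bandwidth_mbps": 950.0},
    }]

    client.write_points(points)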

Contact the test working group for any questions.

How to get support from test projects?

  1. Contact the testing group
    1. weekly meeting
    2. mail: test-wg AT lists.opnfv.org
    3. IRC: #opnfv-testperf
  2. Declare your project in the test DB: http://testresults.opnfv.org/testapi/test_projects
  3. Declare your test cases in the DB: http://testresults.opnfv.org/testapi/test_projects/doctor/cases (a sketch of steps 2 and 3 follows this list)
  4. Provide your constraints (scenarios/installers), e.g. Doctor => Apex only
  5. Provide your test success criteria, e.g. the Doctor final status should be PASS
  6. Develop the test code, e.g. https://git.opnfv.org/cgit/doctor/tree/tests
  7. Create JIRA tickets in Functest/Yardstick/xPerf for integration
  8. Work with the test team on integration (CI pipeline, dashboard, …)
  9. Troubleshoot on the different scenarios
  10. Document your tests
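
Below is a hedged sketch of steps 2 and 3 using the Python requests library against the TestAPI endpoints listed above. The payload fields and any write-access requirements are assumptions to be confirmed with the testing group, not a documented schema.

    # Hedged sketch of declaring a project and a test case through the TestAPI
    # endpoints listed above. Payload fields ("name", "description") and any
    # authentication requirements are assumptions, not a documented schema.
    import requests

    TESTAPI = "http://testresults.opnfv.org/testapi"

    # step 2: declare the project
    requests.post(TESTAPI + "/test_projects",
                  json={"name": "doctor",
                        "description": "fault management project"})

    # step 3: declare one of its test cases
    requests.post(TESTAPI + "/test_projects/doctor/cases",
                  json={"name": "doctor-notification",
                        "description": "fault notification time measurement"})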

6 Comments

  1. Morgan Richomme, where can I find an editable version of the overview picture? I want to propose a modification to update the QTIP status.

    1. https://git.opnfv.org/functest/plain/docs/com/img/OPNFV_testing_group.png

      but it is not editable, unfortunately.

      I was not able to find it in a ppt anywhere; it will be quicker to redo it, or you can edit it in any drawing tool.

      1. Alright. Let me try to reproduce it with an editable format.

      2. Replaced with an editable diagram. Please check.

  2. I suggest that "Perf&Benchmark" in the testperf ecosystem should be replaced with "Score&QPI".

    1. I thought we should not change a project's scope without TSC's approval.
