
Please note that this page is a draft aiming to prepare the Danube release.

Related pages:

- Meetup slide deck
- Danube Functest tests
- Framework improvements
- CI evolution for Functest in Danube
- Docker images slicing


Scenario owner

| Command / endpoint | Action | Description | Status |
|---|---|---|---|
| /functest | start <SUT> | start a Functest Docker container against a SUT | done automatically from CI today |
| | stop | | done automatically from CI today |
| | info <id> | | |
| /functest/test | info -all | get the list of runnable tests | available in CLI |
| | info <scenario> | get the list of runnable tests for the given scenario (per installer?) | |
| | run <testcases> / <tiers> | run the tests | available in CLI |
| | select | select test cases to build a <scenario test list>, i.e. reduce the scope (today based on a regex) | alternative if we create a scenario method, see below |
| | exclude | exclude test cases from the <scenario test list>: Tempest and Rally sub test cases and/or feature tests | alternative if we create a scenario method, see below |
| | deploy <VNF> | for the last tier, deploy without testing to let the other test projects test; if several deployment modes are possible, it should be possible to specify the deployment (VNFM/orchestrator); could be used by other test projects | partially available in CLI |
| /functest/results | get | get the results | already possible via the test API |
| /functest/logs | get | get scenario logs | done from CI to the artifact repository |
| /functest/dashboard | create | create a dashboard for a given scenario | ?? |
| | defineCustomList | override the default list of runnable tests (exclude tests) | |
| | getInstaller | retrieve the list of installers supporting this scenario | |
| | getStats | get stats per installer/pod/... as we do in the reporting based on the CSV file | |
| | getScore | get the score based on the last 4 iterations for the given scenario | |
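Two of the operations above lend themselves to a quick sketch: the regex-based scope reduction behind `select`, and the `getScore` computation over the last 4 iterations. The function names, the data shapes, and the PASS/FAIL encoding are illustrative assumptions, not a defined API:

```python
import re


def select_tests(testcases, pattern):
    """Build a <scenario test list> by keeping only the test cases
    whose name matches the given regex (today's scope reduction)."""
    return [tc for tc in testcases if re.search(pattern, tc)]


def get_score(iterations):
    """Sketch of getScore: count the successful runs among the
    last 4 iterations of the given scenario."""
    return sum(1 for result in iterations[-4:] if result == "PASS")


tests = ["vping_ssh", "vping_userdata", "tempest_smoke", "rally_sanity"]
print(select_tests(tests, r"^vping"))   # ['vping_ssh', 'vping_userdata']
print(get_score(["PASS", "FAIL", "PASS", "PASS", "PASS"]))  # 3
```

The real implementation would read the iteration results from the test API rather than from an in-memory list.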


Feature project


create <constraints>

- declare the feature project in Functest
- add the feature project in testcases.yaml + exec_test + ...
- the constraints shall be related to the description in testcases.yaml + the requirements for the Docker file
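As a sketch, a feature project declaration in testcases.yaml could look like the fragment below. The field names are modeled on the existing Functest testcases.yaml; the project name and the exact constraint values are illustrative assumptions:

```yaml
tiers:
  - name: features
    order: 2
    testcases:
      - name: myfeature            # hypothetical feature project
        criteria: 'status == "PASS"'
        blocking: false
        description: 'Tests of the myfeature project'
        dependencies:              # the <constraints>
          installer: '(fuel)|(joid)'
          scenario: 'myfeature'
```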






All the OpenStack actions:

- managed in openstack_utils today
- possible to use SNAPS as an alternative

| Command / endpoint | Action | Description | Status |
|---|---|---|---|
| | connect | connect to the SUT nodes over SSH | done case by case and installer by installer; an abstract method needs to be provided |
| /functest/logs | get | get feature logs | done in CI => artifact repository |
| /functest/results | publish | push the feature project results to the DB | available in the test API |
| | get | get results | available in the test API |
| /functest/reporting | exclude | exclude a feature project from the reporting: projects are currently added automatically based on testcases.yaml, but sometimes one must be excluded (e.g. security_scan does not push results to the DB, so its status would always be FAIL) | |
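The "abstract method" mentioned for `connect` could be sketched with an abstract base class, so that each installer provides its own SSH access while feature projects code against one interface. All names here (InstallerAdapter, FuelAdapter, connect) are illustrative assumptions, and the real implementation would open an actual SSH session instead of returning a string:

```python
from abc import ABC, abstractmethod


class InstallerAdapter(ABC):
    """One concrete subclass per installer (Fuel, Apex, JOID, Compass...)."""

    @abstractmethod
    def connect(self, node_ip):
        """Return a session handle to the given SUT node."""


class FuelAdapter(InstallerAdapter):
    def connect(self, node_ip):
        # A real adapter would open an SSH connection here (e.g. via
        # paramiko); stubbed out for the sketch.
        return "ssh://%s" % node_ip


print(FuelAdapter().connect("10.20.0.2"))  # ssh://10.20.0.2
```

Feature projects would then receive an `InstallerAdapter` instance instead of reimplementing the connection installer by installer.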


Add a view of the project:

- specify the metrics you want to see, based on the results stored in the DB (e.g. success rate = f(time), duration = f(time))
- close to what we did through the dashboard method for Brahmaputra (possibility to refactor this method)
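The "success rate = f(time)" metric could be computed from the DB results along these lines. The record layout (date string, criteria) and the function name are assumptions for illustration only:

```python
from collections import defaultdict


def success_rate_by_day(records):
    """records: iterable of (date_string, criteria) tuples, as they
    could come back from the test results API.
    Returns {date: pass_ratio}."""
    buckets = defaultdict(lambda: [0, 0])  # date -> [passed, total]
    for day, criteria in records:
        buckets[day][1] += 1
        if criteria == "PASS":
            buckets[day][0] += 1
    return {day: passed / total for day, (passed, total) in buckets.items()}


records = [("2016-11-01", "PASS"), ("2016-11-01", "FAIL"),
           ("2016-11-02", "PASS")]
print(success_rate_by_day(records))
# {'2016-11-01': 0.5, '2016-11-02': 1.0}
```

A duration = f(time) view would follow the same pattern, averaging durations per bucket instead of counting passes.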

Test projects (interfaces Functest / Yardstick)

  • run load test
  • generate traffic
  • capture traffic


  1. I am totally on board with the idea of splitting the docker image by domain. It will bring benefits in terms of:

    1. once the images are separated, the image build process will be more efficient, especially when a small change to a specific test case means only its image has to be rebuilt
    2. test cases of different domains can run in parallel, which decreases the execution duration
    3. it is more flexible to run selected test cases by domain, and deploying only the related containers will be enough

    We might build a small launchbox-like image to provide the runtime for the CLI. Whenever the CLI fires commands, containers will be deployed accordingly. If a whole batch of test cases is needed, separate containers are deployed to run in parallel.

    One issue still to be figured out is how to gather the logs. As the results are sent to the backend database, this should not be a big problem.


  2. Your ideas sound good; however, the parallelization of the tests will impact the workflow of the framework, for example when cleaning the resources after each test execution. It will also impact CI dramatically (the way the docker images are built, the way the output is shown on the console, etc.).

    We might think of a clever way of doing that, and I agree in general. We really need to discuss all these ideas and follow up continuously, to impact the CI results as little as possible.

    1. Jose, I agree with you.

      Running test cases in parallel introduces huge challenges (scheduling, log consolidation, etc.), which deserve thorough discussion and a wise architecture design.

      We may split what I said in comment #1 into two stories:

      1. split the docker image, which requires a lot of design, implementation and full testing
      2. consider running some cases in parallel; to be exact, we all agree that some cases will block other cases and are therefore not suitable for parallelization, so we may think about how to optimize the process for most of the feature-related cases. The priority of this story is currently lower than the first one, but it will become critical if the overall run time is not acceptable after more feature test cases are brought in.


  3. Helen Yao Serena Feng Cedric Ollivier  I created a subpage for docker handling: Docker images slicing

    Go through it if you have time before our discussion tomorrow, and feel free to add things.