
Yardstick test suite files

All test suite files used in Yardstick CI are stored in the "tests/opnfv/test_suites" directory.

Here is a list of Yardstick CI job test suites:

  • opnfv_os-nosdn-kvm-ha_daily.yaml
  • opnfv_os-nosdn-kvm_ovs-ha_daily.yaml
  • opnfv_os-nosdn-lxd-ha_daily.yaml
  • opnfv_os-nosdn-lxd-noha_daily.yaml
  • opnfv_os-nosdn-nofeature-ha_daily.yaml
  • opnfv_os-nosdn-nofeature-noha_daily.yaml
  • opnfv_os-nosdn-ovs-ha_daily.yaml
  • opnfv_os-ocl-nofeature-ha_daily.yaml
  • opnfv_os-ocl-nofeature-noha_daily.yaml
  • opnfv_os-odl_l2-bgpvpn-ha_daily.yaml
  • opnfv_os-odl_l2-nofeature-ha_daily.yaml
  • opnfv_os-odl_l2-nofeature-noha_daily.yaml
  • opnfv_os-odl_l2-sfc-ha_daily.yaml
  • opnfv_os-odl_l2-sfc-noha_daily.yaml
  • opnfv_os-odl_l3-nofeature-ha_daily.yaml
  • opnfv_os-onos-nofeature-ha_daily.yaml
  • opnfv_os-onos-nofeature-noha_daily.yaml
  • opnfv_os-onos-sfc-ha_daily.yaml

Each test suite file represents one scenario in OPNFV, and all these scenarios are currently deployed by four installers: apex, compass, fuel, and joid.


Write a test suite file

Let's take "opnfv_os-nosdn-nofeature-ha_daily.yaml" as an example:

# Huawei US bare daily task suite
schema: "yardstick:suite:0.1"
name: "os-nosdn-nofeature-ha"
test_cases_dir: "tests/opnfv/test_cases/"
test_cases:
-
  file_name: opnfv_yardstick_tc002.yaml
-
  file_name: opnfv_yardstick_tc005.yaml
-
  file_name: opnfv_yardstick_tc010.yaml
-
  file_name: opnfv_yardstick_tc011.yaml
-
  file_name: opnfv_yardstick_tc012.yaml
-
  file_name: opnfv_yardstick_tc014.yaml
-
  file_name: opnfv_yardstick_tc037.yaml
-
  file_name: opnfv_yardstick_tc043.yaml
  # constraint is where you can specify on which installer or pod the test case can run
  # task_args is where you can pass self-customized variables to the test case
  constraint:
    installer: compass
    pod: huawei-pod1
  task_args:
    huawei-pod1: '{"pod_info": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",
    "host": "node4.LF","target": "node5.LF"}'
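Note that the task_args value is a JSON string embedded in the YAML, not a nested YAML mapping. A minimal Python sketch of how such a string parses (the node names are taken from the example above):

```python
import json

# task_args value for huawei-pod1 from the suite file above:
# a single-quoted JSON string, not a YAML mapping
task_args = ('{"pod_info": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",'
             '"host": "node4.LF","target": "node5.LF"}')

args = json.loads(task_args)
print(args["host"])    # -> node4.LF
print(args["target"])  # -> node5.LF
```

Each key of the parsed dict then overrides the matching customized variable in the test case file.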


Write a test case file

Let's take "opnfv_yardstick_tc043.yaml" as an example:

# Yardstick TC043 config file
# Measure latency between NFVI nodes using ping
schema: "yardstick:task:0.1"
# here is where we can set default values for the customized variables
{% set host = host or "node1.LF" %}
{% set target = target or "node2.LF" %}
{% set pod_info = pod_info or "etc/yardstick/nodes/compass_sclab_physical/pod.yaml" %}
scenarios:
-
  type: Ping
  options:
    packetsize: 100
  # {{host}} and {{target}} are customized variables in this test case
  host: {{host}}
  target: {{target}}
  runner:
    type: Duration
    duration: 60
    interval: 1
  sla:
    max_rtt: 10
    action: monitor

context:
  type: Node
  name: LF
  # {{pod_info}} is a customized variable in this test case
  file: {{pod_info}}
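The `{% set host = host or "node1.LF" %}` lines are Jinja2 template directives: if the suite's task_args supply a value, it is used; otherwise the default applies. The same fallback pattern in plain Python (a sketch for illustration only, not Yardstick code; the variable names match the example above):

```python
# Mimic the Jinja2 defaults, e.g. {% set host = host or "node1.LF" %}:
# a missing or empty task_args value falls back to the default.
DEFAULTS = {
    "host": "node1.LF",
    "target": "node2.LF",
    "pod_info": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",
}

def resolve(task_args):
    """Return the customized variables with defaults applied."""
    return {key: task_args.get(key) or default
            for key, default in DEFAULTS.items()}

print(resolve({}))                    # all defaults
print(resolve({"host": "node4.LF"}))  # host overridden, rest defaulted
```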


  1. zte-pod1 uses the test suite opnfv_os-odl_l2-nofeature-ha_daily.yaml in CI, but its installer is fuel.

    From the CI log, TC055 failed.

    Could you please tell me how to modify the constraint?




  2. Hi zhihui,

    Currently TC055 can only be run on compass pods.

    The reason is that TC055 runs on the node directly.

    In compass we use a password to log in to the node; in fuel we use a key file.

    For now the .py file only handles the password case.


    I have added a constraint in a patch so that TC055 only runs on compass pods.

    From that patch you can also learn how to add a constraint.

    The constraint part and the task_args part are independent: you can specify both of them or just one.
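For illustration, a suite entry that restricts TC055 to compass pods might look like the following (a sketch based on the discussion above, not the exact patch):

```yaml
-
  file_name: opnfv_yardstick_tc055.yaml
  # run this test case only when the installer is compass
  constraint:
    installer: compass
```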




  3. Hi Jing,

    Thanks for your reply.

    How do I add two different installers in a test suite? This example only shows one installer. According to the CI log, a pod that chooses the odl_l2 scenario will run the test suite opnfv_os-odl_l2-nofeature-ha_daily.yaml or opnfv_os-odl_l2-nofeature-noha_daily.yaml.


  4. To add two different installers, just list the installer names separated by a comma,

    like this:


                     installer: compass, fuel


    Under the "constraint" section, installer and pod are independent.

    So you can specify only the installers or only the pods.
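Putting this together, a suite entry constrained to two installers might look like the following (a sketch; whether a pod line is added as well is optional, since installer and pod are independent):

```yaml
-
  file_name: opnfv_yardstick_tc055.yaml
  constraint:
    # comma-separated list: run on compass or fuel pods
    installer: compass, fuel
```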