

Overview

Project Name: VSPERF
Target Release Name: Jerma
Project Lifecycle State: See OPNFV Lifecycle for more information

Scope

For the Jerma release, the scope can be summarized as:

  1. Add features to run VSPERF in an OpenStack environment.
  2. Improve performance metrics.
  3. Upgrade to a newer version of the T-Rex Traffic Generator.
  4. Develop features to support automated K8S networking performance benchmarking.
  5. Improve LMA - Logging, Metrics and Alerting.


All the JIRA tasks related to Jerma can be found here.

Requirements

Provide a list of any OPNFV-level requirements being addressed by the project for this release. Provide links to requirements documented in RELREQ using the Jira embed tool for Confluence. If none, enter "none".

RELREQ-6: OpenStack Dataplane Performance Benchmarking. Currently, VSPERF does not have features to run its tests on OpenStack. In Jerma, we aim to add this feature to VSPERF.

RELREQ-9:  Kubernetes Container-Networking Benchmarking with VSPERF.

Release Artifacts

Indicate the work product (Executable, Source Code, Library, API description, Tool, Documentation, Release Note, etc) for this release.

Name / Description / Format (Container, Compressed File, etc.)

Source Code

Containers

Traffic Generator VM

Documentation

Ansible Roles and Playbooks

Architecture

High level architecture diagram

VSPerf-Architecture

Internal Dependencies

X-Testing.

Tentative: DPPD-PROX.

External Dependencies

OpenStack.

Test and Verification

Testing will be done on Intel testbeds.

RELREQ-6

OpenStack Testbed: Intel Pod10, Intel Pod15, or Intel Pod18.

Testcases:

Subset of LTD (LTD Test Spec Overview)

  1. RFC2544 - 2 instances on 1 compute, L2.
  2. RFC2544 - 2 instances across 2 computes, L2.
  3. RFC2544 - 2 instances on 1 compute, L3.
  4. RFC2544 - 2 instances across 2 computes, L3.
  5. Repeat (1), (2), (3), and (4) for RFC2544 Back-to-Back.
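The test matrix above is the cross product of placement, traffic layer, and test type. A minimal sketch of how these combinations could be enumerated (the naming scheme and helper are illustrative, not VSPERF's actual test-case identifiers):

```python
from itertools import product

# Hypothetical enumeration of the Jerma OpenStack test matrix:
# 2 instance placements x 2 traffic layers x 2 RFC 2544 test types.
PLACEMENTS = ["1-compute", "2-computes"]
LAYERS = ["L2", "L3"]
TEST_TYPES = ["rfc2544_throughput", "rfc2544_back2back"]

def test_matrix():
    """Yield one test-case name per combination described above."""
    for test, placement, layer in product(TEST_TYPES, PLACEMENTS, LAYERS):
        yield f"{test}_{placement}_{layer}"

cases = list(test_matrix())
print(len(cases))  # 2 x 2 x 2 = 8 combinations
```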

RELREQ-9

Testbed: Intel Pod12

Test:

Traffic

For all tests, unless specified otherwise, we use bi-directional traffic with a single flow. The bidirectional scenario is considered the most accurate way to perform the tests, as it runs one separate stream in each direction, each evaluated at the opposite endpoint.

The Ethernet frames used in the tests follow the Ethernet standard. The packet-size distribution complies with the RFC 2544 specification, covering the most important frame sizes likely to be present in an average network: 64, 128, 256, 512, 1024, 1280, and 1518 bytes.
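For each frame size in this sweep there is a well-known theoretical line rate, since every Ethernet frame carries 20 bytes of overhead on the wire (8-byte preamble plus 12-byte inter-frame gap). A small sketch, assuming a 10 GbE link (the link speed is our assumption, not stated in this plan):

```python
# RFC 2544 frame-size sweep with theoretical maximum frame rate per size.
FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]  # bytes, per RFC 2544

def max_frame_rate(frame_size: int, link_bps: int = 10_000_000_000) -> float:
    """Maximum frames/s on the wire: each frame occupies frame_size + 20
    bytes (8-byte preamble + 12-byte inter-frame gap)."""
    return link_bps / ((frame_size + 20) * 8)

for size in FRAME_SIZES:
    print(f"{size:5d} B -> {max_frame_rate(size):12.0f} fps")
```

For 64-byte frames this gives the familiar ~14.88 Mpps figure for 10 GbE.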

Tests

The tests are based on the Level Test Design (LTD) Specification, which is one of the products of the OPNFV VSPERF project. The approach VSPERF has adopted is to take existing tests relevant to performance benchmarking of physical switches and apply them to benchmarking virtual switches (to allow a fair comparison with their physical counterparts). Of the many tests in the VSPERF LTD, we use two: the RFC 2544 Throughput test and the RFC 2544 Back-to-Back frame test.

The Throughput test is the fundamental data-path speed test for networking devices, assessing the maximum offered load for the DUT under the constraint that no frames or packets are lost. Section 26.1 of RFC 2544 specifies the method for this test (earlier sections of RFC 2544 describe the various test conditions). The term "Throughput" refers to the maximum loss-less sending rate after allowing the DUT queues to drain at the end of the trial. The throughput is measured for the distribution of frame sizes mentioned above, with one trial per frame size.
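In practice this maximum loss-less rate is typically found by a binary search over the offered load. A minimal sketch of that search loop; `run_trial` is a hypothetical hook into the traffic generator, not a real VSPERF or T-Rex API:

```python
def rfc2544_throughput(run_trial, lo=0.0, hi=100.0, resolution=0.1):
    """Binary-search the highest offered load (% of line rate) with zero
    frame loss. `run_trial(load_pct) -> frames_lost` is a hypothetical
    traffic-generator hook; `resolution` bounds the search interval."""
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if run_trial(mid) == 0:
            best, lo = mid, mid   # no loss: try a higher load
        else:
            hi = mid              # loss observed: back off
    return best
```

Each call to `run_trial` corresponds to one timed trial at a fixed rate, after which the DUT queues are allowed to drain before the next trial.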

The back-to-back test characterizes the equipment's behavior in the presence of bursty traffic; in other words, it tests the operation of buffers. It attempts to find the longest "burst" of back-to-back frames that the DUT can process without loss, as per Section 26.4 of RFC 2544. This benchmark should be repeated and result consistency examined, as results with physical devices have been unstable in some cases.
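The burst-length search follows the same shape as the throughput search, but over an integer burst size rather than an offered load. A sketch under the same assumption of a hypothetical generator hook:

```python
def rfc2544_back_to_back(run_burst, max_burst=100_000):
    """Find the longest back-to-back burst (frames sent at line rate) the
    DUT forwards without loss, per RFC 2544 Section 26.4.
    `run_burst(n) -> frames_lost` is a hypothetical generator hook."""
    lo, hi = 0, max_burst
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if run_burst(mid) == 0:
            lo = mid          # burst survived: try a longer one
        else:
            hi = mid - 1      # loss observed: shorten the burst
    return lo
```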

Performance Metrics

In this work, we consider the following performance metrics:

  1. Throughput (as defined in RFC 1242)
  2. Latency (as defined in RFC 1242)
  3. Frame-loss count and percentage.
  4. Packet drops.
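The frame-loss percentage in (3) follows directly from the RFC 1242 definition of frame loss rate: the fraction of transmitted frames that the DUT failed to forward, expressed as a percentage. A one-function sketch:

```python
def frame_loss_pct(tx_frames: int, rx_frames: int) -> float:
    """Frame loss rate per RFC 1242 Section 3.6:
    ((input_count - output_count) * 100) / input_count."""
    if tx_frames == 0:
        raise ValueError("no frames transmitted")
    return (tx_frames - rx_frames) * 100.0 / tx_frames

print(frame_loss_pct(1_000_000, 990_000))  # -> 1.0
```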


Risks

List any risks and a plan to mitigate each risk.

Risk Description / Mitigation Plan

