
Tests for FDS Phase 1

Demo for CLUS

Demo/test deliverable for July (CLUS).

Demo objective: The FDS demo is to prove that we can build a viable NFV stack with VPP. In order to prove this, we’ll run a small subset of the tests carried out by OPNFV release operations.

Demo contents:

  • Setup/Installation: Bare-metal deployment of a solution stack consisting of OpenStack-OpenDaylight-VPP on UCS servers with 2 compute nodes and 1 control node. We’ll host the demo in PIRL (the Paris lab).
  • Demos:
    • Functionality demo (using the Yardstick framework): Create a tenant network and router, spawn two simple VMs, and check connectivity using ping (a minimal sketch of this flow follows this list).
      • TC002 Network connectivity and latency test using Ping
    • Performance demo (using the Yardstick framework): Using a network setup similar to the one above, run Yardstick's performance tests on the setup.
      • TC037  Network throughput and packet loss using pktgen, system load using mpstat and latency using ping
      • TC011 Packet delay variation using iperf3
    • Solutions demo (using functest framework):
      • Run the open source vIMS solution (from Clearwater) on the setup (stretch goal). vIMS requires a set of features (e.g. security groups) which are considered "stretch" for CLUS.
  • Demo base requirements:
    • Hardware/Lab: 3 UCS (PIRL)
    • Scenario: "apex-os-odl_l2-vpp-noha" scenario (see also FastDataStacks Work Areas#Initialscenarios) installed by APEX installer
    • Test tools: Yardstick, Functest supported on the setup
    • Scale targets for demo:
      • 10 VMs max (vIMS test case requires 10 VMs)
      • 2 tenant networks (the vIMS test case requires a single tenant network only; minimal GBP isolation tests require at least 2 tenant networks)
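
The functionality demo above can be scripted end to end. The sketch below is a minimal outline of that flow, assuming an `openstack` CLI with admin credentials on the jumphost and an existing `cirros` image and `m1.tiny` flavor (both names are assumptions); Yardstick TC002 automates the same steps.

```python
"""Minimal sketch of the functionality-demo flow (TC002-style).

Assumptions: the `openstack` CLI is available with credentials sourced,
and an image `cirros` and flavor `m1.tiny` exist (placeholder names).
"""
import json
import subprocess

def os_cli(*args):
    """Run an openstack CLI command and return its parsed JSON output."""
    out = subprocess.check_output(("openstack",) + args + ("-f", "json"))
    return json.loads(out)

# Tenant network, subnet and router
os_cli("network", "create", "fds-demo-net")
os_cli("subnet", "create", "fds-demo-subnet",
       "--network", "fds-demo-net", "--subnet-range", "10.0.10.0/24")
os_cli("router", "create", "fds-demo-router")
subprocess.check_call(["openstack", "router", "add", "subnet",
                       "fds-demo-router", "fds-demo-subnet"])

# Two simple VMs on the tenant network
vms = {name: os_cli("server", "create", name, "--image", "cirros",
                    "--flavor", "m1.tiny", "--network", "fds-demo-net",
                    "--wait")
       for name in ("fds-vm1", "fds-vm2")}

# The ping check itself runs inside fds-vm1 (via the console or Yardstick's
# SSH runner) against fds-vm2's fixed IP, taken from vms["fds-vm2"]["addresses"].
```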

System level tests

The table has been moved here.

Unit/Sub-system tests

The table has been moved here.

Unit/Sub-system test descriptions

Test descriptions for unit/sub-system tests which are not already covered elsewhere.

GBP Unit/Subsystem tests

GBP L2 test cases with tenant networks (VXLAN)

Topology: two compute nodes: VPP1, VPP2

Init:

  • create 2 neutron tenant networks (red, blue)
  • create 2 ports per network on each compute node (red: r11, r12 on VPP1 and r21, r22 on VPP2; blue: b11, b12 on VPP1 and b21, b22 on VPP2)



ODL.GBP-L2-intra-tenant: test L2 connectivity within each tenant network

Pings between all ports within the same network should pass.

same host case

  • ping from r11 to r12
  • ping from r21 to r22
  • ping from b11 to b12
  • ping from b21 to b22

different host case

  • ping from r11 to r21
  • ping from r11 to r22
  • ping from r12 to r21
  • ping from r12 to r22
  • ping from b11 to b21
  • ping from b11 to b22
  • ping from b12 to b21
  • ping from b12 to b22
ODL.GBP-L2-inter-tenant: test L2 connectivity (isolation) between tenant networks

Pings from any port in network red to any port in network blue should NOT pass. A minimal sketch automating this ping matrix follows the list below.

same host case

  • ping from r11 to b11
  • ping from r11 to b12
  • ping from r12 to b11
  • ping from r12 to b12
  • ping from r21 to b21
  • ping from r21 to b22
  • ping from r22 to b21
  • ping from r22 to b22

different host case

  • ping from r11 to b21
  • ping from r11 to b22
  • ping from r12 to b21
  • ping from r12 to b22
  • ping from r21 to b11
  • ping from r21 to b12
  • ping from r22 to b11
  • ping from r22 to b12
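
The intra- and inter-tenant ping matrices above lend themselves to simple automation. The sketch below is one possible harness, assuming each port (r11..b22) is attached to a test VM reachable over SSH at a known management address and that each port's fixed IP is known; all addresses and the `cirros` login are placeholders.

```python
"""Sketch: drive the ODL.GBP-L2 intra/inter-tenant ping matrix.

Assumptions: each port is bound to a VM reachable over SSH at a management
address, and the fixed IPs are known. All values below are placeholders.
"""
import itertools
import subprocess

# port name -> (SSH management address, tenant-network fixed IP)
PORTS = {
    "r11": ("192.0.2.11", "10.1.0.11"), "r12": ("192.0.2.12", "10.1.0.12"),
    "r21": ("192.0.2.21", "10.1.0.21"), "r22": ("192.0.2.22", "10.1.0.22"),
    "b11": ("192.0.2.31", "10.2.0.11"), "b12": ("192.0.2.32", "10.2.0.12"),
    "b21": ("192.0.2.41", "10.2.0.21"), "b22": ("192.0.2.42", "10.2.0.22"),
}

def ping_from(src, dst):
    """Return True if `src` can ping `dst`'s fixed IP (3 probes)."""
    mgmt, _ = PORTS[src]
    _, target = PORTS[dst]
    cmd = ["ssh", f"cirros@{mgmt}", "ping", "-c", "3", "-W", "2", target]
    return subprocess.call(cmd) == 0

failures = []
for src, dst in itertools.permutations(PORTS, 2):
    same_tenant = src[0] == dst[0]   # the 'r'/'b' prefix encodes the tenant network
    reachable = ping_from(src, dst)
    # Intra-tenant pings must pass, inter-tenant pings must fail.
    if reachable != same_tenant:
        failures.append((src, dst, reachable))

print("PASSED" if not failures else f"FAILED: {failures}")
```
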
GBP provider network test cases with physical provider networks (VLAN)

Topology

  • two compute nodes: VPP1, VPP2
  • two PCs in different VLANs: PC1 (VLAN 1), PC2 (VLAN 2)

Init

  • create 2 physical provider networks of type VLAN (red: VLAN 1, blue: VLAN 2); a CLI sketch follows this list
  • create 2 ports per network on each compute node (red: r11, r12 on VPP1 and r21, r22 on VPP2; blue: b11, b12 on VPP1 and b21, b22 on VPP2)
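
A minimal sketch of the provider-network setup via the openstack CLI, assuming a physical network label `physnet1` configured in Neutron (the label and subnet ranges are assumptions that depend on the deployment).

```python
"""Sketch: create the two VLAN provider networks (red = VLAN 1, blue = VLAN 2).

Assumption: the deployment exposes a physical network label `physnet1`.
"""
import subprocess

def provider_network(name, vlan_id):
    subprocess.check_call([
        "openstack", "network", "create", name,
        "--provider-network-type", "vlan",
        "--provider-physical-network", "physnet1",
        "--provider-segment", str(vlan_id),
    ])
    subprocess.check_call([
        "openstack", "subnet", "create", f"{name}-subnet",
        "--network", name, "--subnet-range", f"10.0.{vlan_id}.0/24",
    ])

provider_network("red", 1)    # provider VLAN 1
provider_network("blue", 2)   # provider VLAN 2
```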

ODL.GBP-L2-intra-network: test L2 connectivity within each network

Pings between all ports and the PC within the same network should pass.

same host case

  • ping from r11 to r12
  • ping from r21 to r22
  • ping from b11 to b12
  • ping from b21 to b22

different host case

  • ping from r11 to r21
  • ping from r11 to r22
  • ping from r12 to r21
  • ping from r12 to r22
  • ping from b11 to b21
  • ping from b11 to b22
  • ping from b12 to b21
  • ping from b12 to b22

 

PC on provider network case

  • ping from r11 to PC1
  • ping from r12 to PC1
  • ping from r21 to PC1
  • ping from r22 to PC1
  • ping from b11 to PC2
  • ping from b12 to PC2
  • ping from b21 to PC2
  • ping from b22 to PC2
ODL.GBP-L2-inter-network: test L2 connectivity (isolation) between networks

Pings from any port in network red to any port or the PC in network blue should NOT pass.


same host case

  • ping from r11 to b11
  • ping from r11 to b12
  • ping from r12 to b11
  • ping from r12 to b12
  • ping from r21 to b21
  • ping from r21 to b22
  • ping from r22 to b21
  • ping from r22 to b22

different host case

  • ping from r11 to b21
  • ping from r11 to b22
  • ping from r12 to b21
  • ping from r12 to b22
  • ping from r21 to b11
  • ping from r21 to b12
  • ping from r22 to b11
  • ping from r22 to b12


PC on provider network case

  • ping from r11 to PC2
  • ping from r12 to PC2
  • ping from r21 to PC2
  • ping from r22 to PC2
  • ping from b11 to PC1
  • ping from b12 to PC1
  • ping from b21 to PC1
  • ping from b22 to PC1

 

 

VPP/HC Unit/Subsystem tests for FDS

Consider a 3-node test setup with VXLAN as the overlay technology.

Legend:

  • VM - tenant virtual machine
  • VBD - virtual bridge domain
  • HC - Honeycomb VPP configuration agent

 

VPP.VXLAN tunneling configuration

Verify that VXLAN tunneling in VPP can be configured via HC (an equivalent VPP CLI sketch follows this list):

  • Configure VXLAN interfaces and routes on VPP1, VPP2, VPP3, then ping between them
  • Configure bridge domain, add the VXLAN interfaces to it
  • Configure VXLAN tunnels over IPv4 between VPPs.
    • also add the tunnels to bridge domains
  • Configure VXLAN tunnels over IPv6 between VPPs.
    • also add the tunnels to bridge domains
  • Configure split-horizon-groups on VXLAN interfaces
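
In FDS this configuration is pushed through Honeycomb's RESTCONF interface; as an illustration of the resulting data-plane state, the sketch below drives the equivalent VPP CLI on one node (VPP1). Interface names, addresses, the VNI and the bridge-domain ID are placeholders, and CLI syntax can vary between VPP releases.

```python
"""Sketch: equivalent VPP CLI for the VXLAN tunnel / bridge-domain setup on VPP1.

All names, addresses and IDs below are placeholders.
"""
import subprocess

def vppctl(cmd):
    subprocess.check_call(["vppctl"] + cmd.split())

# Underlay address on the NIC used for VXLAN transport
vppctl("set interface ip address GigabitEthernet0/8/0 192.168.1.1/24")
vppctl("set interface state GigabitEthernet0/8/0 up")

# Bridge domain for the tenant network
vppctl("create bridge-domain 1")

# VXLAN tunnels to VPP2 and VPP3; VPP names them vxlan_tunnel0, vxlan_tunnel1.
for peer in ("192.168.1.2", "192.168.1.3"):
    vppctl(f"create vxlan tunnel src 192.168.1.1 dst {peer} vni 100")

# Add the tunnels to the bridge domain with split-horizon group 1 so that
# traffic arriving from one tunnel is never flooded back out of another.
vppctl("set interface l2 bridge vxlan_tunnel0 1 1")
vppctl("set interface l2 bridge vxlan_tunnel1 1 1")
```
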
VPP.Tap interfaces configuration

Verify that VPP tap interfaces can be provisioned via HC:

  • Configure a tap interface on VPP via HC and connect it to a process (e.g. a DHCP server) running on the Linux host.
  • Assign VM2 an IP address using DHCP.
VPP.L2 bridging and split horizon forwarding

Verify that L2 bridging and split-horizon forwarding work (a packet-tracing sketch follows this list):

  • Configure a full mesh of VXLAN tunnels via HC (as above) and configure tap interfaces via HC (as shown above)
  • Broadcast messages sent from VM2 (e.g. DHCP discover) should not "loop", i.e. should only be sent from VPP2 to VPP1 and VPP3. VPP1 and VPP3 should not forward the broadcast to VXLAN tunnels but only to connected host/tap interfaces.
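
One way to check the no-loop behaviour on VPP1 is the VPP packet tracer. The sketch below assumes a DPDK-driven NIC (hence the `dpdk-input` trace node) and that a single DHCP discover is triggered from VM2 while the trace is armed; both are assumptions about the test harness.

```python
"""Sketch: use the VPP packet tracer on VPP1 to confirm that a broadcast
arriving over the VXLAN tunnel from VPP2 is not re-encapsulated (split horizon).
"""
import subprocess

def vppctl(cmd):
    return subprocess.check_output(["vppctl"] + cmd.split(), text=True)

vppctl("clear trace")
vppctl("trace add dpdk-input 100")

# ... trigger a single DHCP discover from VM2 here (e.g. `udhcpc` in the VM) ...

trace = vppctl("show trace")
# The broadcast should be delivered to the local tap/vhost interfaces only;
# a vxlan4-encap node in its trace would indicate tunnel-to-tunnel flooding.
assert "vxlan4-encap" not in trace, "broadcast was re-flooded into a VXLAN tunnel"
print(trace)
```
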
VPP.VLAN provider networks configuration

Verify that VLANs can be configured via HC (an equivalent VPP CLI sketch follows this list):

  • Configure bridge domain
  • Configure subinterface on physical interface with VLAN tag
  • Add the subinterface into the bridge domain
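
As with the VXLAN case, the provider-network configuration goes through HC in FDS; the sketch below shows the equivalent VPP CLI for one VLAN. The physical interface name, VLAN tag and bridge-domain ID are placeholders.

```python
"""Sketch: equivalent VPP CLI for a VLAN provider-network bridge (placeholder names/IDs)."""
import subprocess

def vppctl(cmd):
    subprocess.check_call(["vppctl"] + cmd.split())

# Bridge domain for provider network "red"
vppctl("create bridge-domain 10")

# 802.1q subinterface carrying provider VLAN 1 on the physical NIC
vppctl("create sub-interfaces GigabitEthernet0/9/0 1")
vppctl("set interface state GigabitEthernet0/9/0.1 up")

# Add the subinterface to the bridge domain
vppctl("set interface l2 bridge GigabitEthernet0/9/0.1 10")
```
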
VPP.L2 provider network bridging

Verify L2 bridging on VLAN provider networks:

  • Configure 2 VLANs via HC (as above)
  • Ping between VMs on this VLAN should work
  • Ping between VMs on different VLANs should NOT work
VPP.VM provisioning
Configure a vhost-user interface via HC and add it to a bridge domain
VPP.Floating IPs

Create a router and assign a floating IP

  • configure NAT via HC (IP addresses, routing)

Test connectivity from/to the outside network (an OpenStack CLI sketch follows this list)

  • ping the floating IP from a host on the outside network
  • ping a host on the outside network from the VM with the floating IP
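
The OpenStack side of this flow can be scripted as below, assuming an external network named `external`, the tenant subnet `fds-demo-subnet` and a VM `fds-vm1` from the earlier sketch (all names are assumptions).

```python
"""Sketch: router, gateway and floating-IP assignment (placeholder names)."""
import json
import subprocess

def os_cli(*args):
    out = subprocess.check_output(("openstack",) + args + ("-f", "json"))
    return json.loads(out)

# Router with a gateway on the external network and an interface on the tenant subnet
os_cli("router", "create", "fds-ext-router")
subprocess.check_call(["openstack", "router", "set", "fds-ext-router",
                       "--external-gateway", "external"])
subprocess.check_call(["openstack", "router", "add", "subnet",
                       "fds-ext-router", "fds-demo-subnet"])

# Allocate a floating IP and attach it to the VM
fip = os_cli("floating", "ip", "create", "external")
subprocess.check_call(["openstack", "server", "add", "floating", "ip",
                       "fds-vm1", fip["floating_ip_address"]])

# Outside-in check; the inside-out check (VM to external host) runs from the VM itself
subprocess.check_call(["ping", "-c", "3", fip["floating_ip_address"]])
```
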
VPP.Tenant Isolation

Test tenant network isolation/access.
Details on security groups in OpenStack can be found here: http://docs.openstack.org/openstack-ops/content/security_groups.html 

  • Details TBD, though the tests should cover:
  • Black- and white-listing of flows between VMs based on the following criteria:
    • L2 MAC (src/dst)
    • IPv4 address (src/dst)
    • IPv6 address (src/dst)
    • L4 protocols UDP/TCP/ICMP (test port ranges (src/dst) for UDP/TCP) - note: L4 filtering should work even in the presence of IPv6 extension headers
    • Direction (in/out)
    • Combination of the above (L2 / v4 or v6 / L4 port)
  • Filtering should work in both the overlay (VXLAN) and the underlay.

Initial test cases:

  • Create two groups: Group1 (VM1, VM2), Group2 (VM3)
  • Create access rules between the groups (an OpenStack CLI sketch follows this list):
    • v4 addresses and specific UDP/TCP ports (e.g. 80, 443) allow
    • v4 addresses and port ranges allow (e.g. 1-1024 allow, deny above)
    • v6 addresses and specific UDP/TCP ports (e.g. 80, 443) allow
    • v6 addresses and port ranges allow (e.g. 1-1024 allow, deny above)
    • v4 addresses and specific UDP/TCP ports (e.g. 80, 443) deny
    • v4 addresses and port ranges deny (e.g. 1-1024 allow, deny above)
    • v6 addresses and specific UDP/TCP ports (e.g. 80, 443) deny
    • v6 addresses and port ranges deny (e.g. 1-1024 allow, deny above)
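
One of these rule sets, expressed with OpenStack security groups, might look like the sketch below (the group names, allowed ports and direction come from the first bullet; everything else is a placeholder).

```python
"""Sketch: v4 rules allowing specific TCP ports between Group1 and Group2."""
import subprocess

def os_cmd(*args):
    subprocess.check_call(("openstack",) + args)

# Group1 holds VM1/VM2, Group2 holds VM3; membership is assigned at boot
# or later with `openstack server add security group`.
os_cmd("security", "group", "create", "group1")
os_cmd("security", "group", "create", "group2")

# Allow TCP 80 and 443 from members of group2 into group1; everything else
# is dropped by the security group's default deny.
for port in ("80", "443"):
    os_cmd("security", "group", "rule", "create", "group1",
           "--ingress", "--ethertype", "IPv4",
           "--protocol", "tcp", "--dst-port", port,
           "--remote-group", "group2")
```
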
VPP.Scale-Tiny

Scale testing - Tiny scenario - bridge domain scaling

  • # of VMs in scenario similar to the setup depicted above: 50
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 3
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: 3

Scale testing - Tiny scenario - Middle Ground

  • # of VMs in scenario similar to the setup depicted above: 50
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 3
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: 3

Scale testing - Tiny scenario - MACs per bridge scaling

  • # of VMs in scenario similar to the setup depicted above: 50
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 3
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: 3
VPP.Scale-Small

Scale testing - Small scenario - bridge domain scaling

  • # of VMs in scenario similar to the setup depicted above: 500
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 10
  • # of bridge domains in scenario similar to the setup depicted above: 100
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Small scenario - Middle Ground

  • # of VMs in scenario similar to the setup depicted above: 512
  • # of MACs per bridge domain: 32
  • # of hosts/VPP nodes in the setup: 10
  • # of bridge domains in scenario similar to the setup depicted above: 32
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Small scenario - MACs per bridge scaling

  • # of VMs in scenario similar to the setup depicted above: 500
  • # of MACs per bridge domain: 100
  • # of hosts/VPP nodes in the setup: 10
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: TBD
VPP.Scale-Medium

Scale testing - Medium scenario - bridge domain scaling

  • # of VMs in scenario similar to the setup depicted above: 5,000
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 80
  • # of bridge domains in scenario similar to the setup depicted above: 1000
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Medium scenario - Middle Ground

  • # of VMs in scenario similar to the setup depicted above: 5,000
  • # of MACs per bridge domain: 100
  • # of hosts/VPP nodes in the setup: 80
  • # of bridge domains in scenario similar to the setup depicted above: 100
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Medium scenario - MACs per bridge scaling

  • # of VMs in scenario similar to the setup depicted above: 5,000
  • # of MACs per bridge domain: 1,000
  • # of hosts/VPP nodes in the setup: 80
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: TBD
VPP.Scale-Large

Scale testing - Large scenario - bridge domain scaling

  • # of VMs in scenario similar to the setup depicted above: 50,000
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 800
  • # of bridge domains in scenario similar to the setup depicted above: 10,000
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Large scenario - Middle Ground

  • # of VMs in scenario similar to the setup depicted above: 51,200
  • # of MACs per bridge domain: 320
  • # of hosts/VPP nodes in the setup: 800
  • # of bridge domains in scenario similar to the setup depicted above: 320
  • # of individual entries in a security rule for access to a bridge: TBD

Scale testing - Large scenario - MACs per bridge scaling

  • # of VMs in scenario similar to the setup depicted above: 50,000
  • # of MACs per bridge domain: 10,000
  • # of hosts/VPP nodes in the setup: 800
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: TBD
VPP.Scale-Huge (unreal for now)

Scale testing - Huge scenario (unreal for now) - bridge domain scaling

  • # of VMs in scenario similar to the setup depicted above: 500,000
  • # of MACs per bridge domain: 10
  • # of hosts/VPP nodes in the setup: 8,000
  • # of bridge domains in scenario similar to the setup depicted above: 100,000
  • # of individual entries in a security rule for access to a bridge: 20,000

Scale testing - Huge scenario (unreal for now) - Middle Ground

  • # of VMs in scenario similar to the setup depicted above: 500,000
  • # of MACs per bridge domain: 1,000
  • # of hosts/VPP nodes in the setup: 8,000
  • # of bridge domains in scenario similar to the setup depicted above: 1,000
  • # of individual entries in a security rule for access to a bridge: 20,000

Scale testing - Huge scenario (unreal for now) - MACs per bridge scaling

  • # of VMs in scenario similar to the setup depicted above: 500,000
  • # of MACs per bridge domain: 100,000
  • # of hosts/VPP nodes in the setup: 8,000
  • # of bridge domains in scenario similar to the setup depicted above: 10
  • # of individual entries in a security rule for access to a bridge: 20,000

vIMS (OPNFV FuncTest) details

vIMS Overview: vIMS solution 

Networking details:

  • The test case uses only one network (created by the orchestrator init script). This network is connected to the external network by one router (also created by the orchestrator init script).
  • Several security groups are used to isolate the different VMs.

VMs: This test case creates 9 VMs: one for the orchestrator and 8 for the Clearwater VNF. All of these VMs are connected to the same network. IP addresses are assigned via DHCP, but the subnet address range is fixed by the init script.

Note: The Functest container (hosted on the jumphost) must be able to reach the floating IPs assigned to the VMs, and the orchestrator VM (Cloudify manager) must be able to reach the OpenStack APIs.


 

Existing tests considered for FDS

At a minimum, FDS needs to cover FuncTest and Yardstick tests. 
Colorado Testing - Discussion and Proposals - summarizes all test projects, their scope and tests. 

FuncTest

<list Functest tests which apply to FDS - initial scenario (O/S-ODL_L2-VPP)>

vPing
Tempest
  • basic CRUD api tests for networks, subnets, ports - ipv4/ipv6
  • test_network_basic_ops - pings, ssh and floating IPs
    • create a VM with floating IP
    • ping it
    • ssh with key into it
    • ping external IP, external hostname, internal IP (same subnet) from that VM
    • detach the floating IP and check that the VM is unreachable
    • associate the detached floating IP with a new VM and verify connectivity
  • test_server_basic_ops - basically just ssh, the security groups are created but not used
    • Create a keypair for use in launching an instance
    • Create a security group to control network access in instance
    • Add simple permissive rules to the security group
    • Launch an instance
    • Perform ssh to instance
    • Verify metadata service
    • Verify metadata on config_drive
    • Terminate the instance
Rally
  • Authenticate.validate_neutron
    • list networks
  • HeatStacks
    • does creating a stack actually create VMs?
  • NeutronNetworks
    • CRUD floating IP, network, port, router, subnet
  • NovaKeypair
    • Create a VM
  • NovaServer
    • Create a VM
  • NovaSecGroup
    • Create a Security Group
ODL Robot

All current ODL Robot tests - those are:

  • Neutron.Networks :: Checking Network created in OpenStack are pushed to ODL
    Check OpenStack Networks :: Checking OpenStack Neutron for known networks
    Check OpenDaylight Networks :: Checking OpenDaylight Neutron API
    Create Network :: Create new network in OpenStack
    Check Network :: Check Network created in OpenDaylight 

  • Neutron.Subnets :: Checking Subnets created in OpenStack are pushed to ODL
    Check OpenStack Subnets :: Checking OpenStack Neutron for known subnetworks
    Check OpenDaylight subnets :: Checking OpenDaylight Neutron API
    Create New subnet :: Create new subnet in OpenStack
    Check New subnet :: Check new subnet created in OpenDaylight 

  • Neutron.Ports :: Checking Port created in OpenStack are pushed to OpenDaylight
    Check OpenStack ports :: Checking OpenStack Neutron for known ports
    Check OpenDaylight ports :: Checking OpenDaylight Neutron API
    Create New Port :: Create new port in OpenStack
    Check New Port :: Check new port created in OpenDaylight

  • Neutron.Delete Ports :: Checking Port deleted in OpenStack are deleted also
    Delete New Port :: Delete previously created port in OpenStack
    Check Port Deleted :: Check port deleted in OpenDaylight

  • Neutron.Delete Subnets :: Checking Subnets deleted in OpenStack are deleted...
    Delete New subnet :: Delete previously created subnet in OpenStack
    Check New subnet deleted :: Check subnet deleted in OpenDaylight 

  • Neutron.Delete Networks :: Checking Network deleted in OpenStack are deleted
    Delete Network :: Delete network in OpenStack
    Check Network deleted :: Check Network deleted in OpenDaylight

Promise

These tests exercise reservation of compute resources; the only part relevant to FDS is that they provision VMs.

vIMS

vIMS (Virtual IP Multimedia Subsystem) is a test defined by FuncTest - see http://artifacts.opnfv.org/functest/brahmaputra/docs/userguide/index.html#vims - leveraging the open source IMS solution from Clearwater.

From the wiki: This functional test will verify that

  • The OpenStack Nova API can be called to instantiate a set of VMs that together comprise a vIMS network function
  • The OpenStack Glance service is capable of serving up the required images
  • The virtual networking component of the platform can provide working IP connectivity between and among the VMs
  • The platform as a whole is capable of supporting the running of a real virtualized network function that delivers a typical service offered by a network operator, i.e. voice telephony
Doctor

Invokes a fault by disabling the network on one of the compute hosts, to exercise fault management.

Yardstick

NFVI test cases
  • TC001 - Network Performance - pktgen
  • TC002 - Network Latency - ping
  • TC008 - Network Performance, Packet Loss Extended Test - pktgen
    • Note: combinations of different packet sizes and different numbers of flows
  • TC009 - Network Performance, Packet Loss - pktgen
    • Note: combinations of 64-byte packet size and different numbers of flows
  • TC011 - Packet delay variation between VMs - iperf3
  • TC037 - Latency, CPU Load, Throughput, Packet Loss - pktgen, ping, mpstat
    • Note: Each port amount is run two times
  • TC038 - Latency, CPU Load, Throughput, Packet Loss (Extended measurements) - pktgen, ping, mpstat
    • Note: Each port amount is run ten times
OPNFV Feature test cases
  • TC027 - IPv6 connectivity between nodes on the tenant network - ping6
  • TC006 - Virtual Traffic Classifier Data Plane Throughput Benchmarking Test - DPDK pktgen
  • TC007 - Virtual Traffic Classifier Data Plane Throughput Benchmarking Test in Presence of Noisy neighbours - DPDK pktgen
  • TC020 - Virtual Traffic Classifier Instantiation Test - DPDK pktgen
  • TC021 - Virtual Traffic Classifier Instantiation Test in Presence of Noisy Neighbours - DPDK pktgen

New tests for FDS

FuncTest

<TBD - list Functest tests which apply to FDS - initial scenario (O/S-ODL_L2-VPP)>

Yardstick

<TBD - list Yardstick tests which apply to FDS - initial scenario (O/S-ODL_L2-VPP)>

FD.io/VPP tests

<TBD> List of existing and new CSIT tests which apply to FDS

See also: https://wiki.fd.io/view/CSIT/FuncTestPlan

Scenarios with topologies for FDS

A tenant network is a network created and managed by OpenStack.

A provider network is an existing physical network that has been added to OpenStack manually so that OpenStack mirrors it.

OpenDaylight Cluster Testing for FastDataStacks

Dedicated wiki page for ODL cluster testing: ODL Cluster Testing

 
