

Enhancements of current components and work areas for FastDataStacks

Work areas

Key work areas for composing the initial FastDataStacks solution out of OpenStack, OpenDaylight, and VPP:

  1. OpenStack / OpenDaylight enhancements
    1. OpenStack Neutron - ODL Neutron Northbound: evolve the OpenStack-OpenDaylight ML2 driver (enhanced synchronization and cluster support)
  2. OpenDaylight enhancements
    1. Create Neutron (Group Based Policy) Mapper (part of the ODL GBP project) for necessary Neutron Objects
      1. Extend GBP infrastructure to allow End-Point Configuration
      2. Neutron Mapper refactor
    2. Augment/extend ODL Topology info to capture provider networks
    3. Renderer Manager
    4. GBP renderer for VPP/Netconf (part of the ODL GBP project)
      1. Overlay transport (VxLan tunnel) manager
  3. FD.IO enhancements
    1. Honeycomb enhancements: YANG models, associated API/ABIs in VPP
      1. FD.io Honeycomb project plan
      2. Honeycomb/VPP YANG models: https://gerrit.fd.io/r/gitweb?p=honeycomb.git;a=tree;f=v3po/api/src/main/yang;h=f32fedf0a116e665dd1f4a4bec9fa60bd7d9af40;hb=HEAD
      3. Honeycomb JIRA: https://jira.fd.io/projects/HONEYCOMB
        1. Honeycomb new features epic: https://jira.fd.io/browse/HONEYCOMB-30
        2. Honeycomb refactoring epic: https://jira.fd.io/browse/HONEYCOMB-5 (Blocks HONEYCOMB-30 to a certain extent)
    2. Honeycomb further items for GBP:
      1. vhostuser support (vhostuser-type interface for L2 connectivity)
      2. Verify VXLAN tunneling
      3. Provider networks (VLAN): subinterfaces, VLAN tag push/pop
      4. Policy: security groups and security group rules (L2 ACLs; srcMAC/dstMAC allow/deny)

    3. VPP enhancements
      1. VPP TAP interface support
      2. VPP NSH support
      3. VPP ACL enhancement to support Security Groups model (especially TCP/UDP port range).
    4. CSIT functional and performance test overview
      1. CSIT JIRA: https://jira.fd.io/projects/CSIT

  4. Installer enhancements
    1. VPP plugin for APEX
  5. Testing/QA
    1. System level testing (FuncTest, YardStick)
      1. Robot tests for ODL-VPP
      2. FuncTest tests (vPing etc.) - (FuncTest user guide)
      3. YardStick tests (Yardstick user guide)
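Several of the Honeycomb work items above (vhostuser support, VXLAN tunneling, VLAN subinterfaces) are driven through Honeycomb's RESTCONF/NETCONF northbound using the v3po YANG models linked above. As a rough illustration only, the sketch below builds a JSON body for a vhost-user interface; the exact paths and leaf names are assumptions to be checked against the current model, not an authoritative API reference:

```python
import json

# Illustrative RESTCONF payload for a vhost-user interface via Honeycomb.
# Leaf names loosely follow the v3po YANG model; treat the "v3po:" paths
# and attribute names as assumptions to verify against the model tree.
def vhostuser_payload(name, socket_path, role="server"):
    return {
        "interface": [{
            "name": name,
            "type": "v3po:vhost-user",       # YANG identity for the interface type
            "enabled": True,
            "v3po:vhost-user": {
                "socket": socket_path,       # socket shared with the VM (qemu)
                "role": role,                # "server" or "client"
            },
        }]
    }

payload = vhostuser_payload("vhost-vm1", "/tmp/vhost-vm1.sock")
print(json.dumps(payload, indent=2))
```

In a live deployment such a body would be PUT to Honeycomb's RESTCONF config endpoint (e.g. under ietf-interfaces:interfaces) with the credentials set up by the installer.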

Key contacts per work area (DRAFT!)

Work area | Contacts
ODL Neutron Northbound - OpenStack Neutron integration | Isaku Yamahata <isaku.yamahata@intel.com>
ODL GBP Mapper for Neutron | Tomas Cechvala <tcechval@cisco.com>
ODL GBP Renderer for VPP | Wojciech Dec <wdec@cisco.com>, Michal Cmarada <mcmarada@cisco.com>
FD.io Honeycomb extensions | Maros Marsalek <mmarsale@cisco.com>, Marek Gradzki <mgradzki@cisco.com>, Keith Burns <krb@cisco.com>, honeycomb-dev@lists.fd.io
FD.io VPP extensions | Stefan Kobza <skobza@cisco.com>, Keith Burns <krb@cisco.com>, incl. vpp-dev@lists.fd.io
APEX: VPP plugin | Juraj Linkes <jlinkes@cisco.com>, Marcel Sestak <msestak@cisco.com>, Dan Radez <dradez@redhat.com>, Tim Rozet <trozet@redhat.com>
Testing/QA: Functest, Yardstick additions | Juraj Linkes <jlinkes@cisco.com>, Marcel Sestak <msestak@cisco.com>
Storage | Ashlee Young <ashlee@wildernessvoice.com>
VPP | https://wiki.fd.io/view/VPP/Committers/SMEs
VBD | Tyler Levine (tylevine) <tylevine@cisco.com>

 

Initial scenarios

Colorado 1.0 - Sept 22nd

IRC channel (#opnfv-fds and #opnfv-apex)

(All work, tests, etc. should be done by Sept 2, so that the final release runs in the LF lab can be executed in time.)

Colorado release plan

Scenario "apex-os-odl_l2-fdio-noha"

  • Scenario provides tenant connectivity via VPP. Bridge domains across VPPs are configured using ODL GBP. VPP is used only for tenant networks; bridge domains are connected using either VXLAN or VLANs.
    L3 connectivity is provided centrally via an OpenStack-provisioned qrouter on the control node, i.e. ODL configures only L2.
  • 4 node setup:
    • Jumphost: Apex installer
    • 2 x compute node: VPP, HC, and OpenStack infra
    • 1 x control node: OpenStack infra, OVS (for br-ext), qrouter/L3 agent, ODL, VPP, HC (i.e. HC and ODL run in parallel on the same node)
  • ODL L2 networking
  • Apex installer
  • VPP/HC
    • VXLAN (incl. split horizon), VLAN interfaces support
    • Tap interface per tenant bridge domain for DHCP
    • Tap interface per tenant bridge domain for qrouter (for external connectivity)
  • Tenant network isolation/Security groups
    • NAT/IPtables provided by qrouter (configured via L3 agent)
  • Tests:
    • FuncTest tests
    • Yardstick tests
    • FDS specific tests (see below)
  • Environments:
    • UCS-B (Linux-Foundation lab) - for release operations (OPNFV)
    • UCS-B (Paris) - for test/development
    • CENGN lab - for test/development
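The scenario's L2 flooding behavior relies on split horizon: a broadcast frame received from one VXLAN tunnel must not be re-flooded to the other tunnels in the same bridge domain (this is what tasks 6b and 6c verify). A minimal model of the rule, with port names invented purely for illustration:

```python
# Minimal model of split-horizon flooding in a bridge domain: a frame is
# flooded to every member port except the ingress port and any port that
# shares the ingress port's non-zero split-horizon group (SHG). Putting
# all VXLAN tunnels in one SHG prevents tunnel-to-tunnel re-flooding.
def flood_targets(ports, ingress):
    ingress_shg = ports[ingress]
    return [
        p for p, shg in ports.items()
        if p != ingress and not (shg != 0 and shg == ingress_shg)
    ]

# Example bridge domain: two local vhost-user ports (SHG 0) and
# two VXLAN tunnels toward the other compute nodes (SHG 1).
bd = {"vhost0": 0, "vhost1": 0, "vxlan_to_c0": 1, "vxlan_to_c1": 1}

# A frame arriving from a tunnel floods only to the local ports:
print(flood_targets(bd, "vxlan_to_c0"))
# A frame from a local port floods to every other port, tunnels included:
print(flood_targets(bd, "vhost0"))
```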
Tasks to be completed for Colorado 1.0  
ID | Component | Topic | JIRA ticket | Prio | Implemented (ETA, owner) | Tested (ETA, owner) | APEX integrated (ETA, owner) | Inter-project dependencies | Notes | Issues
1 | GBP | DHCP for tenant networks (tap interface to DHCP); tap interface per tenant bridge domain for DHCP (for IP assignment) - test on PIRL UCS-B (Paris) or UCS-B Side1 | FDS-17 | Prio 1

done

Aug/30

TomasC / MichaC

VladimirL

100%

 done Aug/30

MichalC

TomasC

JurajL

VladimirL

100%

done

Aug/30

UCS-B Paris TomasC + Wojciech

Todo

UCS-B (LF)

MichalC

TomasC

JurajL

100%

OpenStack

ID7

  • 2016/08/17 2pm CET - BDomain not created while creating network subnet (done locally at the moment - ETA Aug/12) - in progress by Woj, Juraj, Michal

  • 2016/08/18 10.40am CET - continuing

  • 2016/08/19 3pm CET - JurajL - Woj's patches checked and running (tap ports being created, VMs being created too). Juraj summarizing current issues (e.g. VXLAN being tested - tunnels not created, etc.; DHCP tap ports created, but DHCP not configuring IPs for VMs) - email will be sent today
  • 2016/08/22 9.30am CET - see below (-fixes need cherry picked to BOR)
  • 2016/08/23 9.45am CET - there is only one RQ configured in the VIC. Please increase the number of RQs in the VIC, then reboot and try again - resolved (No. of RQs changed, server restarted)
  • 2016/08/23 3.00pm CET - checking the APEX deploy on UCS-B (using newest VPP and HC /vpp-lib-16.12-rc0~4_g5331c72~b1026.x86_64 vpp-16.12-rc0~4_g5331c72~b1026.x86_64 honeycomb-1.0.0-1036.noarch / - didn't work, now starting to use older versions as per Juraj's case on CENGN enviro, to check those) - done
  • 2016/08/24 11.15am CET - deploy complete using (using newest VPP and HC /vpp-lib-16.12-rc0~4_g5331c72~b1026.x86_64 vpp-16.12-rc0~4_g5331c72~b1026.x86_64 honeycomb-1.0.0-1036.noarch (current achievements: tap ports created both DHCP and qrouter, vhost created; current rests: VBDomain to be created on computes and control and linked to tap ports and VBDs, VLAN and VXLAN tunnels to check yet - in progress)
  • 2016/08/25 12.00pm CET - 16.12 VPP used - vhost port created on both computes - ok, dhcp tap port created (controller ok), vhost added to VBDomain but only on 1 compute! and 16.9 VPP and HC version to be used now to check whether the issue persists. VLAN and VXLAN tunnels to check yet
  • 2016/08/25 3.30pm CET - VPP version changed to 16.09 - deploy tests in progress - will send update by EOD
  • 2016/08/26 12pm CET - new 16.09 VPP version used, latest commits used due to expected fixes being checked; current issues - IP were not assigned to VMs, 1 vxlan tunnel missing (5 of 6), no pings within 1 node or among nodes (consulted with Sean from VPP during FDS IRC call, a patch fix shall be ready now, checked - no progress in resolution though...)
  • 2016/08/29 10.00 CET - Checking the status of creating VXLAN tunnels and assignment of the qrouter tap port to the BD
  • 2016/08/29 16.00 CET - tap interface creation works
  • 2016/09/01 10.00 CET - Qrouter tap port assignment to the BD is working; tested locally and on UCS-B Side1. Some more tests to be done before merging the patch
  • 2016/09/02 10.00 CET - Qrouter tap port fix merged to stable/boron and master (carbon). Manual tests done locally and on UCS-B Side1.
  • 2016/09/02 11.30am CET - Verified also on UCS-B Side2 by Juraj
  1. Glance services are not running after reboot - APEX-241 - Tim Rozet - open

  2. Duplicate neutron agents and hostname changed after node reboot - APEX-239 - Tim Rozet - open

3. vhost added to VBDomain but only on 1 compute!

 4. not able to run VPP with No. of huge pages set to 10,000 - jurajL

5. 1 vxlan tunnel missing (5 of 6) - owner VladimirL on local

6. no pings within 1 node or among nodes (consulted with Sean from VPP during FDS IRC call, a patch fix shall be ready now, will be checked asap)

7. IP were not assigned to VMs

2 | GBP, HC | VPP tap interface to qrouter; tap interface per tenant bridge domain for qrouter (for external connectivity) - test on PIRL UCS-B (Paris) or UCS-B Side1 | FDS-18 | Prio 1

done

Aug/30

TomasC

100%

 

done Aug/30

MichalC

TomasC

JurajL

100%

in progress

Sep/9

MichalC

TomasC

JurajL

50%

OpenStack

ID7

  • 2016/08/12 10.00am CET - BDomain not created while creating network subnet (done locally at the moment - ETA Aug/15)
  • 2016/08/15 10.00am CET - OStack to be fixed (as DHCP), all will work, setup on UCS-B Paris todo
  • 2016/08/16 9.45am CET - tap ports created on DHCP, needs to be tested on UCS-B Side1
  • 2016/08/17 11.30am CET - see notes under ID1 (dhcp)
  • 2016/08/19 3pm CET - JurajL - tap interfaces created, cannot test the qrouter correct behaviour yet due to VXLAN (floating ip test to confirm this)
  • 2016/08/22 9.30am CET - fixes need cherry picked to BOR
    • ping between vms on different hosts doesn't work - owner MichalC.
    • ping between vms on the same host doesn't work - owner MichaC.
    • vpp+hc running on all 3 nodes, but bound the tenant interface on compute1 only (as in missing in ip a and pci address configured in startup.conf)
    • do not see the bound interface in vpp (on all three nodes) - in progress - owner MichalC. (chasing Ed Warnicke, John Daley)
    • with vpp on controller and mounted in vpp, bridge-domain is created only on one of the compute nodes - owner MichalC.
    • tap ports and vhostuser ports are not deleted when deleting Wojciech's vms and networks (request to delete came to HC, but there are errors) - owner MarsoM
      • errors in honeycomb log. - owner MarosM
    • Q - with vpp on controller and mounted in vpp, tap port for dhcp server is created on controller - where to have the dhcp server?? on controller
    • vxlan tunnels are not being created because of the vpp port binding issue - owner MichalC.
  • 2016/08/23 9.45am CET - there is only one RQ configured in the VIC. Please increase the number of RQs in the VIC, then reboot and try again - in progress
  • 2016/08/24 11.15am CET - see above
  • 2016/08/25 12.00pm CET - 16.12 VPP used - tap port on qrouter created on control node - ok, VBD created on controller - ok, qrouter port not added to the VBD. VLAN and VXLAN tunnels to check yet
  • 2016/08/25 3.30pm CET - VPP version changed to 16.09 - deploy tests in progress - will send update by EOD
  • 2016/08/26 12pm CET - new 16.09 VPP version used, latest commits used due to expected fixes being checked; current issues - qrouter tap port not in VBDomain, 1 vxlan tunnel missing (5 of 6), no pings within 1 node or among nodes (consulted with Sean from VPP during FDS IRC call, a patch fix shall be ready now, checked - no progress in resolution though...)
  • 2016/08/29 10.00 CET - Checking the status of creating VXLAN tunnels and assignment of the qrouter tap port to the BD
  • 2016/08/29 16.00 CET - tap interface creation works, though qrouter tap port not placed into correct bridge domain (likely issues in GBP),
    all but 1 vxlan tunnel created correctly. Missing vxlan tunnel between compute1 and compute0 does not come up due to ip address conflicts. Issue in either GBP or VBD - investigating.
  • 2016/09/01 10.00 CET - Qrouter tap port assignment to the BD is working; tested locally and on UCS-B Side1. Some more tests to be done before merging the patch
  • 2016/09/02 10.00am CET - Qrouter tap port fix merged to stable/boron and master (carbon). Manual tests done locally and on UCS-B Side1.
  • 2016/09/02 11.30am CET - Verified also on UCS-B Side2 by Juraj



  1. ping between vms on different hosts doesn't work - owner MichalC.
  2. ping between vms on the same host doesn't work - owner MichaC.
  3. with vpp on controller and mounted in vpp, VBDomain is created only on one of the compute nodes - owner MichalC.
  4. tap ports and vhostuser ports are not deleted when deleting Wojciech's vms and networks (request to delete came to HC, but there are errors) - owner MarsoM - done, test - MichalC
    • errors in honeycomb log. - owner MarosM - done, test MichalC
  5. qrouter port not added to the VBD
  6. not able to run VPP with No. of huge pages set to 10,000 - jurajL - to be tested
  7. 1 vxlan tunnel missing (5 of 6),
3 | GBP/HC/VPP | Verification that ports/tunnels... are deleted or reused properly | FDS-19 | Prio 2 | n/a

todo

MichalC, JurajL

n/a   
6 | GBP, OpenStack | Hostconfig support in GBP; associated patch in OpenStack (see notes) | FDS-20 | Prio 2

Post Colorado

InProgress

Sept/9

GBP - MichalC 25%

OpenStack - ??

??%

todo

Sept/9

GBP - MichalC

0%


todo

Sept/9

GBP - MichalC, JurajL

0%

OpenStack: https://review.openstack.org/#/c/333186;
see also Isaku's email
 
6b | GBP, Test | Verify that loop avoidance (split horizon) works on VLAN (VLAN should work) | FDS-21 | Prio 1 | n/a

done

Aug/30

JurajL

MichalC

100%

n/a

 
  • depending on VXLAN (see above) + part of the VLAN test
 
6c | GBP, Test | Test VBD in a setup with 3 or more nodes; make sure that split horizon works as designed on VXLAN-based networks | FDS-22 | Prio 1 | n/a

done

Aug/31

MichalC, Tyler, VladimirL

100%

n/a

 
  • 2016/08/26 12.45pm CET - Vladimir Lavor allocated to this task
  • 2016/09/02 15:15 CET - tested on UCS-B Side1; broadcast packets sent from one VM were received on the others and no duplicate packets occurred.
 
7 | OpenStack | Support OpenStack changes (patches) required for FDS as part of the Colorado 1.0 release | FDS-23 | Prio 1

done

ETA Aug/19

Wojciech

100%

Done (as part of Vhost configuration)

Wojciech

100%

in progress

ETA Sep/9

TimR

75%

 

Consider forking (as interim solution until patches get accepted upstream) relevant OpenStack code for FDS (i.e. networking-odl) and apply the required changes to ensure a stable base.

 

 
           
10b | HC | HC-VPP sync: VPP being down causes tracebacks in HC | FDS-24, https://jira.fd.io/browse/HONEYCOMB-78 | Prio 1

todo

Sept 8

Marek Gradzki, MarosM

0%

todo

Sept 8

Marek Gradzki, MarosM

0%

todo

Sept 9

JurajL

0%

 

Tim Rozet (see email) noticed one behavior of the service itself which he is not sure is correct: when the VPP service is down, starting up Honeycomb results in tracebacks and then brings the Honeycomb service down because it cannot connect to VPP:

https://paste.fedoraproject.org/410321/raw/

ISSUES:

  • Reconnect mechanism is currently disabled, since it has to be different now; for now it just reports the VPP connection failure.
  • VPP seems to deny the reconnect even if we try: https://jira.fd.io/browse/HONEYCOMB-78
  1. -2 is returned from vl_map_shmem (memory_shared.c) - https://jira.fd.io/browse/HONEYCOMB-78 - Marek Gradzki - open
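The desired startup behavior here is a bounded retry rather than a fatal traceback. Honeycomb itself is Java; the sketch below only illustrates the retry-with-backoff pattern, and `connect_to_vpp` is a hypothetical stand-in for the shared-memory attach (vl_map_shmem) that currently returns -2 on failure:

```python
import time

# Retry the VPP connection with capped exponential backoff instead of
# crashing on the first failure. Returns the attempt number on success.
def connect_with_backoff(connect_to_vpp, max_attempts=5, base_delay=0.5,
                         cap=8.0, sleep=time.sleep):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if connect_to_vpp() == 0:          # 0 = success, mirroring the C return code
            return attempt
        if attempt < max_attempts:
            sleep(delay)
            delay = min(delay * 2, cap)    # back off, but never beyond `cap`
    raise ConnectionError("VPP unreachable after %d attempts" % max_attempts)

# Simulate VPP coming up on the third attempt (no real sleeping):
results = iter([-2, -2, 0])
print(connect_with_backoff(lambda: next(results), sleep=lambda _: None))
```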
           
11 | VPP | VPP supported on SELinux with CentOS 7.2 | FDS-25 | Prio 2

in progress

Aug/18

Ed Warnicke

??%

todo

ETA TBD

Ed Warnicke

0%

todo

ETA TBD

???

0%

 

Ed - investigation in progress

2016/09/07 10.30am CET - Check the status on this one; it is probably done.

 
           
14 | Test | Functest tests: tiers 0-2 | FDS-26 | Prio 1

done

ETA Sep/05

JurajL

100%

in progress

ETA Sep/09

JurajL

50%

n/a

 

https://git.opnfv.org/cgit/functest/tree/ci/testcases.yaml

  • 2016/09/07 10.30am CET - If net, subnet, and VMs are created quickly after each other, the VMs wouldn't get an IP configured. Checking whether a timeout between tasks will help.
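The timing issue above is usually worked around by polling for the result rather than inserting a fixed sleep between the neutron/nova calls. A hedged sketch of the polling pattern; `get_vm_ip` is a hypothetical accessor (e.g. wrapping the nova/ports API), not part of Functest:

```python
import time

# Poll until the VM actually reports an IP, instead of sleeping a fixed
# amount between "create subnet" and "boot VM". Only the pattern matters;
# get_vm_ip is a hypothetical callable returning the IP or None.
def wait_for_ip(get_vm_ip, timeout=60.0, interval=2.0,
                clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while True:
        ip = get_vm_ip()
        if ip:
            return ip
        if clock() >= deadline:
            raise TimeoutError("VM got no IP within %.0fs" % timeout)
        sleep(interval)

# Simulate DHCP assigning the address on the third poll:
answers = iter([None, None, "10.0.0.5"])
print(wait_for_ip(lambda: next(answers), sleep=lambda _: None))
```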

 

 
15 | Test | Yardstick tests: tier 0 | FDS-27 | Prio 1

todo

ETA Sep/12

JurajL

0%

todo

ETA Sep/12
JurajL

0%

n/a https://wiki.opnfv.org/display/SWREL/Test+Release+Criteria  
15b | Test | Test results reporting for FDS in the OPNFV Jenkins jobs | FDS-28 | Prio 1

n/a

todo

Sept/02

Juraj

0%

todo

Sept/02

Juraj

0%

Work with Morgan Richomme to get the Jenkins jobs adapted so that test results from the LF lab are included in the test results database and the release documentation.
           
16a | APEX, Integration | Try installation on current CENGN lab (check whether the deploy issues seen are due to UCS-B again) | FDS-29 | Prio 1

In Progress

ETA Sep/1

JurajL, Sean

100%

todo

ETA Sept/1

JurajL, Sean

50%

todo

ETA Sept/1

JurajL

30%

  ORIGINAL ISSUES:
  • 2016/08/17 11.50am CET - checking Woj's patches to create VMs. waiting for further updates once checked.

  • 2016/08/18 10.30am CET - jumphost size increased due to APEX (needs 50GB itself), but jumphost not working now - broken OStack blocked, Woj's patches check blocked, HC communicating with VPP - blocked

  • 2016/08/19 3pm CET - CENGN running again, all deployed with OVS. Jumphost resize done, redeploy with VPP todo (Woj's patches still to be checked)
  • 2016/08/22 3.15pm CET - deploy run on old CENGN, current testing in progress (tap ports woj's patch having issues, tap port for dhcp not created (same with qrouter) - patch check in progress)
  • 2016/08/23 9.45am CET - checking other patch set to resolve woj's patch issues - tap ports creation checked (dhcp ports, qrouter ports still todo), creating VMs now (not seeing the VBDomains being created - check and debug in progress)
  • 2016/08/24 11.30am CET - current issues: VBDomains not created - resolution in progress, VPP bug - https://jira.fd.io/browse/VPP-349, HC - https://jira.fd.io/browse/HONEYCOMB-142;
  • 2016/08/25 12.30pm CET - both dhcp and qrouter ports created - ok, VMs created - ok, VBDomains still not created - checking issue; also not able to run VPP with no. of huge pages set to 10,000
  • 2016/08/25 3.30pm CET - running the deploy with the 16.09 VPP version; still not seeing the VBDs being installed - check and debug in progress
  • 2016/08/26 12pm CET - using latest commits for 16.09VPP, checking HC too now. VPP frozen when qrouter tap port created (getting kernel:unregister_netdevice: waiting for qr-6eb140b3-e1 to become free. Usage count = 1); see further current issues under issues column 
  • 2016/08/30 12pm CET - redeploying with fdio scenario - should provide full deployment
  • 2016/08/30 10:30am CET - can't deploy the fdio scenario. Apex assumes the tenant interface to be second in order, but it's third in CENGN; redeploying without VPP
  1. VBDomains not created - JurajL - in progress
  2. FDS blocking bug - VPP bug - https://jira.fd.io/browse/VPP-349 - damjan marion - merged
  3. HC - https://jira.fd.io/browse/HONEYCOMB-142 - Jan Srnicek - ETA Aug/31- todo
  4. not able to run VPP with No. of huge pages set to 10,000 (16.09 VPP version to be checked to resolve this issue) - jurajL
  5. VPP frozen when qrouter tap port created (getting kernel:unregister_netdevice: waiting for qr-6eb140b3-e1 to become free. Usage count = 1) - Shwetha Bhandari looking at it now
  6. dhcps not adding IPs
  7. no pings on same node
16b1 | APEX, Integration | Try installation on Pharos CENGN lab (check whether the deploy issues seen are due to UCS-B again) | FDS-29 | Prio 1

n/a

todo

ETA Sept/09

JurajL, Sean

25%

todo

ETA Sept/09

JurajL, Sean

25%

 
  • 2016/08/26 12pm CET - starting to migrate original CENGN lab to Pharos CENGN - this might resolve some of the current CENGN issues - on hold (blocker with missing mac addresses)
  • 2016/08/29 10am CET - deploy started on Friday, deploy didn't work (issue with IPMI address reachability), redeploying now, then check VPP/HC/GBP states
  • 2016/08/30 12pm CET - problems with PXE booting, Tim to investigate
  • 2016/08/30 10:30am CET - gathered intel on how to debug this; will debug today
  • 2016/09/01 10.30am CET - PXE boot problem investigation in progress
  1. missing MAC addresses - email sent to Raymond
3 | GBP, VPP | Confirm proper VLAN support on UCS-B Side1 (ENIC driver fixes) | FDS-30 | Prio 1 | n/a

done

ETA Aug/26

JurajL, MichalC

100%

done

Aug/26

JurajL,

MichaC

100%

http://www.dpdk.org/dev/patchwork/patch/14911/

ID1

Confirm that http://www.dpdk.org/dev/patchwork/patch/14911/  fixes the issue

  • 2016/08/12 10.20am CET - task started on Aug/12
  • 2016/08/12 3pm CET - setup on UCS-B in progress (UCS-B Paris still down!!)
  • 2016/08/15 10.00am CET - demo run on UCS-B Side1 - pings not done (currently debugging where packets getting lost - redeploy in progress)
    • 2016/08/15 3pm CET - UCS-B Side1 - Apex redeploy falling down
  • 2016/08/15 3pm CET - UCS-B - Paris- issue with setting up the interfaces in VPP (binding correctly) - checking with a newer version of VPP now
  • 2016/08/16 10.00CET - on hold, helping Tomas with qrouter and DHCP
  • 2016/08/22 3.20pm CET - blocked by interface not binding to vpp
  • 2016/08/23 9.45am CET - there is only one RQ configured in the VIC. Please increase the number of RQs in the VIC, then reboot and try again - in progress
 
4 | GBP, Test | Confirm OpenStack security groups (implemented via L3 agent/qrouter) work - external connectivity | FDS-31 | Prio 1

n/a

todo

ETA Aug/29

JurajL, Sean

0%

n/a

ID1, ID2
  • depending on VXLAN - will be tested on CENGN
 
5 | GBP, Test | Confirm OpenStack floating IP addresses (implemented via L3 agent/qrouter) work - external connectivity | FDS-32 | Prio 1 | n/a

done

ETA Aug/29

JurajL, Sean

100%

n/a

ID1, ID2
  • depending on VXLAN - will be tested on CENGN
  • 2016/09/02 10.00am CET - working / verified
 
16b2 | APEX, Integration | Debug deployment on CLUS UCS-B lab - L2 scenario deployment (Apex deploy - OpenStack, ODL (GBP, VBD), OVS (L2 -->, L3), VPP (L2), HC (L2) on controller; OStack, HC and VPP on compute) - UCS-B Side2 | Prio 1

n/a

In progress

ETA Sep/09

JurajL, Sean

90%

in progress

ETA Sep/09

JurajL

90%

  ORIGINAL NOTES - done in parallel with the L2 scenario deployment on CENGN
  • SSH keys don't appear on the compute node, therefore the Apex deploy crashes at the beginning (issue with HW, wrong deploy on 1 node) - issue should now be resolved (David Milles), so the deploy is being tested
  • deploy in progress 2016/8/10 3:40pm CET (HC not communicating with VPP yet) - the setup after deploy still needs to be checked! - deployed with OVS ok, VPP still needs to be checked manually
  • 2016/08/15 12.00am - once CENGN is up and running, UCS-B will be replaced
  • 2016/08/19 3pm CET - CENGN running again, all deployed with OVS. Jumphost resize done, redeploy with VPP todo (Woj's patches still to be checked)

     

2016/08/22 3.30pm CET - on hold, see issue above

  • 2016/09/02 10.00am - Working environment but without the qrouter tap port fix; floating IPs working (with manual qrouter tap assignment to the BD), pings working

 

 
16c | APEX, Integration | Move to official CENGN Pharos OPNFV lab - L2 scenario deployment (Apex deploy - OpenStack, ODL (GBP, VBD), OVS (L2 -->, L3), VPP (L2), HC (L2) on controller; OStack, HC and VPP on compute) | Prio 1

n/a

in progress

ETA Sep/09

JurajL, Sean

25%

in progress

ETA Sep/09

JurajL

25%

possible duplicate of 16b1

  • In case we really have UCS-B ENIC issues, we should consider hooking up CENGN formally to OPNFV Jenkins and run release jobs there.
    Fatih Degirmenci fatih.degirmenci@ericsson.com is the right contact to achieve this (this would likely be Raymond from CENGN to work with Juraj and Fatih).

ORIGINAL ISSUES:

  • 2016/08/17 11.50am CET - checking Woj's patches to create VMs. waiting for further updates once checked.

  • 2016/08/18 10.30am CET - jumphost size increased due to APEX (needs 50GB itself), but jumphost not working now - broken OStack blocked, Woj's patches check blocked, check whether HC communicating with VPP - blocked

  • 2016/08/19 3pm CET - CENGN running again, all deployed with OVS. Jumphost resize done, redeploy with VPP todo (Woj's patches still to be checked)
  • 2016/08/25 12.45pm CET - info from Raymond Maika <raymond.maika@cengn.ca> received - lab prep still in progress - access for team created
  • see ID 16b1 for reference
 
16d | APEX, Integration | Automated deploy by OPNFV Jenkins on OPNFV lab | Prio 1

todo

ETA Sept/1

JurajL

0%

todo

ETA Sept/1

JurajL

0%

todo

ETA Sept/1

JurajL

0%

17 | among last tasks
17 | APEX, Integration | Puppet VPP/HC manifests for HC/VPP installation/configuration - UCS-B Side2 ("Pod2") | FDS-33 | Prio 1

In Progress

Aug/26

MarcelS/TimR

60%

 

In Progress

Aug/26

TimR/MarcelS

50%

Todo

Aug/29

JurajL/MarcelS

0%

17a, 17b

done locally - params checked after deploy though

2016/08/18 10.40am CET - checking light weight HC's & rpms for HC puppet manifest (1. config section in HC class to be done, 2. add separate HC repo to install, 3. change HC port vs ODL port)

2016/08/19 10.00 CET - adding VPP and HC puppet modules to the undercloud from Tim's and Feng Pan's private repos - starting the puppet modules manually now on CENGN (on both compute and control nodes)

2016/08/19 3.15pm CET - MarcelS - tested Feng's puppet manifest (VPP) - not running correctly yet (Marcel summarizing the issues in email now). HC check depends on this.

2016/08/23 10.00am CET - MarcelS - Feng's patch amended again, marcel checking/testing the VPP patch again

2016/08/23 3.30pm CET - MarcelS - working on a new full redeploy of the puppet setup (using Tim's patches and on top of that the FD.io patches) - 2 dependency patches are missing - email describing the issues to be sent to Tim asap

2016/08/24 12.00pm CET - MarcelS - Tim's and Feng's patch changes seem not to be synchronized - Marcel sending info on both

2016/08/25 12.45pm CET - MarcelS - build using the amended patches from Tim and Feng in progress

2016/08/25 3.30pm CET - MarcelS - checking commits in Colorado master - in progress

2016/08/26 12.00pm CET - MarcelS - working with new apex commits (https://gerrit.opnfv.org/gerrit/#/q/project:apex ), whole mac checked, all issues fixed; now running whole deploy again to check proper vpp and deploy setup. - juraj will check the completion of the deployment

2016/08/29 10.00am CET - JurajL to check the status of deployment (in progress).

2016/08/29 10.30am CET - deploy failed, JurajL started redeploy.

2016/08/29 16.00 CET - RQs fixed on the pod; deployment successful, but not fully:

            • honeycomb doesn't start - http://pastebin.ca/3707395
            • computes are incorrectly bound - node-id doesn't match nova service-list - APEX-247
            • vpp after node restart doesn't bind the interface because it's not down - APEX-248

2016/08/30 12pm CET - redeploying with latest apex build which should contain many fixes - should provide full deployment with possibly only tap port patch missing

2016/08/30 10:30am CET - deployed the os-odl_l2-fdio-noha scenario; pending issues from the APEX perspective: hugepages not configured; tenant interface configured ONBOOT=yes; default route not configured on controller; dhcp agent patch not applied; l3 agent patch not applied; ping between VMs doesn't work

2016/09/02 10:00am CET - Some fixes are ready, redeployment and tests are pending.

  1. Tim's patches missing 2 dependencies (Tim requested Marcel to proceed with the install on the jumphost to continue)
  2. Tim's and Feng's patch changes seem not to be synchronized - Marcel sending info on both - currently waiting for Feng to amend the patch once edited by Tim, so changes from both are synced in 1 patch
17a | APEX, Integration | IP address to private interface to VPP | Prio 1

done

ETA Aug/23

TimR

100%

done

Aug/25

TimR/MarcelS

100%

done

Aug/26

JurajL/MarcelS

100%

 

2016/08/18 10.40am CET - resolution in progress by Tim

2016/08/26 12pm CET - fixed, now checking whether ok (owner MarcelS)

2016/08/30 10:30am CET - doesn't work if the second NIC is not on the tenant network

 

 
17d | APEX, Integration | Automated testing and automated test results reporting | FDS-28 | Prio 1

Todo

Sep/2

MarcelS/TimR, JurajL

0%

n/a

Todo

Sep/2

MarcelS/TimR, JurajL

0%

15b | There is a small change needed to the Jenkins jobs, and eventually some work by the FuncTest team to select the right tests - work with Morgan Richomme.
18 | APEX, Integration | FD.io with Honeycomb agent | APEX-133, FDS-34 | Prio 1

in progress

ETA Aug/15

TimR / MarcelS / JurajL

todo | todo
19 | APEX, Integration | Increase huge pages | APEX-184, FDS-35 | Prio 1

todo

TimR

todo

TimR

todo JurajL Work-around via configuration change (-> Tim Rozet for details) 
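For reference, hugepage sizing for VPP on CentOS 7 is normally done via sysctl; upstream VPP packages ship a similar file as /etc/sysctl.d/80-vpp.conf. The values below are illustrative examples only, not the exact numbers of the APEX workaround (see Tim Rozet for those):

```shell
# /etc/sysctl.d/80-vpp.conf -- illustrative hugepage settings for VPP
# (example values; tune to the node's memory and workload)
vm.nr_hugepages=2048      # 2048 x 2 MB pages = 4 GB reserved for VPP/DPDK
vm.max_map_count=4600     # should exceed 2 x nr_hugepages
kernel.shmmax=4294967296  # shared-memory segment large enough to cover the pages
# Apply with:  sysctl -p /etc/sysctl.d/80-vpp.conf
# Verify with: grep Huge /proc/meminfo
```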
20 | APEX, Integration | VPP Honeycomb unable to get VPP config | APEX-186, FDS-36 | Prio 1

done

TimR

done | done JurajL
21 | APEX, Integration | Nova returns "Insufficient compute resources: Requested instance NUMA topology cannot fit the given host NUMA topology" when starting a VM | APEX-187, FDS-37 | Prio 1

todo

TimR

todo TimR | todo JurajL
22 | APEX, Integration | Unable to spin up more than two VMs | APEX-188, FDS-38 | Prio 1

todo

TimR

todo TimR | todo JurajL
23 | APEX, Integration | VPP and Honeycomb do not start automatically after node reload | APEX-133, FDS-34 | Prio 1

done

TimR

done

TimR

done

JurajL

   
24 | APEX, Integration | Keystone services not configured and the error is silently ignored | APEX-215, FDS-39 | Prio 2

todo

TimR

todo TimR | todo JurajL | Work-around by not having VLAN-aware/trunk interfaces presented to compute/control nodes (i.e. all interfaces need to be vanilla interfaces)
24 | APEX, Integration | Need ability to specify which NIC to place the VLAN on | APEX-208, FDS-40 | Prio 2

todo

TimR

todo TimRtodo JurajL   
           

25 | VBD | VBD does not set split horizon group for VXLAN tunnel interfaces | ODL-6241, FDS-41 | Prio 2

done, Tyler | done, MichalC | todo

https://git.opendaylight.org/gerrit/#/c/42161/

3 node test still to do (otherwise tested locally + VXLAN tested and running on UCS-C already)

 
26 | ODL | Latest ODL Boron RPM | Prio 1

done

D.Farrel

done | done
           
Related/supporting work items
Component | Activity | Status | Owner | Start date | End date (planned) | End date (real) | Inter-project dependencies | Notes | Issues
GBP, Test

Robot - Test connectivity between 2 VMs on the two compute nodes via VXLAN - UCS-C


Todo | TBD
GBP, Test

Robot - Test connectivity between 2 VMs on the two compute nodes via VLAN - UCS-B (Paris)

Todo | TBD
GBP, Test

Robot - Basic ping test without floating ips and without dhcp

On hold | JurajL | Aug 4 | Aug 12 | Aug 19
  • 2016/08/12 10.15 CET - almost completed, currently on hold due to higher priorities
 
GBP, Test

Manual - Test connectivity between 2 VMs on the two compute nodes via VLAN and VXLAN at the *same* time - UCS-B (Paris) - nice to have!!!

Blocked | M. Cmarada | Aug 4 | Aug 12
  • 2016/08/12 10.15am CET - UCS-B Paris lab is down + lower priority

 

 
GBP, Test

Robot - Test connectivity between 2 VMs on the two compute nodes via VLAN and VXLAN at the *same* time - UCS-B (Paris)

ToDoTBD      
GBP, Test | Manual - Test coexistence of HC and ODL on 1 node - UCS-B (Paris) | Done | M. Cmarada | Aug 17 | Aug 19 | Aug 17 | HC - ID9, ID10 (Table 1)
  • lower priority - also being done on UCS-B by Juraj Linkes
  • on hold also due to dependency on HC HONEYCOMB-18
  • 2016/08/17 11:40am CET - checked manually on UCS-B Side1
  • ODL and HC coexistence on control node - HONEYCOMB-18 - Maros Marsalek - done (docs on wiki missing)
FDS Integration test | L2 scenario deployment (Apex deploy - OpenStack, ODL (GBP, VBD), OVS (L2 -->, L3), VPP (L2), HC (L2) on controller; OStack, HC and VPP on compute) - CENGN | Done | Juraj Linkes / Marcel Sestak | Aug 18 | Aug 18 | Aug 19 | VBD project ??, HC project ??

  • VPP does not run on the control node - error found when deploying VPP. Issue communicated to Tim, who is working on a fix.
  • At the moment we install manually (ODL/HC/VPP on control + VPP/HC on all compute nodes) - currently no version of VPP is working on CENGN! Issues passed on to the VPP team 8/10/2016 (email) - VPP issues fixed, still needs to be checked.
  • 2016/08/12 10:25am CET - bug http://pastebin.ca/3684361 (exception - HC unable to open VPP management connection)
  • 2016/08/12 10:25am CET - official ODL version used (does not contain VBD though) - therefore VBD is not in ODL Boron as we need
  • 2016/08/12 3pm CET - conflicts with ports resolved, both Karafs running, HC and VPP on both control and compute nodes after manual setup (HC not running as a service; OK after manual run). VPP has the correct interface; VXLAN tunnel still to do.
  • 2016/08/12 3pm CET - issue - HC not running as a service
  • 2016/08/15 11:30am CET - VXLAN tunnel created automatically.
    - Neutron NB down (when OS communicates with ODL) - due to maintenance (being solved with Isaku J) - workaround: NNB needs to be loaded as the first default feature (loading in the correct way)
    - Checking which version of ODL to use (due to VBD and other components) - Apex is not using the Nexus repo with the correct ODL version (incl. VBD) because RPMs are missing, so Apex is using another repo - sent message to IRCs (#odl, #odl meeting, etc.)
    - Wojciech's patches missing - need merge (driver for DHCP and qrouter)
  • 2016/08/16 10:00am CET - working on deploy with correct ODL version (incl. VBD); new HC tried out, running as a service now; new ODL from official repo tried out, Neutron NB issue resolved. Working on a brand new deploy now.
  • 2016/08/17 11:50am CET - checking Woj's patches to create VMs. Waiting for further updates once checked.
  • 2016/08/18 10:30am CET - jumphost size increased due to Apex (needs 50GB itself), but jumphost not working now - broken OStack blocked, Woj's patches check blocked, check whether HC is communicating with VPP - blocked
  • 2016/08/19 3pm CET - CENGN running again, all deployed with OVS. Jumphost resize done, redeploy with VPP todo (Woj's patches still to be checked).
FDS Integration test | L2 scenario deployment (Apex deploy - OpenStack, ODL (GBP, VBD), OVS (L2 -->, L3), VPP (L2), HC (L2) on controller; OStack, HC and VPP on compute) - UCS-B Side2 | Done | Juraj Linkes / Marcel Sestak | Aug 4 | Aug 12 | Aug 19

Done in parallel with the L2 scenario deployment on CENGN
  • SSH keys don't appear on the compute node, therefore the Apex deploy crashes at the beginning (issue with HW, wrong deploy on 1 node) - the issue should now be resolved (David Milles), so the deploy is being tested
  • Deploy in progress 2016/8/10 3:40pm CET (HC not communicating with VPP yet) - the setup still needs to be checked after deploy! Deployed with OVS OK; VPP still needs to be checked manually
  • 2016/08/15 12:00am - once CENGN is up and running, UCS-B will be replaced
  • 2016/08/19 3pm CET - CENGN running again, all deployed with OVS. Jumphost resize done, redeploy with VPP todo (Woj's patches still to be checked).
FDS Integration test | Manual - Test coexistence of HC and ODL on 1 node - UCS-B (on control node) | Done | MarosM, JurajL | Aug 4 | Aug 12 | | done as part of the L2 scenario deployment on UCS-B
FDS Integration test | Manual - Test L2 tenant networks: VXLAN - UCS-B | In progress | Juraj Linkes | Aug 19
FDS Integration test | Manual - Test L2 provider networks: VLAN - UCS-B | On hold | Juraj Linkes | Aug 19
FDS Integration test | FDS setup documentation review + sum up issues | Done | Juraj Linkes, FrankB | Aug 12 | Aug 15
  • documentation (done - FrankB) - https://gerrit.opnfv.org/gerrit/#/c/18257/ + https://git.opnfv.org/cgit/fds/tree/docs/scenarios
  • sum up issues (done - JurajL)

FDS Integration test | Port Woj's cdlvpptest | TODO | Juraj Linkes | http://codehub-one-fw-review.cisco.com:8090/gitweb?p=integration.git;a=tree;f=test/csit/suites/vpp/neutron-cdl-vpp_integration;h=8204fe580c81fef6a5942722992a6ea83bcbd9c2;hb=refs/heads/cdl
Completed

Component | Activity | Status | Owner | Start date (planned) | End date (planned) | End date (real) | Inter-project dependencies | Notes/Bugs
GBP | L3 Architecture
GBP | • draft/proposal | Done | T. Cechvala
GBP | FuncTest tests
GBP | • Manual - Test connectivity between 2 VMs on the two compute nodes via VXLAN - UCS-C | Done | M. Cmarada
GBP | • Manual - Test connectivity between 2 VMs on the two compute nodes via VLAN - UCS-C | Done | M. Cmarada
12 | VPP | VPP install issue: API-segment not handling ERANGE from getgrnam_r correctly | VPP-319 | 1 | done, Ed Warnicke | Aug/11, done JurajL | done JurajL
13 | VPP/HC | Honeycomb doesn't talk to VPP | 1 | n/a | Done JurajL | done JurajL | logs at http://pastebin.ca/3684361; resolved in later versions of HC
9 | HC, Integration | ODL and HC coexistence on control node | HONEYCOMB-18 | 1 | done (ETA Aug/16) MarosM | done Aug/17: MarosM - done, MichalC - done, done on UCS-B Side1 | Aug/24 JurajL | HC - ID10
  • 2016/08/15 1:30pm CET - see https://gerrit.fd.io/r/#/c/2360/ (RPMs still to be completed; testing, code cleanup and rebase remain)

10 | HC | Light-weight HC agent available in RPM | HONEYCOMB-137, HONEYCOMB-125 | 1 | done (ETA Aug/16) MarosM, Marek Gradzki | done Aug/22: MarosM - done, MichalC - in progress, done on UCS-B Side1 | Aug/24 JurajL / MarosM
  • !!! putting higher priority !!! - draft RPM early Aug/15-16
17b | APEX, Integration | Find out when Apex binds interfaces to VPP, incl. PCI address too | 1 | Done (ETA Aug/23) MarcelS/TimR/Feng (100%) | Done Aug/25, TimR/MarcelS (100%) | n/a

2016/08/19 10:00am CET:
  • resolving issue with binding (putting interfaces down before starting to bind them to VPP) - Tim in progress - to be tested by Marcel Aug/19 on CENGN
  • resolving issue with binding (putting interfaces down before starting to bind them to VPP) - Feng Pan in progress - to be tested by Marcel Aug/19 on CENGN
17c | APEX, Integration | Make sure the latest Boron RPM is available and used (eventually work with Dan Farrell to get the latest RPM) | 1 | n/a | Done Aug/24, TimR/MarcelS, JurajL (100%) | n/a
  • 2016/08/26 12pm CET - Dan created a new RPM

Scenario "apex-os-nosdn-fdio-noha"

  • Scenario provides tenant connectivity via VPP. Bridge domains across VPPs are configured using the Neutron ML2 driver for VPP. VPP is used only for tenant networks. Bridge domains are connected using VLANs.
  • 4 node setup:
    • Jumphost: Apex installer
    • 2 x Compute node: VPP and python agent (for ML2 plugin)
    • 1 x Control node: Openstack infra, ...
  • APEX installer
  • Tests: FuncTest, Yardstick, FDS specific tests (see below)
  • Environments:
    • UCS-B (Linux-Foundation lab) - for release operations
  • Further details TBD.

Colorado 2.0

Scenario "apex-os-odl_l3-fdio-noha"

  • Scenario provides tenant and external connectivity via VPP. Bridge domains across VPPs are configured using ODL GBP. Bridge domains are connected using either VXLAN or VLANs.
    L3 connectivity is provided as distributed virtual routing (DVR) implemented by VPPs (see OpenStack-ODL-VPP integration design and architecture#ODL-VPPintegrationdesignandarchitecture-L3connectivitydesign). 
  • ODL L3 networking
    • VPP serves as router and bridge (no use of qrouter/br-ext/l3-agent on control node): Distributed virtual routing. VPP is bound to both: Tenant network interface as well as external network interface.
    • Security groups (L3/L4 rules) implemented via GBP / VPP.
    • Floating IPs implemented via GBP / VPP.
    • Dynamic address assignment via DHCP (DHCP connected via tap interface on control node)
  • APEX installer
  • VPP/HC
    • VXLAN (incl. split horizon), VLAN interfaces support
    • Tap interface per tenant bridge domain for DHCP
    • L3 routing
    • Security groups / IP filtering
    • NAT
  • Tests:
    • FuncTest tests
    • Yardstick tests
    • FDS specific tests (see below)
  • Environments:
    • UCS-B (Linux-Foundation lab) - for release operations
    • UCS-C (Cisco PIRL) - for test/development
    • CENGN lab - for test/development

Scenario "apex-os-odl_l2-fdio-ha"

    • Similar scenario to "apex-os-odl_l2-fdio-noha".
    • 6 node setup (3 x control, 2 x compute, 1 x jumphost)
    • Additions:
      • Openstack HA: 3 control nodes
      • ODL clustering: 3 instances of ODL running on the 3 control nodes

Scenario "apex-os-odl_l3-fdio-ha"

    • Similar scenario to "apex-os-odl_l3-fdio-noha".
    • 6 node setup (3 x control, 2 x compute, 1 x jumphost)
    • Additions:
      • Openstack HA: 3 control nodes
      • ODL clustering: 3 instances of ODL running on the 3 control nodes
Project | Activity | Status | Code complete for interim scenario "apex-os-odl_l2-fdio-noha" - PLANNED | Code complete for interim scenario - PROJECTED / COMPLETED | Owner | Inter-project dependencies | Notes/Bugs
GBP | L3 Architecture
  • DHCP address assignment (DHCP port implementation) - test
GBP | • loopback BVI implementation
GBP | • SNAT implementation
GBP | • security groups
GBP | • split horizon optimization

Resources

Plan

 

 

Project | Activity | Status | Code complete for interim scenario "apex-os-odl_l2-fdio-noha" - PLANNED | Code complete for interim scenario - PROJECTED / COMPLETED | Owner | Inter-project dependencies | Notes/Bugs
GBP | High-level design | done | May/31 | May/4 | Martin Sunal, Wojciech, Keith, Frank, Jan Medved
GBP | Base Endpoint RPC update editing capabilities | done | May/13 | May/16 | Michal Cmarada | Added validation for RPC input; update existing endpoints in operational DS
GBP | Neutron Mapper (GBP) | done | May/6 | May/24 | Tomas Cechvala
GBP | • Neutron Mapper refactoring | done | April/22 | April/22 | Tomas Cechvala
GBP | • Implementation of Base Endpoint RPC | done | May/4 | May/10 | Michal Cmarada
GBP | • Implement new Endpoint model | done | May/18 | May/24 | Tomas Cechvala
GBP | Augment/extend ODL topology info to capture provider networks | done | May/6 | May/4 | Martin Sunal | completed; merge still needed
GBP | Extend GBP infra to allow End-Point configuration (included in Neutron-to-VPP mapper) | done | May/4 | May/10 | Martin Sunal
GBP | Neutron-to-VPP Mapper | done | May/6 | May/25 | Tomas Cechvala | Dependency on Neutron Mapper
GBP | Renderer Manager (GBP) | done | May/13 | May/17 | Martin Sunal
  • moved due to Martin's absence and changes with the Renderer yang
GBP | VPP Renderer (tenant network) - OVERALL | done | June/10 | June/20 | Michal Cmarada (Wojciech, Michal C., Tomas) | VBD project
  • Implement vhost-user interface - in progress - ETA 5/25
  • Renderer manager registration - todo - ETA 5/27
  • L2 connectivity - todo - ETA 6/10
GBP | • Implement vhost-user interface | done | May/18 | June/01 | Michal Cmarada, Martin Sunal | HoneyC - Check VXLAN (Tunneling); VBD project - Tyler Levine, Andrew Li (Zhaoxing Li <zhaoxili@cisco.com>)
  • HComb bug no. 41 - https://jira.fd.io/browse/HONEYCOMB-41 - todo
  • Implementation of the renderer to create the vhost-user interface on VPP; after this we need VBD to complete the rest of the tasks.
  • VBD is to have a defined API that shall be used from GBP

GBP | • Renderer manager registration | done | May/27 | June/10 | Michal Cmarada
GBP | done | June/10 | June/10 | Michal Cmarada
GBP | • Integration with VBD (tenant network) | done | June/10 | June/20 | Michal Cmarada, Martin Sunal | VBD project - Tyler Levine, Andrew Li (Zhaoxing Li <zhaoxili@cisco.com>)
  • Pod2 - booting VMs needed
  • VBD bugs being resolved
  • Further issues discovered
GBP | • Provider network VLAN - nice to have?? | ?? | TBD | TBD | TBD | HoneyC - Provider networks (VLAN) | GBP Architecture - External networking; nice to have?? needs decision - Tyler, Wojciech, Martin
GBP | • Policy enforcement - nice to have??? | ?? | TBD | TBD | TBD | HoneyC - Policy - security groups, rules
GBP | VPP Renderer (provider network) - OVERALL | done | June/10 | June/22 | Tomas Cechvala
GBP | • API definition | done | June/10 | June/10 | Tomas Cechvala
GBP | • Adding API mapping for provider networks to the Neutron VPP Mapper | done | June/10 | June/10 | Tomas Cechvala
GBP | • Implement provider networks in VPP Renderer (APIs) | done | June/10 | June/10 | Tomas Cechvala
GBP | • Integration with VBD | Testing | June/20 | June/22 | Tomas Cechvala | VBD needs to be done
GBP | VPP SFF (SFC) Renderer - nice to have (if time allows, if VPP supports) | ?? | TBD | TBD | TBD
GBP | VPP Renderer supports floating IP NAT configuration | ?? | TBD | TBD | TBD | HoneyC - 1:1 IP NAT44 (TCP/UDP and ICMP) | required for the vIMS use-case, and any that uses Neutron floating IPs
GBP | Neutron-GBP Mapper supports/maps floating IPs to GBP endpoint data | ?? | TBD | TBD | TBD
GBP | Target: OPNFV Colorado - Priority 0
GBP | DHCP tap interfaces - (1-a-i) | done | July/20 | July/20 | Tomas Cechvala
GBP | Documentation - (4) - needs confirmation | In progress | Sep/16 | Sep/16 | Tomas Cechvala
GBP | L3 architecture - (1-c) | In progress | Aug/12 | Aug/12 | Tomas Cechvala, Michal Cmarada
  • based on the current L3 architecture proposal - API for adding records to the ARP termination table - HC (API as the equivalent of the command "set bridge-domain arp entry 13 7.0.0.11 11:12:13:14:15:16")

Missing parts:

- VBD does not set split horizon group for VXLAN tunnel interfaces: https://bugs.opendaylight.org/show_bug.cgi?id=6241
- VBD does not allow setting ARP termination for a BD
- HC does not have an API for adding records to the ARP termination table - [AVanko] reported under HONEYCOMB-125

Note from HC [Maros Marsalek 2016/7/22]:

The CLI call uses the binary API bd_ip_mac_add_del, so we can expose Create and Delete functionality easily. However, I couldn't find a read anywhere in the VPP binary APIs, so it will be impossible to read the ARP termination table for a bridge domain straight from VPP, meaning no reconciliation is possible with a pre-configured VPP. So if you could open a feature request for Honeycomb in jira.fd.io for adding "bridge-domain ARP termination table management", that would be great.

- API for reading from the ARP termination table is missing in VPP - [AVanko] reported under VPP-212

Open questions:

- How can policy be enforced? GBP is a white-list model, so we should deny all traffic implicitly on VPP and allow traffic explicitly. What do we want to use for policy classification? N-tuple classifier? https://wiki.fd.io/view/VPP/Introduction_To_N-tuple_Classifiers
- Is NAT supported on VPP? There should be a command for SNAT in the CLI - reported under VPP-231 and for HC under HONEYCOMB-135
- Do we want to support both central and DVR routing from OpenStack?
- How is DVR set in OpenStack?
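The ARP-termination gap noted above (bd_ip_mac_add_del supports add/delete but there is no read API) implies that any agent wanting read-back or reconciliation must keep its own shadow of the table. A minimal sketch of that idea, with illustrative names (this is not the Honeycomb implementation):

```python
# Minimal sketch (hypothetical, not Honeycomb code): because VPP's
# bd_ip_mac_add_del binary API offers only add/delete and no dump,
# an agent that wants to "read back" the ARP termination table must
# shadow it in its own datastore, as the HC note above explains.

class ArpTerminationTable:
    """Shadow copy of bridge-domain ARP termination entries."""

    def __init__(self):
        # (bd_id, ip) -> mac, keyed the way `set bridge-domain arp entry` is
        self._entries = {}

    def add_del(self, bd_id, ip, mac, is_add=True):
        """Mirror of the bd_ip_mac_add_del semantics: add or delete one entry."""
        if is_add:
            self._entries[(bd_id, ip)] = mac
        else:
            self._entries.pop((bd_id, ip), None)

    def dump(self, bd_id):
        """Read support that VPP itself lacks (see VPP-212), served from the shadow."""
        return {ip: mac for (bd, ip), mac in self._entries.items() if bd == bd_id}

# Equivalent of: set bridge-domain arp entry 13 7.0.0.11 11:12:13:14:15:16
table = ArpTerminationTable()
table.add_del(13, "7.0.0.11", "11:12:13:14:15:16")
```

Without a dump call in VPP, this shadow is also the only basis for comparing intended vs. actual state, which is why reconciliation with a pre-configured VPP is impossible.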

GBP | Target: OPNFV Colorado - Priority 1 | todo | Sep/16 | Sep/16
GBP | GBP L3 connectivity (v4/v6) - (1-c-i) | todo | Sep/16 | Sep/16
GBP | • VXLAN tenant network | todo | Sep/16 | Sep/16
GBP | Security groups L3/L4 - (1-d-i) | todo | Sep/16 | Sep/16 | Dependencies on HC and VPP: check how access control and classification are done in VPP
GBP | Floating IP (NAT) - (1-c-ii-1) | todo | Sep/16 | Sep/16 | Dependencies on HC and VPP: check whether NAT is available in VPP !!!
GBP | Static routing - (1-c-iii) | todo | Sep/16 | Sep/16
GBP | Colorado release documentation for all three scenarios - (4) | todo | Sep/16 | Sep/16
GBP | Target: OPNFV Colorado - Priority 2 | todo | Sep/16 | Sep/16
GBP | Tenant networks: VLANs - (1-b-i-2) | todo | Sep/16 | Sep/16
GBP | Service chaining: GBP + SFC - (1-e) | todo | Sep/16 | Sep/16 | Dependency on OpenStack Neutron ("what shall we listen to")
GBP | • Tests for the GBP+SFC solution (Intel provides the SFF renderer for VPP) | todo | Sep/16 | Sep/16
GBP | Target: OPNFV D-River - Priority 3 | todo
GBP | Provider networks: VXLAN/flat - (1-b-ii-1/3) | todo
GBP | Clustering - (2) | todo
GBP | Tests for clustering (CSIT and OPNFV) - (3-a) | todo
GBP | GBP to check with the OVS Mitaka deploy too | todo
        
HoneyC | vhostuser type interface (for L2 connectivity) | done | May/25 | May/13 | Maros Marsalek, Marek Gradzki | YANG definitions done for GBP
HoneyC | Check VXLAN (tunneling) - maybe no dependency on GBP | done | May/6 | May/16 | Marek Gradzki | YANG definitions done for GBP; blocked by VPP - waiting for vpp-dev list answer to fix a possible VPP bug (VXLAN delete)
HoneyC | Provider networks (VLAN) | done | May/31 | May/31 | Maros Marsalek, Marek Gradzki
HoneyC | • subinterfaces | done | May/31 | May/31 | Maros Marsalek, Marek Gradzki
HoneyC | • VLAN tag (push&pop) | done | May/20 | May/20 | Maros Marsalek, Marek Gradzki
HoneyC | Policy - security groups, security group rules | in progress | May/31 | July/5 | Maros Marsalek, Marek Gradzki
HoneyC | • L2 ACL | in progress | May/31 | July/5 | Maros Marsalek, Marek Gradzki
HoneyC | • srcMAC, dstMAC allow/deny | in progress | May/31 | July/5 | Maros Marsalek, Marek Gradzki
HoneyC | • L3 and L4 rule sets | Target: OPNFV Colorado
HoneyC | DHCP - tap type interface | done | May/18 | May/13 | Maros Marsalek, Marek Gradzki | CLUS; nice to have for GBP
HoneyC | • VPP TAP | done | May/18 | May/13 | Maros Marsalek
HoneyC | FD.io NSH_SFC support | ?? | Target: OPNFV Colorado | Maros Marsalek, Marek Gradzki | very nice to have (for GBP)
HoneyC | Archi types | TBC | TBC | Maros Marsalek, Marek Gradzki
HoneyC | Support for apps | TBC | TBC | Maros Marsalek, Marek Gradzki
HoneyC | NAT44 - 1:1 address translation | TBC | Target: OPNFV Colorado | Required for floating IP support

VPP | NSH VPP features - very nice to have (for GBP) | ?? | Target: OPNFV Colorado | duplicate of FD.io NSH_SFC support
        
VBD | Topology manager | done | May/11 | June/3 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | • Generalization of vbridge-topology.yang | done | May/6 | May/6 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | • Implementation of topology manager | done | May/11 | June/3 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | VPP VBD implementation | done | May/9 | May/27 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | • Augmenting of vbridge-topology.yang with VPP-specific items | done | May/6 | May/6 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | • Using Topology Manager in existing implementation | done | May/13 | June/3 | Andrew Li (Zhaoxing Li), Tyler Levine
VBD | Support heterogeneous device VXLAN tunnel configuration | Target: OPNFV Colorado | Support a VXLAN tunnel mesh consisting of VPP and OVS nodes
VBD | Provide an API to program ARP & MAC entries for VBD, to avoid destination-unknown flooding for ARP resolution. API to support stats as well (table sizes, change rates, etc.) | Target: OPNFV Colorado
        
Testing | • Test cases definition | 1st set done | May/15 | May/15 | Martin Sunal, Juraj Linkes, Viliam Luc | https://wiki.opnfv.org/display/fds/FDS+Testing#FDSTesting-ScenarioswithtopologiesforFDS
Testing | • Test environment prep for testbeds | done | May/31 | Jun/3 | Juraj Linkes | FDS Testing Colorado
Testing | • Testbed prep for OVS | done | May/31 | May/30 | Viliam Luc | FDS Testing Colorado
Testing | Manual - Test L2 tenant networks: VXLAN | in progress | Jul/25 | Jul/25 | Viliam Luc, Juraj Linkes, GBP Team | FDS VXLAN Testing
Testing | Manual - Test L2 provider networks: VLAN | in progress | Jul/25 | Jul/25 | Viliam Luc, Juraj Linkes, GBP Team | FDS VLAN Testing
Testing | Robot - Basic ping test without floating IPs and without DHCP | in progress | Jul/25 | Jul/25 | Juraj Linkes
Integration | UCS-C CENGN testbed | in progress | Jul/29 | Jul/29 | Viliam Luc, Juraj Linkes
Integration | UCS-C Paris testbed | in progress | Jul/29 | Jul/29 | Viliam Luc, Juraj Linkes
Integration | UCS-B POD2 | in progress | Jul/29 | Jul/29 | Viliam Luc, Juraj Linkes
Integration | UCS-B POD1 | in progress | Jul/29 | Jul/29 | Viliam Luc, Juraj Linkes
Testing | Robot - Modify existing ODL testcases | todo | Aug/19 | Aug/19 | Juraj Linkes
        
Apex | FD.io/VPP integration | in progress | May/31 | May/31 | Viliam Luc | FDS Installation
Apex | • Add RPMs to overcloud | done | May/5 | May/5 | Viliam Luc | FDS Installation
Apex | • Go through manual steps to configure VPP | done | May/13 | May/13 | Viliam Luc | FDS Installation
Apex | • Add VPP and HC install to Apex | done | May/31 | May/27 | Viliam Luc | FDS Installation
Apex | • Add VPP configuration and VXLAN tunnel to Apex | in progress | May/31 | | Viliam Luc | FDS Installation
Apex | in progress | mid of June | | Viliam Luc | FDS Installation
Apex | • Update TripleO heat templates | todo | mid of June | | Viliam Luc | FDS Installation
Apex | • Update Apex deploy file | todo | mid of June | | Viliam Luc | FDS Installation
Apex | • Apex integration testing - whole stack (VPP, HC installed; ODL installed with features) | todo | Jul/29

KillerApp | Definition | in progress | Chris Metz
KillerApp | Development | in progress | Chris Metz/Alex
KillerApp | Testing | todo | Chris Metz/Alex
        

System level tests

Test deliverables for July (CLUS) and September (OPNFV Colorado). 

NOTE: Release criteria for Colorado: Test Release Criteria (release criteria includes tests which are not network related - and as such not listed below).

 

Name | Description | Reference/Details | ETA (planned) | ETA (projected/completed) | Target (CLUS/Colorado)
OPNFV-TC001 | Network Performance | TC001 | Jun/10 | Jun/10 (integration process delayed - affects all testcases) | Colorado
OPNFV-TC002 | Network Latency with Ping | TC002 | Jun/10 | Jun/10 | CLUS
OPNFV-TC008 | Network Performance, Packet Loss Extended Test. Note: combinations of different packet sizes and different numbers of flows | TC008 | Jun/10 | Jun/10 | Colorado
OPNFV-TC009 | Network Performance, Packet Loss. Note: combinations of 64 B packet size and different numbers of flows | TC009 | Jun/10 | Jun/10 | Colorado
OPNFV-TC011 | Packet delay variation between VMs | TC011 | Jun/10 | Jun/10 | Colorado
OPNFV-TC037 | Network throughput and packet loss using pktgen, system load using mpstat and latency using ping | TC037 | Jun/10 | Jun/10 | CLUS
OPNFV-TC038 | Latency, CPU load, throughput, packet loss (extended measurements). Note: each port amount is run ten times | TC038 | Jun/10 | Jun/10 | Colorado
OPNFV-TC027 | IPv6 connectivity between nodes on the tenant network | TC027 | Jun/10 | Jun/10 | Colorado
OPNFV-TC006 | Virtual Traffic Classifier Data Plane Throughput Benchmarking Test | TC006 | Jun/10 | Jun/10 | Colorado
OPNFV-TC007 | Virtual Traffic Classifier Data Plane Throughput Benchmarking Test in Presence of Noisy Neighbours | TC007 | Jun/10 | Jun/10 | Colorado
OPNFV-TC020 | Virtual Traffic Classifier Instantiation Test | TC020 | Jun/10 | Jun/10 | Colorado
OPNFV-TC021 | Virtual Traffic Classifier Instantiation Test in Presence of Noisy Neighbours | TC021 | Jun/10 | Jun/10 | Colorado
OPNFV-vIMS | vIMS solution test | vIMS test in OPNFV Functest; FDS wiki | Jun/10 | Jun/10 | CLUS
OPNFV-Doctor | Disable network on one host and bring it back | Doctor wiki | Jun/10 | Jun/10 | Colorado
OPNFV-vPing_SSH | Ping from a VM after connecting to its floating IP | vPing test in OPNFV Functest | Jun/10 | Jun/10 | CLUS
OPNFV-vPing_userdata | Ping from a VM using the metadata service | vPing test in OPNFV Functest | Jun/10 | Jun/10 | CLUS
OPNFV-Tempest | Ping (external, same network) with floating IP; SSH to instance | Tempest test in OPNFV Functest | Jun/10 | Jun/10 | Colorado (stretch for CLUS)
OPNFV-rally_Authenticate.validate_neutron | Multiple CRUD tests for list networks; create VMs, floating IPs, networks, ports, routers, subnets, security groups | Rally test in OPNFV Functest | Jun/10 | Jun/10 | Colorado (stretch for CLUS)

Unit/Sub-system tests

Test deliverables for July (CLUS) and September (OPNFV Colorado).

 

Unit/Project | Execution in | Name | Description | Reference/Details | ETA (planned) | ETA (projected/completed) | Target (CLUS/Colorado) | Owner
ODL GBP | Colorado | Test connectivity between 2 VMs on the two compute nodes via VXLAN | manual test - done (UCS-C and B); auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Test connectivity between 2 VMs on the two compute nodes via VLAN | manual test - done (UCS-C and B); auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Test connectivity between 2 VMs on the two compute nodes via VLAN and VXLAN at the *same* time
  1) VMs on the two compute nodes via VLAN and VXLAN at the *same* time
  2) Test connectivity between a VM and an external gateway with a given IP via VLAN "X" while at the same time VXLAN and VLAN tenant networks are configured.
  Stage #1 - manual tests; Stage #2 - robot tests
  manual test - in progress on UCS-C; auto robot test - todo
| Stage #1: Aug/19; Stage #2: Oct/22 (Col 2.0) | Stage #1: Aug/19; Stage #2: Oct/22 (Col 2.0) | after CLUS / Boron release plan | Michal Cmarada
ODL GBP | Colorado | Test connectivity between a VM and an external gateway with a given IP via VLAN "X" while at the same time VXLAN and VLAN tenant networks are configured | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Test connectivity between 3 VMs on the two compute nodes via VXLAN (i.e. 2 VMs are on the same compute node) | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Test connectivity between 3 VMs on the two compute nodes via VLAN (i.e. 2 VMs are on the same compute node) | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Test connectivity between 3 VMs on the two compute nodes via VLAN and VXLAN at the *same* time | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | 3 compute nodes, connected via VXLAN and VLAN, external connectivity via VLAN "X" | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | Colorado | Similar tests to the ones above; verify that loop avoidance (split horizon) works on VXLAN | manual test - todo; auto robot test - todo | Oct/22 (Col 2.0) | Oct/22 (Col 2.0)
ODL GBP | ODL | Neutron NB integration test
  Preconditions: OS in Jenkins
  Input: command in Neutron
  Output: checking data in the GBP datastore
  Purpose: when something changes in an upstream project (e.g. Neutron NB or OStack), we can run these automated tests
  Jan M. - checking different data sets/parameters: checking all the mappings, including data CRUD
| ODL Robot tests | Jul/31 | Sep/9 | after CLUS / Boron release plan | Tomas Cechvala
ODL GBP | ODL | VPP Renderer/HC test
  Preconditions: mountpoint simulator used instead of HC and VPP
  Input: entities in GBP (configuration of entities)
  Output: checking configuration in HComb
| ODL Robot tests - todo (manual test done - result OK) | Jul/31 | Aug/31 | after CLUS / Boron release plan | Tomas Cechvala
ODL GBP | ODL | VPP Renderer/VPP CSIT test jobs

CSIT testing can be split into two stages:

Stage #1

Sending input data via RESTCONF/OpenStack, examination of the generated data in the ODL datastore.

Test cases can be written with the currently available Jenkins resources, which means:
 - no major modification of existing job templates needed
 - no major installations on available VMs
 - OpenStack already available in Jenkins
Create Jenkins jobs with the following input/output:

1) VPP renderer test

Input: JSON data for the VPP renderer inserted into vpp-renderer.yang via RESTCONF
 - Bridge domain -> VXLAN, VLAN
 - Port -> vhostuser, DHCP

Output: verification of the generated data in ODL
 - network-topology - id, type (vbridge), tunnel-type (VXLAN, VLAN), params (flood, learn, ARP termination, etc.), VNI or VLAN depending on type
 - Port:
   - for tap and vhost, check key, enabled, type, description
   - for vhost, check the VPP interface augmentation, where bridge domain, socket and role have to be checked
   - for tap, check the VPP interface augmentation, where tap name, MAC address and device instance have to be checked
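As a sketch of the Stage #1 verification step, the per-port field checks above can be expressed as a small helper. The field names and data layout here are illustrative assumptions for demonstration, not the actual vpp-renderer.yang structure:

```python
# Illustrative sketch only: field names below approximate the checks listed
# above and are NOT the real vpp-renderer.yang leaf names.

REQUIRED_VHOST_FIELDS = {
    "key", "enabled", "type", "description",
    "bridge-domain", "socket", "role",          # VPP interface augmentation
}
REQUIRED_TAP_FIELDS = {
    "key", "enabled", "type", "description",
    "tap-name", "mac-address", "device-instance",
}

def verify_port(port):
    """Return the sorted list of fields a rendered port is missing."""
    required = REQUIRED_VHOST_FIELDS if port.get("type") == "vhostuser" else REQUIRED_TAP_FIELDS
    return sorted(required - port.keys())

# A port as the renderer test might read it back from the ODL datastore:
vhost_port = {
    "key": "port-1", "enabled": True, "type": "vhostuser",
    "description": "VM port", "bridge-domain": "bd-vxlan-100",
    "socket": "/tmp/sock1", "role": "server",
}
```

A Robot keyword wrapping such a check would fail the test case whenever `verify_port` returns a non-empty list.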

 

2) Neutron-Vpp-Mapper test

Input: OpenStack Neutron data: network, subnet, port
 - network - type: vxlan, vlan or flat
 - port - vif-type: vhostuser; device owner: compute or dhcp

Output: verification of the data written to vpp-renderer.yang
 - A bridge domain should be written to the datastore when the Neutron network type is flat or vlan. Fields to check in the bridge domain: id, description, type, vlan-id (if vlan type), physical location ref.
 - A vpp-endpoint of type vhostuser should be written to the datastore when vif-type=vhostuser and device-owner contains *compute*. Fields to check: key, node path, interface name, description, socket in the vhostuser case.
 - A vpp-endpoint of type tap should be written to the datastore when vif-type=vhostuser and device-owner contains *dhcp*. Fields to check: key, node path, interface name, description, tap case (name, physical address).

Stage #2

First we need to investigate the possibilities of having VPP and HC in Jenkins. This will require a patch to releng/builder. Then it will be possible not only to examine generated data, but also to design networking scenarios.

More details later.

Note: neutron-mapper testing with OpenStack is almost done.
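The mapper rules above boil down to two small decisions. A sketch under illustrative names (not the actual ODL GBP mapper code):

```python
# Sketch of the Neutron-Vpp-Mapper decision rules described above.
# Function names are illustrative, not the real mapper implementation.

def endpoint_type(vif_type, device_owner):
    """Decide which vpp-endpoint type the mapper writes for a Neutron port."""
    if vif_type != "vhostuser":
        return None  # per the rules above, only vhostuser ports are mapped
    if "compute" in device_owner:
        return "vhostuser"
    if "dhcp" in device_owner:
        return "tap"
    return None

def writes_bridge_domain(network_type):
    """A bridge domain is written only for flat or vlan Neutron networks."""
    return network_type in ("flat", "vlan")
```

A Stage #1 test case would feed Neutron data in and assert the datastore matches these decisions field by field.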

 

ODL Robot tests - todo

(stage #1 - 1,2 manually tested -result ok)

Jul/31Aug/31after CLUS / Boron release plan 
ODL GBP | ODL | IOS-XE renderer/CSR test (Enterprise use-case test)
  Mountpoint simulator used instead of CSR
  Input: entities in GBP (configuration of entities)
  Output: checking configuration in CSR
| ODL Robot tests | Jul/31 | Jul/31 | after CLUS / Boron release plan
ODL GBP | ODL | Reconciliation test
  Will be covered in FDS tests with VPP and Enterprise use-case tests
  Including restart of the controller
  Failover test in the clustered case (3-node cluster: active, stand-by replaced by active, etc.)
  Notes:
  - We need to define the number of policies
  - We might use a netconf simulator
  - We should use at least 2 VPPs
  - For three-node clusters, some netconf device should be connected to different nodes in the cluster
  - For the future, think about OVS
  - Test L2 connectivity before and after
| ODL Robot tests | Jul/31 | Jul/31 | after CLUS / Boron release plan
ODL GBP | ODL | DS-ODL-VPP (5 VMs) test
  1 DevStack VM
  3 VMs for ODL - cluster test (failover test, etc.)
  1-2 VPP VMs - provide a VPP image to ODL testing
| ODL Robot tests | Jul/31 | Jul/31 | after CLUS / Boron release plan
ODL GBP | CSIT - on hold | Major OS/VPP Renderer/HC/VPP test
  - test L2 connectivity inside every tenant network
  - test L2 connectivity between tenant networks
  - test L2 connectivity inside every network
  - test L2 connectivity between networks
| FDS wiki, CSIT JIRA, gerrit | Jun/30 | Jun/30 | CLUS
  • Pod2 - environment setup issues
  • VBD bugs being resolved
  • Further issues discovered
 
         
ODL NeutronODLODL.Neutron.Networks

Checking Network created in OpenStack are pushed to ODL
Check OpenStack Networks :: Checking OpenStack Neutron for known networks
Check OpenDaylight Networks :: Checking OpenDaylight Neutron API 
Create Network :: Create new network in OpenStack 
Check Network :: Check Network created in OpenDaylight 

ODL Robot testsJun/10Jun/10CLUS 
ODL NeutronODLODL.Neutron.SubnetsChecking Subnets created in OpenStack are pushed to ODL
Check OpenStack Subnets :: Checking OpenStack Neutron for known subnetworks
Check OpenDaylight subnets :: Checking OpenDaylight Neutron API 
Create New subnet :: Create new subnet in OpenStack 
Check New subnet :: Check new subnet created in OpenDaylight 
ODL Robot testsJun/10Jun/10CLUS 
ODL NeutronODLODL.Neutron.PortsChecking Port created in OpenStack are pushed to OpenDaylight
Check OpenStack ports :: Checking OpenStack Neutron for known ports 
Check OpenDaylight ports :: Checking OpenDaylight Neutron API
Create New Port :: Create new port in OpenStack 
Check New Port :: Check new subnet created in OpenDaylight 
ODL Robot testsJun/10Jun/10CLUS 
ODL NeutronODLODL.Neutron.Delete Ports

Neutron.Delete Ports :: Checking ports deleted in OpenStack are also deleted in OpenDaylight
Delete New Port :: Delete previously created port in OpenStack 
Check Port Deleted :: Check port deleted in OpenDaylight

ODL Robot tests | Jun/10 | Jun/10 | CLUS
ODL Neutron | ODL | ODL.Neutron.Delete Subnets | Checking subnets deleted in OpenStack are also deleted in OpenDaylight
Delete New subnet :: Delete previously created subnet in OpenStack 
Check New subnet deleted :: Check subnet deleted in OpenDaylight 
ODL Robot tests | Jun/10 | Jun/10 | CLUS
ODL Neutron | ODL | ODL.Neutron.Delete Networks | Checking networks deleted in OpenStack are also deleted in OpenDaylight
Delete Network :: Delete network in OpenStack 
Check Network deleted :: Check Network deleted in OpenDaylight
ODL Robot tests | Jun/10 | Jun/10 | CLUS
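All of the Neutron Robot tests above check the same invariant: every object created (or deleted) in OpenStack Neutron must appear (or disappear) in ODL's Neutron northbound. A minimal sketch of that comparison, with illustrative object IDs:

```python
# Hypothetical sketch of the OpenStack <-> ODL sync check behind the
# Robot tests: report objects missing from ODL and stale objects left
# behind in ODL after deletion. IDs are illustrative.

def sync_report(openstack_ids, odl_ids):
    """Compare object IDs seen by OpenStack and by ODL."""
    os_set, odl_set = set(openstack_ids), set(odl_ids)
    return {"missing_in_odl": sorted(os_set - odl_set),
            "stale_in_odl": sorted(odl_set - os_set)}
```

An empty report on both keys means the two northbound views agree, which is what each create/delete test asserts after its operation.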
VPP | CSIT | VPP on CentOS 7.2 | All CSIT tests that have distro dependencies should be executed on CentOS 7.2 | CLUS
VPP | CSIT | vxlan_bd_dot1q.robot:31 | VPP can encapsulate L2 in VXLAN over IPv4 over Dot1Q | https://wiki.fd.io/view/CSIT/FuncTestPlan | Jun/10 | Jun/10 | CLUS
VPP | CSIT | l2_xconnect_untagged.robot:26 | VPP forwards packets via L2 xconnect in circular topology | https://wiki.fd.io/view/CSIT/FuncTestPlan | Jun/10 | Jun/10 | CLUS
VPP | CSIT | vxlan_bd_untagged.robot | Support for VXLAN over v4 and v6 tunnels

https://wiki.fd.io/view/CSIT/FuncTestPlan

FDS wiki

Jun/10 | Jun/10 | CLUS
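The VXLAN tests above exercise the encapsulation defined in RFC 7348: an 8-byte VXLAN header carrying a 24-bit VNI, inside UDP over IPv4 or IPv6. A minimal sketch of building that header (not VPP code, just the on-wire layout):

```python
import struct

# Hypothetical sketch of the VXLAN header (RFC 7348): 8 bits of flags
# (I flag set when the VNI is valid), 24 reserved bits, 24-bit VNI,
# 8 reserved bits.

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08000000          # I flag set: VNI is valid
    return struct.pack("!II", flags, vni << 8)
```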
VPP | CSIT | VPP.Tap Interfaces | Support for TAP interfaces

FDS wiki

CSIT JIRA

Jun/10 | Jun/10 | CLUS
VPP | CSIT | VPP.L2 bridging and split horizon

Support for L2 bridging between different ports, split horizon for loop avoidance

Supported for two devices and no broadcast messages

FDS wiki | Jun/17 | Jun/17 | CLUS
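The split-horizon rule tested above is simple to state: a flooded frame is never sent back out its ingress port, and never forwarded between two ports in the same split-horizon group. A minimal sketch of that forwarding decision, with illustrative port names:

```python
# Hypothetical sketch of split-horizon flooding in a bridge domain:
# exclude the ingress port and any port sharing its split-horizon
# group (group 0 means "no group"). Port names are illustrative.

def flood_ports(ingress, port_shg):
    """port_shg: port -> split-horizon group (0 = no group)."""
    in_shg = port_shg[ingress]
    return [p for p, shg in port_shg.items()
            if p != ingress and (in_shg == 0 or shg != in_shg)]
```

Putting both VXLAN tunnel ports in the same group is the classic way this rule prevents loops in a full mesh of tunnels.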
VPP | CSIT/OPNFV-FDS | VPP.tenant isolation

Isolate different tenants (security groups)

HC testers will implement this after the Honeycomb support lands, since they are required to test it

FDS wiki | Jun/24 | TBD | Colorado (stretch for CLUS)
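The security-group semantics behind the tenant isolation test (and the VPP ACL port-range work) can be sketched as: a flow is allowed only if some rule matches its direction, protocol, and destination port range. Rule fields here are illustrative, not the Neutron schema:

```python
# Hypothetical sketch of security-group evaluation with TCP/UDP port
# ranges (cf. the VPP ACL enhancement). Rule/flow fields are
# illustrative.

def allowed(flow, rules):
    """True if any rule matches the flow; default deny otherwise."""
    return any(r["direction"] == flow["direction"]
               and r["proto"] == flow["proto"]
               and r["port_min"] <= flow["dport"] <= r["port_max"]
               for r in rules)

rules = [{"direction": "ingress", "proto": "tcp",
          "port_min": 80, "port_max": 443}]
```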
VPP | TBD | VPP.tiny vxlan scale

Scale test with 10/10 vxlans/macs per db

CSIT won't do this one in the near future

OPNFV doesn't have resources to do scale testing - postponed

FDS wiki | Jul/8 | TBD | TBD
VPP | TBD | VPP.small vxlan scale

Scale test with 100/10 vxlans/macs per db or 10/100 vxlans/macs per db or in between

CSIT won't do this one in the near future

OPNFV doesn't have resources to do scale testing - postponed

FDS wiki | Jul/20 | TBD | TBD
VPP | TBD | VPP.medium vxlan scale | Scale test with 1,000/10 vxlans/macs per db or 10/1,000 vxlans/macs per db or in between | FDS wiki | TBD | TBD | TBD
VPP | TBD | VPP.large vxlan scale | Scale test with 10,000/10 vxlans/macs per db or 10/10,000 vxlans/macs per db or in between | FDS wiki | TBD | TBD | TBD
VPP | TBD | VPP.huge vxlan scale | Scale test with 100,000/10 vxlans/macs per db or 10/100,000 vxlans/macs per db or in between | FDS wiki | TBD | TBD | TBD
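Each scale tier above pairs a number of VXLANs (bridge domains) with MACs per bridge domain; their product is the total MAC table load on VPP. A minimal sketch, taking the first variant listed for each tier:

```python
# Hypothetical sketch of the vxlan scale tiers: (vxlans, macs per
# bridge domain), using the first variant listed per tier. Total MAC
# table load is the product.

SCALE_TIERS = {
    "tiny":   (10, 10),
    "small":  (100, 10),
    "medium": (1_000, 10),
    "large":  (10_000, 10),
    "huge":   (100_000, 10),
}

def total_macs(tier):
    vxlans, macs_per_bd = SCALE_TIERS[tier]
    return vxlans * macs_per_bd
```

This makes the jump between tiers explicit: the huge tier carries 10,000x the MAC table load of the tiny tier.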

 

FDS GBP documentation

COMPONENT | DETAILS | USER GUIDE | COMMENTS
neutron-mapper | design overview | Done | all OpenStack-related parts of user/dev guides were moved to a separate guide (Howto OpenStack)
 | classes/code overview | n/a |
 | data processing from neutron to gbp | Done |
 | CSIT brief development guide | IN PROGRESS (Tomas) |
 | feature installation | Done |
 | input/output data | ON HOLD | There are already pictures illustrating processes between Neutron and GBP, and a link to a demo with examples.
 | CSIT brief user guide | ON HOLD |
 | openstack configuration | IN PROGRESS (Tomas) |
renderer-manager | design overview | Done | [dev]
 | conditions and dependencies on rendered data | Done | [dev] locations, endpoints, policy between groups
 | data rendering from policy and forwarding | Done | [dev]
 | input/output data | TBD |
neutron-vpp-mapper | design overview | Done |
 | applied constraints for vpp | Done |
 | overview of components for supporting neutron entities | Done |
 | initial configuration data in DS | Done | in network-topology
 | input/output data | Done |
 | OpenStack configuration | TODO (Tomas) | after patches are merged in VPP
location-manager | design overview | Done | [dev] lower priority
 | key classes description | Done | [dev] lower priority
 | netconf-testtool as a compensation for real mount points | TBD | lower priority; does it have to be here? (Tomas, Matej?)
 | location data examples | TBD | lower priority
VPP renderer | initial configuration | In progress |

 

- With the minimal distro I had to disable the reconnect mechanism temporarily, since it has to work differently now. For now it just reports the VPP connection failure.

- VPP seems to deny the reconnect even if we try: https://jira.fd.io/browse/HONEYCOMB-78
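Whatever shape the reworked reconnect mechanism takes, a common pattern is retrying with capped exponential backoff before giving up and reporting the failure. A minimal sketch of such a policy; all parameters are illustrative, not from Honeycomb:

```python
# Hypothetical sketch of a reconnect policy that a reworked mechanism
# could use: exponential backoff, capped, with a fixed attempt budget.
# Parameters are illustrative.

def backoff_delays(base=1.0, cap=30.0, attempts=6):
    """Delays (seconds) to wait before each reconnect attempt."""
    return [min(cap, base * 2**i) for i in range(attempts)]
```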
