This space was the original place to collaborate on OPNFV demo possibilities for the OPNFV Summit, June 12-15, 2017 in Beijing.
THESE CALLS ARE NOW TAKING PLACE IN SMALLER TEAMS: THE LAB TEAM AND THE SCRIPT TEAM. The first demo (VCO Demo 1.0) was a success, and a page has now been created to collaborate on a new demo (VCO Demo 2.0).
Go to the VCO Demo 2.0 Planning page here: OPNFV VCO Demo 2.0
If you have any questions, please email: firstname.lastname@example.org.
ORIGINAL MATERIAL (NOW DATED)
An onsite summit demo showing a virtual central office (vCO) providing broadband access service (wireless). The demo will be shown in its full form on the show floor, and a highlights version will be developed for a keynote presentation. There is currently a Virtual Central Office Proof of Concept with OpenDaylight that will be leveraged. This uses OpenDaylight as the SDN controller and focuses essentially on residential mobile services to create an entire architecture in a PoC. This involved the onboarding of VNFs through a common structure/process, and mapping to a service. We need volunteer companies/people to participate and highlight their strengths. Benefits for users: service providers can reduce the amount of work that they do, lower costs, and lower cost per user. This has traditionally been done in a proprietary way. The virtualized CO is the use case for NFV, and we can demonstrate interoperability at the same time. The key is getting the fabric together. The AHA moment is E2E provisioning and adding subscribers. A draft whitepaper on the PoC is available here: https://docs.google.com/document/d/1JVcKVt2MKKmoug2dPh3rsBYzMUuvlzx3RlW836WMi5g.
Core Message to Come Across:
- OPNFV community integrates open source components (e.g. ODL, OpenStack) into deployable NFV solution stacks (a.k.a. scenarios)
- OPNFV community pre-tests the function and performance of these NFV solution stacks (Functest and Yardstick) and OPNFV platform compliance to improve interoperability in the ecosystem.
- Interoperability: e.g. mixed hardware, independent installer, independent/open stack NFV solution stack, independent VNF and VNFM, all interoperate to deploy a vCO service.
- NFV values as demonstrated through the vCO demo: rapid on-boarding, agile service-on-demand, and analytics driven operation automation (related to telemetry/Doctor if that part is included.)
- Part 1: Deploy an OPNFV Controller (CI/Auto). (Apex deploying os-odl-ovs / Compass installing os-odl-ovs)
- Shows installers, deployment of rack servers on stage
- test validation of platform for pre-deployment validation
- Pre-recorded video, speed-up replay (may take too long?)
- 1-2 minutes to talk through deployment process, 4-5 minutes to show a video (accelerated)
- Part 2: Onboarding VNFs (demoed components: TOSCA NFV templates parsed and deploying VNFs with a VNFM)
- VNFs being used are some open source VNFs (SampleVNF project) plus some proprietary VNFs, using TOSCA NFV templates
- Tacker as VNFM? Cloudify?
- On-board VNFs in advance and turn them on live on stage
- 1-2 minutes to describe the TOSCA NFV “industry standard template” and onboarding process
- Part 3: Virtual Central Office use-case demo (how do you demo vCO?) - ODL, OPNFV platform, VNFs
- Presentation that describes the topology and capability - VCO POC.pptx
- Residential broadband Internet service
- vFW - OpenSense, Vyatta
- vRouter - Vyatta, Cloud-router
- vBNG - Intel or OpenSSG
- OLT + YANG model for config - Calix
- Connect an AP/wireless router to the CO
- Demo virtual CPE services - connect to the CO, show that the services are provisioned, and connect live to the WiFi hotspot
- Show connectivity from end-points through vCO
- Provisioning an Access Point for presenter’s cellphone after the CPE connects to the vCO
- 5 or 6 minutes - describe problem statement of traditional CO, and how vCO addresses some of them, connect to wifi, and show connection counts in our CO dashboard
- Part 4: Service assurance: Platform telemetry feeding into Doctor
- Dashboard showing activity from hardware
- Service events via VNF Event Stream?
- Break/fix in Doctor & Vitrage? Bridge too far
- 4-5 minutes? Maybe too long to include in the keynote demo, but can be shown at the end, and can be included in show floor demo(s)
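The Part 2 onboarding step above hinges on TOSCA NFV templates being parsed by a VNFM. A minimal, purely illustrative sketch of such a descriptor follows, expressed as a Python dict; the Tacker-style node types reflect one of the VNFM options mentioned above, and all names (image, flavor, network) are placeholders, not the descriptors used in the demo:

```python
# Minimal, illustrative TOSCA NFV VNF descriptor as a Python dict.
# Node types follow the Tacker flavor of the TOSCA Simple Profile for NFV;
# the vBNG values are placeholders, not the actual demo descriptor.
vnfd = {
    "tosca_definitions_version": "tosca_simple_profile_for_nfv_1_0_0",
    "description": "Example vBNG VNF descriptor (illustrative only)",
    "topology_template": {
        "node_templates": {
            "VDU1": {
                "type": "tosca.nodes.nfv.VDU.Tacker",
                "properties": {
                    "image": "vbng-image",      # placeholder image name
                    "flavor": "m1.medium",
                    "availability_zone": "nova",
                },
            },
            "CP1": {
                "type": "tosca.nodes.nfv.CP.Tacker",
                "properties": {"management": True},
                "requirements": [
                    {"virtualLink": {"node": "VL1"}},
                    {"virtualBinding": {"node": "VDU1"}},
                ],
            },
            "VL1": {
                "type": "tosca.nodes.nfv.VL",
                "properties": {"network_name": "mgmt-net"},  # placeholder
            },
        }
    },
}

def required_fields_present(descriptor):
    """Sanity-check the fields a VNFM would need before onboarding."""
    nodes = descriptor["topology_template"]["node_templates"]
    return ("tosca_definitions_version" in descriptor
            and any(n["type"].startswith("tosca.nodes.nfv.VDU")
                    for n in nodes.values()))

print(required_fields_present(vnfd))  # → True
```

In practice the descriptor would be YAML handed to the chosen VNFM (Tacker or Cloudify); the dict form here is just to make the required structure explicit.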
See the presentation above for details
- 3 PODs/Racks with 12 Nodes each - 3 Controllers, 3 Storage Nodes, 6 Compute Nodes
- 3 Leaf Switches min (6 for redundancy) - 100Gbps uplinks and 10G to servers
- 2 Spine Switches - 100Gbps only
- Three compute nodes can be used for ODL Cluster
- Compute - HP, Cisco, Dell, Huawei
- Whitebox Switches - EdgeCore - ????
- OLT - Calix and or Fujitsu
- Leaf/Spine switches used for fabric with full mesh connectivity
- EVPN - Ericsson/Cisco code on EVPN
- ODL acts as a fabric controller
- ODL also acts as the controller for VNFFG for service chaining
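The leaf/spine fabric described above (3 leaf switches, or 6 for redundancy, fully meshed to 2 spine switches, with 12 nodes per rack) implies a simple cabling count. The arithmetic below is back-of-the-envelope illustration only, not a design tool:

```python
# Cabling count for a full-mesh leaf/spine fabric: every leaf connects to
# every spine over 100Gbps, plus 10Gbps server-facing links per leaf.
def fabric_links(leaves, spines, servers_per_leaf):
    spine_links = leaves * spines             # 100Gbps leaf-spine links
    server_links = leaves * servers_per_leaf  # 10Gbps links down to servers
    return spine_links, server_links

# Non-redundant option: 3 leaves, 2 spines, 12 nodes per rack
print(fabric_links(3, 2, 12))  # → (6, 36)
# Redundant option: 6 leaves, same spines and racks
print(fabric_links(6, 2, 12))  # → (12, 72)
```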
Demo Working Group Additional Inputs:
- Volunteers/Companies: Casey Cain, Azhar Sayeed, Sanjay Alyagari, Pasi Vaananen - Cisco, Ericsson, Rift.IO, Bell Canada, T-Mobile, Red Hat, Inocybe Technologies, Intel, Brocade, Huawei, Clearpath, Fujitsu, China Mobile, Thomson Reuters, AT&T and Comcast.
Note: Huawei and Nokia have offered up servers, Huawei lab/testing space in China, and CENGN lab-space in Canada (for remote connectivity needs).
- Demo Name: vCO Proof of Concept
- Resources Required (HW/SW, Connectivity):
- Timeline/Milestones for Demo Completion: Demo is complete, needs HW resources to test and run.
- Ideas for combining with other Directions:
- Next-Level Dialog Needed from Group:
- As a Beijing demo, should we choose local VNF vendors? How do we invite other VNF vendors to be involved? How many do we need?
- Who is on point to pull this together? Brandon is overall project manager but needs volunteer leads for different sections of the demo and help to tie it all together.
- Do we need local hardware? If not, how do we verify that we can connect from Beijing?
- Need local staff to manage AP, verify that we can connect and provision it.
- General Learnings From First Meeting (4/3):
- Target Audience is Service Provider Technical Decision Makers
- Key Attribute: Demo OPNFV applicability in real networks. Go beyond functionality to performance in an operational network
- Needs to contain an interesting visible element
- vCO Demo is ready to go but needs a home. Would be shown remotely with connectivity. CENGN could provide resources.
- General Learnings from the Second Meeting (4/4):
- The group has landed on vCO (formerly Direction 3) as the demo focus. Demo will be shown in its full form on the show floor, and a highlights version will be developed for a keynote presentation. Speaker TBD, but would like to involve a local service provider.
- The team that put together the vCO POC is motivated and committed to making this work. The PoC is "done" but needs a home (hardware/lab space) and to be modified for the OPNFV Summit to highlight OPNFV as the mechanism (using an OPNFV scenario).
- Listed below is a demo description, script, and required resources. Volunteers from the demo working group should make additions/edits as needed below.
- Next Steps: Present demo idea to OPNFV Board (4/7) and OPNFV Marketing Committee (4/12). Flesh out the wiki and schedule a follow-up call week of 4/10 with working group. Establish a detailed timeline. Working Group (Pasi) to submit a PoC as part of the OPNFV Summit CFP (week of 4/10)
- General Learnings from the Third Meeting (4/14):
- The current bi-weekly vCO call will be discontinued and rolled into this call. A Doodle Poll will go out and a new weekly demo call scheduled.
- Azhar gave more background on the vCO PoC which started to build VNF control and fabric with services on top. This will be a residential services demo with a vBNG
- A presentation was shared that describes the topology and capability - VCO POC.pptx. These charts will be added to the wiki as well.
- There was a question about redundancy and if there is time to build it in.
- For Installer, TripleO is being used but could also work with Apex
- There was a question whether every component of the demo needed to be open source. Ideally they would be, but given the time pressure, non-open source technology could be used with caveats.
- A list of VNFs will be required and some VNFs might already be under OPNFV testing. Current project capabilities will be explored (e.g. Models).
- Intel suggested a vBNG that could be considered (Haidee + Rob to connect)
- Keynote Demo: Because of time constraints all vendors donating equipment to the demo should be local to China (to avoid shipping/customs, etc).
- Remote Demo: 1 remote lab needs to be identified to start the onboarding
- GitHub URLs will be added to the wiki
- Many of the folks in this working group will be at OpenStack Summit, and a couple of F2F planning meetings will be scheduled. A timeline will be created and added to the wiki.
- General Learnings from Fourth Meeting 4/19
- Immediate Goal: Get all volunteers/hardware/software/labs signups in place by OpenStack Summit Boston
- There are still gaps in the project requirements; see the "Hardware and software components for demo" table below
- There was a request for "baseline VNFs"
- Working group members need to follow up with potential vendors where needed to compile a complete list of options
- For labs, there are 2 options: (1) ship all equipment to China now for lab setup and testing, or (2) stage equipment first in the Bay Area (preferred option if time/space allows).
- Calix worked on the vCO PoC development and is being invited to these demo calls now. Dell and Quanta are also being considered.
- Service assurance piece: initial discussion with NetScout sounded promising; follow-up needed
- A question was raised on whether the demo would use Apex or Compass or both. Using both would show more interop, but also double the complexity. One may need to be cut to fit the timeline.
- A question was raised on involving the Doctor, Barometer, and/or other OPNFV projects in the part 4 of the script and it was determined this should be explored.
- A question was raised about involving the testing projects, e.g. Functest and Dovetail, in Part 1, and it was determined this should be explored. (Because this is long, this work could be captured in a highlights video for the demo.)
- Action Items:
- Sign up to the chart below before OpenStack Boston – All
- Set upcoming meeting schedule (including at OpenStack). Ensure Calix attendance - Brandon
- Connect Azhar with Ericsson for HW questions - Brandon
- Check on labs in China - Nan
- Check in on Quanta Lab - Azhar
- Check on EdgeCore - Pasi
- Follow-up meeting on service assurance - Azhar + NetScout
- Check in with Doctor/Vitrage/Barometer teams - Maryam
- Checkin with Functest/Dovetail teams - Helen
- Inquire about OpenSSG - Giles
- Update from Azhar 4/24
- My friend Hanen spoke to QCT; there is high interest from QCT, and they will hopefully join the next conference call we have…
- E/// is also lined up with EVPN code
- Cumulus will provide the white box switch code that supports EVPN - Talking to QCT about white box switches
- The Server configuration needed is as follows
- Since we are only doing functional testing, this is deemed to be sufficient; this is the minimum config - higher than this will work as well...
- Controllers - OSP and ODL
- Intel Xeon with Min 8MB Cache
- 64GB Memory
- 1 TB Storage - Ephemeral for local storage
- 2 x 10GbE - Two NICs with 2 10Gb ports each
- Compute Nodes
- Intel Xeon min 8MB Cache
- 32 GB Memory
- 1 TB Storage - Local Storage
- 2 x 10GbE - 2 NICs with min 1 port each - prefer 2 ports
- Storage Node
- Intel Xeon
- 64GB Memory
- 8TB Storage - Any storage over 2TB is sufficient
- 2 x 10GbE NICs - 2 NICs with 2 10GbE ports
- So the configuration is nothing special - memory needed is 32GB to 64GB depending on the nodes
- NICs minimum 2 x 10GbE
- Local disks for Ephemeral storage
- Storage only to store images (Glance) - there is no heavy lifting wrt Storage synchronization etc.
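The minimum node specs above can be encoded as a quick sanity check for incoming hardware offers. The helper and example server below are illustrative, not part of the demo tooling; the numbers mirror the list above (controllers: 2 NICs with 2 x 10Gb ports each; compute: min 1 port per NIC; storage: anything over 2TB suffices):

```python
# Minimum node requirements from the 4/24 update, for checking donated
# hardware. Assumption: port counts are totals across both NICs.
MIN_SPECS = {
    "controller": {"mem_gb": 64, "storage_tb": 1, "nic_10g_ports": 4},
    "compute":    {"mem_gb": 32, "storage_tb": 1, "nic_10g_ports": 2},
    "storage":    {"mem_gb": 64, "storage_tb": 2, "nic_10g_ports": 4},
}

def meets_minimum(role, mem_gb, storage_tb, nic_10g_ports):
    """Higher-than-minimum specs also pass, per the note above."""
    req = MIN_SPECS[role]
    return (mem_gb >= req["mem_gb"]
            and storage_tb >= req["storage_tb"]
            and nic_10g_ports >= req["nic_10g_ports"])

# Example: an offered compute blade with 64GB RAM, 2TB disk, 2 x 10GbE ports
print(meets_minimum("compute", mem_gb=64, storage_tb=2, nic_10g_ports=2))  # → True
```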
- General Learnings from 5th Meeting 4/27
- Azhar was unable to make the call and will join next week and the following week in Boston
- Hardware and Lab providers need to sign up on this wiki page before the group meets at OpenStack
- There was a discussion around leaf switches and spine switches. Options are being explored with EdgeCore and Quanta. Because of limited time, the more that can be done through remote access, the better.
- Nan asked if the configuration (3 PODs/12 nodes) needs to be from the same vendor. Because of time constraints, hardware in the same lab should be from the same vendor.
- Intel and ZTE are also willing to possibly provide lab locations in China. Open question: could demo elements be accessed remotely from diverse labs/hardware configurations?
- Intel is not in a position to offer a vBNG at this time.
- Some folks from Inocybe are willing to do the ODL part and integrate with OpenStack. Need to know the hardware details first.
- Nan/Yifei noted that Huawei lab in Shanghai is available for the demo (12 nodes with full access to the remote portion)
- Open question: Can we have the switches sent to Shanghai now? What are the specs?
- For Part 1 of the Demo script, we need to figure out how the OPNFV testing projects can play a role. Helen is helping to facilitate this conversation.
- For Part 4 of the Demo script, Azhar is talking with NetScout and the OPNFV teams (Barometer/Doctor, etc) need to see which VNFs will be involved.
- Next Steps: Signup for hardware and software components below
- The next call will be on May 4 at 7:00 AM PT. The in-person meeting will be held at OpenStack Summit in Boston, May 9, 14:00 - 16:00 in the Republic Ballroom. Add your name to the OPNFV Wiki here: OPNFV at OpenStack Summit - Boston
- Update from Azhar 5/1
- Following up with Quanta and also considering Lenovo for hardware and will be talking with them at Red Hat Summit.
- Huawei's hardware additions look good, and we now need to obtain the white box switches.
- We need to host this in SJC and possibly replicate to China.
- Meeting at OpenStack Summit 5/9
- It was determined that, because of the timeline, the primary demo needs to be hosted in North America and connected to the OPNFV Summit via the Internet.
- Adequate testing will occur to ensure any connection roadblocks can be overcome. We are also considering on-site hardware for some portion of the demo and Nokia has offered up servers for this.
- Once the demo is further fleshed out, an outreach will be made to Chinese service providers to see if there is a fit with their NFV strategy/direction.
- The critical path right now is getting hardware. Because Huawei's servers/switches are in Shanghai, they will not be needed for the remote demo. Thank you Huawei for the offer.
- Lenovo in Raleigh has offered use of their lab and servers.
- Leaf switches and spine switches are still needed. Dell declined and outreach is being made to HPE.
- For VNF onboarding, we need to determine which VNFs will be used and who will do what.
- Ericsson confirmed they will provide the vBNG/Francois is working on the Tosca descriptors.
- Dave & Brandon continue to rev on the demo script available here. Please chime in with any questions/suggestions. Need to define vCO and clear value statements.
- The goal of the demo is to show residential service end-to-end (service chaining, vBNG, fabric topology for OpenStack install, ODL, etc.)
- Video will be used to show the more time-consuming parts of the demo (VNF onboarding, etc.)
- Idea: Show dashboard with Operational State, Service Assurance Tags (traffic stats/graphs, etc)
- Barometer and Doctor tie-ins: Doctor could potentially identify performance issues and monitor infrastructure. If an error occurs, it sends out a notification report, and the VM manager can take some action.
- Barometer needs to use DPDK to show stats/metrics
- Inocybe is doing the ODL-specific work
- The VNF architecture needs to be defined (along with a view of the software stacks/diagram) before these pieces can be developed. Azhar/Pasi can help here
- Idea: Noisy neighbor/degradation/error. Load-based monitoring of the system
- The demo will use an end-to-end orchestrator (TOSCA/Heat), but it is not a TOSCA demo. The orchestrator will be open source.
- Slides will need to be created for the show floor demo. Hanan can help here
- As a backup for the onstage demo, a recording of the whole thing will be prepared: Plan B
- Pretesting the demo parts starting next week
Hardware and software components for demo
|Leaf Switches||Qty 3 for non-redundant, Qty 6 for redundant configuration; 2 x 100Gbps uplinks; min 12 x 10Gbps ports for servers||Dell & Arista||Netconf capable|
|Spine Switches||10 x 100Gbps Ports - Min 10||Dell & Arista||Netconf Capable|
|Compute Servers||E9000 with 12 blades each||Huawei||Compute nodes are ready, waiting for white box switches. Please contact email@example.com with any questions; let firstname.lastname@example.org know when switches are ready.|
|Switch Software||Cumulus? Snaproute? Any other open source options?||Cumulus (also talking to Snaproute); code available here: https://cumulusnetworks.com/products/cumulus-vx/||Yes|
OSP Release 10 or 11
|SDN Controller||OpenDaylight Carbon+ release||Red Hat, Inocybe or upstream ODL||Yes||Why Carbon? As far as I know, Carbon is not in OPNFV Danube.|
|vRouter||CloudRouter, Vyatta||open source||Yes||Red Hat|
|vDPI||Deep Packet Inspection - nTOP||open source||Yes|
|vFW||Firewall||OpenSense or Vyatta||Yes|
|vParental Control||Kidlogger||open source||Yes||Probably not needed|