This page describes the design and status of a demo planned for the OPNFV Summit 2017. The demo will include integration of the draft release code and sample VNFs from the ONAP project, as described below.
- VDU1, VDU2: two VNF application nodes, each running an nginx-based web server in a Docker container, with the VES agent running on the host VM
- VDU3: a VPP-based load balancer with integration to VES, from the ONAP Demo VNFs repo
- VDU4: a VPP-based firewall with integration to VES, from the ONAP Demo VNFs repo
- VDU5: the VES Collector, with integration to an InfluxDB/Grafana backend for analytics storage and presentation in a dashboard
- The Barometer VES plugin running on all bare-metal hosts and inside all VDUs
Stay tuned for a video to be posted and linked here.
||Feature||Description and related demo prep tasks||Status||Working on it||
|Lab infra for the demo||Get a single server (for a virtual install) or multi-node deployment (preferred) allocated from Pharos through June (JIRA: INFRA-114)||Complete||Bryan Sullivan|
|Bare metal, VM host, and app analytics|
Using collectd, collect system analytics and adapt them to the VES JSON schema for delivery to the Collector.
- Resolve the current merge conflict
- Update the VES schema to AT&T 5.0 (Gerrit 34733)
- Update the VES plugin to the current schema
- Update the VES C agent to the current schema
|Complete||Maryam Tahhan, Bryan Sullivan|
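As a rough illustration of the adaptation step above, the sketch below maps a collectd-style sample (plugin, type instance, value) into a VES-style event body. The field set loosely follows the VES 5.0 common event header, but the helper name and the simplified measurement payload are assumptions for illustration, not the plugin's actual code (real VES measurements use structured arrays such as cpuUsageArray).

```python
import json
import time

def collectd_to_ves(plugin, type_instance, value, source_name, seq):
    """Map a collectd-style sample into a VES-style event dict.

    Field names loosely follow the VES 5.0 common event header;
    the measurement payload here is a simplified placeholder.
    """
    now_us = int(time.time() * 1e6)
    return {
        "event": {
            "commonEventHeader": {
                "domain": "measurementsForVfScaling",
                "eventName": "Measurement_{}".format(plugin),
                "eventId": "{}-{}".format(source_name, seq),
                "sequence": seq,
                "priority": "Normal",
                "reportingEntityName": source_name,
                "sourceName": source_name,
                "startEpochMicrosec": now_us,
                "lastEpochMicrosec": now_us,
                "version": 3.0,
            },
            "measurementsForVfScalingFields": {
                "measurementsForVfScalingVersion": 2.0,
                # Simplified: a real agent would populate the typed
                # measurement arrays rather than additionalMeasurements.
                "additionalMeasurements": [
                    {"name": plugin,
                     "arrayOfFields": [{"name": type_instance,
                                        "value": str(value)}]}
                ],
            },
        }
    }

event = collectd_to_ves("cpu", "idle", 97.2, "vdu1", 1)
print(json.dumps(event, indent=2))
```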
|VES Collector|
Using the VES Collector code, run the Collector in a VM, providing a UI for visualization of the analytics.
- Integrate the current collector code with an InfluxDB backend: create a schema for VES data in InfluxDB, run the collector listener, and save event data to the database as it is received
- Integrate a Grafana dashboard with the InfluxDB database, and develop queries to present the following statistics:
- Packet volume in/out of each VM
- HTTP request rate at the web servers
- VM and host resource utilization (e.g., CPU, memory, disk I/O)
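The InfluxDB integration described above amounts to flattening received VES events into time-series points. The sketch below formats one metric as an InfluxDB line-protocol string using only the standard library; the measurement name and tag keys are illustrative assumptions, not the schema the collector actually uses.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point as an InfluxDB line-protocol string:
    measurement,tag1=v1,... field1=v1,... timestamp_ns
    (numeric field values only in this simplified sketch)."""
    tag_str = ",".join("{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("{}={}".format(k, v) for k, v in sorted(fields.items()))
    return "{},{} {} {}".format(measurement, tag_str, field_str, ts_ns)

line = to_line_protocol(
    "ves_measurement",
    {"source": "vdu1", "domain": "measurementsForVfScaling"},
    {"cpu_idle": 97.2},
    1495000000000000000)
print(line)
# → ves_measurement,domain=measurementsForVfScaling,source=vdu1 cpu_idle=97.2 1495000000000000000
```

A string like this can then be POSTed to the InfluxDB HTTP write endpoint by the collector listener.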
|Demo blueprint|
Deployment of the demo blueprint.
- Create vHello_ONAP.sh based upon the Barcelona demo install script (vHello_VES.sh).
- Copy the blueprint folder blueprints/tosca-vnfd-hello-ves to vhello_onap
- As needed, update the VES blueprint to align with the current Models vHello_3Node_Tacker.sh test
- Copy the VDU3 config (from blueprint.yaml) into VDU4, and renumber the current VDU4 (Monitor) to VDU5 in the blueprint and scripts.
- Update the iptables setup for VDU4 so that it acts as a firewall only (forwarding all incoming requests to the private address of VDU3); VDU4 will later be updated to use the new vFW code.
- Update the test scripts to route client requests through the vFW VM.
- Update the deployment scripts to add collectd agent deployment in VDU4.
- Using the existing monitor.py, deploy and test the new demo blueprint in this intermediate form, to verify that the basic topology and script updates work.
- Update VDU3 to use the ONAP vLB.
- Update VDU4 to use the ONAP vFW.
- Using the existing monitor.py, deploy and test operation of the demo blueprint.
- Add deployment of the new collector, InfluxDB, and Grafana functions in VDU5.
- Using the new VDU5 collector etc, deploy and test operation of the completed demo blueprint.
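The VDU4 pass-through step in the list above (before the ONAP vFW is swapped in) can be sketched as generating iptables DNAT/forward rules. The interface name, addresses, and helper name below are hypothetical placeholders for the demo environment, not the actual deployment script contents.

```python
def passthrough_rules(vlb_private_ip, port=80, iface="eth0"):
    """Build iptables commands that forward all incoming web traffic
    on VDU4 to the vLB's private address (simple pass-through, no
    filtering). Interface and address are placeholders."""
    return [
        # Rewrite the destination of inbound requests to the vLB.
        "iptables -t nat -A PREROUTING -i {} -p tcp --dport {} "
        "-j DNAT --to-destination {}:{}".format(iface, port,
                                                vlb_private_ip, port),
        # Masquerade so replies return via VDU4.
        "iptables -t nat -A POSTROUTING -j MASQUERADE",
        # Allow the forwarded traffic through.
        "iptables -A FORWARD -p tcp -d {} --dport {} -j ACCEPT".format(
            vlb_private_ip, port),
    ]

for rule in passthrough_rules("10.0.0.3"):
    print(rule)
```

In the demo these commands would be applied on VDU4 (e.g., via the deployment script), and later replaced entirely by the ONAP vFW.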
|Overall demo script|
Scenarios to be demonstrated:
- deployment, with idle web server, vLB, and vFW stats shown in the dashboard
- startup of traffic, showing updated stats in the dashboard
- suspend/resume of one web server, showing updated stats in the dashboard
|In progress||Bryan Sullivan|
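For the traffic scenarios above, dashboard rates (e.g., HTTP requests/sec) are typically derived from cumulative counters reported in successive events. A minimal sketch of that derivation, with an assumed helper name:

```python
def rates_from_counters(samples):
    """Given (epoch_seconds, cumulative_count) samples, return the
    per-interval rates, e.g. HTTP requests/sec at a web server."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) / float(t1 - t0))
    return rates

# e.g., a request counter sampled every 10 s
print(rates_from_counters([(0, 0), (10, 250), (20, 750)]))
# → [25.0, 50.0]
```

In practice this difference-over-time is done by the Grafana/InfluxDB query layer rather than in application code.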
Following is a drawing of the overall demo concept: VES_OPNFV_Summit_2017.pptx