
LaaS Vendor Support

Pharos "lab-as-a-Service (LaaS)" project

Community labs provide physical and virtual resources for project development and testing, as well as production resources (along with the Linux Foundation lab). The number of OPNFV projects needing access to bare-metal resources is expected to increase, as is the need for larger and more diverse deployment environments. This means that community lab resources will likely remain in heavy demand from approved projects and the OPNFV production pipeline.

The OPNFV LaaS project will use some form of cloud environment to provide individual developers with an easy way of getting started, i.e. to "try out" an OPNFV deployment and begin developing and testing features with minimal overhead or prior knowledge of configuring and deploying an OPNFV instance.

The Pharos community is collecting requirements for this project. Please add your edits/comments to this Wiki page or use the mailing list
opnfv-tech-discuss@lists.opnfv.org with [pharos] included in the subject line.

Requirements for OPNFV LaaS include ...

  • Automated reservation and access (i.e. does not depend on emails and manually provided credentials); a scripted booking example is sketched after this list
    • See which resources are booked and which are free
  • Admin dashboard
    • Usage statistics (including who accessed the resources and for how long)
    • Control access manually if needed
  • Online documentation (overview, getting started, searchable help, FAQ, etc.)
  • OPNFV "hello world" application activated with a button or simple command
  • Graphical user interface
    • Compute, storage, and network configuration of the configured virtual environment
    • Easy for user to recreate default setup
  • ???
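
As a concrete illustration of the automated reservation requirement, the sketch below shows what a scripted booking against a dashboard REST endpoint could look like. The URL, payload fields, and token handling are assumptions for illustration only, not an existing API.

```python
# Hypothetical example: book a virtual pod through a dashboard REST API.
# The endpoint, payload fields, and auth scheme are illustrative only.
import requests

DASHBOARD = "https://laas.example.org/api"   # placeholder URL
TOKEN = "user-api-token"                     # assumed to be issued by the dashboard

def book_pod(pod_name: str, days: int, purpose: str) -> dict:
    """Request a pod for a fixed period and return the booking record."""
    resp = requests.post(
        f"{DASHBOARD}/bookings",
        headers={"Authorization": f"Token {TOKEN}"},
        json={"pod": pod_name, "length_days": days, "purpose": purpose},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. booking id, SSH access details, expiry time

if __name__ == "__main__":
    booking = book_pod("virtual-pod-3", days=7, purpose="Apex HA trial")
    print(booking)
```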

Proposed design of the IOL Lab at UNH Durham

Goals / Deliverables Phase 1

  • Set of scripts to provision / create virtual instances of a Pharos Pod, each consisting of 6 VMs (Jump Host & 5 nodes); a provisioning sketch follows this list
  • Integration of script with resource request / dashboard / Jenkins, allowing for full automation
  • Working system for 6 pods, available to community developers through the dashboard
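
A minimal sketch of what such a provisioning script could look like, driving virt-install from Python; the VM names, sizes, bridge name, and ISO path are placeholders that would come from the pod template in practice.

```python
# Sketch: create one "virtual Pharos pod" (jump host + 5 nodes) with virt-install.
# Names, sizes, the bridge "br-admin", and the ISO path are placeholders.
import subprocess

BRIDGE = "br-admin"                # pre-created Linux bridge (or OVS bridge)
JUMPHOST_ISO = "/var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso"  # assumed path

def create_vm(name, memory_mb, vcpus, disk_gb, cdrom=None):
    """Define and start a VM; nodes without an ISO are left to PXE boot."""
    cmd = [
        "virt-install",
        "--name", name,
        "--memory", str(memory_mb),
        "--vcpus", str(vcpus),
        "--disk", f"size={disk_gb}",
        "--network", f"bridge={BRIDGE}",
        "--os-variant", "centos7.0",
        "--noautoconsole",
    ]
    cmd += ["--cdrom", cdrom] if cdrom else ["--pxe"]
    subprocess.run(cmd, check=True)

def create_pod(pod_id: int):
    create_vm(f"pod{pod_id}-jumphost", 8192, 4, 100, cdrom=JUMPHOST_ISO)
    for n in range(1, 6):                       # 5 bare-metal-like nodes
        create_vm(f"pod{pod_id}-node{n}", 8192, 4, 100)

if __name__ == "__main__":
    create_pod(1)
```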

Design / Architecture Phase 1

Static Setup – Virtual machines and networks are pre-configured to function as a “Virtual Pharos Pod.” A fixed number of “Virtual Pods” would be operated over the set of hardware, with access / assignments handled in a similar fashion to the existing infrastructure. Each “Virtual Pod” would be long-lived, i.e. not torn down after use, but could be “re-initialized” to a “known state” from a previously saved image / snapshot; a re-initialization sketch follows this paragraph. This is in contrast to a dynamic setup, which would be difficult to engineer and maintain.
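
One way the re-initialization step could be scripted is sketched below, using the libvirt Python bindings to revert each VM of a pod to a saved snapshot; the domain and snapshot names are placeholders.

```python
# Sketch: revert a virtual pod's VMs to a previously saved "known good" snapshot.
# Uses the libvirt Python bindings; VM and snapshot names are placeholders.
import libvirt

def reinitialize(vm_names, snapshot_name="clean-install"):
    conn = libvirt.open("qemu:///system")
    try:
        for name in vm_names:
            dom = conn.lookupByName(name)
            snap = dom.snapshotLookupByName(snapshot_name)
            # Reverting discards any changes made by the previous user of the pod.
            dom.revertToSnapshot(snap)
    finally:
        conn.close()

if __name__ == "__main__":
    reinitialize([f"pod1-node{i}" for i in range(1, 6)] + ["pod1-jumphost"])
```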

  • Simple setup and maintenance
  • With a static setup, 6 identical virtual machines can be run per server

    • Jump Host runs as one of the nodes, running either CentOS or Ubuntu with KVM installed

    • Networks established using either Linux bridging or OVS; a bridged-network sketch follows this list

    • ISOs for each installer made available on the Jump Host

  • Establish a proposed time limit for the resource, approximately 1 week, allowing extensions of the time.

    • This could be linked to the Pharos booking tool that is currently being developed.

    • Enhance the booking tool to “set up” the environment and handle extensions of the service.
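
For the networking bullet above, the following is a minimal sketch of defining a libvirt network backed by an existing Linux bridge; the network and bridge names are placeholders, and an OVS-backed network would add a virtualport type='openvswitch' element instead.

```python
# Sketch: define and start a libvirt network on top of an existing Linux bridge.
# The bridge must already exist on the host; names are placeholders.
import libvirt

NETWORK_XML = """
<network>
  <name>pod1-admin</name>
  <forward mode='bridge'/>
  <bridge name='br-admin'/>
</network>
"""

def create_network(xml: str):
    conn = libvirt.open("qemu:///system")
    try:
        net = conn.networkDefineXML(xml)   # persistent definition
        net.create()                       # start it now
        net.setAutostart(True)             # bring it up on host boot
    finally:
        conn.close()

if __name__ == "__main__":
    create_network(NETWORK_XML)
```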


Phase 2 Changelog:

Terminology new to this release:

Pod: a collection of hardware configurations that can be reused, embodied by what would be put into an OPNFV PDF (Pod Descriptor File).

Config: a software configuration descriptor that applies to a specific Pod. This includes operating systems and (in the future) software installation information for OPNFV and other LFN projects.

Booking: a Pod + a Config + some metadata, such as how long it will last and who/what it’s for. Since both Pods and Configs are reusable, a Booking turns them into a single-use instance that embodies what a Booking was in v1.0. If your needs aren’t too novel, you should be able to reuse an existing standard Pod and matching Config to book some hardware and get to work in a flash. An illustrative data model for these three terms is sketched below.
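
The relationship between these three terms can be pictured roughly as the following data model; the field names are illustrative and not the dashboard's actual schema.

```python
# Illustrative data model for the Pod / Config / Booking terminology above.
# Field names are examples, not the dashboard's actual schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Pod:
    """Reusable hardware description, equivalent to what goes into a PDF file."""
    name: str
    hosts: List[str]              # host profiles / machine names
    networks: List[str]           # network names and their layout

@dataclass
class Config:
    """Reusable software description applied to a specific Pod."""
    name: str
    pod: Pod
    operating_systems: dict       # host name -> OS image
    opnfv_scenario: str = ""      # future: OPNFV / LFN installation info

@dataclass
class Booking:
    """A single-use combination of a Pod, a Config, and scheduling metadata."""
    pod: Pod
    config: Config
    owner: str
    collaborators: List[str] = field(default_factory=list)
    start: datetime = field(default_factory=datetime.utcnow)
    end: Optional[datetime] = None
    purpose: str = ""
```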

At-a-Glance:

  • Users can now create Pods

  • Pods can have more than one machine

  • Users can specify network layout for their pods with a GUI tool

  • Pods can be configured with standalone Configs

  • Users can view deployment status

  • Users can now create snapshots of certain machines that can be substituted for the standard images so they can bring their custom environments with them across bookings and machines

  • Workflows allow users to seamlessly create a pod, a config, and a booking with them all in one go

  • Users can now add collaborators to their booking

  • PDFs are automatically generated based on how a user defines their Pod and Config for a Booking (see the sketch after this list)
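
To illustrate the automatic PDF generation, the sketch below renders a minimal pod-descriptor-style YAML from a user-defined Pod; the layout is deliberately simplified and is not the authoritative OPNFV PDF schema.

```python
# Sketch: render a minimal pod-descriptor-style YAML from a user's Pod definition.
# The structure is simplified; it is not the authoritative OPNFV PDF schema.
import yaml

def generate_pdf(pod_name: str, jumphost: dict, nodes: list) -> str:
    descriptor = {
        "details": {"pod_owner": "LaaS", "lab": "UNH-IOL", "pod_name": pod_name},
        "jumphost": jumphost,
        "nodes": nodes,
    }
    return yaml.safe_dump(descriptor, default_flow_style=False, sort_keys=False)

if __name__ == "__main__":
    print(generate_pdf(
        "virtual-pod-1",
        jumphost={"name": "jump", "os": "centos7", "remote_management": "libvirt"},
        nodes=[{"name": f"node{i}", "cpus": 4, "memory_gb": 8, "disk_gb": 100}
               for i in range(1, 6)],
    ))
```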

Technical Coolness:

  • Heavily restructured codebase to better model an OPNFV installation and allow for more extensibility

  • Logging is now properly implemented, allowing for faster issue resolution and more efficient debugging

  • Tests for all major components have been implemented to aid further development and avoid regressions

  • New API supports multiple labs

  • Heavy use of templates simplifies user interaction and allows for more flexibility for the labs

  • Support for outages on a per host and per lab basis for routine maintenance and emergencies

  • Flexible linear workflow: a workflow format has been created in such a way that additional “workflow extensions” and steps can be easily added with minimal fuss (a rough sketch follows this list)

  • Analytics and statistical data on bookings are now being generated

  • The foundation for automatic OPNFV installation has been laid

  • If a user gets interrupted during completing a workflow, they can pick up right where they left off

  • Users can view their bookings in detail, including the status of various subtasks, overall progress, and any messages or info sent by labs to them that are specific to a given subtask

  • Removed Jenkins slave views. This will become a separate app if popular demand requires it be brought back.

  • Added proper homepage

  • Secured API
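
A rough sketch of the flexible linear workflow idea mentioned above; the class and method names are illustrative, not the dashboard's actual code.

```python
# Illustrative sketch of a linear workflow built from pluggable steps.
# Class and method names are examples, not the dashboard's actual code.
class WorkflowStep:
    title = "abstract step"

    def render(self, context: dict) -> str:
        """Return the form/page shown to the user for this step."""
        raise NotImplementedError

    def validate(self, context: dict, user_input: dict) -> None:
        """Raise ValueError if the input is not acceptable; otherwise store it."""
        raise NotImplementedError

class DefinePodStep(WorkflowStep):
    title = "Define pod hardware"

    def render(self, context):
        return "choose hosts and networks"

    def validate(self, context, user_input):
        context["pod"] = user_input

class Workflow:
    """Runs steps in order; progress is kept in 'context' so an interrupted
    workflow can be resumed where the user left off."""
    def __init__(self, steps):
        self.steps = steps
        self.context = {"completed": 0}

    def current_step(self):
        return self.steps[self.context["completed"]]

    def submit(self, user_input: dict):
        self.current_step().validate(self.context, user_input)
        self.context["completed"] += 1
```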

Virtual Hardware Requirements

Minimum virtual pod requirements: nodes do not need to meet the exact Pharos hardware requirements when virtualized, using around 8GB of RAM per node with 6 nodes per server, each set up with 4 virtual NICs. A quick capacity check is sketched after the list below.

Per node:

  • RAM: 8GB

  • CPU: 4 cores (largest requirement among OpenStack deployments)

  • Storage: 100GB (largest requirement among OpenStack deployments)

  • Network: 4 NICs (users would be required to set up VLANs for additional networks)
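
Using the per-node figures above, a quick back-of-the-envelope capacity check might look like this; the example host specification is a placeholder.

```python
# Quick capacity check: how many virtual nodes fit on a given server,
# using the per-node figures above. The host specification is a placeholder.
NODE_REQ = {"ram_gb": 8, "cpus": 4, "disk_gb": 100, "nics": 4}

def nodes_per_server(host_ram_gb, host_cpus, host_disk_gb, reserve_ram_gb=8):
    """Return how many virtual nodes the host can carry, leaving RAM for the host OS."""
    by_ram = (host_ram_gb - reserve_ram_gb) // NODE_REQ["ram_gb"]
    by_cpu = host_cpus // NODE_REQ["cpus"]          # assumes no CPU overcommit
    by_disk = host_disk_gb // NODE_REQ["disk_gb"]
    return min(by_ram, by_cpu, by_disk)

if __name__ == "__main__":
    # Example host: 64GB RAM, 24 cores, 1TB disk -> enough for a 6-node pod?
    print(nodes_per_server(64, 24, 1000))   # -> 6 (limited by CPU count)
```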

Hypervisor

KVM: use KVM, with a template and automated scripts to create the virtual machines (see the sketch below). KVM also allows for a completely FOSS testing environment.
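
One way to script the template-based VM creation could be the following, using the libvirt Python bindings with a parameterised domain XML; the template is heavily trimmed, and the disk path and bridge name are placeholders.

```python
# Sketch: create VMs from a trimmed libvirt domain XML template.
# The template is heavily simplified; the disk path and bridge are placeholders.
import libvirt

DOMAIN_TEMPLATE = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='GiB'>{ram_gb}</memory>
  <vcpu>{vcpus}</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/{name}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br-admin'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

def define_vm(name, ram_gb=8, vcpus=4):
    conn = libvirt.open("qemu:///system")
    try:
        xml = DOMAIN_TEMPLATE.format(name=name, ram_gb=ram_gb, vcpus=vcpus)
        dom = conn.defineXML(xml)   # persistent definition
        dom.create()                # boot it (assumes the qcow2 image exists)
    finally:
        conn.close()

if __name__ == "__main__":
    define_vm("pod1-node1")
```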




3 Comments

  1. Jack Morgan

    In case you need them, here are the CPU, RAM and disk requirements for virtual OPNFV deployments. The figures are per virtual node (master if applicable, controller and compute), and they are not guaranteed to be exactly correct since there is no alignment or common place where this information is kept; I used the deployment settings, libvirt VM template files and so on. But if we want to handle the machines homogeneously without thinking about which installer the LaaS users will choose, we need to aim for the installer that needs the most. So 50GB RAM, 24 CPUs and 600GB of disk are needed per LaaS machine. (This figure excludes the base OS needs.)

    apex: HA deployment needs at least 5 VMs. In total 10 CPUs, 50GB of RAM and 200GB of disk needed.
    CPU: 2
    RAM: 8GB for all nodes
    Disk: 40GB

    compass: compass master + nodes. HA deployment needs 6 VMs (5 openstack nodes and compass master). In total 24 CPUs, 24GB of RAM and 600GB of disk needed.
    CPU: 4
    RAM: 4GB for all nodes
    Disk: 100GB

    joid: maas master + nodes. HA deployment needs 6 VMs (5 openstack nodes and maas master). In total 24 CPUs, 24GB of RAM and 600GB of disk needed.
    CPU: 4
    RAM: 4GB
    Disk: 100GB

    fuel: fuel master + nodes. HA deployment needs 6 VMs (5 openstack nodes and fuel master). In total 12 CPUs, 42GB of RAM and 600GB of disk needed.
    CPU: 2
    RAM: 8GB for openstack nodes, 2GB for fuel master
    Disk: 100GB

    1. Thank you very much, this will be helpful for templating VMs when we get our hardware set up.

  2. No problem, anytime.

    I have 1 more concern I must share with you. I like the idea of having 2 VMs per machine and lending each VM to different people.

    But then what will happen is, in these VMs, there will be 6 or 7 other VMs, using nested virtualization. And then people will create VMs on the deployed OpenStack/OPNFV and perhaps try some VNFs when we reach that point. Those OpenStack VMs will not have nested virtualization and they will be dead slow. Networking will become complicated as well, since you will have libvirt networks in each of these first-level VMs, then the OPNFV deployment will have its own networks in those VMs, and then OpenStack and ODL/ONOS networking and whatnot might make things more complicated than one might want, especially for newcomers.