
Pharos "lab-as-a-Service (LaaS)" project

Community labs provide physical and virtual resources for project development and testing, as well as production resources (along with the Linux Foundation lab). The number of OPNFV projects needing access to bare-metal resources is expected to increase, as is the need for larger and more diverse deployment environments. Community lab resources will therefore likely remain in heavy demand from approved projects and the OPNFV production pipeline.

The OPNFV LaaS project will use some form of cloud environment to give individual developers an easy way to get started, i.e. to "try out" an OPNFV deployment and begin developing and testing features with minimal overhead or prior knowledge of configuring and deploying an OPNFV instance.

The Pharos community is collecting requirements for this project; please add your edits/comments to this wiki page or use the mailing list
opnfv-tech-discuss@lists.opnfv.org with [pharos] included in the subject line.

Requirements for OPNFV LaaS include ...

  • Automated reservation and access (i.e. does not depend on emails and providing credentials manually)
    • See which resources are booked and which are free
  • Admin dashboard
    • Usage statistics (including who accessed and how long)
    • Control access manually if needed
  • Online documentation (overview, getting started, searchable help, FAQ, etc.)
  • OPNFV "hello world" application activated with a button or simple command
  • Graphical user interface
    • Compute, storage, and network configuration of the configured virtual environment
    • Easy for user to recreate default setup
  • ???

Proposed design of the IOL Lab at UNH Durham

Goals / Deliverables Phase 1

  • Set of scripts to provision / create virtual instances of a Pharos Pod, consisting of 6 VMs (Jump Host & 5 nodes); a rough sketch of such a script follows this list
  • Integration of the scripts with the resource request / dashboard / Jenkins, allowing for full automation
  • Working system for 6 pods, available to community developers through the dashboard
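
A rough sketch of what such a provisioning script could look like is below. It assumes libvirt/KVM on the host and a pre-defined template domain; the template name, pod naming scheme, and use of virt-clone/virsh are illustrative assumptions, not the project's actual tooling.

import subprocess

# One "Virtual Pharos Pod" = 1 Jump Host + 5 nodes.
POD_NODES = ["jumphost"] + [f"node{i}" for i in range(1, 6)]

def provision_pod(pod: str, template: str = "pharos-node-template") -> None:
    """Clone the template domain (and its storage) once per pod member, then start it."""
    for node in POD_NODES:
        name = f"{pod}-{node}"
        subprocess.run(["virt-clone", "--original", template,
                        "--name", name, "--auto-clone"], check=True)
        subprocess.run(["virsh", "start", name], check=True)

def pod_status(pod: str) -> str:
    """Return the 'virsh list --all' rows for this pod, e.g. for a dashboard/Jenkins check."""
    out = subprocess.run(["virsh", "list", "--all"],
                         capture_output=True, text=True, check=True)
    return "\n".join(line for line in out.stdout.splitlines() if pod in line)

if __name__ == "__main__":
    provision_pod("virtual-pod-1")
    print(pod_status("virtual-pod-1"))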

Design / Architecture Phase 1

Static Setup – Virtual machines and networks are pre-configured to function as a “Virtual Pharos Pod.” A fixed number of “Virtual Pods” would be operated over the set of hardware, with access / assignments handled in a similar fashion to the existing infrastructure. Each “Virtual Pod” would be longer lived, i.e. not torn down after each use, but could be “re-initialized” to a “known state” from a previously saved image / snapshot (see the sketch below). This is in contrast to a dynamic setup, which would be difficult to engineer and maintain.
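
As a minimal sketch of the “re-initialize to a known state” step, assuming each VM has a libvirt snapshot named "clean" taken right after provisioning (the snapshot name and pod/node naming are assumptions for illustration):

import subprocess

POD_NODES = ["jumphost", "node1", "node2", "node3", "node4", "node5"]

def reinitialize_pod(pod: str, snapshot: str = "clean") -> None:
    """Revert every VM in the pod to its saved snapshot instead of tearing it down."""
    for node in POD_NODES:
        name = f"{pod}-{node}"
        # Stop the domain if it is still running, then revert and boot it again.
        subprocess.run(["virsh", "destroy", name], check=False)
        subprocess.run(["virsh", "snapshot-revert", name, snapshot, "--running"],
                       check=True)

if __name__ == "__main__":
    reinitialize_pod("virtual-pod-1")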

  • Simple setup and maintenance
  • With a static setup, 6 identical virtual machines can be run per server

    • The Jump Host runs as one of the nodes, running either CentOS or Ubuntu with KVM installed

    • Networks established using either Linux bridging or OVS

    • ISOs for each installer are made available on the Jump Host

  • Establish a proposed time limit for the resource, approximately one week, and allow extensions of the time (see the sketch after this list).

    • This might be linked to the Pharos booking tool that is currently being developed.

    • Enhance the booking tool to “set up” the environment and handle extensions of the service.
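
A minimal sketch of how the one-week limit and extension handling might look inside such a booking tool; the class and field names here are assumptions for illustration, not the actual Pharos booking tool code:

from dataclasses import dataclass, field
from datetime import datetime, timedelta

DEFAULT_TERM = timedelta(weeks=1)   # proposed time limit

@dataclass
class PodBooking:
    pod: str
    user: str
    start: datetime = field(default_factory=datetime.utcnow)
    end: datetime = None

    def __post_init__(self):
        if self.end is None:
            self.end = self.start + DEFAULT_TERM

    def extend(self, extra: timedelta = DEFAULT_TERM) -> None:
        """Grant an extension instead of tearing the pod down at expiry."""
        self.end += extra

    def expired(self, now: datetime = None) -> bool:
        return (now or datetime.utcnow()) >= self.end

if __name__ == "__main__":
    booking = PodBooking(pod="virtual-pod-2", user="developer@example.org")
    booking.extend()                 # one extra week on request
    print(booking.end, booking.expired())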

Virtual Hardware Requirements

Minimum virtual pod requirements: nodes do not need to meet the exact Pharos requirements when virtual, using around 8GB of RAM per node, with 6 nodes per server, and 4 virtual NICs per node. A sketch mapping these figures to a VM creation command follows the per-node list below.

Per node:

  • RAM: 8GB

  • CPU: 4 cores (largest amount among OpenStack deployments)

  • Storage Space: 100GB (largest requirement among OpenStack deployments)

  • Network: 4 NICs (users would be required to set up VLANs for additional networks)
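
A sketch mapping the per-node figures above to a single VM creation command; the bridge name, PXE boot from the Jump Host, and the virt-install invocation are illustrative assumptions:

NODE_SPEC = {"vcpus": 4, "memory_mb": 8192, "disk_gb": 100, "nics": 4}

def node_install_cmd(name: str, bridge: str = "br-admin") -> list:
    """Build a virt-install command line for one virtual node."""
    cmd = [
        "virt-install",
        "--name", name,
        "--vcpus", str(NODE_SPEC["vcpus"]),
        "--memory", str(NODE_SPEC["memory_mb"]),
        "--disk", f"size={NODE_SPEC['disk_gb']}",
        "--pxe",                 # nodes boot from the Jump Host, as on bare metal
        "--noautoconsole",
        # newer virt-install releases also require an --os-variant/--osinfo value
    ]
    # 4 NICs on one bridge; users add VLANs for any additional networks.
    cmd += ["--network", f"bridge={bridge}"] * NODE_SPEC["nics"]
    return cmd

if __name__ == "__main__":
    print(" ".join(node_install_cmd("virtual-pod-1-node1")))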

Hypervisor

Use KVM, with a template to create the virtual machines via automated scripts. KVM also allows for a completely FOSS testing environment.
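
As a minimal host sanity check for this setup (standard Linux/KVM paths, nothing Pharos-specific assumed), the sketch below verifies that /dev/kvm exists and that nested virtualization is enabled, which matters once the installers create further VMs inside each node:

import os

def kvm_available() -> bool:
    """True if the kvm device node exists, i.e. hardware virtualization is usable."""
    return os.path.exists("/dev/kvm")

def nested_virt_enabled() -> bool:
    """Check the kvm_intel/kvm_amd 'nested' module parameter, if present."""
    for module in ("kvm_intel", "kvm_amd"):
        path = f"/sys/module/{module}/parameters/nested"
        if os.path.exists(path):
            with open(path) as f:
                return f.read().strip() in ("1", "Y", "y")
    return False

if __name__ == "__main__":
    print("KVM available: ", kvm_available())
    print("Nested virt on:", nested_virt_enabled())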




3 Comments

  1. Jack Morgan

    In case you need it, here are the CPU, RAM, and disk requirements for virtual OPNFV deployments. The figures are per virtual node (master if applicable, controller, and compute) and are not guaranteed to be exactly correct, since there is no alignment or common place where this information is kept; I used the deployment settings or libvirt VM template files and so on. But if we want to handle the machines homogeneously, without thinking about which installer LaaS users will choose, we need to aim for the installer that needs the most. So 50GB RAM, 24 CPUs, and 600GB of disk are needed per LaaS machine. (This figure excludes the base OS needs.)

    apex: HA deployment needs at least 5 VMs. In total 10 CPUs, 50GB of RAM, and 200GB of disk needed.
    CPU: 2
    RAM: 8GB for all nodes
    Disk: 40GB

    compass: compass master + nodes. HA deployment needs 6 VMs (5 OpenStack nodes and the compass master). In total 24 CPUs, 24GB of RAM, and 600GB of disk needed.
    CPU: 4
    RAM: 4GB for all nodes
    Disk: 100GB

    joid: maas master + nodes. HA deployment needs 6 VMs (5 OpenStack nodes and the maas master). In total 24 CPUs, 24GB of RAM, and 600GB of disk needed.
    CPU: 4
    RAM: 4GB
    Disk: 100GB

    fuel: fuel master + nodes. HA deployment needs 6 VMs (5 OpenStack nodes and the fuel master). In total 12 CPUs, 42GB of RAM, and 600GB of disk needed.
    CPU: 2
    RAM: 8GB for OpenStack nodes, 2GB for the fuel master
    Disk: 100GB
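
    For reference, the per-LaaS-machine figure above is the element-wise maximum of these per-installer totals; a quick sketch of that calculation, with the numbers copied from the totals above:

installer_totals = {
    # installer: (total CPUs, total RAM in GB, total disk in GB)
    "apex":    (10, 50, 200),
    "compass": (24, 24, 600),
    "joid":    (24, 24, 600),
    "fuel":    (12, 42, 600),
}

max_cpu  = max(t[0] for t in installer_totals.values())
max_ram  = max(t[1] for t in installer_totals.values())
max_disk = max(t[2] for t in installer_totals.values())

# Prints: per LaaS machine: 24 CPUs, 50 GB RAM, 600 GB disk
# (the aggregate excludes base OS needs, as noted above)
print(f"per LaaS machine: {max_cpu} CPUs, {max_ram} GB RAM, {max_disk} GB disk")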

    1. Thank you very much, this will be helpful for templating VMs when we get our hardware set up.

  2. No problem, anytime.

    I have one more concern I must share with you. I like the idea of having 2 VMs per machine and lending each VM to different people.

    But then what will happen is that inside these VMs there will be 6 or 7 further VMs, using nested virtualization. People will then create VMs on the deployed OpenStack/OPNFV and perhaps try some VNFs when we reach that point. Those OpenStack VMs will not have nested virtualization and will be dead slow. Networking will become complicated as well, since there will be libvirt networks on each of these first-level VMs, then the OPNFV deployment will have its own networks inside those VMs, and then OpenStack and ODL/ONOS networking and whatnot might make things more complicated than one might want, especially for newcomers.