Pharos "Lab-as-a-Service" (LaaS) project
Community labs provide physical and virtual resources for project development and testing, as well as production resources (along with the Linux Foundation lab). The number of OPNFV projects needing access to bare-metal resources is expected to increase, as is the need for more scaled and diverse deployment environments. This means that community lab resources will likely remain in heavy demand from approved projects and the OPNFV production pipeline.
The OPNFV LaaS project will use some form of cloud environment to provide individual developers with an easy way of getting started ... i.e. to "try out" an OPNFV deployment and begin to develop and test features with minimal overhead or knowledge of configuring and deploying an OPNFV instance.
The Pharos community is collecting requirements for this project ... please add your edits/comments to this Wiki page or use the mailing list ...
email@example.com with [pharos] included in the subject line.
Requirements for OPNFV LaaS include ...
- Automated reservation and access (i.e. no dependence on emails or manually provided credentials)
- See which resources are booked and which are free
- Admin dashboard
- Usage statistics (including who accessed and how long)
- Control access manually if needed
- Online documentation (overview, getting started, searchable help, FAQ, etc.)
- OPNFV "hello world" application activated with a button or simple command
- Graphical user interface
- Compute, storage, and network configuration of the virtual environment
- Easy for user to recreate default setup
Proposed design of the IOL Lab at UNH Durham
Goals / Deliverables Phase 1
- Set of scripts to provision / create virtual instances of a Pharos Pod, each consisting of 6 VMs (Jump Host & 5 nodes)
- Integration of the scripts with the resource request / dashboard / Jenkins, allowing for full automation
- Working system for 6 pods, available to community developers through the dashboard
Design / Architecture Phase 1
Static Setup – Virtual machines and networks are pre-configured to function as a “Virtual Pharos Pod.” A fixed number of “Virtual Pods” would be operated over the set of hardware, with access / assignments handled in a similar fashion to the existing infrastructure. Each “Virtual Pod” would be longer lived, i.e. not torn down after usage, but could be “re-initialized” to a “known state” from a previously saved image / snapshot. This is in contrast to a dynamic setup, which would be difficult to engineer and maintain.
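The "re-initialize to a known state" step could be sketched as below: build the virsh commands that revert each VM of a pod to a saved snapshot. The pod/VM naming scheme and the snapshot name "baseline" are assumptions for illustration, not part of the Pharos spec.

```python
# Hypothetical sketch: revert every VM of one "Virtual Pod" to a saved snapshot.
# The podN-jump / podN-nodeM naming and the "baseline" snapshot name are
# assumptions for illustration only.
def reinit_commands(pod_id, snapshot="baseline"):
    """Return the virsh command lines that re-initialize one 6-VM pod."""
    vms = [f"pod{pod_id}-jump"] + [f"pod{pod_id}-node{i}" for i in range(1, 6)]
    cmds = []
    for vm in vms:
        # Revert to the known-good snapshot and leave the VM running afterwards.
        cmds.append(["virsh", "snapshot-revert", vm, snapshot, "--running"])
    return cmds

# A real script would execute each command, e.g. subprocess.run(cmd, check=True).
```

Generating the command list separately from running it keeps the sketch testable and makes a dry-run mode trivial.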
- Simple setup and maintenance
With a static setup, 6 identical virtual machines can be run per server
The Jump Host runs as one of the nodes, running either CentOS or Ubuntu with KVM installed
Networks established using either Linux bridging or OVS
Availability of an ISO for each installer on the Jump Host
Establish a proposed time limit for the resource, approximately 1 week, while allowing extensions of the time.
This might be linked to the Pharos booking tool that is currently being developed.
Enhance the booking tool to “set up” the environment and handle extensions of the service.
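The proposed one-week limit with extensions could be handled by the booking tool roughly as follows. The one-week figure is taken from the text above; the function and parameter names, and the assumption that each extension also adds one week, are hypothetical.

```python
from datetime import datetime, timedelta

DEFAULT_LEASE = timedelta(weeks=1)  # proposed time limit from the text
EXTENSION = timedelta(weeks=1)      # assumed length of one granted extension

def booking_expiry(start: datetime, extensions: int = 0) -> datetime:
    """Expiry time of a virtual pod booking after N granted extensions."""
    return start + DEFAULT_LEASE + extensions * EXTENSION

def is_expired(start: datetime, now: datetime, extensions: int = 0) -> bool:
    """True once the booking (plus any extensions) has run out."""
    return now >= booking_expiry(start, extensions)
```

On expiry the tool could trigger the same re-initialization used between users, returning the pod to its known state for the next booking.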
Virtual Hardware Requirements
Minimum virtual pod requirements – Nodes do not need to meet the exact Pharos requirements when virtual, utilizing around 8GB of RAM for the 6 nodes per server. Each node is set up with 4 virtual NICs.
CPU: 4 cores (largest amount required by any OpenStack deployment)
Storage Space: 100 GB (largest requirement among OpenStack deployments)
Network: 4 NICs (users would be required to set up VLANs for additional networks)
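To illustrate the VLAN note above, the sketch below builds the iproute2 commands a user would run inside a node to add an 802.1Q sub-interface on one of the four virtual NICs. The interface name, VLAN ID, and address are examples only.

```python
# Hypothetical sketch: iproute2 commands to add an 802.1Q VLAN sub-interface
# on one of a node's 4 virtual NICs.  eth2, VLAN 102, and the address are
# illustrative values, not part of the Pharos spec.
def vlan_setup_commands(parent="eth2", vlan_id=102, address="10.1.2.11/24"):
    """Return the shell commands that create and bring up one VLAN interface."""
    sub = f"{parent}.{vlan_id}"
    return [
        f"ip link add link {parent} name {sub} type vlan id {vlan_id}",
        f"ip addr add {address} dev {sub}",
        f"ip link set dev {sub} up",
    ]
```

With only 4 NICs per node, VLAN sub-interfaces like this are how installers that expect separate admin/private/public/storage networks would get their additional segments.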
KVM – Use KVM, with a template and automated scripts to create the virtual machines. KVM also allows for a completely FOSS testing environment.