
Copper sandbox ("Academy")

This is an effort to set up a basic functional assessment platform (a learning "academy") based upon OPNFV, installed on a single node (e.g. a laptop with lots of memory) or in a multi-node environment. This will provide a means to learn, analyze, and further develop:

  • the OPNFV install process, to find where/how we need to augment it for additional components (e.g. Congress)
  • the basic mechanisms supported by OPNFV components for VNF and service configuration
  • the mechanisms supported by the additional components for configuration policy management
  • use cases, demos, and techniques supporting the Copper scope

If you just want to get started in a virtual environment, see these new pages:

  • DevStack in a VM Notes. We are developing that page to help anyone who wants to get involved in Copper but doesn't need (or can't wait for) a full OPNFV deployment in order to do configuration policy development.
  • Fuel on a Laptop: all-in-one virtual deployment notes

Bare-metal deployments on a simple/inexpensive hardware environment are described below:

  • 3 Intel NUC i7 (NUC5i7RYH), each with 16GB RAM, 1TB HDD, 250GB SSD
    • The diagram below shows three NUCs used for a single non-HA "POD", i.e. jumphost, controller, and compute node hosts, with a single NIC on the jumphost and compute nodes, and two NICs on the controller node (onboard Ethernet plus a USB Ethernet adapter)
  • a router/gateway that isolates the OPNFV private network (single subnet) from the other LAN segments or the Internet, and any pesky DHCP servers that might be there
    • the private network can use any subnet; 192.168.10.0/24 is used below and in the procedures linked above
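As an illustration of the addressing above, the jumphost can be given a static address on the private subnet. A minimal sketch using Ubuntu netplan (the file name, interface name eno1, the .2 host address, and the gateway at .1 are all assumptions; adapt to your distro's network tooling):

```yaml
# /etc/netplan/01-opnfv-private.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    eno1:                            # assumed NIC name; check with `ip link`
      addresses: [192.168.10.2/24]   # any free host address in the subnet
      gateway4: 192.168.10.1         # the isolating router/gateway
      nameservers:
        addresses: [192.168.10.1]
```

Apply with `sudo netplan apply` and verify with `ip addr show eno1`.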

As shown in the diagrams below, an internet connection is assumed to be available. In some cases (e.g. at hackfests) you may need to connect through a wifi network. Because it may not be obvious how to do this with a fixed configuration, see Connecting a Lab to the Internet via WiFi.

NOTE: If you switch between the JOID and Apex installers on a NUC POD, you may have to clean up the prior cloud-init image/data on disk sdb (the 250GB SSD), as it will interfere with deployment. The easiest way is to boot a Linux installer from USB, choose "try without installing", then run gparted or another disk utility to delete all partitions on sdb.
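A possible command-line alternative to the gparted step is to wipe the partition/filesystem signatures directly. The sketch below runs against a scratch image file so it is safe to try; on the actual NUC you would substitute the SSD device (typically /dev/sdb, an assumption — verify with lsblk first, since this erases everything on the target):

```shell
# Demo on a scratch image file; on the real system set DISK=/dev/sdb
# (destructive -- confirm the device name with lsblk first).
DISK=$(mktemp /tmp/scratch.XXXXXX)
truncate -s 16M "$DISK"       # stand-in for the 250GB SSD
mkfs.ext4 -q -F "$DISK"       # simulate a leftover filesystem on the disk
wipefs --all "$DISK"          # erase filesystem/partition-table signatures
wipefs "$DISK"                # prints nothing once no signatures remain
```

After this, the installer sees the disk as blank and will repartition it itself.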

See Congress test procedures for notes on testing Congress.
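For a flavor of what those tests exercise: Congress policies are Datalog rules over tables populated by datasource drivers. A minimal illustrative rule follows — the nova:servers table and its column names are assumptions here, so check the datasource schema on your deployment (`openstack congress datasource schema show nova`) before using anything like it:

```
error(vm_id) :- nova:servers(id=vm_id, status=st), not permitted_status(st)
permitted_status("ACTIVE")
permitted_status("SHUTOFF")
```

The rule flags any server whose status is outside the permitted set; the test procedures linked above verify that Congress evaluates such rules against live datasource data.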

Current Release (Danube)

Updates are in progress. Stay posted!

Previous Release (Colorado)

Multi-Node JOID-based install on 3 bare metal nodes (non-HA)

Congress is now supported for JOID as a base install feature (all scenarios) and has been tested in virtual (single-node) deployments, in multi-node Pharos-compliant labs, and in the NUC-based multi-node/non-HA configuration shown below.

See Joid install procedure for the basic install, and additional notes.

See Congress install procedure for installation of OpenStack Congress in an LXC container on the JOID-based controller node.

(Link to PowerPoint source of the diagram above)

Multi-Node Apex-based install on 3 bare metal nodes (non-HA)

Congress is now supported for Apex as a base install feature (all scenarios) and has been tested in virtual (single-node) and multi-node Pharos-compliant labs. However, Apex has not yet been verified on the multi-node/non-HA NUC configuration below; the last attempts to deploy Apex in this configuration failed due to wake-on-LAN/PXE-boot issues, the root cause of which is not yet known.

Previous Release (Brahmaputra)

Multi-Node Apex-based install on 3 bare metal nodes (non-HA)

This is a work in progress. The target system is as described for JOID below.

See Apex install procedure for the basic install and notes on Congress installation.

Multi-Node JOID-based install on 3 bare metal nodes (non-HA)

See Joid install procedure for the basic install, and additional notes.

See Congress install procedure for installation of OpenStack Congress in an LXC container on the JOID-based controller node.

(Link to PowerPoint source of the diagram above)

Previous Release (Arno)

Multi-Node Foreman-based install on 3 bare metal nodes (non-HA)

See Foreman install procedure, install verification procedure, and additional notes.

The hardware used in this setup includes:

  • 3 Intel NUC i7 (NUC5i7RYH), each with 16GB RAM, 1TB HDD, 250GB SSD
    • The diagram below shows three NUCs used for a single non-HA "POD", i.e. jumphost, controller, and compute node hosts.
    • Note: the single Ethernet port on the NUCs can be augmented with USB NIC adapters using the four USB ports
  • a router/gateway that isolates the OPNFV private network (single subnet) from the other LAN segments or the Internet, and any pesky DHCP servers that might be there
    • the private network can use any subnet; 192.168.1.0/24 is used below and in the procedures linked above

(Link to PowerPoint source of the diagram above)

Earlier efforts to build Academy

For the record, you can review the (mostly failed) earlier attempts ...
