
(Reference: Joe Kidder)

How Project AUTO (ONAP-Automated OPNFV (Auto)) uses LaaS today

  • AUTO currently uses single LaaS servers in the following ways:
    1. Deploy virtual pods on LaaS x86 servers
    2. Deploy ONAP across 3 LaaS x86 servers configured as a Kubernetes cluster connected over the common management network
    3. Work on building ONAP components on LaaS arm64 servers
  • What AUTO would like to do:
    1. Deploy multi-node OPNFV on either x86 or arm64 servers to use as VIM targets as well as potential platforms on which to deploy ONAP
    2. Deploy multi-node k8s clusters on multiple x86 or arm64 servers with contained/private networking rather than the common management net.
  • ARM: The arm64 community currently uses individual arm64 servers for
    1. Launching VMs on top of bare metal (using libvirt directly, or occasionally Vagrant)
    2. Performing ONAP build experiments on single servers
    3. Trying basic Kubernetes cluster deployments on multiple servers, similar to what Auto does for ONAP k8s clusters, but without the ONAP part
  • The arm64 community would love to deploy OPNFV on bare metal. Joe Kidder has lobbied the Armband team to support smaller OPNFV deployments, e.g. 2-node non-HA, and they have also considered multi-arch deployments with x86 controllers and arm64 computes. These types of optimizations would go a long way toward making the servers more available, i.e. allowing more deployments at a given time. I'm not sure what the timeline/priority is for these capabilities within the Armband project.
  • Background: Auto Project Desires and Patterns of LaaS Use:
    The Auto project is interested in 
    • integrating an ONAP management platform with a VIM that represents an OPNFV infrastructure
    • deploying ONAP on top of an OPNFV infrastructure
    • exploring use cases that start mixing and matching multiple OPNFV infrastructures as VIM targets, as well as spreading ONAP components across these infrastructures (emulating what might happen with a large ONAP deployment in a NOC and distributed ONAP components at the edge for local orchestration and monitoring)
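The multi-node Kubernetes clusters with private networking mentioned above can be sketched with kubeadm. This is a minimal illustration only: the private address, pod CIDR, and file name are assumed placeholders, not the actual Auto or LaaS configuration, and the init/join commands are shown as comments because they must run on the provisioned servers themselves.

```shell
#!/bin/sh
# Sketch: bootstrap a multi-node Kubernetes cluster over a private
# network instead of the common management net.
# All addresses and names below are illustrative placeholders.

PRIVATE_IP="10.10.0.1"        # assumed control-plane address on the private net
POD_CIDR="192.168.0.0/16"     # assumed pod network range

# Write a kubeadm config that advertises the API server on the
# private interface rather than the shared management network.
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: ${POD_CIDR}
apiServer:
  certSANs:
    - ${PRIVATE_IP}
EOF

# On the control-plane node (requires kubeadm to be installed):
#   kubeadm init --config kubeadm-config.yaml \
#                --apiserver-advertise-address "${PRIVATE_IP}"
# On each worker, using the join command that init prints:
#   kubeadm join ${PRIVATE_IP}:6443 --token <token> \
#                --discovery-token-ca-cert-hash sha256:<hash>

echo "wrote kubeadm-config.yaml advertising ${PRIVATE_IP}"
```

Binding the API server to the private interface is what keeps the cluster's control traffic off the common management network.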

Also, making this more seamless is another goal.

  Ideally, AUTO would like to 

      1. Press a button and get an OPNFV infra with an associated VIM.
      2. Press another button and get an ONAP deployment
      3. Press a third button (or embed it in options above) to integrate the two…
      4. …explore/test use cases

  At a minimum, AUTO would like to install everything on top of a single OPNFV deployment:

      1. Deploy OPNFV
      2. Deploy ONAP on top of the above
      3. Use ONAP to deploy and manage VNFs on the above
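A hedged sketch of steps 2 and 3 above, assuming ONAP is installed with its OOM Helm charts onto the Kubernetes cluster that the OPNFV deployment provides. The release name, namespace, and the minimal component selection below are illustrative assumptions, not the project's actual settings; the helm commands are shown as comments since they require a live cluster.

```shell
#!/bin/sh
# Sketch: put ONAP on top of an existing OPNFV deployment, then use it
# to manage VNFs. Step 1 (deploying OPNFV) is installer-specific and
# omitted here.

# Step 2: a hypothetical override file enabling only a few ONAP
# components, so the deployment fits on modest LaaS servers.
cat > onap-minimal.yaml <<EOF
# Hypothetical minimal component selection (assumption, not Auto's config)
aai:
  enabled: true
so:
  enabled: true
sdc:
  enabled: true
EOF

# With the OOM charts built and a working helm setup (assumed workflow):
#   git clone https://gerrit.onap.org/r/oom
#   cd oom/kubernetes && make all
#   helm deploy dev local/onap --namespace onap -f onap-minimal.yaml

# Step 3: once ONAP is up, VNFs are onboarded and instantiated through
# ONAP against the OPNFV VIM registered in step 2.

echo "wrote onap-minimal.yaml"
```

Trimming the component set is also one way to make ONAP fit alongside OPNFV on the limited number of LaaS servers discussed above.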