
Project Name:

  • Proposed name for the project: OpenRetriever
  • Proposed name for the repository: openretriever

Project description:

  • Project “OpenRetriever” creates and composes a set of scenarios that include container technologies such as Docker and rkt, allowing VNFs to run on a set of containers and/or VMs. OpenRetriever enables the platform to manage both containers and virtual machines.

    • A VNF running in the cloud needs to be a cloud-native, micro-service-oriented application, and containers are one of the most suitable technologies for running micro-service applications.

    • A VNF should be able to scale up in a short time; containers can reduce that time from several minutes to several seconds.

    • Containers make VNF deployment much faster.
    • Container orchestration engines (COEs) reduce the effort needed to achieve HA; with Kubernetes, replication can fulfill this requirement (see the example after this list).
    • Some applications belong at the edge, such as WAN acceleration and content caching, and the lightweight nature of containers makes them a good fit there.
  • Four modes for integrating containers into OpenStack:
    • Mode I:
      • Embedded container orchestration. Each container cluster is treated as a VNF. The vendor maintains its VNFs and provides containerized software packages. The OpenStack API only controls the VM life cycle; for the containers, the vendor can use its own tooling (e.g. Chef, Puppet) or a COE (e.g. Kubernetes, Cloudify) to manage them.

    • Mode II:
      • COE (Container Orchestration Engine) way, for example Kubernetes. The key point is that the northbound interface is the COE-native API, which is completely different from the OpenStack API. The COE is deployed by Magnum and acts as the VIM, responsible for the life-cycle management of the containers. MANO needs a plugin adapted to the COE API.
    • Mode III:
      • COE & Zun way. There are two ways to solve the Mode II API problem: translate the COE API into the OpenStack API, or build an upper layer that standardizes the OpenStack API and the COE-native API. In this mode, Magnum deploys the COE and Zun acts as the adapter.
    • Mode IV:
      • Scripts & Ansible way. We treat Kubernetes as the VIM and consider it much closer to the VNFM than OpenStack, so there is no need to translate the COE API into the OpenStack API. In this mode we use the kuryr-kubernetes plugin and a multi-network CNI plugin to support the VNF's applications (see the sketch after this list).
     
  • We start with Mode IV. Since Zun and Magnum are not yet stable projects, we skip them at this stage.
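
As a minimal sketch of the HA/replication point above (assuming an existing Kubernetes cluster; the deployment name vnf-demo and image example/vnf-demo are placeholders, not OpenRetriever artifacts):

# Run a containerized VNF as a Kubernetes Deployment (image name is a placeholder)
kubectl create deployment vnf-demo --image=example/vnf-demo:latest
# Keep three replicas running; Kubernetes replaces failed instances automatically
kubectl scale deployment vnf-demo --replicas=3
# Scaling out takes seconds rather than the minutes a VM-based VNF typically needs
kubectl scale deployment vnf-demo --replicas=10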
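For Mode IV, the sketch below shows what attaching a VNF pod to multiple networks could look like once kuryr-kubernetes and a multi-network CNI plugin such as Multus are in place. The network names (mgmt-net, data-net), the image, and the Multus-style annotation are assumptions for illustration, not a defined OpenRetriever interface:

# Attach a containerized VNF pod to extra networks via a Multus-style annotation
# ("mgmt-net" and "data-net" are placeholder network names; the image is a placeholder too)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: vnf-multi-net
  annotations:
    k8s.v1.cni.cncf.io/networks: mgmt-net, data-net
spec:
  containers:
  - name: vnf
    image: example/vnf-demo:latest
EOF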

Scope:

  • OpenRetriever focuses on how non-virtual-machine-based VNFs, including containers and unikernels, run in NFV. The target of this project is to let VNFs run on any platform, including OpenStack, Kubernetes, Mesos, and so on. The project does not cover the internal architecture of a VNF.

    1.  Add Kuryr and Magnum to the installers

    2.  Containers for NFV: improve the performance of containers and the container platform.

    3.  Set up an environment that can support containers and unikernels.

    4.  A new scheduler that can schedule a mix of all three instance types (virtual machine, container, unikernel).

    5.  Analyse the gaps in OpenStack, the installers, Kubernetes, and MANO.

  • Key work items include:
    • 1. Documentation:

          1) Requirements for OpenStack, the installers, Kubernetes, and MANO

          2) Requirements for the next-generation VIM scheduler (NGVS).

          3) User guide: setting up an environment that supports containers and unikernels.

      2. Scripts: common scripts to integrate containers into OPNFV

      3. Testing: provide test cases for Functest, Yardstick, etc.

Dependencies:

  • Upstream projects:
    • OpenStack
    • Magnum
    • Kuryr
    • Kubernetes

Committers and Contributors:

Planned deliverables:

  • Meet all requirements of the OPNFV platform
  • Documentation and User Guide
  • Scripts: integrate containers into OpenStack

Proposed Release Schedule:

  • OpenRetriever will not participate in the D release, but it has a number of work items, such as collaborating with the installers.
  • For future releases, it will provide a container environment that fulfills the requirements of VNFs.

Key Project Facts

Project Name: Container Integrated For NFV (OpenRetriever)
Repo name: openretriever
Lifecycle State: Incubation
Primary Contact: Xuan Jia ( jiaxuan@chinamobile.com )
Project Lead:  Xuan Jia ( jiaxuan@chinamobile.com )
Jira Project Name: Container Integrated For NFV
Jira Project Prefix: [container]
Mailing list tag: [openretriever]

Link to TSC approval of the project:

http://meetbot.opnfv.org/meetings/opnfv-meeting/2016/opnfv-meeting.2016-12-13-14.59.html

Link to expand the scope of the project:

http://meetbot.opnfv.org/meetings/opnfv-meeting/2017/opnfv-meeting.2017-04-11-13.59.html

Link to NGVS proposal:

https://wiki.opnfv.org/pages/viewpage.action?pageId=10290307

Requirements generated by the NGVS sub-group:

Proposed NGVS stack aka Carrier-Grade Kubernetes: 


5 Comments

  1. Hi Xuan Jia, please check the Kubernetes (on OpenStack and on bare metal) scenarios proposed in JOID for the D release (see the JOID D release plan). Let me know if you would be interested in using a Kubernetes-based NFVI in your project.

    Narinder Gupta

    1. Sorry, I forgot to reply to this message. Thanks, I will investigate the gap between OpenRetriever and JOID.

      1. JOID now focuses on deploying Kubernetes on bare metal, since deploying Kubernetes on OpenStack means nested virtualization (Docker on KVM).

        If you are going to deploy on OpenStack, it has been automated via conjure-up:

        sudo apt-add-repository ppa:juju/stable
        sudo apt-add-repository ppa:conjure-up/next
        sudo apt update
        sudo apt install conjure-up
        conjure-up canonical-kubernetes
        

        Conjure will prompt you for deployment options (AWS, GCE, Azure, etc.) and credentials.

        This bundle is for multi-node deployments; for individual developer deployments, use the smaller kubernetes-core bundle via conjure-up kubernetes-core.

  2. The proposal PPT mentions binding to Linux kernel 3.0. But with nested containers and Kuryr, Neutron security groups cannot be implemented with Linux bridge iptables; they have to be implemented within OVS, which requires Linux kernel 4.3 and up.

    1. Thanks for pointing it out. 

      For the proposal PPT, I just wanted to make the point that containers should not be bound too tightly to the underlying system, since we want to keep the layers clear and follow a low-coupling policy.
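
As a quick sanity check for the kernel requirement raised in the comment above, something along these lines could be run on the compute/worker hosts before relying on OVS-based security groups (a minimal sketch; the 4.3 threshold comes from that comment):

# Check that the running kernel is 4.3 or newer before relying on OVS-based security groups
required="4.3"
current="$(uname -r | cut -d- -f1)"
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "Kernel $current is >= $required: OVS-based security groups should be usable."
else
    echo "Kernel $current is older than $required: upgrade before using nested containers with Kuryr."
fi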