
Introduction

 

Kubernetes Architecture

 

kube-apiserver

kube-apiserver exposes the Kubernetes API and is the front end for the Kubernetes control plane. It is designed to scale horizontally, that is, it scales by deploying more instances.
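
Once a cluster is up, a quick way to confirm that the API server is reachable is to query it through kubectl; a minimal sketch, assuming kubectl is already configured on the node:

.. code-block:: bash

# Show the control plane endpoint kubectl is talking to
kubectl cluster-info

# Query the API server's health endpoint directly
kubectl get --raw /healthz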

etcd

etcd is used as Kubernetes’ backing store; all cluster data is stored here. Always have a backup plan for etcd’s data in your Kubernetes cluster.
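
A minimal sketch of backing etcd up with etcdctl's snapshot facility; the endpoint, TLS flags, and output path below are assumptions and must be adapted to your deployment:

.. code-block:: bash

# Take a snapshot with the v3 API; add --cacert/--cert/--key if etcd uses TLS
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  snapshot save /var/backups/etcd-snapshot.db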

kube-controller-manager

kube-controller-manager runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node Controller: Responsible for noticing and responding when nodes go down.
  • Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.

kube-scheduler

kube-scheduler watches newly created pods that have no node assigned, and selects a node for them to run on.
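
You can watch the scheduler's decisions with ordinary kubectl commands; a sketch (<pod-name> is a placeholder):

.. code-block:: bash

# The NODE column shows where the scheduler placed each pod;
# unscheduled pods stay Pending with no node listed
kubectl get pods -o wide

# Scheduling events (and failures) appear in a pod's event log
kubectl describe pod <pod-name>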

kubelet

kubelet is the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file) and does the following (a way to inspect a running kubelet is sketched after this list):

  • Mounts the pod’s required volumes.
  • Downloads the pod’s secrets.
  • Runs the pod’s containers via docker (or, experimentally, rkt).
  • Periodically executes any requested container liveness probes.
  • Reports the status of the pod back to the rest of the system, by creating a mirror pod if necessary.
  • Reports the status of the node back to the rest of the system.
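
A minimal sketch of inspecting a running kubelet on a node, assuming it is managed by systemd (as on this deployment's CentOS 7 hosts):

.. code-block:: bash

# Check that the kubelet service is active
systemctl status kubelet

# Follow its logs to see probe executions and pod lifecycle events
journalctl -u kubelet -f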

kube-proxy

kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
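
In the default iptables mode, kube-proxy's work is visible in the host's firewall rules; a sketch, assuming iptables mode:

.. code-block:: bash

# Chains and rules programmed by kube-proxy for Services
iptables-save | grep KUBE- | head -n 20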

docker

docker is used for running containers.

POD

A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it, for example keeping a database container and a data container in the same pod, as sketched below.
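
As an illustration of the multi-container case, here is a minimal sketch of a pod whose two containers share one volume; every name and image in it is only an example:

.. code-block:: bash

# Create a pod with a database container and a data helper container
# sharing an emptyDir volume; names and images are placeholders.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: db-with-data
spec:
  containers:
  - name: database
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example
    volumeMounts:
    - name: shared-data
      mountPath: /var/lib/mysql
  - name: data
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
EOF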

Understand Kubernetes Networking in Compass configuration

(Figure: Kubernetes networking in the Compass configuration)

NOTE: The eth1 interface on kube-master and the kube-nodes must have Internet access.

How to deploy Kubernetes

Quick Start:

Only one command is needed to try a virtual deployment, provided you have Internet access: just paste it and run.

 

Deploy a highly available Kubernetes cluster

Jumphost Requirements

The Jumphost requirements are outlined below:

1. Ubuntu 14.04 (Pre-installed).

2. Root access.

3. libvirt virtualization support.

4. Minimum 2 NICs.

- PXE installation Network (Receiving PXE request from nodes and providing OS provisioning)

- IPMI Network (Nodes power control and set boot PXE first via IPMI interface)

- External Network (Optional: Internet access)

5. 16 GB of RAM for a Bare Metal deployment, 64 GB of RAM for a Virtual deployment.

6. CPU cores: 32, Memory: 64 GB, Hard Disk: 500 GB (a Virtual deployment needs a 1 TB hard disk). A quick way to sanity-check these requirements is sketched below.
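
A quick, non-authoritative way to check some of these Jumphost requirements from a shell:

.. code-block:: bash

# CPU virtualization extensions (a non-zero count means VT-x/AMD-V is present)
egrep -c '(vmx|svm)' /proc/cpuinfo

# libvirt is installed and its daemon is reachable
virsh version

# Number and names of NICs
ip -o link show

# Available memory and disk space
free -g
df -h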

Network Requirements

Network requirements include:

1. No DHCP or TFTP server running on networks used by OPNFV.

2. 2-6 separate networks with connectivity between Jumphost and nodes.

- PXE installation Network

- IPMI Network

- Install&mgmt Network*

- Container Service Network*

3. Lights-out (OOB) network access from the Jumphost, with IPMI enabled on the nodes (Bare Metal deployment only).

4. The Container Service Network must have Internet access, meaning a gateway and DNS must be available.

Nodes Configuration (Virtual Deployment)

Virtual machine settings
- VIRT_NUMBER -- the number of nodes for virtual deployment.

- VIRT_CPUS -- the number of CPUs allocated per virtual machine.

- VIRT_MEM -- the memory size (MB) allocated per virtual machine.

- VIRT_DISK -- the disk size allocated per virtual machine.

.. code-block:: bash

export VIRT_NUMBER=${VIRT_NUMBER:-5}
export VIRT_CPUS=${VIRT_CPUS:-4}
export VIRT_MEM=${VIRT_MEM:-16384}
export VIRT_DISK=${VIRT_DISK:-200G}

Retrieving the installation Tarball

First of all, the installation tarball is needed to deploy your OPNFV environment; it contains the packages of the Compass docker images.
The daily build tarball can be retrieved from the OPNFV artifacts repository:
http://artifacts.opnfv.org/compass4nfv.html
NOTE: Search for the keyword "compass4nfv/Euphrates" to locate the tarball.
E.g.
compass4nfv/Euphrates/opnfv-2017-09-18_08-15-13.tar.gz
The tarball name includes the build time, so you can pick a daily build by its timestamp.
The git URL and SHA1 of Compass4nfv are recorded in the accompanying properties files; with these, the corresponding deployment scripts can be retrieved.
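
For example, the daily build named above could be fetched with wget; the exact URL is an assumption built from the repository path and changes with every build:

.. code-block:: bash

# Download a daily build tarball (the filename varies by build time)
wget http://artifacts.opnfv.org/compass4nfv/Euphrates/opnfv-2017-09-18_08-15-13.tar.gz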

Getting the deployment scripts

To retrieve the repository of Compass4nfv on Jumphost use the following command:

- git clone https://gerrit.opnfv.org/gerrit/compass4nfv

NOTE: PLEASE DO NOT GIT CLONE COMPASS4NFV UNDER THE root DIRECTORY (INCLUDING ITS SUBFOLDERS).
E.g. clone it under /home instead:
git clone https://gerrit.opnfv.org/gerrit/compass4nfv /home/compass4nfv

roles setting

The file below is the inventory template for the k8-nosdn-nofeature-ha scenario:

"./deploy/conf/vm_environment/k8-nosdn-nofeature-ha.yml"

 

**Set TYPE and FLAVOR**

E.g.

.. code-block:: yaml

TYPE: virtual
FLAVOR: cluster

 

**Assignment of different roles to servers**

E.g. roles setting for a Kubernetes-only deployment:

 

.. code-block:: yaml

hosts:
  - name: host1
    roles:
      - kube_master
      - etcd
      - ha

  - name: host2
    roles:
      - kube_master
      - etcd
      - ha

  - name: host3
    roles:
      - kube_master
      - etcd
      - ha

  - name: host4
    roles:
      - kube_node

  - name: host5
    roles:
      - kube_node

Start Deployment (Virtual Deployment)
1. Edit deploy.sh

1.1. Set OS version for deployment nodes.
E.g.
.. code-block:: bash

# Set OS version for target hosts
export OS_VERSION=centos7

NOTE: Only CentOS 7 is supported for Kubernetes at the moment.


1.2. Set ISO image corresponding to your code
E.g.

.. code-block:: bash

# Set ISO image corresponding to your code
export ISO_URL=file:///home/compass/compass4nfv.tar.gz

1.3. Set scenario that you want to deploy

E.g.

.. code-block:: bash

# DHA is your dha.yml's path
export DHA=./deploy/conf/vm_environment/k8-nosdn-nofeature-ha.yml

# NETWORK is your network.yml's path
export NETWORK=./deploy/conf/vm_environment/huawei-virtual1/network.yml

# KUBERNETES_VERSION is the switch that selects a Kubernetes deployment;
# if you do not set this variable, OpenStack will be deployed instead.
export KUBERNETES_VERSION="v1.7.3"

2. Run ``deploy.sh``

.. code-block:: bash

./deploy.sh

How to use Kubernetes CLI

  • Log in to the kube-master node:

ssh root@10.1.0.222

(IP: 10.1.0.222, user/pass: root/root)

  • Run the following command to check whether the Kubernetes system pods are running:

           kubectl -n kube-system get pods -o wide

  • Run the following commands to check whether the Kubernetes network component Calico is running:

calicoctl  node status

calicoctl get nodes --out=wide

calicoctl get ipPool

 

Kubernetes Example (Nginx Server Deployment using Kubernetes)

  1. Create a yaml file in the editor of your choice, which will be used to deploy the nginx pods:

    $ vi nginx_pod.yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 2
      selector:
        app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
  2. Create the nginx pod using kubectl
    kubectl create -f nginx_pod.yaml

  3. In the above pod creation process, we created two replicas of the nginx pod; their details can be listed as follows:

    kubectl get pods

    kubectl get rc
  4. Deploy the nginx service using a yaml file in order to expose the nginx pods on port 82:

    $ vi nginx_service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginxservice
      name: nginxservice
    spec:
      ports:
        # The port that this service should serve on.
        - port: 82
      # Label keys and values that must match in order to receive traffic for this service.
      selector:
        app: nginx
      type: LoadBalancer
  5. Create the nginx service using kubectl
    kubectl create -f nginx_service.yaml

  6. The nginx service can be listed as follows (a way to verify it end-to-end is sketched after this list):
     kubectl get services
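
A sketch of verifying the deployment end-to-end; the node IP below is the kube-master address used elsewhere in this guide, and <node-port> is a placeholder for the port the service was actually given:

.. code-block:: bash

# The allocated port appears in the PORT(S) column, e.g. 82:3XXXX/TCP
kubectl describe service nginxservice

# Fetch the nginx welcome page through a node (substitute your own values)
curl http://10.1.0.222:<node-port>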

How to use Kubernetes Dashboard

  1. Log in to the kube-master node:
    ssh root@10.1.0.222
    (IP: 10.1.0.222, user/pass: root/root)
  2. Change the kubernetes-dashboard service type from ClusterIP to NodePort using the following command (how to look up the assigned NodePort is sketched after this list):
    kubectl edit svc kubernetes-dashboard -n kube-system
  3. Find the username and password:
    vim /etc/kubernetes/users/known_users.csv
  4. Enter the following URL in your browser and the dashboard will appear (Firefox is recommended):
    https://192.16.1.222:31746
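
After the service type is changed to NodePort, the assigned port (31746 in the URL above) can be looked up; a minimal sketch:

.. code-block:: bash

# The NodePort assigned to the dashboard shows in the PORT(S) column
kubectl -n kube-system get svc kubernetes-dashboard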

 
