
 

Introduction

Compass has been containerized since the Euphrates release, which makes it easy to deploy on any jump server. Containerized Compass uses five containers instead of a single VM.

Each container represents a microservice; the compass-core functionality is split into these five microservices:

  • Compass-deck : RESTful API and DB Handlers for Compass
  • Compass-tasks : Registered tasks and MQ modules for Compass
  • Compass-cobbler : Cobbler container for Compass
  • Compass-db : Database for Compass
  • Compass-mq : Message Queue for Compass

Compass4nfv also includes several additional containers to satisfy OPNFV requirements:

  • Compass-tasks-osa : compass-tasks' adapter for deploying OpenStack via OpenStack-Ansible
  • Compass-tasks-k8s : compass-tasks' adapter for deploying Kubernetes
  • Compass-repo-osa-ubuntu : optional container to support OPNFV offline installation via OpenStack-Ansible (Ubuntu)
  • Compass-repo-osa-centos : optional container to support OPNFV offline installation via OpenStack-Ansible (CentOS)


  compass_containers
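Once Compass is launched on the jump server, a quick way to confirm these containers are up is to filter the running containers (a sketch; the exact container names may vary slightly between releases):

sudo docker ps --format '{{.Names}}\t{{.Status}}' | grep compass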

Understanding OpenStack networking in the Compass configuration

 

In the Euphrates release, Compass4nfv deploys OpenStack via OpenStack-Ansible.

To help understand the OpenStack networking, the networking diagrams for different scenarios are shown below.

Virtual Deployment OpenStack networking

containerized compass virtual deploy network

 

Virtual Deployment OpenStack networking ( DPDK )

 

containerized compass virtual deploy network DPDK

 

Baremetal Deployment OpenStack networking


containerized compass baremetal deploy network

How to deploy OpenStack

Quick Start:

Only one command is needed to try a virtual deployment, provided you have Internet access. Just paste it and run:

curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart.sh | bash

If you want to deploy a NOHA scenario with 1 controller and 1 compute node, run the following commands:

export SCENARIO=os-nosdn-nofeature-noha.yml
export VIRT_NUMBER=2
curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart.sh | bash
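While the quickstart script runs, you can follow the deployment progress from another terminal by tailing the deployment log (path taken from the troubleshooting section below; it assumes the compass4nfv repo was cloned into the current directory):

tail -f compass4nfv/work/deploy/log/compass-deploy.log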

How to deploy OpenStack without internet access

If your environment cannot access the Internet, you can enable OFFLINE_DEPLOY to run an offline deployment. Just run the following command before deployment:

export OFFLINE_DEPLOY=Enable

Please note that offline preparation of the jump host is not covered here. For jump host preparation, please refer to: http://docs.opnfv.org/en/stable-danube/submodules/compass4nfv/docs/release/installation/offline-deploy.html
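For example, an offline NOHA virtual deployment combines this flag with the variables from the quick start above (a sketch, assuming you already have a local copy of the compass4nfv repo containing quickstart.sh, since there is no Internet access to fetch it):

export OFFLINE_DEPLOY=Enable
export SCENARIO=os-nosdn-nofeature-noha.yml
bash quickstart.sh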

How to add your packages to the repo container used for offline deployment?

If you want to add your own feature packages to the repo container, please follow the steps below.

  • Git clone the repo build code
git clone https://github.com/Compass4NFV/compass-repo-osa-ubuntu.git
  • Add the package names or URLs to "feature_package.conf" in the repo you cloned; some examples are already shown in that file.
cd compass-repo-osa-ubuntu
vim feature_package.conf
  • Build a new repo image with your modification locally
cd compass-repo-osa-ubuntu
repo_tag=locally # you can choose any tag you want
docker build --no-cache=true -t compass4nfv/compass-repo-osa-ubuntu:$repo_tag -f ./Dockerfile ./
  • Build a new tarball of compass
cd compass4nfv
sed -i 's/compass-repo-osa-ubuntu:euphrates/compass-repo-osa-ubuntu:locally/g' build/build.yaml
./build.sh
  • Configure to use the local repo image
cd compass4nfv
echo "export COMPASS_REPO="compass4nfv/compass-repo-osa-ubuntu:$repo_tag" >> deploy/conf/compass.conf
  • Add the offline package URL into your plugin roles, taking rt_kvm as an example.

  • Start a new deployment with the local image
  • After verification, just push it to the origin repo.
git push origin master
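Throughout these steps, you can confirm that the locally built repo image is present before configuring and deploying with it:

sudo docker images | grep compass-repo-osa-ubuntu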

How to access OpenStack horizon in Virtual deployment

Because a NAT bridge is used in virtual deployment, Horizon cannot be accessed directly via the external IP address.

  • Configure iptables rules.

iptables -t nat -A PREROUTING -d $EX_IP -p tcp --dport  $port -j DNAT --to 192.16.1.222:443

 

EX_IP is the server's IP address that can be reached externally. You can use the commands below to find your external IP address:

external_nic=`ip route | grep '^default' | awk '{print $5}'`

ip addr show $external_nic

port is any port in the range 1-65535 that is not already in use on the system.
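For instance, a minimal worked sketch putting these together (the port value 8443 is only an example; pick any free port):

external_nic=`ip route | grep '^default' | awk '{print $5}'`
EX_IP=$(ip addr show $external_nic | grep 'inet ' | head -1 | awk '{print $2}' | cut -d/ -f1)
port=8443
iptables -t nat -A PREROUTING -d $EX_IP -p tcp --dport $port -j DNAT --to 192.16.1.222:443

Horizon should then be reachable at https://$EX_IP:$port from outside.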

The default user is "admin"

  • Get Horizon password

sudo docker cp compass-tasks:/opt/openrc ./

sudo cat openrc | grep OS_PASSWORD

How to use OpenStack CLI

  •  Log in to one of the controllers

ssh root@10.1.0.50

(host1 ip: 10.1.0.50, user/pass: root/root)

(host2 ip: 10.1.0.51, user/pass: root/root)

(host3 ip: 10.1.0.52, user/pass: root/root)

  •  Determine the utility container

 lxc-ls --line |grep utility

root@host1:~# lxc-ls --line | grep utility
host1_utility_container-caf64f73

 
  •  Access the utility container

lxc-attach -n host1_utility_container-xxxxxxxx   (xxxxxxxx is a random string)

Example: use the command below to attach and get the corresponding command line:

root@host1:~# lxc-attach -n $(lxc-ls | grep utility)

       root@host1-utility-container-6dcc600b:~#

  •  Source the admin credentials

source /root/openrc

  •  Run the openstack command

openstack service list

root@host1-utility-container-caf64f73:~# openstack service list
+----------------------------------+------------+----------------+
| ID                               | Name       | Type           |
+----------------------------------+------------+----------------+
| 0569b469714e43f686c942f4c5c09c74 | placement  | placement      |
| 07ddcbbab0b941fbad8a850fa769e159 | heat-cfn   | cloudformation |
| 1979d07026c946aaafe7010a7e1d17de | heat       | orchestration  |
| 348558debc7448ecbd00b408ae28f630 | neutron    | network        |
| 3df2a48c3a12444485b41774268c1c1d | keystone   | identity       |
| 4ae564aaf52644a2a9ff33dcf8a49fb2 | cinder     | volume         |
| 8572eabc4d104c539195b7d271f23d45 | nova       | compute        |
| 962d5f53c2a24461a2613e0ba297eebd | ceilometer | metering       |
| a5bee093baf84c50a5c3af98fce7b1d5 | cinderv2   | volumev2       |
| b43cf8da9b1f476184763d14140e261a | glance     | image          |
| c56a602af17e439ab8e7fe1aef49dd6a | aodh       | alarming       |
| ca0b00060cb14653b55f0a104a3402b2 | cinderv3   | volumev3       |
| de98d8dd944d430bb77daf95e2ac136f | gnocchi    | metric         |
+----------------------------------+------------+----------------+
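For convenience, the three steps above can be combined into a single command run from the controller (a sketch, assuming the same utility container naming):

lxc-attach -n $(lxc-ls | grep utility) -- bash -c "source /root/openrc && openstack service list"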

How to run Functest to test OpenStack

  • Copy openrc from Compass docker into local directory

sudo docker cp compass-tasks:/opt/openrc ./

sudo docker cp compass-tasks:/opt/os_cacert ./

  • Download Functest docker image

sudo docker pull opnfv/functest-healthcheck:euphrates

sudo docker run --privileged=true -id -e INSTALLER_TYPE=compass -e DEPLOY_SCENARIO=os-nosdn-nofeature-ha opnfv/functest-healthcheck:euphrates /bin/bash

  • Get Functest docker id
         sudo docker ps (verify visually)

functest_id=$(sudo docker ps | grep functest | awk '{print $1}')

  • Select functest testcase tier and run

sudo docker cp ./openrc $functest_id:/home/opnfv/functest/conf/openstack.creds

sudo docker cp ./os_cacert  $functest_id:/home/opnfv/functest/conf/os_cacert

sudo docker exec -it $functest_id bash

echo "export OS_CACERT=/home/opnfv/functest/conf/os_cacert" >> /home/opnfv/functest/conf/openstack.creds

functest env prepare

ls /home/opnfv/functest/images   (verify that the CirrOS image is present so Functest can run successfully; if not, download the images as follows)

You need at least a CirrOS 3.5 x86_64 image, which you can download with wget or curl, or use the following script to download all images required for Functest:

download_images.sh
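If you only need the single CirrOS image, a minimal sketch (assuming the standard CirrOS download mirror and the default Functest image directory):

wget -P /home/opnfv/functest/images http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img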

functest tier list

- 0. healthcheck:
['connection_check', 'api_check', 'snaps_health_check']
- 1. smoke:
['vping_ssh', 'vping_userdata', 'tempest_smoke_serial', 'rally_sanity', 'refstack_defcore', 'snaps_smoke']
- 2. features:
['promise', 'doctor-notification']
- 3. components:
['tempest_full_parallel', 'tempest_custom', 'rally_full']
- 4. vnf:
[]

functest tier run healthcheck

healthcheck, connection_check, vping_ssh, or any other tier/test case listed above can be used as the parameter for functest.

If you get ERROR in the results, you can inspect the errors from the Functest command line and figure out what is wrong with the tests or the setup:

vi /home/opnfv/functest/results/functest.log

How to run yardstick to test OpenStack

  • From the jump host, start the Yardstick container
            sudo docker run --privileged=true -itd --name yardstick opnfv/yardstick
  • Get the Yardstick docker id
            yardstick_id=$(sudo docker ps | grep yardstick | awk '{print $1}')
  • Copy the openrc file into the Yardstick container
    • Copy to the jump host from the compass-tasks container
            sudo docker cp compass-tasks:/opt/openrc ./
            sudo docker cp compass-tasks:/opt/os_cacert ./
    • Copy into the Yardstick container from the jump host
            sudo docker cp ./openrc $yardstick_id:/etc/yardstick/openstack.creds
            sudo docker cp ./os_cacert $yardstick_id:/etc/yardstick/os_cacert
  • Enter the Yardstick container and set the environment variables
            sudo docker exec -ti $yardstick_id bash
            echo "export OS_CACERT=/etc/yardstick/os_cacert" >> /etc/yardstick/openstack.creds
            echo "export EXTERNAL_NETWORK=ext-net" >> /etc/yardstick/openstack.creds
  • Configure the Yardstick container environment
            yardstick env prepare
  • Run Yardstick
            cd yardstick
            yardstick task start samples/ping-serial.yaml
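Before starting a task, you can verify that the credentials work from inside the container (a sketch; it assumes the OpenStack client is available in the Yardstick image):

source /etc/yardstick/openstack.creds
openstack image list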

How to deploy Kubernetes

Quick Start:

Only one command is needed to try a virtual deployment, provided you have Internet access. Just paste it and run:

curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart_k8s.sh | bash


How to use Kubernetes CLI

  •  Log in to one of the controllers

ssh root@10.1.0.50

(host1 ip: 10.1.0.50, user/pass: root/root)

(host2 ip: 10.1.0.51, user/pass: root/root)

(host3 ip: 10.1.0.52, user/pass: root/root)

  •  Run the Kubernetes command

kubectl help

kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

  • Follow the Kubernetes example below to create an nginx service.

http://containertutorials.com/get_started_kubernetes/k8s_example.html
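Before creating services, a quick sanity check of the cluster can be helpful (the output will vary with your deployment):

kubectl get nodes
kubectl get pods --all-namespaces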

How to continue a failed deployment

If the deployment fails, the deployment log (including the openstack-ansible part) is located at:

compass4nfv/work/deploy/log/compass-deploy.log

After checking the log, if you don't need to modify anything:

  1. First, log in to the compass-tasks container
    sudo docker exec -it compass-tasks bash
  2. cd /var/ansible/run/openstack_ocata-opnfv2/
  3. Edit the playbook, delete the roles which have been successfully deployed. Keep the failed role and the roles after it.
    vim HA-ansible-multinodes.yml
  4. Run the playbook:
    ansible-playbook -i inventories/inventory.py HA-ansible-multinodes.yml
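Optionally, before re-running, you can sanity-check the trimmed playbook using a standard ansible-playbook option:

ansible-playbook -i inventories/inventory.py HA-ansible-multinodes.yml --syntax-check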

How to debug your role on an existing OpenStack environment deployed by Compass

If the deployment fails, the deployment log (including the openstack-ansible part) is located at:

compass4nfv/work/deploy/log/compass-deploy.log

 

  1. First, log in to the compass-tasks container
    sudo docker exec -it compass-tasks bash

  2. There are two situations here, depending on whether your role is coupled with OpenStack or not:

    Case 1: If your role is coupled with OpenStack, e.g. OpenDaylight, you need to use openstack-ansible to run your role.

    - Put your role into “/etc/ansible/roles/”
    - Create a playbook and put it into “/opt/openstack-ansible/playbooks/”, you can take “/opt/openstack-ansible/playbooks/setup-odl.yml” as an example
    - Run your playbook like this: “openstack-ansible /opt/openstack-ansible/playbooks/setup-odl.yml”

    Case 2: If your role is not coupled with OpenStack, you can use ansible to run your role directly.

    - Put your role into “/var/ansible/run/openstack_ocata-opnfv2/roles/”
    - “cd /var/ansible/run/openstack_ocata-opnfv2/”
    - Delete all the roles listed in “HA-ansible-multinodes.yml” (located in “/var/ansible/run/openstack_ocata-opnfv2/”), and then add your own role to this file.

    The “hosts” field specifies which hosts your role will run on. Keep “remote_user” as root and put your role name under “roles” (a minimal sketch of such a playbook is shown after this list).

    - In the directory: “/var/ansible/run/openstack_ocata-opnfv2/”, run “ansible-playbook -i inventories/inventory.py HA-ansible-multinodes.yml”

  3. If it fails, update your role and run it again as described above.
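A minimal sketch of what the trimmed HA-ansible-multinodes.yml might look like (the host group “all” and the role name “my_role” are illustrative placeholders; adjust them to your own role and target hosts):

cat > /var/ansible/run/openstack_ocata-opnfv2/HA-ansible-multinodes.yml << 'EOF'
---
- hosts: all               # which hosts your role will run on
  remote_user: root        # keep remote_user as root
  roles:
    - my_role              # your role name here
EOF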
