
 

Introduction

Compass has been containerized since the Euphrates release, which makes it easy to deploy on any jump server. Containerized Compass uses five containers instead of a single VM.

Each container represents a microservice; the compass-core functionality is split into these five microservices:

  • Compass-deck : RESTful API and DB Handlers for Compass
  • Compass-tasks : Registered tasks and MQ modules for Compass
  • Compass-cobbler : Cobbler container for Compass
  • Compass-db : Database for Compass
  • Compass-mq : Message Queue for Compass

Compass4nfv adds several more containers to satisfy OPNFV requirements:

  • Compass-tasks-osa : compass-tasks adapter for deploying OpenStack via OpenStack-Ansible
  • Compass-tasks-k8s : compass-tasks adapter for deploying Kubernetes
  • Compass-repo-osa-ubuntu : optional container to support OPNFV offline installation via OpenStack-Ansible (Ubuntu)
  • Compass-repo-osa-centos : optional container to support OPNFV offline installation via OpenStack-Ansible (CentOS)
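
After installation, a quick way to sanity-check that these containers are up on the jump server (a minimal sketch, assuming the default container names listed above):

sudo docker ps --format '{{.Names}}' | grep -E 'compass-(deck|tasks|cobbler|db|mq|repo)'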


(Diagram: compass_containers)

Understanding OpenStack networking in the Compass configuration

 

In the Euphrates release, Compass4nfv deploys OpenStack via OpenStack-Ansible.

To help understand the OpenStack networking, the networking diagrams for the different scenarios are shown below.

Virtual Deployment OpenStack networking

(Diagram: containerized compass virtual deploy network)

 

Virtual Deployment OpenStack networking (DPDK)

 

(Diagram: containerized compass virtual deploy network DPDK)

 

Baremetal Deployment OpenStack networking


(Diagram: containerized compass baremetal deploy network)

How to deploy OpenStack

Quick Start:

Only one command is needed to try a virtual deployment, provided you have Internet access. Just paste it and run:

curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart.sh | bash

If you want to deploy a NOHA scenario with 1 controller and 1 compute node, run the following commands:

export SCENARIO=os-nosdn-nofeature-noha.yml
export VIRT_NUMBER=2
curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart.sh | bash
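
If you prefer to inspect the script before running it (for example, to see which environment variables it honours), a minimal alternative to the one-liner above is:

curl -O https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart.sh
less quickstart.sh      # review the script first
bash quickstart.sh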

How to deploy OpenStack without internet access

If your environment cannot access the Internet, you can enable OFFLINE_DEPLOY to run an offline deployment. Just run the following command before deployment:

export OFFLINE_DEPLOY=Enable

Please note that preparing the jump host for offline use is not covered here. For jump host preparation, please refer to: http://docs.opnfv.org/en/stable-danube/submodules/compass4nfv/docs/release/installation/offline-deploy.html
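
Putting the pieces together, a minimal sketch of an offline NOHA virtual deployment, assuming the jump host has already been prepared as described in the link above:

export OFFLINE_DEPLOY=Enable
export SCENARIO=os-nosdn-nofeature-noha.yml
export VIRT_NUMBER=2
# ...then start the deployment as in the quick start above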

How to add your packages to the repo container used for offline deployment?

If you want to add your own feature packages to the repo container, please follow the steps below.

  • Git clone the repo build code
git clone https://github.com/Compass4NFV/compass-repo-osa-ubuntu.git
  • Add the package names or URLs to "feature_package.conf" in the repo you cloned; some examples are shown there.
cd compass-repo-osa-ubuntu
vim feature_package.conf
  • Build a new repo image with your modification locally
cd compass-repo-osa-ubuntu
repo_tag=locally # you can choose any tag you want
docker build --no-cache=true -t compass4nfv/compass-repo-osa-ubuntu:$repo_tag -f ./Dockerfile ./
  • Build a new tarball of compass
cd compass4nfv
sed -i 's/compass-repo-osa-ubuntu:euphrates/compass-repo-osa-ubuntu:locally/g' build/build.yaml
./build.sh
  • Configure to use the local repo image
cd compass4nfv
echo "export COMPASS_REPO="compass4nfv/compass-repo-osa-ubuntu:$repo_tag" >> deploy/conf/compass.conf
  • Add the offline package URL into your plugin roles (take rt_kvm as an example).

  • Start a new deployment with the local image
  • After verification, just push it to the origin repo.
git push origin master
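
Before starting the new deployment, it can help to confirm that the local image exists and that compass.conf points at it (a minimal check, assuming the "locally" tag used above):

sudo docker images | grep compass-repo-osa-ubuntu    # the locally built tag should be listed
grep COMPASS_REPO deploy/conf/compass.conf           # should reference compass4nfv/compass-repo-osa-ubuntu:locally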

How to access OpenStack Horizon in a virtual deployment

Because a NAT bridge is used in the virtual deployment, Horizon cannot be accessed directly via the external IP address.

  • Configure iptables rules.

iptables -t nat -A PREROUTING -d $EX_IP -p tcp --dport  $port -j DNAT --to 192.16.1.222:443

 

 EX_IP is the server's IP address that can be reached from outside. You can use the commands below to query your external IP address.

external_nic=`ip route | grep '^default' | awk '{print $5}'`

ip addr show $external_nic

 PORT ($port above) is any port in the range 1-65535 that is not already in use on the system.
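
A worked example that puts the pieces together (the port value below is only illustrative; 192.16.1.222:443 is the Horizon address used in the rule above):

external_nic=`ip route | grep '^default' | awk '{print $5}'`
EX_IP=`ip -4 addr show $external_nic | awk '/inet /{print $2}' | cut -d/ -f1`
port=8443    # any free port between 1 and 65535
iptables -t nat -A PREROUTING -d $EX_IP -p tcp --dport $port -j DNAT --to 192.16.1.222:443

Horizon should then be reachable at https://<EX_IP>:<port>.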

The default user is "admin"

  • Get Horizon password

sudo docker cp compass-tasks:/opt/openrc ./

sudo cat openrc | grep OS_PASSWORD

How to use OpenStack CLI

  •  Log in to one of the controllers

ssh root@10.1.0.50

(host1 ip :10.1.0.50, user/pass: root/root);

(host2 ip :10.1.0.51, user/pass: root/root);

(host3 ip :10.1.0.52, user/pass: root/root);

  •  Determine the utility container

 lxc-ls --line |grep utility

root@host1:~# lxc-ls --line | grep utility
host1_utility_container-caf64f73

 
  •  Access the utility container

lxc-attach -n host1_utility_container-xxxxxxxx    # xxxxxxxx is a random string

Example: try the command below for lxc-attach; the resulting command prompt is shown underneath -

root@host1:~#lxc-attach -n $(lxc-ls | grep utility)

       root@host1-utility-container-6dcc600b:~#

  •  Source the admin credentials

source /root/openrc

  •  Run the openstack command

openstack service list

root@host1-utility-container-caf64f73:~# openstack service list
+----------------------------------+------------+----------------+
| ID                               | Name       | Type           |
+----------------------------------+------------+----------------+
| 0569b469714e43f686c942f4c5c09c74 | placement  | placement      |
| 07ddcbbab0b941fbad8a850fa769e159 | heat-cfn   | cloudformation |
| 1979d07026c946aaafe7010a7e1d17de | heat       | orchestration  |
| 348558debc7448ecbd00b408ae28f630 | neutron    | network        |
| 3df2a48c3a12444485b41774268c1c1d | keystone   | identity       |
| 4ae564aaf52644a2a9ff33dcf8a49fb2 | cinder     | volume         |
| 8572eabc4d104c539195b7d271f23d45 | nova       | compute        |
| 962d5f53c2a24461a2613e0ba297eebd | ceilometer | metering       |
| a5bee093baf84c50a5c3af98fce7b1d5 | cinderv2   | volumev2       |
| b43cf8da9b1f476184763d14140e261a | glance     | image          |
| c56a602af17e439ab8e7fe1aef49dd6a | aodh       | alarming       |
| ca0b00060cb14653b55f0a104a3402b2 | cinderv3   | volumev3       |
| de98d8dd944d430bb77daf95e2ac136f | gnocchi    | metric         |
+----------------------------------+------------+----------------+
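
Once the credentials are sourced, other OpenStack CLI commands work the same way, for example:

openstack server list
openstack network list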

How to run Functest to test OpenStack

  • Copy openrc from Compass docker into local directory

sudo docker cp compass-tasks:/opt/openrc ./

sudo docker cp compass-tasks:/opt/os_cacert ./

  • Download Functest docker image

sudo docker pull opnfv/functest-healthcheck:euphrates

sudo docker run --privileged=true -id -e INSTALLER_TYPE=compass -e DEPLOY_SCENARIO=os-nosdn-nofeature-ha opnfv/functest-healthcheck:euphrates /bin/bash

  • Get Functest docker id
         sudo docker ps (verify visually)

functest_id=$(sudo docker ps | grep functest | awk '{print $1}')

  • Select functest testcase tier and run

sudo docker cp ./openrc $functest_id:/home/opnfv/functest/conf/env_file

sudo docker cp ./os_cacert  $functest_id:/home/opnfv/functest/conf/os_cacert

sudo docker exec -it $functest_id bash

echo "export OS_CACERT=/home/opnfv/functest/conf/os_cacert" >> /home/opnfv/functest/conf/env_file

ls /home/opnfv/functest/images (verify that you have the CirrOS images needed to run Functest successfully; if not, download them as follows)

You need at minimum a CirrOS 4.0 x86_64 image, which you can download with wget or curl (cirros_image..., refer to the Internet), or use the following script to download all images required for Functest:

download_images.sh

functest tier list

- 0. healthcheck:
['connection_check', 'api_check', 'snaps_health_check']
- 1. smoke:
['vping_ssh', 'vping_userdata', 'tempest_smoke_serial', 'rally_sanity', 'refstack_defcore', 'snaps_smoke']
- 2. features:
['promise', 'doctor-notification']
- 3. components:
['tempest_full_parallel', 'tempest_custom', 'rally_full']
- 4. vnf:
[]

run_tests -t healthcheck

healthcheck, connection_check, vping_ssh, or any other tier or test case name can be passed as the parameter to run_tests.

If you get ERROR in the results, you can inspect the errors from the Functest command line and figure out what is wrong with the tests or the setup:

vi /home/opnfv/functest/results/functest.log
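
For example, after fixing the setup you can re-run a single test case from the tier list and check the tail of the log, still inside the Functest container:

run_tests -t connection_check
tail -n 50 /home/opnfv/functest/results/functest.log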

How to run yardstick to test OpenStack

  • From the jump host start the yardstick container
             sudo docker run --privileged=true -itd --name yardstick opnfv/yardstick
  • yardstick_id=$(sudo docker ps | grep yardstick | awk '{print $1}')
  • Copy the openrc file into the yardstick container
    • Copy it to the jump host from the compass-tasks container
              sudo docker cp compass-tasks:/opt/openrc ./
              sudo docker cp compass-tasks:/opt/os_cacert ./
    • Copy it into the yardstick container from the jump host
              sudo docker cp ./openrc $yardstick_id:/etc/yardstick/openstack.creds
              sudo docker cp ./os_cacert $yardstick_id:/etc/yardstick/os_cacert
  • sudo docker exec -ti $yardstick_id bash
  • echo "export OS_CACERT=/etc/yardstick/os_cacert" >> /etc/yardstick/openstack.creds
  • echo "export EXTERNAL_NETWORK=ext-net" >> /etc/yardstick/openstack.creds
  • Configure the yardstick container environment
    • Yardstick env prepare
              yardstick env prepare
  • Run yardstick
            cd yardstick
            yardstick task start samples/ping-serial.yaml
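
When the task finishes, Yardstick writes its results through the file dispatcher; a minimal check (the default output path below is an assumption and may differ in your configuration):

cat /tmp/yardstick.out    # default file dispatcher output; adjust if configured differently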

How to deploy Kubernetes

Quick Start:

Only one command is needed to try a virtual deployment, provided you have Internet access. Just paste it and run:

curl https://raw.githubusercontent.com/opnfv/compass4nfv/master/quickstart_k8s.sh | bash


How to use Kubernetes CLI

  •  Log in to one of the controllers

ssh root@10.1.0.50

(host1 ip :10.1.0.50, user/pass: root/root);

(host2 ip :10.1.0.51, user/pass: root/root);

(host3 ip :10.1.0.52, user/pass: root/root);

  •  Run the Kubernetes command

kubectl help

kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

  • Follow the k8s example to create an nginx service.

http://containertutorials.com/get_started_kubernetes/k8s_example.html
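
As a minimal sketch of the kind of nginx service the linked example creates (the exact kubectl syntax depends on the Kubernetes version deployed):

kubectl create deployment nginx --image=nginx      # on older kubectl: kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc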

How to test Stor4nfv in Compass

Stor4NFV provides a storage solution based on Ceph and OpenSDS. Deploy a stor4nfv scenario: k8-nosdn-stor4nfv-ha or k8-nosdn-stor4nfv-noha.

Log in to the controller host:

ssh root@10.1.0.50 (k8-nosdn-stor4nfv-noha scenario)

ssh root@10.1.0.52 (k8-nosdn-stor4nfv-ha scenario)

(host1 ip :10.1.0.50, user/pass: root/root);

(host2 ip :10.1.0.51, user/pass: root/root);

(host3 ip :10.1.0.52, user/pass: root/root);

  • How to test the OpenSDS cluster:

a. Configure the OpenSDS CLI tool.

cp /opt/opensds-v0.1.5-linux-amd64/bin/osdsctl /usr/local/bin

export OPENSDS_ENDPOINT=http://127.0.0.1:50040

export OPENSDS_AUTH_STRATEGY=noauth

b. Check if the pool resource is available.

osdsctl pool list

c. Create a default profile, if it is not available.

osdsctl profile list

osdsctl profile create '{"name": "default", "description": "default policy"}'

d. Create a volume.

osdsctl volume create 1 --name=test-001

e. List all volumes.

osdsctl volume list

f. Delete the volume.

osdsctl volume delete <your_volume_id>

  • How to test the CSI volume plugin:

a. Change the workspace.

cd /opt/opensds-k8s-v0.1.0-linux-amd64

b. Configure opensds endpoint IP.

vim csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml

The IP (127.0.0.1) should be replaced with the actual OpenSDS endpoint IP.

c. Create opensds CSI pods.

kubectl create -f csi/deploy/kubernetes

After this, three pods can be found with "kubectl get pods", like below:

- csi-provisioner-opensdsplugin

- csi-attacher-opensdsplugin

- csi-nodeplugin-opensdsplugin

d. Create an example nginx application.

kubectl create -f csi/examples/kubernetes/nginx.yaml

This example will mount an OpenSDS volume into "/var/lib/www/html".

You can use the following command to get a shell inside the nginx container and verify it:

docker exec -it <nginx container id> /bin/bash
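
Inside the container, a quick way to confirm that the OpenSDS volume is actually mounted at the path mentioned above:

df -h /var/lib/www/html
mount | grep /var/lib/www/html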

e. Clean up steps:

kubectl delete -f csi/examples/kubernetes/nginx.yaml

kubectl delete -f csi/deploy/kubernetes

How to test Barometer in Compass

Deploy a barometer scenario.

  • Test collectd metrics in Gnocchi:

      • Log in to one of the controllers

ssh root@10.1.0.50

(host1 ip :10.1.0.50, user/pass: root/root);

(host2 ip :10.1.0.51, user/pass: root/root);

(host3 ip :10.1.0.52, user/pass: root/root);

      • Access the utility container

lxc-attach -n $(lxc-ls | grep utility)

Source the admin credentials:

source /root/openrc

Run the openstack command:

openstack metric list
+--------------------------------------+---------------------+---------------------------------------------+-----------+-------------+
| id                                   | archive_policy/name | name                                        | unit      | resource_id |
+--------------------------------------+---------------------+---------------------------------------------+-----------+-------------+
| 00d22100-8f26-43c5-ba11-b4062347b3a4 | low                 | 5b8f2a655e54-disk-vda@disk_merged           | Ops/s     | None        |
| 0122812c-9057-4899-8417-efdf10d1da3c | low                 | 5b8f2a655e54-memory@memory.used             | B         | None        |
| 02e0402b-4120-4b27-b82f-04c4080580c6 | low                 | 5b8f2a655e54-cpu-4@cpu.wait                 | jiffies   | None        |
| 0832bbaf-62bc-4dfa-a182-dc752d0a94da | low                 | 5b8f2a655e54-interface-ovs-system@if_errors | Errors/s  | None        |
| 087804ea-0f65-4597-899a-6b7093648e9f | low                 | 5b8f2a655e54-processes@fork_rate            | forks/s   | None        |
| 09278b9e-e1d9-4af6-9a24-c39cf9ab8d14 | low                 | 5b8f2a655e54-disk-loop2p1@disk_time         | s         | None        |
| 0f836fbb-83f3-4eed-a612-69d933db7a48 | low                 | 5b8f2a655e54-cpu-2@cpu.idle                 | jiffies   | None        |
| 1174f1e7-e7b7-41ec-aa68-ccfc1459cba1 | low                 | 5b8f2a655e54-interface-eth0@if_dropped      | Packets/s | None        |
+--------------------------------------+---------------------+---------------------------------------------+-----------+-------------+
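
To check that measures are actually being stored for one of these metrics, you can show its measures by ID (take any ID from the list above; the one below is simply the first entry):

openstack metric measures show 00d22100-8f26-43c5-ba11-b4062347b3a4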


  • test collectd data sent to InfluxDB in Grafana:

Use a VNC viewer to open a web browser and connect to Grafana (http://<serverip>:3000/), using the public_vip IP and port 3000.

Log in with admin/admin

After logging in, click on the Data Sources link in the left menu. There will be a data source 'collectd' of type InfluxDB. This verifies that a database named 'collectd' has been created.

Click on the Dashboards link in the left menu, then the Home menu at the top, to get a list of dashboards. There will be ten dashboards imported, including average_load_dashboard and cpu_usage_dashboard. Select any of them to see incoming data sent by collectd. This verifies that collectd, InfluxDB and Grafana are working properly.

For more details, please use these links:

Barometer Containers

Installing and configuring InfluxDB and Grafana to display metrics with collectd

 

How to continue a failed deployment

If the deployment fails, the deployment log (including the openstack-ansible part) can be found at:

compass4nfv/work/deploy/log/compass-deploy.log

After checking the log, if you don't need to modify anything:

  1. First, log into the compass-tasks docker container
    sudo docker exec -it compass-tasks bash
  2. cd /var/ansible/run/openstack_ocata-opnfv2/
  3. Edit the playbook and delete the roles that have already been deployed successfully; keep the failed role and the roles after it.
    vim HA-ansible-multinodes.yml
  4. Run the playbook:
    ansible-playbook -i inventories/inventory.py HA-ansible-multinodes.yml

How to debug your role on an existing OpenStack environment deployed by Compass

If the deployment fails, the deployment log (including the openstack-ansible part) can be found at:

compass4nfv/work/deploy/log/compass-deploy.log

 

  1. First, log into the compass-tasks docker container
    sudo docker exec -it compass-tasks bash

  2. There are two situations here, depending on whether your role is coupled with OpenStack or not

    Case 1: If your role is coupled with OpenStack, e.g. OpenDaylight, you need to use openstack-ansible to run your role.

    - Put your role into “/etc/ansible/roles/”
    - Create a playbook and put it into “/opt/openstack-ansible/playbooks/”, you can take “/opt/openstack-ansible/playbooks/setup-odl.yml” as an example
    - Run your playbook like this: “openstack-ansible /opt/openstack-ansible/playbooks/setup-odl.yml”

    Case 2: If your role is not coupled with OpenStack, you can use ansible to run your role directly

    - Put your role into “/var/ansible/run/openstack_ocata-opnfv2/roles/”
    - “cd /var/ansible/run/openstack_ocata-opnfv2/”
    - Delete all the roles written in “HA-ansible-multinodes.yml” located in “/var/ansible/run/openstack_ocata-opnfv2/”, and then add your role to this file (see the sketch after this list):

    The “hosts” entry specifies which hosts your role will run on. Keep “remote_user” as root. Put your role name under “roles”.

    - In the directory: “/var/ansible/run/openstack_ocata-opnfv2/”, run “ansible-playbook -i inventories/inventory.py HA-ansible-multinodes.yml”

  3. If it fails, update your role and run it again as described above.
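
A minimal sketch of what the edited HA-ansible-multinodes.yml entry might look like (the host group and role name below are placeholders for your own):

- hosts: controller          # the hosts your role should run on
  remote_user: root
  roles:
    - your_role_name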

