
Intel POD 9 has been assigned to StorPerf for final integration testing in Pharos.

You will need a GPG key (https://fedoraproject.org/wiki/Creating_GPG_Keys) to get access to the POD, so anyone interested in helping should be sure to get set up with Pharos access first; a quick sketch of creating a key is shown below.
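If you do not already have a key, the following is a minimal sketch of creating one and exporting the public part to attach to your access request (the email address below is a placeholder):

  # Generate a new key pair (follow the interactive prompts)
  gpg --gen-key

  # List your keys to confirm the key was created and find its ID
  gpg --list-keys

  # Export the public key in ASCII-armored form to share with the POD administrators
  gpg --armor --export your.name@example.com > pubkey.asc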

  Host                       URL                          OS Login                                FUEL Jump Address
  Jump Host BMC              http://10.2.117.140/         ssh root@10.2.117.141
  Controller Host BMC        http://10.2.117.142/         ssh root@10.2.117.143                   10.9.1.5
  Compute Host 1 BMC         http://10.2.117.144/         ssh root@10.2.117.145                   10.9.1.4
  Compute Host 2 BMC         http://10.2.117.146/         ssh root@10.2.117.147                   10.9.1.6
  FUEL Master                https://10.2.117.141:8443/   ssh root@10.9.1.2 (password r00tme)
  Horizon                    http://10.2.117.141          admin / admin
  StorPerf Graphite Browser  http://10.2.117.141:8000/    ssh storperft@10.2.117.141 -p 5022
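The FUEL master and the controller/compute nodes sit behind the jump host. One convenient (but optional) way to reach them in a single hop is a ProxyCommand entry in your local ~/.ssh/config; the host aliases below are illustrative only:

  # Jump host (public address)
  Host pod9-jump
      HostName 10.2.117.141
      User root

  # FUEL master, reached via the jump host
  Host pod9-fuel
      HostName 10.9.1.2
      User root
      ProxyCommand ssh -W %h:%p root@10.2.117.141

With this in place, "ssh pod9-fuel" connects through the jump host in one step.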

 

Configuration

StorPerf Pharos testing uses FUEL for its base installation and follows the directions in How to Install and Run StorPerf for setup.

The first step is to bring the Jump Host up to date:

  yum update
  yum install -y virt-install virt-viewer
  reboot


The jump host pod9-urb-jump has Xauth installed and SSH X forwarding enabled. The FUEL ISO is downloaded and a new VM is created from it.

XAuth Setup:

  yum install xauth


Edit /etc/ssh/sshd_config to read:

  X11Forwarding yes
  X11UseLocalhost no


Reload sshd to pick up the change:

  service sshd reload

To access virt-viewer from any X window terminal, launch the terminal and run:

  ssh -XC 10.2.117.141
  virt-viewer

The virt-viewer UI will show up in your remote X window session.

Installing FUEL Master

Download FUEL (replace the ISO version as needed):

  wget http://artifacts.opnfv.org/fuel/opnfv-2016-02-10_09-45-08.iso

Because the FUEL master runs as a VM, we need to set up a network bridge for the admin interface.

  # Physical admin interface and the name of the new bridge
  export DEV=enp1s0f1
  export BR=bradmin

  # Clone the interface config as the starting point for the bridge,
  # then strip hardware-specific settings and change the type to Bridge
  cp -p /etc/sysconfig/network-scripts/ifcfg-{$DEV,$BR}
  sed -i -e'/HWADDR/d' -e'/UUID/d' -e"s/$DEV/$BR/" -e's/Ethernet/Bridge/' /etc/sysconfig/network-scripts/ifcfg-$BR

  # Attach the physical interface to the bridge with no IP of its own
  echo DELAY=0 >> /etc/sysconfig/network-scripts/ifcfg-$DEV
  echo 'BOOTPROTO="none"' >> /etc/sysconfig/network-scripts/ifcfg-$DEV
  echo BRIDGE=$BR >> /etc/sysconfig/network-scripts/ifcfg-$DEV

  service network restart
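To confirm the bridge came up with the physical interface attached, a quick sanity check (brctl requires the bridge-utils package):

  ip addr show bradmin
  brctl show bradmin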

Network Assignment

Inside the FUEL master, the following network assignment is used:

  Admin (FUEL PXE) Subnet IF1: 10.9.1.0/24 


Create the FUEL master VM:

  /usr/bin/qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/fuel-master.qcow2 200G
  virt-install --name=fuel-master --ram=16384 --vcpus=8 --accelerate \
    --disk path=/var/lib/libvirt/images/fuel-master.qcow2,format=qcow2,bus=virtio \
    --cdrom opnfv-2016-02-10_09-45-08.iso \
    --network bridge=bradmin,model=virtio \
    --os-type=linux --os-variant=rhel6 --vnc


Note: The following must be done in the virt-viewer UI.

Network setup:

  eth0: 10.9.1.2       netmask 255.255.255.0   gw 10.9.1.1
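Once the FUEL master install finishes and its network is configured, a quick check from the jump host confirms it is reachable over the admin bridge:

  ping -c 3 10.9.1.2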

 

Installing Controller and Compute Nodes

Power on the POD nodes assigned for compute and control.  Each node will PXE boot from the FUEL master and register itself in the FUEL UI; a quick CLI check is sketched below.
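Once the nodes have booted, you can confirm they registered by using the Fuel CLI on the FUEL master (assuming the standard fuel client is present; output columns may vary by release):

  fuel node list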

TODO: document procedure on installing Controller and Compute here.

Current POD 9 Deployment

The following configuration is used:

  • 1 Controller + Ceph node, with 2 TB of Ceph storage
  • 2 Compute + Ceph nodes, with 2 TB of Ceph storage each

This gives 6 TB of raw Ceph capacity (2 TB + 2 × 2 TB); with a replication factor of 2, that leaves 3 TB of usable Ceph storage spanning the 3 nodes.

Running a Test in Pharos

Follow the procedure in Installing StorPerf.

Start the Docker container:

  docker run -t --env-file admin-rc \
    -p 5022:22 -p 5000:5000 -p 8000:8000 \
    -v ~/carbon:/opt/graphite/storage/whisper \
    --name storperf opnfv/storperf
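Before continuing, a quick sanity check that the container is running:

  docker ps --filter name=storperf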

We now proceed with configuration from a system that has network access to the StorPerf container.  In POD 9 this is the jump host, with StorPerf reachable at 10.9.15.138.

  export STORPERF=10.9.15.138:5000

Configure StorPerf

The following will configure StorPerf to run tests with 4 agents and a volume size of 5 GB.

  curl -X POST -H "Content-Type: application/json" \
    -d '{"agent_count":"4", "agent_network":"StorPerf_Agent_Network", "volume_size":"5"}' \
    http://$STORPERF/api/v1.0/configure

This should return a JSON string similar to the following:

{
  "agent_count": 4,
  "agent_network": "StorPerf_Agent_Network",
  "stack_id": "5cc2e98d-6fcb-4a2a-bc40-96d199471b2c",
  "volume_size": 5
}

The StorPerf stack will then be created on the Controller.  Once complete, the stack appears in Horizon under Orchestration, with the agent VMs listed under Instances and their Cinder volumes under Volumes.

Starting a Test

  curl -X POST -H "Content-Type: application/json" \
    -d '{"target":"/dev/vdb", "workload":"rw"}' \
    http://$STORPERF/api/v1.0/job
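The POST returns a short response identifying the submitted job.  Capturing it in a shell variable makes it easy to refer back to later; this sketch makes no assumptions about the exact fields in the response:

  JOB_RESPONSE=$(curl -s -X POST -H "Content-Type: application/json" \
    -d '{"target":"/dev/vdb", "workload":"rw"}' \
    http://$STORPERF/api/v1.0/job)
  echo "$JOB_RESPONSE"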

 

Graphite Dashboard

There is a port forward rule on the jump host that will allow you to view the Graphite dashboard at: http://10.2.117.141:8000/
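For reference, a forward like this can be implemented with iptables DNAT on the jump host.  The rules below are only an illustrative sketch (the actual rule in place may differ), assuming the StorPerf container host is 10.9.15.138 and IP forwarding is enabled on the jump host:

  # Forward port 8000 on the jump host's public address to the StorPerf container host
  iptables -t nat -A PREROUTING -d 10.2.117.141 -p tcp --dport 8000 -j DNAT --to-destination 10.9.15.138:8000
  # Allow the forwarded traffic and masquerade the return path
  iptables -A FORWARD -p tcp -d 10.9.15.138 --dport 8000 -j ACCEPT
  iptables -t nat -A POSTROUTING -d 10.9.15.138 -p tcp --dport 8000 -j MASQUERADE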

Logging in as admin/admin will allow you to load one of the predefined graphs.  The following graph shows a warm-up that fills the volumes once with sequential writes of random data, specifically 80 GB written across eight 10 GB volumes.

(Graph time range: from 2016-04-19, 7:50 PM until 2016-04-21, 9:00 AM)

Another view of the data is annotated with four arrows from left to right.  Leading up to the first arrow, 80 GB of data is written to the volumes to initialize them.  The second arrow marks a warm-up pass in which another 80 GB is written.

  Segment  Test Description                   Block Size  Queue Depth
  1        Initializing 80 GB Cinder volumes  8192        2
  2        Warm up by rewriting 80 GB         8192        2
  3        Sequential write of 160 GB         8192        1
  4        Sequential write of 160 GB         8192        16
  5        Sequential write of 160 GB         8192        128

 

 
