Intel POD 9 has been assigned to StorPerf for final integration testing in Pharos.
You will need a GPG key (https://fedoraproject.org/wiki/Creating_GPG_Keys) to get access to the POD, so if you are interested in helping, please be sure to get set up with Pharos access.
||Host||URL||OS Login||FUEL Jump Address||
|Jump Host BMC|http://10.2.117.140/|ssh email@example.com| |
|Controller Host BMC|http://10.2.117.142/|ssh firstname.lastname@example.org|10.9.1.5|
|Compute Host 1 BMC|http://10.2.117.144/|ssh email@example.com|10.9.1.4|
|Compute Host 2 BMC|http://10.2.117.146/|ssh firstname.lastname@example.org|10.9.1.6|
|FUEL Master|https://10.2.117.141:8443/|ssh email@example.com (password r00tme)| |
|Horizon|http://10.2.117.141|admin / admin| |
|StorPerf Graphite Browser|http://10.2.117.141:8000/|ssh firstname.lastname@example.org -p 5022| |
StorPerf testing in Pharos will use FUEL for the base installation and will follow the directions in How to Install and Run StorPerf for setup.
The first step is to bring the Jump Host up to date:
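For example, assuming an Ubuntu-based jump host (adjust the package manager for your distribution):
sudo apt-get update
sudo apt-get -y dist-upgrade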
The jump host pod9-urb-jump has xauth installed and SSH X forwarding enabled. The FUEL ISO is downloaded and a new VM is created from it.
Edit /etc/ssh/sshd_config to read:
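At a minimum, enabling X forwarding requires the following setting (the rest of the file can stay as it is):
X11Forwarding yes
Restart sshd afterwards so the change takes effect (for example, sudo service ssh restart on Ubuntu).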
To access virt-viewer from any X window terminal:
- Launch an X window terminal.
- Run ssh -XC 10.2.117.141
- The virt-viewer UI will show up in your remote X window session.
Installing FUEL Master
Download FUEL (replace the version as needed):
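For example (the URL below is only a placeholder; substitute the actual FUEL ISO mirror and version):
wget http://<mirror>/fuel-<version>.iso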
Because the FUEL master runs as a VM, we need to set up a network bridge for the admin interface (see the sketch below).
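A minimal sketch of such a bridge on a Debian/Ubuntu jump host, assuming the physical admin NIC is eth1 (the interface and bridge names here are hypothetical; adjust them to the actual POD wiring):
# requires bridge-utils (sudo apt-get install bridge-utils)
auto br-fuel-admin
iface br-fuel-admin inet manual
    bridge_ports eth1
    bridge_stp off
After adding this to /etc/network/interfaces (or a file under /etc/network/interfaces.d/), bring the bridge up with sudo ifup br-fuel-admin.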
Inside the FUEL master, the following interfaces will be used:
Create the FUEL master VM:
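A hedged example using virt-install; the memory, CPU count, disk size, and paths are illustrative only, and br-fuel-admin is the admin bridge created above:
virt-install --name fuel-master \
  --ram 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/fuel-master.qcow2,size=100 \
  --cdrom /path/to/fuel.iso \
  --network bridge=br-fuel-admin \
  --graphics vnc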
Note: The following must be done in the virt-viewer UI.
Installing Controller and Compute Nodes
Power on the POD nodes assigned for compute and control. Each node will PXE boot from FUEL master and register itself in the FUEL UI.
TODO: document the procedure for installing the Controller and Compute nodes here.
Current POD 9 Deployment
The following screenshots show the configuration used:
- 1 Controller + Ceph node with 2 TB of Ceph storage
- 2 Compute + Ceph nodes, each with 2 TB of Ceph storage (2 x 2 TB)
This gives us 6 TB of raw Ceph storage; with a replication factor of 2, that leaves a total of 3 TB of usable Ceph storage spanning 3 nodes.
Running a Test in Pharos
Follow the procedure in Installing StorPerf.
Start the Docker container:
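As a sketch, assuming the opnfv/storperf image and an environment file with the usual OpenStack credentials (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME in KEY=value form); the file name admin.rc and the published ports are illustrative (5000 for the REST API, 8000 for the bundled Graphite web interface referenced later on this page):
docker run -t --name storperf \
  --env-file admin.rc \
  -p 5000:5000 -p 8000:8000 \
  opnfv/storperf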
Configuration is done from a system that has network access to the StorPerf container. In POD 9, I use the jump host, with StorPerf running at 10.9.15.138.
The following will configure StorPerf to run tests with 4 agents and a volume size of 5 GB.
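A sketch of the REST call, assuming StorPerf's configurations endpoint at /api/v1.0/configurations and the agent_count/volume_size parameter names (check the StorPerf API documentation if these have changed):
curl -X POST -H "Content-Type: application/json" \
  -d '{"agent_count": 4, "volume_size": 5}' \
  http://10.9.15.138:5000/api/v1.0/configurations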
This should return a JSON string similar to the following:
Creation of the StorPerf stack on the Controller will then begin. Once it is complete, you should see:
Starting a Test
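A hedged example of kicking off a run similar to the segments described further below, assuming the jobs endpoint at /api/v1.0/jobs and these parameter names; verify the exact names and workload codes against the StorPerf API documentation:
curl -X POST -H "Content-Type: application/json" \
  -d '{"block_sizes": "8192", "queue_depths": "1,16,128", "workload": "ws"}' \
  http://10.9.15.138:5000/api/v1.0/jobs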
There is a port forward rule on the jump host that will allow you to view the Graphite dashboard at: http://10.2.117.141:8000/
Logging in as admin/admin will allow you to load one of the predefined graphs. The following graph shows a warm-up pass that fills the volumes once with sequential writes of random data, specifically writing 80 GB across eight 10 GB volumes.
(From 2016-04-19 7:50 PM until 2016-04-21 9:00 AM)
Another view of the data.
There are four arrows from left to right. Leading up to the first arrow, 80 GB of data is written to the volumes to initialize them. The second arrow marks a warm-up pass of another 80 GB being written.
||Segment||Test Description||Block Size (bytes)||Queue Depth||
|1|Initializing 80 GB Cinder volume|8192|2|
|2|Warm up by rewriting 80 GB|8192|2|
|3|Sequential write of 160 GB|8192|1|
|4|Sequential write of 160 GB|8192|16|
|5|Sequential write of 160 GB|8192|128|