Plugfest 2016 Presentation
There are two proposed components to StorPerf:
- the Manager, which contains the results database, a web UI for reviewing results, and the CLI used to create VMs and test the attached storage
- the Agent, which is installed on each VM and runs the actual performance test against the attached storage
The Manager receives commands to start performance tests. It then:
- Creates a Heat template with the VM size and storage requirements.
- this is where the underlying type of storage is specified
- simultaneous runs can be specified, with multiple VMs created
- an SSH key is supplied so the Manager can control the VM after launch
- network plumbing places the VM on the same subnet as the Manager
- undecided: whether to allow the OS to be chosen, or to have an image ready with FIO already baked in
- Deploys the template
- Heat creates the VM and network port
- Cinder allocates the storage
- Heat connects the network and storage to the VM
- Once the StorPerf Agent VM is ready:
- Copies the StorPerf Agent software to it
- Remotely starts execution with the test configuration
- Preconditions the storage using FIO
- Starts the FIO test run with the supplied parameters
- Gathers periodic metrics from FIO
- Forwards metrics to Carbon for database storage
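The Agent steps above (precondition/run FIO, then forward metrics) could be sketched roughly as follows. This is an illustrative assumption, not the actual StorPerf implementation: the FIO flags, device path, and metric names are placeholders, though Carbon's plaintext protocol really is one `path value timestamp` line per metric, normally sent over TCP to port 2003.

```python
import time

def fio_command(device, runtime_s=60, block_size="4k", rw="randrw"):
    """Assemble an FIO invocation for the attached volume (illustrative flags)."""
    return [
        "fio",
        "--name=storperf",
        "--filename=%s" % device,
        "--direct=1",            # bypass the page cache so the volume is really exercised
        "--rw=%s" % rw,
        "--bs=%s" % block_size,
        "--runtime=%d" % runtime_s,
        "--time_based",
        "--output-format=json",  # JSON output is easy to parse for periodic metrics
    ]

def carbon_lines(prefix, metrics, timestamp=None):
    """Format metrics as Carbon plaintext-protocol lines: 'path value timestamp'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return ["%s.%s %s %d" % (prefix, name, value, ts)
            for name, value in sorted(metrics.items())]

cmd = fio_command("/dev/vdb")
lines = carbon_lines("storperf.vm1", {"iops": 1200, "latency_us": 850}, 1000)
```

The lines would then be written to a socket connected to the Carbon host's port 2003, one metric per line.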
Graphite can be used to:
- View test results in progress
- Report historically on tests saved in Carbon
- Extract test run data
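All three uses go through Graphite's HTTP render endpoint, which takes standard `target`, `from`, and `format` parameters. A minimal sketch of building such a query (the host and metric path are hypothetical):

```python
from urllib.parse import urlencode

def render_url(host, target, start="-1h", fmt="json"):
    """Build a Graphite render-API URL for a metric series."""
    params = urlencode({"target": target, "from": start, "format": fmt})
    return "http://%s/render?%s" % (host, params)

# e.g. fetch the last hour of IOPS data for one test VM as JSON
url = render_url("graphite.example.org", "storperf.vm1.iops")
```

Switching `fmt` to `csv` or `raw` covers the "extract test run data" case.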
When the StorPerf Manager software (a container on the JumpHost, or in a VM) starts up, it needs to talk back to the OpenStack controller so that it can use Heat to fire up all of the slave VMs and attach the Cinder volumes to them. If the StorPerf Manager cannot reach OpenStack, we cannot test the disks.
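Concretely, "talking back to OpenStack" means the Manager must first authenticate against Keystone before it can drive Heat. A sketch of the standard Keystone v3 password-authentication body it would POST to `<auth_url>/auth/tokens` (the credentials, project, and domain here are placeholders):

```python
import json

def keystone_auth_body(username, password, project, domain="Default"):
    """Build a Keystone v3 password-auth request body, scoped to a project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = json.dumps(keystone_auth_body("storperf", "secret", "admin"))
```

A failure at this step is the simplest way the "Manager cannot talk to OpenStack" condition shows up in practice.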
Food for thought
- how will you build the VMs? (in your dev environment)
- what goes into the VM?
- create the heat templates
- upload to ARM
- scripts to run tests and collect results
- they should move to ARM (http://artifacts.opnfv.org/), but how?
- Manual upload to ARM
- Automatic upload with a Jenkins job - ???
- Jenkins job on a slave to deploy the VMs and run the tests in a local lab environment
- There are two levels of tests being run: the first is a unit-level functional test done during the build; the second is the actual performance testing.
- Jenkins job to collect the test result data files: hundreds of lines of statistics (IOPS, latency, bandwidth, throughput, etc.) stored in a Whisper database, which can be exported into other formats
- This is one option: https://wiki.opnfv.org/wiki/jenkins#logging_and_graphing
- Extract baseline run from Graphite
- Compare to current run
- Report pass/fail to Yardstick/QTIP
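The baseline-comparison step could look something like the sketch below. The 10% tolerance, the metric names, and the higher-is-better assumption are all illustrative choices, not decided policy; latency-style metrics (lower is better) would need the opposite check.

```python
def compare_to_baseline(baseline, current, tolerance=0.10):
    """Return the metrics that regressed by more than `tolerance` (fractional)
    relative to the baseline; an empty result means the run passes."""
    failures = {}
    for name, base_value in baseline.items():
        cur = current.get(name)
        if cur is None or cur < base_value * (1.0 - tolerance):
            failures[name] = (base_value, cur)
    return failures

# Hypothetical numbers: IOPS is within tolerance, bandwidth has regressed.
failures = compare_to_baseline({"iops": 1000, "bandwidth_mbps": 400},
                               {"iops": 950, "bandwidth_mbps": 300})
verdict = "PASS" if not failures else "FAIL"
```

The `verdict` (and the failing metrics) would then be what gets reported up to Yardstick/QTIP.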