Project: StorPerf - Storage Performance Benchmarking for NFVI
Open Bug List
| Key | Summary | Created | Updated | Assignee | Status | Resolution | Fix Version |
| STORPERF-214 | Starting a second job while the first is in progress causes errors | Sep 15, 2017 | Oct 08, 2017 | Taseer Ahmed | Open | Unresolved | 5.0.0 |
| STORPERF-186 | Duplicate entries for _warm_up with status query | Jul 04, 2017 | Sep 08, 2017 | Mark Beierl | Resolved | Fixed | 5.0.0 |
| STORPERF-56 | Cannot delete stack if create failed | Jun 16, 2016 | Aug 25, 2017 | Taimoor Alam | Open | Unresolved | 5.0.0 |
| STORPERF-225 | Availability zone assignment does not work | Oct 12, 2017 | Oct 12, 2017 | Unassigned | Open | Unresolved | |
F Release Backlog
| Key | Summary | Assignee | Status |
| STORPERF-223 | Gather more Cinder information | Unassigned | Open |
| STORPERF-217 | Allow user to specify cinder volume type on stack create | Mark Beierl | Open |
| STORPERF-218 | Support Multiple Stacks | Taseer Ahmed | Open |
| STORPERF-222 | Add StorPerf role in XCI | Taseer Ahmed | Open |
| STORPERF-179 | Move to OpenAPI 3.0 | Shrenik Jain | In Progress |
Key Project Facts
This is an outline of test cases. A specification will be written to capture the actual tests and steps, and the input to the test process will be determined by community participation.
Following the SNIA guidelines, StorPerf tests Cinder volumes or Glance ephemeral storage, regardless of the back-end driver; it makes no attempt to read the OpenStack configuration to determine which drivers are in use. Testing consists of:
- Preconditioning of defined Logical Block Address range
- Testing across each combination of: Queue Depths (1, 2, 8) and Block sizes (2KB, 8KB, 16KB)
- For each of 5 workloads: the four corners (100% Read/Seq, Write/Seq, Read/Random, Write/Random) and mixed (70% Read/Random); a sketch of the resulting test matrix follows this list.
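For illustration, the following Python sketch enumerates that test matrix and builds fio-style command lines (an fio-style workload generator, the device path, and the specific flags are assumptions for this example, not the project's actual invocation):

```python
from itertools import product

# Test matrix drawn from the outline above.
QUEUE_DEPTHS = (1, 2, 8)
BLOCK_SIZES = ("2k", "8k", "16k")
# (label, fio rw mode, read percentage for the mixed workload)
WORKLOADS = (
    ("read.sequential", "read", None),
    ("write.sequential", "write", None),
    ("read.random", "randread", None),
    ("write.random", "randwrite", None),
    ("mixed.random", "randrw", 70),   # 70% read / 30% write
)

def fio_args(target, workload, queue_depth, block_size):
    """Build an illustrative fio command line for one matrix cell."""
    label, rw, rwmixread = workload
    args = [
        "fio", "--name", label,
        "--filename", target,          # placeholder device/file under test
        "--rw", rw,
        "--bs", block_size,
        "--iodepth", str(queue_depth),
        "--ioengine", "libaio",
        "--direct", "1",
    ]
    if rwmixread is not None:
        args += ["--rwmixread", str(rwmixread)]
    return args

# 5 workloads x 3 queue depths x 3 block sizes = 45 combinations
for wl, qd, bs in product(WORKLOADS, QUEUE_DEPTHS, BLOCK_SIZES):
    print(" ".join(fio_args("/dev/vdb", wl, qd, bs)))
```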
Object storage testing is planned for a future release.
Testing assumes an HTTP-based API, such as Swift, for accessing object storage.
- Determine max concurrency of SUT with smaller data size (GET/PUT) tests by determining performance plateau
- Determine max TPS of SUT using variable block size payloads (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
- Use 5 different GET/PUT workloads for each: 100/0, 90/10, 50/50, 10/90, 0/100
- Perform separate metadata concurrency test for SUT using List and Head operations
We are especially looking for workload recommendations for testing in this area; an illustrative sketch of the GET/PUT matrix described above follows.
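A rough sketch of that matrix (Python, assuming the `requests` HTTP client; the endpoint URL, container path, and request count are placeholders, not project defaults):

```python
import time
import requests  # assumed HTTP client; any GET/PUT-capable client would do

ENDPOINT = "http://swift.example.com/v1/AUTH_demo/storperf"  # placeholder URL
PAYLOAD_SIZES = (10**3, 10**4, 10**5, 10**6, 10**7, 10**8, 2 * 10**8)  # 1KB .. 200MB
GET_PUT_MIXES = ((100, 0), (90, 10), (50, 50), (10, 90), (0, 100))     # % GET / % PUT
REQUESTS_PER_CELL = 100  # hypothetical sample size per matrix cell

def run_cell(size, get_pct):
    """Issue one GET/PUT mix against a single object, recording per-request latency."""
    payload = b"x" * size
    url = f"{ENDPOINT}/obj_{size}"
    requests.put(url, data=payload)  # seed the object so GETs have a target
    latencies = []
    for i in range(REQUESTS_PER_CELL):
        start = time.monotonic()
        if i % 100 < get_pct:
            requests.get(url)
        else:
            requests.put(url, data=payload)
        latencies.append(time.monotonic() - start)
    return latencies

for size in PAYLOAD_SIZES:
    for get_pct, _put_pct in GET_PUT_MIXES:
        run_cell(size, get_pct)
```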
Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases that test viability against known workload requirements.
Block Storage Metrics
The mainstays for measuring performance in block storage are fairly well established in the storage community, with the minimum being IOPS and Latency. These will be produced in report/tabular format capturing each test combination for:
- Average IOPS for each workload
- Throughput bandwidth. Note that throughput can also be calculated as IOPS * block size (see the short example after this list).
- Average latency for each workload
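For example (illustrative numbers only, not measured results), deriving throughput from an IOPS figure and block size:

```python
def throughput_mbps(iops, block_size_bytes):
    """Derive throughput in MB/s from IOPS and block size."""
    return iops * block_size_bytes / 1_000_000

# Hypothetical: 20,000 IOPS at an 8 KB (8192-byte) block size ~= 164 MB/s
print(throughput_mbps(20_000, 8192))
```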
Object Storage Metrics
This is planned for a future release.
Object storage delivers different storage characteristics than block storage, and so the metrics used to characterize it differ to some degree:
- Transactions per second (throughput can also be calculated from TPS * object size)
- Error rate
- Per-test average latency (a brief sketch of deriving these metrics from raw samples follows this list)
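A minimal sketch of that derivation (the record layout, a list of (latency, success) tuples plus the elapsed wall-clock time, is an assumption for illustration):

```python
def object_metrics(samples, elapsed_seconds):
    """Compute TPS, error rate, and average latency from per-request samples."""
    total = len(samples)
    failures = sum(1 for _, ok in samples if not ok)
    tps = total / elapsed_seconds
    error_rate = failures / total
    avg_latency = sum(latency for latency, _ in samples) / total
    return tps, error_rate, avg_latency

# Hypothetical run: three requests observed over 1.5 seconds
print(object_metrics([(0.12, True), (0.34, True), (0.50, False)], 1.5))
```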
Daniel Smith (lmcdasm)
Edgar StPierre (estpierre)
Iben Rodriguez (ibenr)
Mark Beierl (mbeierl)
Qi Liang (QiLiang)
Shrenik Jain (shrenikjain38)
Taimoor Alam (taimoor.alam)
Taseer Ahmed (linux_geek)
Tim RAULT (trault14)
- Shrenik Jain
- Saksham Agrawal
- Stephen Gooch
- Eddy Raineri
- Taimoor Alam, Technische Universität München email@example.com
- Sam Decker, Unaffiliated, Algonquin College Student
- Chanchal Chatterjee, EMC firstname.lastname@example.org
- Vishal Murgai, Cavium Networks Vishal.Murgai@caviumnetworks.com
- Vikram Dham, Dell V_Dham@Dell.com
- Stephen Blinick email@example.com
- Srinivas Tadepalli, Tata Consultancy Services
- Nataraj Goud, Tata Consultancy Services firstname.lastname@example.org
- Dennis Qin, EMC
Aric Gardner (agardner)
Jose Lausuch (jose.lausuch)
Mark Beierl (mbeierl)