
Project: StorPerf - Storage Performance Benchmarking for NFVI

Description

The purpose of StorPerf is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.

A key challenge to measuring disk performance is to know when the disk (or, for OpenStack, the virtual disk or volume) is performing at a consistent and repeatable level of performance.  Initial writes to a volume can perform poorly due to block allocation, and reads can appear instantaneous when reading empty blocks.  How do we know when the data reported is valid?  The Storage Network Industry Association (SNIA) has developed methods which enable manufacturers to set, and customers to compare, the performance specifications of Solid State Storage devices (Ref).  StorPerf applies this methodology to OpenStack Cinder and Glance services to provide a high level of confidence in the performance metrics in the shortest reasonable time.

Slides and Demos

StorPerf Overview - OPNFV Summit, 2017

StorPerf Demo - Danube Pre-Release

 

Project References

Meetings

StorPerf Team Weekly Meeting
Every second Wednesday at 1500 UTC during the winter
(16:00 CET, 10:00 EST, 07:00 PST)

Every Wednesday at 1400 UTC during NA DST
Chaired by mbeierl (Mark Beierl)

Skype Meeting Link: https://meet.emc.com/mark.beierl/69MEZFLU 
IRC Channel #opnfv-meeting on Freenode

Build Status

Open Bug List

Key | Summary | Created | Updated | Assignee | P | Status | Resolution | Fix Version/s
STORPERF-56 | Cannot delete stack if create failed | Jun 16, 2016 | May 02, 2017 | Unassigned | Major | Open | Unresolved | Danube 3.0

Euphrates Planning

Key | Summary | Assignee | Status
STORPERF-92 | Allow flavour to be set in stack create | Mark Beierl | In Progress
STORPERF-162 | Create a new container for graphing modules | Unassigned | Open
STORPERF-129 | SwaggerUI Container Replacement | Shrenik Jain | In Progress
STORPERF-142 | Graphite Container Selection | Saksham Agrawal | In Progress
STORPERF-178 | Add ability to specify availability zone | Unassigned | Open
STORPERF-177 | Sizing guidelines for carbon db | Unassigned | Open
STORPERF-125 | Break StorPerf down into a series of containers | Unassigned | Open
STORPERF-161 | Removal of matplotlib from existing container | Mark Beierl | Resolved
STORPERF-175 | Support for different agent OS | Unassigned | Open
STORPERF-155 | Read back data from new Graphite | Unassigned | Open
STORPERF-174 | Switch container base to Alpine | Unassigned | Open
STORPERF-160 | Run workload test with new container | Unassigned | Open
STORPERF-164 | Support for internal results navigation | Unassigned | Open
STORPERF-165 | Selection of graph libraries and tools | Unassigned | Open
STORPERF-163 | Results navigation | Unassigned | Open
STORPERF-136 | Pass requests for Swagger from StorPerf flask. | Unassigned | Open
STORPERF-152 | Investigate clock source alternatives | Unassigned | Open
STORPERF-143 | Introduce Graphite Container | Unassigned | Open
STORPERF-113 | MS2: (05/22) Test plan shared with the test team | Unassigned | Open
STORPERF-114 | MS3: (05/30) Installer integration with OpenStack | Unassigned | Open
STORPERF-115 | MS4: (06/05) Infrastructure updates completed | Unassigned | Open
STORPERF-116 | MS5: (07/21) Scenario integration and Feature Freeze | Unassigned | Open
STORPERF-117 | MS6: (08/11) Test Cases and Preliminary Documentation Completed | Unassigned | Open
STORPERF-118 | MS7: (09/01) Stable branch window closed | Unassigned | Open
STORPERF-120 | MS9: (09/20) JIRA issues assigned to the release closed or deferred | Unassigned | Open
STORPERF-121 | MS10: (09/22) Documentation completed | Unassigned | Open
STORPERF-119 | MS8: (09/18) Formal test execution completed | Unassigned | Open
STORPERF-122 | MS11: (09/25) Release | Unassigned | Open
STORPERF-101 | Mock up StorPerf steady state report | Unassigned | Open
STORPERF-94 | ReST API for logs | Unassigned | Open
STORPERF-50 | Latency Test Steady State Convergence Report | Unassigned | Open

Key Project Facts

Project: Storage Performance Benchmarking for NFVI (storperf)
Project Creation Date: 2015-09-15
Project Category: Integration and Testing
Lifecycle State: Incubation
Primary Contact: mark.beierl@emc.com
Project Lead: mark.beierl@emc.com
Jira Project Name: Storage Performance Benchmarking for NFVI
Jira Project Prefix: STORPERF
Mailing list tag: [storperf]
Repository: storperf

Committers:
ferenc.f.farkas@ericsson.com
mark.beierl@emc.com
jose.lausuch@ericsson.com

Link to TSC approval of the project:
  http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-09-15-13.59.log.html

Link to approval of additional committers:
  http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2015-December/007109.html
  https://lists.opnfv.org/pipermail/opnfv-tsc/2017-April/003419.html

Test Cases

This is an outline of test cases. A specification will be written capturing the actual tests and steps, and the input to the test process will be shaped by community participation.

Block Storage

Following the SNIA guidelines, StorPerf tests Cinder volumes or Glance ephemeral storage regardless of the back-end driver. StorPerf makes no attempt to read the OpenStack configuration to determine which drivers are in use.

  1. Preconditioning of defined Logical Block Address range
  2. Testing across each combination of: Queue Depths (1, 2, 8) and Block sizes (2KB, 8KB, 16KB)
  3. For each of 5 workloads: the four corners (100% sequential read, 100% sequential write, 100% random read, 100% random write) plus one mixed workload (70% read / 30% write, random access).
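The full test matrix above is the cross product of queue depths, block sizes, and workloads. A minimal sketch of how it can be enumerated follows; the variable names and workload labels here are illustrative, not StorPerf's actual configuration keys.

```python
from itertools import product

# Parameter values taken from the test outline above; the names
# QUEUE_DEPTHS, BLOCK_SIZES_KB, and WORKLOADS are ours, not StorPerf's.
QUEUE_DEPTHS = [1, 2, 8]
BLOCK_SIZES_KB = [2, 8, 16]
WORKLOADS = [
    "100% sequential read",
    "100% sequential write",
    "100% random read",
    "100% random write",
    "70% read / 30% write, random",
]

def test_matrix():
    """Yield every (queue depth, block size KB, workload) combination."""
    return list(product(QUEUE_DEPTHS, BLOCK_SIZES_KB, WORKLOADS))

matrix = test_matrix()
print(len(matrix))  # 3 queue depths x 3 block sizes x 5 workloads = 45 runs
```

Each tuple in the result corresponds to one benchmark run after preconditioning.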

Object Storage

This is planned for a future release.

Testing assumes an HTTP-based API, such as Swift, for accessing object storage.

  1. Determine maximum concurrency of the SUT with small-payload GET/PUT tests by identifying the performance plateau
  2. Determine maximum TPS of the SUT using variable payload sizes (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
  3. Use 5 different GET/PUT workload mixes for each: 100/0, 90/10, 50/50, 10/90, 0/100
  4. Perform a separate metadata concurrency test for the SUT using List and Head operations
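The plateau search in step 1 can be sketched as a simple ramp: increase concurrency until throughput stops improving by more than some threshold. This is a sketch only; `measure_tps` is a hypothetical stand-in for a real GET/PUT benchmark run, and the 5% threshold is an assumed value, not a StorPerf setting.

```python
def find_max_concurrency(measure_tps, start=1, limit=1024, threshold=0.05):
    """Double concurrency until TPS gains fall below `threshold`
    (the performance plateau), then report the plateau point.

    measure_tps(concurrency) is a hypothetical callable that runs a
    small-payload GET/PUT test and returns transactions per second.
    """
    best_conc, best_tps = start, measure_tps(start)
    conc = start * 2
    while conc <= limit:
        tps = measure_tps(conc)
        if tps < best_tps * (1 + threshold):
            break  # gains have flattened out: plateau reached
        best_conc, best_tps = conc, tps
        conc *= 2
    return best_conc, best_tps

# Synthetic SUT that scales linearly, then saturates at 10,000 TPS:
fake_sut = lambda c: min(c * 250, 10_000)
print(find_max_concurrency(fake_sut))  # (64, 10000)
```

The doubling ramp keeps the number of benchmark runs logarithmic in the concurrency limit, which matters when each measurement is itself a multi-minute test.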

We are especially looking for workload recommendations for testing in this area.

Metrics

Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases that test viability against known workload requirements.

Block Storage Metrics

The mainstays for measuring performance in block storage are fairly well established in the storage community, with the minimum being IOPS and Latency. These will be produced in report/tabular format capturing each test combination for:

  1. Average IOPS for each workload
  2. Throughput (bandwidth); note that throughput can also be calculated as IOPS * block size
  3. Average latency for each workload

Object Storage Metrics

This is planned for a future release.

Object storage exhibits different performance characteristics than block storage, so the metrics used to characterize it differ to some degree:

  1. Transactions per second (throughput can also be calculated from TPS * object size)
  2. Error rate
  3. Per-test average latency

Contributors

Group: opnfv-gerrit-storperf-contributors
Daniel Smith (lmcdasm)
daniel.smith@ericsson.com
Edgar StPierre (estpierre)
edgar.stpierre@dell.com
Iben Rodriguez (ibenr)
linuxfoundation@ibenit.com
Mark Beierl (mbeierl)
mark.beierl@dell.com
Qi Liang (QiLiang)
liangqi1@huawei.com
Tim RAULT (trault14)
tim.rault@cengn.ca

Emeritus Contributors

Committers

Group: opnfv-gerrit-storperf-submitters
Aric Gardner (agardner)
agardner@linuxfoundation.org
Edgar StPierre (estpierre)
edgar.stpierre@dell.com
Jose Lausuch (jose.lausuch)
jose.lausuch@ericsson.com
Mark Beierl (mbeierl)
mark.beierl@dell.com

Emeritus Committers
