
Project: StorPerf - Storage Performance Benchmarking for NFVI

Description

The purpose of StorPerf is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.

A key challenge in measuring disk performance is knowing when the disk (or, for OpenStack, the virtual disk or volume) is performing at a consistent and repeatable level. Initial writes to a volume can perform poorly due to block allocation, and reads can appear instantaneous when reading empty blocks. How do we know when the data reported is valid? The Storage Network Industry Association (SNIA) has developed methods that enable manufacturers to set, and customers to compare, the performance specifications of solid-state storage devices (Ref). StorPerf applies this methodology to the OpenStack Cinder and Glance services to provide a high level of confidence in the performance metrics in the shortest reasonable time.
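The SNIA Solid State Storage Performance Test Specification defines steady state roughly as follows: over a measurement window of consecutive test rounds, the data excursion stays within 20% of the window average and the slope excursion within 10%. A minimal sketch of such a check (the function name and parameters here are our illustration, not StorPerf code):

```python
# Minimal sketch of a SNIA-style steady-state check (illustrative only).
# A metric series (e.g., IOPS per round) is treated as steady over the last
# `window` rounds when:
#   - the max-min excursion is within 20% of the window average, and
#   - the best-fit slope, projected across the window, is within 10%.
# Thresholds follow the SNIA SSS PTS convention; the code is not StorPerf's.

def steady_state(samples, window=5, range_limit=0.20, slope_limit=0.10):
    if len(samples) < window:
        return False
    recent = samples[-window:]
    average = sum(recent) / window

    # Data excursion: spread of the window relative to its average.
    if (max(recent) - min(recent)) > range_limit * average:
        return False

    # Slope excursion: least-squares slope over the window, projected
    # across the whole window, relative to the average.
    xs = range(window)
    x_mean = sum(xs) / window
    slope = (
        sum((x - x_mean) * (y - average) for x, y in zip(xs, recent))
        / sum((x - x_mean) ** 2 for x in xs)
    )
    return abs(slope * (window - 1)) <= slope_limit * average
```

Until this condition holds, StorPerf keeps exercising the volume rather than reporting the warm-up numbers.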

Slides and Demos

StorPerf - Using OpenStack to Measure OpenStack Cinder Performance (OpenStack Days Canada)

StorPerf Not Quite Live Demo (OpenStack Days Canada)

Storage Performance Indicators - Powered by StorPerf and QTIP

This video gives a little background on how StorPerf provides reliable metrics for QTIP.

StorPerf Overview - OPNFV Summit, 2017

StorPerf Metrics Deep Dive

StorPerf Demo - Danube Pre-Release

 

Project References

Meetings

StorPerf Team Weekly Meeting
Every Thursday at 1500 UTC during winter
(16:00 CET, 10:00 EST, 07:00 PST)

Every Thursday at 1400 UTC during North American DST
Chaired by mbeierl (Mark Beierl)

Zoom: https://zoom.us/j/5014627785
IRC: #opnfv-meeting on Freenode

 


Build Status

Open Bug List


Euphrates Backlog


F Release Backlog


Key Project Facts

Project: Storage Performance Benchmarking for NFVI (storperf)
Project Creation Date: 2015-09-15
Project Category: Integration and Testing
Lifecycle State: Incubation
Primary Contact: mark.beierl@emc.com
Project Lead: mark.beierl@emc.com
Jira Project Name: Storage Performance Benchmarking for NFVI
Jira Project Prefix: STORPERF
Mailing list tag: [storperf]
Repository: storperf

Committers:
mark.beierl@emc.com
jose.lausuch@ericsson.com
taseer94@gmail.com
shrenik.jain@research.iiit.ac.in

Link to TSC approval of the project:
  http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-09-15-13.59.log.html

Link to approval of additional committers:
  http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2015-December/007109.html
  https://lists.opnfv.org/pipermail/opnfv-tsc/2017-April/003419.html

Link to approval of inactive committers:
  https://lists.opnfv.org/pipermail/opnfv-tsc/2017-August/003680.html

Test Cases

This is an outline of the test cases. A specification will be written capturing the actual tests and steps, and the input to the test process will be determined by community participation.

Block Storage

Following the SNIA guidelines, StorPerf tests Cinder volumes or Glance ephemeral storage regardless of the back-end driver. StorPerf makes no attempt to read the OpenStack configuration to determine which drivers are in use.

  1. Preconditioning of defined Logical Block Address range
  2. Testing across each combination of: Queue Depths (1, 2, 8) and Block sizes (2KB, 8KB, 16KB)
  3. For each of 5 workloads: Four corners (100% Read/Seq, Write/Seq, Read/Random, Write/Random) and mixed (70% Read/Random).
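The combinations above multiply out to 45 runs (3 queue depths x 3 block sizes x 5 workloads). A sketch of the enumeration (field and workload names are illustrative, not StorPerf's API):

```python
from itertools import product

# Illustrative enumeration of the block-storage test matrix described above.
# 3 queue depths x 3 block sizes x 5 workloads = 45 combinations.
QUEUE_DEPTHS = [1, 2, 8]
BLOCK_SIZES_KB = [2, 8, 16]
WORKLOADS = ["read.seq", "write.seq", "read.rand", "write.rand", "rw.rand.70r"]

matrix = [
    {"queue_depth": qd, "block_size_kb": bs, "workload": wl}
    for qd, bs, wl in product(QUEUE_DEPTHS, BLOCK_SIZES_KB, WORKLOADS)
]
print(len(matrix))  # 45
```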

Object Storage

This is planned for a future release.

Testing assumes an HTTP-based API, such as Swift, for accessing object storage.

  1. Determine the maximum concurrency of the SUT with small data size GET/PUT tests by finding the performance plateau
  2. Determine the maximum TPS of the SUT using variable payload sizes (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
  3. Use 5 different GET/PUT workload mixes for each: 100/0, 90/10, 50/50, 10/90, 0/100
  4. Perform a separate metadata concurrency test for the SUT using List and Head operations
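One way to realize step 1 is to ramp concurrency until throughput stops improving by a meaningful margin. A hypothetical sketch, assuming a caller-supplied `measure_tps` probe (not a StorPerf API; object tests are a future release):

```python
def find_plateau(measure_tps, max_workers=256, min_gain=0.05):
    """Double concurrency until TPS stops improving by at least `min_gain`.

    `measure_tps(workers)` is a caller-supplied probe that runs a short
    GET/PUT burst at the given concurrency and returns transactions/sec.
    (Hypothetical helper for illustration only.)
    """
    best_workers, best_tps = 1, measure_tps(1)
    workers = 2
    while workers <= max_workers:
        tps = measure_tps(workers)
        if tps < best_tps * (1 + min_gain):
            break  # throughput plateaued
        best_workers, best_tps = workers, tps
        workers *= 2
    return best_workers, best_tps
```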

Workload recommendations for testing in this area are especially welcome.

Metrics

Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases that test viability against known workload requirements.

Block Storage Metrics

The mainstay metrics for block storage are well established in the storage community, with the minimum being IOPS and latency. These will be produced in report/tabular format, capturing each test combination:

  1. Average IOPS for each workload
  2. Throughput (bandwidth). Note that throughput can also be calculated as IOPS * block size.
  3. Average latency for each workload
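As a worked example of the IOPS-to-throughput relationship in item 2 (the numbers are illustrative, not StorPerf results):

```python
# Throughput derived from IOPS and block size, as noted in item 2.
iops = 10_000
block_size_kb = 8
throughput_mb_s = iops * block_size_kb / 1024  # KB/s -> MB/s
print(throughput_mb_s)  # 78.125
```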

Object Storage Metrics

This is planned for a future release.

Object storage delivers different storage characteristics than block storage, so the metrics used to characterize it differ to some degree:

  1. Transactions per second (throughput can also be calculated from TPS * object size)
  2. Error rate
  3. Per-test average latency

Contributors

Group: opnfv-gerrit-storperf-contributors

Daniel Smith (lmcdasm) - daniel.smith@ericsson.com
Edgar StPierre (estpierre) - edgar.stpierre@dell.com
Iben Rodriguez (ibenr) - linuxfoundation@ibenit.com
Mark Beierl (mbeierl) - mark.beierl@dell.com
Qi Liang (QiLiang) - liangqi1@huawei.com
Shrenik Jain (shrenikjain38) - shrenik.jain@research.iiit.ac.in
Taimoor Alam (taimoor.alam) - taimoor.alam@tum.de
Taseer Ahmed (linux_geek) - taseer94@gmail.com
Tim Rault (trault14) - tim.rault@cengn.ca
