
Project: StorPerf - Storage Performance Benchmarking for NFVI

Description

The purpose of StorPerf is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VNF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.

StorPerf Demo - Danube Pre-Release

StorPerf Overview OPNFV Summit, 2016

The benchmarks developed for block and object storage will be sufficiently varied to provide a good preview of expected storage performance behavior for any type of VNF workload. The elements of the project include:

  • Test Case definition
  • Metrics definition
  • Test Process definition
  • Tool development

Some of these are expanded further below.

Project References

Meetings

StorPerf Team Weekly Meeting
Every second Wednesday at 1500 UTC during the winter
(16:00 CET, 10:00 EST, 07:00 PST)
Every second Wednesday at 1400 UTC during NA DST
Chaired by mbeierl (Mark Beierl)
IRC Channel #opnfv-meeting on Freenode

https://global.gotomeeting.com/join/852700725


United States (Toll-free): 1 877 309 2073
United States: +1 (571) 317-3129
Access Code: 852-700-725

Open Bug List

Key: STORPERF-56
Summary: Cannot delete stack if create failed
Created: Jun 16, 2016
Updated: Dec 12, 2016
Assignee: Unassigned
Priority: Major
Status: Open
Resolution: Unresolved
Fix Version/s: Danube 1.0

Danube Planning

Key Project Facts

Project: Storage Performance Benchmarking for NFVI (storperf)
Project Creation Date: 2015-09-15
Project Category: Integration and Testing
Lifecycle State: Incubation
Primary Contact: mark.beierl@emc.com
Project Lead: mark.beierl@emc.com
Jira Project Name: Storage Performance Benchmarking for NFVI
Jira Project Prefix: STORPERF
Mailing list tag: [storperf]
Repository: storperf

Committers:
edgar.stpierre@emc.com
ferenc.f.farkas@ericsson.com
mark.beierl@emc.com
jose.lausuch@ericsson.com

Link to TSC approval of the project:
  http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-09-15-13.59.log.html

Link to approval of additional committers:
  http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2015-December/007109.html

StorPerf Project Scope

StorPerf testing addresses both block storage and object stores, using different test suites for each. There is limited value in testing locally attached storage, so the focus is primarily on distributed/external storage environments.

StorPerf is intended to run standalone benchmark tools and to provide integration with test frameworks such as Qtip and Yardstick.

Use Cases

There are three applicable use cases for these storage performance benchmarks:

  1. An OPNFV test lab manager wants to characterize expected storage behavior in a test NFVI deployment. This will include both a preconditioning phase for each storage environment and the broadest set of test cases across all identified storage services. It will provide VNF test applications with information about expected storage performance, and it will integrate with existing test lab tool chains.
  2. A Service Provider wants to validate storage performance in an NFVI staging environment prior to production deployment. This will validate performance expectations using pass/fail criteria, with the same preconditioning and test cases as in the test lab, and will integrate with project Bootstrap.
  3. A Service Provider wants to isolate performance problems in a production NFVI environment. This will use a much narrower set of test cases to minimize impact on the production environment and will rely on manual deployment and control of the test VMs.

Timeline

The high level plan for StorPerf is to deliver (minimally) test requirements and test process specifications in the Brahmaputra release timeframe. Block performance testing will lead object testing, and could also be delivered in Brahmaputra, though any such delivery would be asynchronous to, and largely independent of, the Brahmaputra release mechanism. In the C release, we will complete object store testing and integration with Qtip and Yardstick.

Project Planning: TBD

Test Cases

This is an outline of the test cases; a specification will be written capturing the actual tests and steps. The input to the test process will, of course, be determined by community participation.

Block Storage

These cases assume iSCSI-attached storage, though locally attached or Fibre Channel-attached storage could also be tested.

  1. Preconditioning of the defined Logical Block Address range (period TBD)
  2. Testing across each combination of queue depths (1, 16, 128) and block sizes (4 KB, 64 KB, 1 MB)
  3. For each of 5 workloads: four corners (100% read/sequential, write/sequential, read/random, write/random) and mixed (70% read/random); the sketch after this list enumerates the full matrix
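
For block storage, fio is one commonly used standalone benchmark tool. The Python sketch below is purely illustrative and is not StorPerf's actual implementation: the device path, runtime, and fio option choices are assumptions. It shows how the 3 x 3 x 5 = 45 combinations above can be enumerated into fio command lines:

    # Illustrative only: enumerate the block storage test matrix and build
    # an fio command line for each combination. The device path, runtime,
    # and option choices here are assumptions, not StorPerf's actual code.
    import itertools

    QUEUE_DEPTHS = [1, 16, 128]
    BLOCK_SIZES = ["4k", "64k", "1m"]
    # (fio rw mode, read percentage): four corners plus the mixed workload
    WORKLOADS = [
        ("read", 100),       # 100% sequential read
        ("write", 0),        # 100% sequential write
        ("randread", 100),   # 100% random read
        ("randwrite", 0),    # 100% random write
        ("randrw", 70),      # 70% read / 30% write, random
    ]

    def fio_commands(device="/dev/vdb", runtime_s=300):
        for qd, bs, (rw, read_pct) in itertools.product(
                QUEUE_DEPTHS, BLOCK_SIZES, WORKLOADS):
            cmd = [
                "fio",
                f"--name={rw}-{bs}-qd{qd}",
                f"--filename={device}",
                f"--rw={rw}", f"--bs={bs}", f"--iodepth={qd}",
                "--ioengine=libaio", "--direct=1",
                f"--runtime={runtime_s}", "--time_based",
                "--output-format=json",
            ]
            if rw == "randrw":
                cmd.append(f"--rwmixread={read_pct}")
            yield cmd

    for cmd in fio_commands():   # 3 x 3 x 5 = 45 combinations
        print(" ".join(cmd))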

Object Storage

These cases assume an HTTP-based API, such as Swift, for accessing object storage.

  1. Determine the maximum concurrency of the SUT with smaller data size (GET/PUT) tests by finding the performance plateau
  2. Determine the maximum TPS of the SUT using variable-size payloads (1 KB, 10 KB, 100 KB, 1 MB, 10 MB, 100 MB, 200 MB)
  3. Use 5 different GET/PUT workload mixes for each: 100/0, 90/10, 50/50, 10/90, 0/100 (see the sketch after this list)
  4. Perform a separate metadata concurrency test for the SUT using List and Head operations
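
As a rough illustration of the GET/PUT mixes in item 3, the following Python sketch drives a Swift-style HTTP object API at a fixed concurrency and records per-request success and latency. The endpoint URL, auth token, object size, and naming scheme are all assumptions for the example; a real run would also sweep concurrency upward until throughput plateaus (item 1).

    # Illustrative only: drive one GET/PUT mix against a Swift-style HTTP
    # object API. Endpoint, token, object size, and naming are assumptions.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    ENDPOINT = "http://swift.example.com/v1/AUTH_test/storperf"  # assumed
    HEADERS = {"X-Auth-Token": "example-token"}                  # assumed
    PAYLOAD = b"x" * 10 * 1024  # 10 KB objects for this example run

    def one_op(i, read_pct):
        # Assumes obj-0 .. obj-999 were pre-populated so GETs can succeed.
        url = f"{ENDPOINT}/obj-{i % 1000}"
        start = time.monotonic()
        try:
            if random.randrange(100) < read_pct:
                r = requests.get(url, headers=HEADERS, timeout=30)
            else:
                r = requests.put(url, headers=HEADERS, data=PAYLOAD, timeout=30)
            ok = r.status_code < 400
        except requests.RequestException:
            ok = False
        return ok, time.monotonic() - start

    def run_mix(read_pct, n_ops=1000, concurrency=16):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(lambda i: one_op(i, read_pct), range(n_ops)))
        return results, time.monotonic() - start

    # Sweep the five GET/PUT mixes (percent reads) from the list above.
    for read_pct in (100, 90, 50, 10, 0):
        results, wall_clock = run_mix(read_pct)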

We are especially looking for workload recommendations for testing in this area.

Metrics

Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases that test viability against known workload requirements.

Block Storage Metrics

The mainstays for measuring performance in block storage are fairly well established in the storage community, with the minimum being IOPS and Latency. These will be produced in report/tabular format capturing each test combination for:

  1. IOPS at a fixed maximum latency (TBD; we could also choose to report IOPS when the test hits the latency "wall"). Note that throughput can be calculated as IOPS * block size; a worked example follows this list.
  2. Average latency for each workload at different IOPS levels
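
A quick worked example of the throughput derivation noted in item 1, with illustrative numbers only:

    # Throughput follows directly from measured IOPS and block size.
    # Example: 20,000 IOPS at a 4 KB (4096-byte) block size.
    iops = 20_000
    block_size_bytes = 4 * 1024
    throughput_mb_s = iops * block_size_bytes / 1_000_000  # decimal MB/s
    print(throughput_mb_s)  # 81.92 MB/s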

Object Storage Metrics

Object storage delivers different storage characteristics than block storage, so the metrics used to characterize it differ to some degree:

  1. Transactions per second (throughput can also be calculated as TPS * object size; see the sketch after this list)
  2. Error rate
  3. Per-test average latency
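
A minimal sketch of how these three metrics, plus derived throughput, fall out of per-transaction results; it assumes the (succeeded, latency) tuples and wall-clock time produced by a driver like the one sketched under Object Storage above:

    # Illustrative only: summarize object storage metrics from a list of
    # (succeeded, latency_seconds) tuples and the run's wall-clock time.
    def summarize(results, wall_clock_s, object_size_bytes):
        latencies = [lat for ok, lat in results if ok]
        tps = len(latencies) / wall_clock_s                     # transactions/s
        error_rate = 1 - len(latencies) / len(results)          # failed fraction
        avg_latency = sum(latencies) / len(latencies) if latencies else None
        throughput_mb_s = tps * object_size_bytes / 1_000_000   # TPS * object size
        return tps, error_rate, avg_latency, throughput_mb_s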

See also future extensions below.

Future Project Extensions

These are ideas for extending StorPerf in the second and later releases.

  1. Definition of more extensive metrics to measure performance (e.g., I/O Latency variation for object streaming); some of these may require contributions to upstream open source test tools
  2. Time-to-first-write for newly provisioned block volumes. This is intended to measure the impact of zero-out functions performed by storage systems when a volume is provisioned (a minimal sketch follows this list).
  3. Full integration with Qtip and Jenkins for automated deployment and reporting
  4. Create a separate deliverable (document) to capture typical/expected VNF storage performance requirements using the same metrics, for those VNFs that require block or object storage I/O. This can be used to define pass/fail criteria for test lab deployments.
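
For extension item 2, a minimal sketch of the time-to-first-write measurement might look like the following; the device path is an assumption, and the volume is presumed to be freshly provisioned and attached just before the call:

    # Illustrative only: time the first synchronous write to a freshly
    # provisioned and attached volume. The device path is an assumption.
    import os
    import time

    def time_to_first_write(device="/dev/vdc", block_size=4096):
        buf = b"\0" * block_size
        fd = os.open(device, os.O_WRONLY | os.O_SYNC)  # synchronous writes
        try:
            start = time.monotonic()
            os.write(fd, buf)   # first write; may wait on backend zero-out
            return time.monotonic() - start
        finally:
            os.close(fd)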

Contributors

Group: opnfv-gerrit-storperf-contributors
Daniel Smith (lmcdasm)
daniel.smith@ericsson.com
Edgar StPierre (estpierre)
edgar.stpierre@dell.com
Iben Rodriguez (ibenr)
linuxfoundation@ibenit.com
Mark Beierl (mbeierl)
mark.beierl@dell.com
Qi Liang (QiLiang)
liangqi1@huawei.com
Tim RAULT (trault14)
tim.rault@cengn.ca

Committers

Group: opnfv-gerrit-storperf-submitters
Aric Gardner (agardner)
agardner@linuxfoundation.org
Edgar StPierre (estpierre)
edgar.stpierre@dell.com
Jose Lausuch (jose.lausuch)
jose.lausuch@ericsson.com
Mark Beierl (mbeierl)
mark.beierl@dell.com

