
Project: StorPerf - Storage Performance Benchmarking for NFVI

Description

The purpose of StorPerf is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VNF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.

The benchmarks developed for block and object storage will be sufficiently varied to provide a good preview of expected storage performance behavior for any type of VNF workload. The elements of the project include:

  • Test Case definition
  • Metrics definition
  • Test Process definition
  • Tool development

Some of these are expanded further below.

Project References

  • StorPerf Architecture Proposal
  • StorPerf API
  • Packaging and Delivery
  • How to Install and Run StorPerf
  • Brahmaputra Pharos Lab
  • C Release Planning
  • 2016 PlugFest Planning

A key challenge in measuring disk performance is knowing when the disk (or, for OpenStack, the virtual disk or volume) is delivering consistent and repeatable performance. Initial writes to a volume can perform poorly due to block allocation, and reads can appear instantaneous when reading empty blocks. How do we know when the reported data is valid? The Storage Networking Industry Association (SNIA) has developed methods which enable manufacturers to set, and customers to compare, the performance specifications of solid state storage devices (Ref). StorPerf applies this methodology to the OpenStack Cinder and Glance services to provide a high level of confidence in the performance metrics in the shortest reasonable time.
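
For illustration, the SNIA approach declares a metric to be at steady state only after a sliding window of samples stops drifting and stops scattering. The Python sketch below shows one way such a check could look, assuming a 5-sample window with 20% range and 10% slope tolerances (values borrowed from the SNIA PTS defaults); it is not StorPerf's actual implementation.

    # Minimal sketch of a SNIA-style steady-state check (not StorPerf's actual code).
    # Assumed parameters: a 5-sample sliding window, 20% range tolerance and
    # 10% slope tolerance, mirroring the SNIA PTS defaults.
    def is_steady_state(samples, window=5, range_tol=0.20, slope_tol=0.10):
        """Return True when the last `window` samples look steady."""
        if len(samples) < window:
            return False
        recent = samples[-window:]
        average = sum(recent) / window

        # Range criterion: max minus min stays within range_tol of the mean.
        if max(recent) - min(recent) > range_tol * average:
            return False

        # Slope criterion: the least-squares trend line must not rise or fall
        # across the window by more than slope_tol of the mean (no drift).
        xs = list(range(window))
        mean_x = sum(xs) / window
        slope = (sum((x - mean_x) * (y - average) for x, y in zip(xs, recent))
                 / sum((x - mean_x) ** 2 for x in xs))
        return abs(slope) * (window - 1) <= slope_tol * average

StorPerf applies this style of check to the metric series it collects (e.g. IOPS, bandwidth, latency); see the StorPerf Architecture Proposal linked above for the authoritative description.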

    See the Blog Post on Storage Performance Guidelines for more information.

Slides and Demos

  • StorPerf - Using OpenStack to Measure OpenStack Cinder Performance (OpenStack Days Canada)
    Slides: OPNFV StorPerf - Using OpenStack to Measure Cinder Performance - 2017-10-20.pptx
  • StorPerf Not Quite Live Demo (OpenStack Days Canada)
    Slides: StorPerf Demo.pptx
  • Storage Performance Indicators - Powered by StorPerf and QTIP
    This video gives a little background on how StorPerf provides reliable metrics for QTIP.
    Video: https://www.youtube.com/watch?v=J--1Wa5xoIE
  • StorPerf Overview - OPNFV Summit, 2017
    Slides: StorPerf- Using OpenStack to Measure OpenStack Cinder Performance.pptx
  • StorPerf Metrics Deep Dive
    Slides: Graphite Deep Dive.pptx
  • StorPerf Demo - Danube Pre-Release
    Video: https://www.youtube.com/watch?v=9OonpmJVuA8

     


Meetings

  • Storperf Team Weekly Meeting

     



    Build Status

Jenkins job: storperf-verify-master (https://build.opnfv.org/ci)

    Open Bug List

Jira filter (OPNFV server): project = STORPERF AND type = Bug AND Status != CLOSED ORDER BY priority DESC, updated DESC

    Danube Planning

    Backlog

Jira filter (OPNFV server): project = STORPERF AND type != Sub-task AND Status != CLOSED AND FixVersion = "Danube 17.0.0" ORDER BY priority DESC, updated DESC

    Key Project Facts

See the INFO file on the master branch of the StorPerf Git repository.

    StorPerf Project Scope

StorPerf testing addresses both block storage and object stores, using different test suites for each. There is limited value in testing locally attached storage, so the focus is on distributed/external storage environments.

StorPerf is intended to run standalone benchmark tools and to integrate with test frameworks such as Qtip and Yardstick.
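
For framework integration, the natural entry point is the StorPerf ReST API (see the StorPerf API reference above). As a rough, non-authoritative sketch, driving StorPerf from another tool could look like the following; the base URL, endpoint paths, and JSON field names are assumptions to verify against the API documentation.

    # Rough sketch of driving StorPerf over its ReST API from another framework.
    # The host, port, endpoint paths, and JSON field names are assumptions to be
    # checked against the StorPerf API reference; they are not a guaranteed contract.
    import requests

    STORPERF = "http://storperf-host:5000/api/v1.0"   # hypothetical base URL

    # Stage the test environment (number of agent VMs, Cinder volume size in GB).
    requests.post(STORPERF + "/configurations",
                  json={"agent_count": 2, "volume_size": 10})

    # Submit a benchmark job covering the block-storage test matrix.
    job = requests.post(STORPERF + "/jobs",
                        json={"block_sizes": "2048,8192,16384",
                              "queue_depths": "1,2,8"}).json()
    print("submitted job:", job)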

    Use Cases

    There are three applicable use cases for these storage performance benchmarks:

1. An OPNFV test lab manager wants to characterize expected storage behavior in a test NFVI deployment. This includes a preconditioning phase for each storage environment and the broadest set of test cases across all identified storage services, providing VNF test applications with information about expected storage performance. This will integrate with existing test lab tool chains.
2. A Service Provider wants to validate storage performance in an NFVI staging environment prior to production deployment. This will validate performance expectations with pass/fail conditions, using the same preconditioning and test cases as the test lab. This will integrate with project Bootstrap.
    3. A Service Provider wants to isolate performance problems in a production NFVI environment. This will use a much narrower set of test cases to minimize impact on the production environment. This will utilize a manual deployment and control of the test VMs.

    Timeline

    The high level plan for StorPerf is to deliver (minimally) test requirements and test process specifications in the Brahmaputra release timeframe. Block performance testing will lead object testing, and could also be delivered in Brahmaputra, though any such delivery would be asynchronous to, and largely independent of, the Brahmaputra release mechanism. In the C release, we will complete object store testing and integration with Qtip and Yardstick.

    Project Planning: TBD

    Test Cases

    This is an outline of test cases. A specification will be written capturing actual tests and steps. And of course, the input to the test process will be determined by community participation.

    Block Storage

Given the SNIA guidelines, StorPerf tests Cinder volumes or Glance ephemeral storage regardless of back-end driver; StorPerf makes no attempt to read the OpenStack configuration to determine which drivers are in use. iSCSI-attached storage is assumed, although local direct-attached or Fibre Channel-attached storage could also be tested.

1. Preconditioning of the defined Logical Block Address range (period TBD)
2. Testing across each combination of queue depths (1, 2, 8) and block sizes (2KB, 8KB, 16KB); the full matrix is sketched below
3. For each of 5 workloads: four corners (100% Read/Sequential, Write/Sequential, Read/Random, Write/Random) and one mixed workload (70% Read, Random)
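
To make the size of this matrix concrete, the sketch below enumerates every workload, queue depth, and block size combination as fio-style parameters (45 combinations in total). The workload names and the mapping to fio options are illustrative assumptions, not StorPerf's exact job definitions.

    # Enumerate the block-storage test matrix described above:
    # 5 workloads x 3 queue depths x 3 block sizes = 45 combinations.
    # The fio options used (rw, rwmixread, iodepth, bs) are standard fio
    # parameters, but the exact jobs StorPerf generates may differ.
    from itertools import product

    workloads = {                              # illustrative names/mappings only
        "read_sequential":  {"rw": "read"},
        "write_sequential": {"rw": "write"},
        "read_random":      {"rw": "randread"},
        "write_random":     {"rw": "randwrite"},
        "mixed_70_read":    {"rw": "randrw", "rwmixread": 70},
    }
    queue_depths = (1, 2, 8)
    block_sizes = ("2k", "8k", "16k")

    for (name, opts), depth, bs in product(workloads.items(), queue_depths, block_sizes):
        job = dict(opts, iodepth=depth, bs=bs)
        print(f"{name:17s} -> {job}")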

    Object Storage

    This is planned for a future release.

An HTTP-based API, such as Swift, is assumed for accessing object storage; a minimal workload-driver sketch follows the list below.

1. Determine the maximum concurrency of the SUT with small-payload GET/PUT tests by finding the performance plateau
2. Determine the maximum TPS of the SUT using variable payload sizes (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
    3. Use 5 different GET/PUT workloads for each: 100/0, 90/10, 50/50, 10/90, 0/100
    4. Perform separate metadata concurrency test for SUT using List and Head operations

We are especially looking for workload recommendations for testing in this area.
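
As a concrete starting point for that discussion, a minimal workload driver against a generic HTTP object endpoint might look like the sketch below; the endpoint URL, container path, object size, and 90/10 mix are purely illustrative assumptions, not an agreed StorPerf design.

    # Illustrative GET/PUT mix driver against a Swift-like HTTP object store.
    # The base URL and container path are hypothetical; a real test would use an
    # authenticated Swift or S3 client, many concurrent workers, and error tracking.
    import random
    import time
    import requests

    BASE = "http://objectstore.example.com/v1/AUTH_demo/storperf-test"  # hypothetical
    PAYLOAD = b"x" * 10 * 1024      # 10KB objects for this run
    READ_RATIO = 0.9                # 90/10 GET/PUT workload

    def run_mix(operations=1000):
        """Issue a randomized GET/PUT mix and return transactions per second."""
        start = time.time()
        for i in range(operations):
            url = "%s/object-%d" % (BASE, i % 100)
            if random.random() < READ_RATIO:
                requests.get(url)
            else:
                requests.put(url, data=PAYLOAD)
        return operations / (time.time() - start)

    if __name__ == "__main__":
        print("TPS: %.1f" % run_mix())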

    Metrics

Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases which test viability against known workload requirements.

    Block Storage Metrics

    The mainstays for measuring performance in block storage are fairly well established in the storage community, with the minimum being IOPS and Latency. These will be produced in report/tabular format capturing each test combination for:

1. IOPS at a fixed maximum latency (TBD; we could also choose to report IOPS when the test hits the latency "wall"), plus average IOPS for each workload
2. Throughput (bandwidth). Note that throughput can also be calculated as IOPS * block size; a worked example follows this list.
3. Average latency for each workload at different IOPS levels
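
For example (illustrative numbers only), a run that sustains 20,000 IOPS at an 8KB block size corresponds to 20,000 × 8192 bytes ≈ 156 MiB/s. A one-line helper makes the conversion explicit:

    # Convert IOPS at a given block size into throughput (illustrative helper only).
    def throughput_mib_per_s(iops, block_size_bytes):
        return iops * block_size_bytes / (1024 * 1024)

    print(throughput_mib_per_s(20000, 8192))   # -> 156.25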

    Object Storage Metrics

This is planned for a future release.

Object storage delivers different storage characteristics than block storage, so the metrics used to characterize it differ to some degree:

    1. Transactions per second (throughput can also be calculated from TPS * object size)
    2. Error rate
    3. Per-test average latency

    See also future extensions below.

    Future Project Extensions

    These are 2nd+ release ideas for extending StorPerf.

    1. Definition of more extensive metrics to measure performance (e.g., I/O Latency variation for object streaming); some of these may require contributions to upstream open source test tools
    2. Time-to-first-write for newly provisioned block volumes. This is intended to measure the impact of zero-out functions performed by storage systems when a volume is provisioned.
    3. Full integration with Qtip and Jenkins for automated deployment and reporting
4. Create a separate deliverable (document) to capture typical/expected VNF storage performance requirements using the same metrics, for those VNFs that require block or object storage I/O. This can be used to define pass/fail criteria for test lab deployments.

Contributors

  • Gerrit group: opnfv-gerrit-storperf-contributors

Emeritus Contributors

Committers

  • Gerrit group: opnfv-gerrit-storperf-submitters

Emeritus Committers