
Global figure

[Ecosystem diagram attachments: Version 2 (draft), Version 1, EcoSystem copy]

NOTE (DRAFT)

The ecosystem diagram is a joint creation of the Test WG. It is recommended to update the diagram following a review process similar to Gerrit, i.e.

Request => Review => Approve/Reject => Log

Moderator: Trevor Cooper

Approval

Vote +1/-1 here for the proposal.

Change Request

  •  Yujun Zhang: change the description of QTIP to "Benchmarking as a Service"
  •  Yujun Zhang: add clarification on the difference between the red and green blocks in Performance Testing

Change Log

  • Trevor Cooper: created first draft
  • Yujun Zhang: added home page link to project wiki
  • Trevor Cooper: updated diagram to version 2 ... added feature projects which provide tests to Functest and Yardstick, added the Qtip description, and replaced "destructive" with "In Service" as the highest tier of testing.

Test project visual identity

It is possible to ask the LF designer to create a visual identity for the testing projects (which could be reused in the global figure of the ecosystem).

The goal is to replace the current openclipart-based figures in http://testresults.opnfv.org/reporting/danube.html

The table below is used for the discussion with the LF designer.

Expectations:

  • format: SVG, PNG files
  • size: big (2400px)/medium (800px)/small (300px)
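
If the designer provides a master SVG for each project, the PNG renditions in the three sizes above could be generated automatically rather than requested one by one. A minimal sketch, assuming the cairosvg Python library and a placeholder input file name (not an actual deliverable):

```python
# Render a project's master SVG to the three PNG sizes listed above.
# "functest.svg" is a placeholder file name, not an agreed deliverable.
import cairosvg

SIZES = {"big": 2400, "medium": 800, "small": 300}

for label, width in SIZES.items():
    cairosvg.svg2png(
        url="functest.svg",
        write_to=f"functest_{label}_{width}px.png",
        output_width=width,  # height is derived from the SVG aspect ratio
    )
```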
| Project | Git repo name (to be used for the figure) | One-word concept | Current illustration | Project description | CI link | Contact |
|---|---|---|---|---|---|---|
| Bamboo | Not created yet | Panda | None | Big Data & Analytics | N.A. yet | Frank Brockners, Donald Hunter |
| Bottlenecks | bottlenecks | producer, director, conductor, scheduler | None | Detect system bottlenecks in the OPNFV solution | link | Yu Yang (Gabriel) |
| Functest | functest | gears | | VIM and NFVI functional testing; umbrella project for functional testing | link | Jose Lausuch, Morgan Richomme |
| QTIP | qtip | gauges | None | Platform Performance Benchmarking | link | Yujun Zhang |
| Storperf | storperf | discs | | Storage performance testing | link | Mark Beierl |
| VINA | Not created yet | topology analytics | None | NFVI debugging, troubleshooting tools, SLA | N.A. yet | Frank Brockners, Koren Lev |
| VSperf | vswitchperf | switching | None | Data-plane performance testing | link | Trevor Cooper |
| Yardstick | yardstick | scientific rulers | | Verification of the infrastructure compliance when running VNF applications | link | Kubi, Jack Chan, Rex Lee |
| Dovetail | dovetail | certification | None | Test OPNFV validation criteria for use of OPNFV trademarks | link | hongbo tian |
| Models | models | blueprint | | Verify MANO stack functions enable service/VNF lifecycle automation per blueprint for the service/VNF | E plan | Bryan Sullivan |


9 Comments

  1. Trevor Cooper, may I ask about the origin of the project descriptions in this diagram?

    I suppose we should use the TITLE as displayed on the home page of each project, e.g. Platform Performance Benchmarking for QTIP.

    cc Morgan Richomme

    1. The intention is to give more understanding as to what each test project actually does ... the official titles may not obviously represent what a project does, and in some cases are outdated due to the natural evolution since the project was created. I think these descriptions should be reflected in the test overview document that is currently under review and still requires work to get up to date. Each performance test project can claim to do platform performance benchmarking, but I don't think that is helpful for understanding the key differentiation of the projects. For me benchmarking is a very loaded term and is usually read as an industry-standard benchmark. What do you think?

      1. Yes, benchmarking is a basic method for performance evaluation; see the FAQ page in QTIP for my understanding of performance testing, benchmarking and baseline testing.

        IMHO, different projects serve different purposes and may leverage different methodologies. For example, Infrastructure Verification focuses more on the criteria of the performance requirements for a specified VNF, and the result is usually pass/fail. Benchmarking, on the other hand, is more about comparing performance metrics among SUTs, and we need a reliable baseline for it. Normally we would introduce a score as an overall indicator to make the comparison easier. This is how QPI (QTIP Performance Indices) came about.

        Of course, benchmarking results can be used as the criteria in Infrastructure Verification, and benchmarking results come from performance testing.

        Anyway, this is a discussion about technical terms, and I suggest we move to the tech-discuss mailing list to continue it.

  2. Trevor Cooper, thanks for the explanation. No offense to the current diagram; I just feel the project team should be the right one to provide the description and be responsible for clarification. What do you think?

    For the purpose of describing what each test project actually does, I would describe QTIP as "Benchmarking as a Service", which fits the current situation better. Compute, network and storage QPI are actually just samples of using this service.

    1. For sure it's a proposal, and project teams should feel the labels are accurate; if not, we should update them. I think Qtip has evolved from its original proposal ... my brief understanding is that it computes "indices of performance" using the metrics produced by Yardstick ... is that a correct interpretation? Can you explain more about "benchmarking as a service"?

      1. I think Qtip has evolved from its original proposal ...

        No, the purpose is still Platform Performance Benchmarking. The service we built is for that purpose.

        To avoid overlapping with other testing projects, we look for existing test runners and cases before reinventing the wheel in QTIP. This, however, has made the benchmarking framework the major work item for QTIP in Danube. So you could say the focus of the project has evolved, but I would say the proposal has not.

        my brief understanding is it computes "indices of performance" using the metrics produced by Yardstick ...

        As Yardstick already provides abundant performance test cases and is capable of driving other testing projects like VSPERF and STORPERF, we recognize it as a preferred test runner and source of performance metrics in OPNFV. There is still a lot of work to be done for the integration.

        Can you explain more about "benchmarking as a service"?

        Sure. The services include a benchmarking plan and metric specification loader, calculation with specified formulas, reporting in various formats (console, HTML, PDF...), result collection (to the database in this diagram), and a dashboard for benchmarking result visualization and searching (integrated with the test dashboard in this diagram). Since this work is quite common to all testing projects, we would like to provide such a service to the community, just as we are relying on other projects for test running and performance data.
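
        As a rough sketch of that calculation step (all metric names, weights and baseline values below are made up for illustration, not actual QPI specifications), the idea is to combine normalized ratios of measured metrics against a baseline into a single score:

        ```python
        # Rough sketch of a QPI-style calculation: combine normalized ratios of
        # measured metrics against a baseline into one score. All metric names,
        # weights and numbers below are made up for illustration.
        def performance_index(metrics, baseline, weights):
            """Weighted geometric mean of metric/baseline ratios, scaled to 100."""
            total_weight = sum(weights.values())
            score = 1.0
            for name, weight in weights.items():
                score *= (metrics[name] / baseline[name]) ** (weight / total_weight)
            return 100.0 * score

        # Hypothetical compute metrics measured on a SUT vs. a reference baseline.
        baseline = {"ssl_rsa_sign_ops": 500.0, "dhrystone_mips": 30000.0}
        measured = {"ssl_rsa_sign_ops": 550.0, "dhrystone_mips": 27000.0}
        weights = {"ssl_rsa_sign_ops": 0.5, "dhrystone_mips": 0.5}

        print(round(performance_index(measured, baseline, weights), 1))  # ~99.5
        ```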

        1. Thanks Yujun ...

          vsperf, storperf and yardstick also do "platform performance benchmarking" but with different focus and objectives ... for example vsperf benchmarks the data path using IETF test specifications. The framework provides automation with control over network topology and the versions of DPDK, the vswitch, etc., which is useful for developers and qualification of releases. Integration with other test projects such as Yardstick is a longer-term goal, but there are different levels of integration possible that are more or less meaningful to different users, so we have to be careful. Each test project has valid motivations and there may be some overlap, although we strive to leverage and normalize as much as possible. I think the only issue right now is finding suitable labels on the picture that differentiate each test project while accurately representing what the projects' goals are. I am wondering if for Qtip something along these lines makes sense ... "Platform metric analysis and reporting" ... or "Platform performance data mining"?

          1. I never knew that all testing projects are doing "platform performance benchmarking", at least from the project wiki pages. Maybe the scope has evolved and I didn't follow up. I remember I once clarified in a testperf meeting that "Data analysis and data mining" could be more in the scope of Bottlenecks.

            What QTIP has focused on since the creation of the project is benchmarking, and I don't see any reason to change it.

            1. I think additional input or a discussion would be helpful. I added a note to the agenda for this week's Test WG meeting ... let's get some other input; if you can't make it we can defer any updates. I know the meeting time is really bad for you. Thanks for all the feedback!