
Overview (What is this)

As part of the CNTT reference certification (RC), there is a desire to verify the hardware underlying the stack / infrastructure, with this process being automated.  Initially, this verification would be carried out in the labs responsible for performing the RC testing, especially in future cases examining VNF performance, where the hardware has a direct impact.  However, the tooling should also apply directly to larger deployments, verifying the hardware, settings, network wiring, etc., which will save operators and users valuable time when standing up infrastructure.

As a picture is worth a thousand words, here's a rough drawing of how this could look.

Repo / Code

Placeholder for the link; repository creation is in progress for the CIRV project.

Requirements Gathering

Information Model

  1. Must be machine readable, in a fairly lightweight format, to allow easy manipulation in Python or similar.
  2. The data should be stored in a human-readable format (e.g. YAML rather than SQLite), or tools should be created to support this interaction.
  3. Must scale to the number of servers / nodes that could be used in a real deployment (i.e. potentially 1000+).
  4. Should support a means to "template" common parameters, such as RAM, CPU model, and core count, where these values are defined once and referenced by the server/node definitions.
  5. Should support the idea of "profiles" to account for differences between nodes designated as basic vs. compute intensive vs. network intensive. I think this will help with the "CNTT Mins" input below.
  6. Considering that the information model will be used for software implementation, it should not include too many installer-specific details or be closely bound to any specific implementation style, and it should be easy to translate into the deployment files (DF) of each different installer.
  7. Should support a modular approach, instead of a monolithic file, to compartmentalize types of information, such as software specifics, network specifics, and passwords/security.
  8. Should support methods / a design approach that allows future additions or extensions to support CNTT RA2 without breaking the current implementation (i.e. maintain backwards compatibility).
  9. Should capture the life-cycle state of each hardware component (e.g. active, broken, EOL).
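As a rough illustration of requirements 4, 5, and 9, the templating and profile ideas could be sketched in Python roughly as below. All structure names, field names, and values here are hypothetical, not part of any agreed model:

```python
# Hypothetical sketch: node definitions reference a shared hardware
# template (defined once) and a profile; per-node keys override the
# template. All names and values are illustrative only.

HW_TEMPLATES = {
    "server-type-a": {"ram_gb": 192, "cpu_model": "Xeon Gold 6230", "cpu_cores": 40},
}

PROFILES = {
    "basic": {"min_ram_gb": 64},
    "network-intensive": {"min_ram_gb": 192, "sriov": True},
}

NODES = [
    {"name": "node001", "template": "server-type-a", "profile": "basic",
     "lifecycle": "active"},
    {"name": "node002", "template": "server-type-a", "profile": "network-intensive",
     "lifecycle": "broken"},
]

def resolve(node):
    """Merge the referenced template into the node definition
    (requirement 4) and attach its profile spec (requirement 5)."""
    resolved = dict(HW_TEMPLATES[node["template"]])
    resolved.update(node)  # node-specific keys win over template defaults
    resolved["profile_spec"] = PROFILES[node["profile"]]
    return resolved

for n in NODES:
    r = resolve(n)
    print(r["name"], r["ram_gb"], r["profile"], r["lifecycle"])
```

In a real model this data would live in modular, human-readable files (e.g. YAML per requirements 1, 2, and 7) rather than inline Python dicts; the point is only that templates are defined once and referenced many times.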

Available Upstream Tools Review

Provide a review and summary of what the upstream currently supports and what work would be needed to augment it to support the above requirements.

Airship Manifest 1.0

Metal3.io (may be used for Airship Manifest 2.0)

TripleO HEAT Templates 

etcd (key-value pairs)

Inspection Tooling / Automation

  1. The HDV tools should take as inputs the "PDF 2.0" (created by the service provider for their deployment), the "CNTT Mins" (as required by the CNTT RC), and the inspection output (i.e. values read from the servers / hardware).
  2. The HDV tools will produce a report in which items below the "CNTT Mins" cause a failure, and differences from the "PDF 2.0" cause either a failure or a warning, depending on how the tooling is being run.
  3. The tools should support automated generation of the "PDF" or installer-specific files from the "PDF 2.0" information model.
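The comparison step described in items 1 and 2 could be sketched roughly as follows. The field names, the sample values, and the strict-vs-lenient switch are assumptions for illustration, not the actual HDV implementation:

```python
# Hypothetical sketch of the HDV comparison step: inspected values are
# checked against the CNTT minimums (always a failure when below) and
# against the operator's PDF 2.0 declaration (failure or warning,
# depending on how the tooling is run). All values are illustrative.

CNTT_MINS = {"ram_gb": 64, "cpu_cores": 16}       # illustrative minimums
PDF_DECLARED = {"ram_gb": 192, "cpu_cores": 40}   # from the PDF 2.0
INSPECTED = {"ram_gb": 128, "cpu_cores": 40}      # read from the server

def check_node(inspected, mins, declared, strict=False):
    """Return a list of (level, field, reason) findings for one node."""
    report = []
    for key, minimum in mins.items():
        if inspected.get(key, 0) < minimum:
            report.append(("FAIL", key, "below CNTT minimum"))
    for key, expected in declared.items():
        if inspected.get(key) != expected:
            level = "FAIL" if strict else "WARN"
            report.append((level, key, "differs from PDF 2.0"))
    return report

for level, key, reason in check_node(INSPECTED, CNTT_MINS, PDF_DECLARED):
    print(level, key, reason)
```

With these sample values the node passes the minimums but gets a warning (or, in strict mode, a failure) because its inspected RAM differs from what the PDF 2.0 declares.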

Discussion Items & TODO

We need to make sure Chenliang and Jiaqiang's pull request (https://github.com/cntt-n/CNTT/pull/944/commits/fad547d14764506ed708333d395a47e092fba05c) captures all the items we discussed in the HDV track in Prague (https://wiki.lfnetworking.org/pages/viewpage.action?pageId=27525908).  Is anyone interested in volunteering to tackle that review (bonus points if you were in the Prague session and have a good memory)?

Should we deal with / solve the scaling problem now, or defer it to the future?

