
This page captures work in progress on what we call the "Operational Framework": a system through which data is collected and delivered to a monitoring system, e.g. for closed-loop control (fault, scaling, policy), workload placement, machine learning, or presentation in operations dashboards. Providing such a framework for OPNFV is not one of the main objectives of the VES project; however, it will be an important tool for promoting the VES common event model as meeting the needs of the data collected and handled by such operational frameworks and monitoring systems.

The goal for VES is to be able to leverage existing open source projects for this purpose. Some projects that are potentially useful for this include:

Project | Features | Pros/Cons | VES Integration Plans
VES | Scheduled JSON/REST reports; intent to add simple controls for managing report scheduling. Current basis of the VES demos. | |


Field Mapping

There are several components in the potential delivery path for event data. These will likely require a translation/mapping plugin at the data model boundary, e.g.

  • At the VES agent, between Collectd plugins and the event reporting interface, e.g. the VES JSON/REST interface
  • At the VES Collector, between the event reporting interface and the database backend in which events will be stored (if the events are to be transformed at this stage, e.g. into a schema as required by the Dashboard below)
  • At Dashboards, between the database from which events are pulled and the Dashboard presentation, e.g. as database query statements
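The second boundary above (VES Collector to database backend) can be sketched as a small flattening step. This is a hypothetical illustration, not the actual Collector code: the header field names (eventName, domain, startEpochMicrosec, reportingEntityName) follow the VES common event header, but the function name and the chosen row layout are assumptions for the example.

```python
import json

def event_to_row(ves_json):
    """Flatten an incoming VES-style JSON event into a tuple suitable
    for insertion into a relational backend (illustrative schema)."""
    # The VES common event format wraps everything in "event", with
    # shared metadata in "commonEventHeader".
    hdr = json.loads(ves_json)["event"]["commonEventHeader"]
    return (hdr["eventName"], hdr["domain"],
            hdr["startEpochMicrosec"], hdr["reportingEntityName"])
```

A Dashboard-side mapping would then be the inverse direction: query statements against whatever schema the Collector chose here.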

This was the subject of an email kicking off this aspect of the VES design, "[Barometer][VES] Mapping monitored source data to backend interface/model items":

  • On the Barometer project (FKA SFQM) call today, I made the following suggestion that was welcomed by the team. Basically the Barometer project will be developing the “frontend” (i.e. agent) data source aspects of analytics collection (i.e. interfacing with hosts and VMs etc to collect analytics), and delivering it to “backends” (i.e. collectors such as VIMs/monitoring systems) via protocols such as REST/HTTP, Kafka, SNMP etc. I think this is a good development, as most of the frontend heavy lifting will be done in the collectd upstream, and specific projects in OPNFV such as Barometer and VES just need to take care of the mapping to the backend interfaces and data models.
  • To minimize development/maintenance of the mapping between the collectd plugin data sources and the specific backend protocol/data model, I suggested that we consider a mapping table as a static or manageable attribute of the agent configuration. For example, as you can see in Maryam’s team’s implementation, values for the data source items are mapped to fields in the JSON-based VES message structure in individual code statements. Even in this early demo version, there are many monitored data items, and this can be expected to expand greatly. It would be nice if we can automate as much of this as possible through a table with rows such as:
    • { plugin_name, plugin_instance, type_name, type_instance }, {… }
  • Theoretically, it should be possible in many cases to simply run through such a table and copy the source data to the proper place in the target event structure. If calculations are needed on the source data, they might still require a discrete statement, but many fields could simply be copied. Target properties that are lists would also need to be considered, but I think we could indicate that through some variable reference such as “property[x]”.
  • Any input on this idea, experience implementing such mapping systems etc, is appreciated.
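The table-driven idea above can be sketched as follows. This is a minimal, hypothetical sketch, not the VES agent implementation: the table contents, the "property[x]" path notation, and all names (MAPPING_TABLE, set_field, map_to_ves) are illustrative assumptions. Rows key on the collectd identifier tuple { plugin, plugin_instance, type, type_instance } and give the target field path in the VES event structure, including list indices as proposed.

```python
import re

# Illustrative mapping table: collectd identifier -> target field path.
# "[n]" in a path marks a list element, per the "property[x]" suggestion.
MAPPING_TABLE = {
    ("cpu", "0", "percent", "idle"):
        "measurementsForVfScaling.cpuUsageArray[0].percentUsage",
    ("memory", "", "memory", "used"):
        "measurementsForVfScaling.memoryUsed",
}

def set_field(event, dotted_path, value):
    """Walk a path like 'a.b[0].c', creating dicts/lists as needed."""
    tokens = re.findall(r"(\w+)(?:\[(\d+)\])?", dotted_path)
    node = event
    for i, (name, index) in enumerate(tokens):
        last = i == len(tokens) - 1
        if index:  # list element, e.g. cpuUsageArray[0]
            lst = node.setdefault(name, [])
            idx = int(index)
            while len(lst) <= idx:   # grow the list to reach the index
                lst.append({})
            if last:
                lst[idx] = value
            else:
                node = lst[idx]
        else:
            if last:
                node[name] = value
            else:
                node = node.setdefault(name, {})

def map_to_ves(samples):
    """samples: iterable of (plugin, plugin_instance, type,
    type_instance, value). Unmapped or calculated items would still
    need discrete handling, as noted above."""
    event = {}
    for plugin, pinst, vtype, tinst, value in samples:
        path = MAPPING_TABLE.get((plugin, pinst, vtype, tinst))
        if path is not None:
            set_field(event, path, value)
    return event
```

With this shape, adding a newly monitored data item is a one-row table change rather than a new code statement, which is the maintenance saving the proposal is after.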

The table below addresses potential options for how this could be implemented.

Implementation Approach | Details and Notes | Feedback