This is a proposal for measuring the health of OPNFV projects on a regular basis (e.g. quarterly or twice a year). The goal of the metrics is to highlight projects that are active and doing well, and to identify projects that may need help from the rest of the community or could even be candidates for "archiving".

Project activity could be measured in the following areas:

  • Git commits
  • JIRA tickets closed
  • Activities on the project wiki pages
  • Email discussions on project mailing list (e.g. on opnfv-tech-discuss or project specific mailing list)

Data for some of the activities above will come from OPNFV's Bitergia dashboard, but some may need to be measured manually (e.g. email discussions). Different weights could be assigned to the activities to produce a "composite score" for each project. As an example: Composite Score = (0.4 * Git commits) + (0.3 * closed JIRA tickets) + (0.2 * project wiki edits) + (0.1 * email threads). A minimal sketch of this calculation follows.
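For illustration only, here is a minimal sketch of how such a composite score could be computed. The weights match the example above; the project names and activity counts are hypothetical, and in practice the raw numbers would come from Bitergia or manual tallies.

```python
# Sketch of the proposed composite health score.
# Weights from the example above; all project data below is made up.
WEIGHTS = {
    "git_commits": 0.4,
    "jira_closed": 0.3,
    "wiki_edits": 0.2,
    "email_threads": 0.1,
}

def composite_score(activity: dict) -> float:
    """Weighted sum of a project's activity counts for one period."""
    return sum(WEIGHTS[k] * activity.get(k, 0) for k in WEIGHTS)

# Hypothetical quarterly numbers for two projects:
projects = {
    "project-a": {"git_commits": 120, "jira_closed": 45,
                  "wiki_edits": 10, "email_threads": 8},
    "project-b": {"git_commits": 5, "jira_closed": 2,
                  "wiki_edits": 1, "email_threads": 0},
}

# Rank projects from most to least active:
for name, activity in sorted(projects.items(),
                             key=lambda p: composite_score(p[1]),
                             reverse=True):
    print(f"{name}: {composite_score(activity):.1f}")
```

Note that the raw counts sit on very different scales (commits vs. email threads), so in practice each metric would likely need to be normalized (e.g. to a 0-1 range across projects) before weighting; otherwise the commit count dominates the score.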


3 Comments

  1. Hi Ray,

    I think what we can measure easily are "activity metrics", and most of these are necessary for a thriving project. I don't oppose any of them.

    But I wonder if a project (where participation was low or was winding down) could "game" these activity metrics to appear healthy, and I think the answer is yes (with the possible exception of e-mail discussions, which would be much harder to fake with all subscribers and lurkers looking on).

    Perhaps there's a more direct way to identify low-activity/archive-candidate projects: periodic conversations with each PTL, conducted by some shared set of people, such as the Committers at large? That seems more humane, more likely to determine real "health", and less likely to be gamed.

    Al

  2. There are some "small" projects that do not have very active lists despite otherwise being "healthy" projects. Also, funneling email for all projects through one list may discourage interaction. Elsewhere there is some discussion of letting projects have their own separate mailing lists, as fds does.

  3. IMHO a "composite score" could be very misleading, especially for projects that do most of their work upstream. So unless we track all upstream repos, I would suggest not computing a composite score and instead evaluating things qualitatively.