
Monitor Debt: Which metrics? #236

Open
aheusingfeld opened this issue Oct 14, 2017 · 9 comments

@aheusingfeld
Member

I would like to brainstorm with you a list of the metrics one should monitor in order to keep an eye on the (technical) debt of a software system. IMO there are different metrics for different phases of the software lifecycle, so let's just start collecting and sort it out afterwards.

I'll get started with three of them off the top of my head:

  1. Mean-time-to-repair - i.e. how long it takes to roll out a new release to production. REASON: indicator for the ability and speed to reduce debt
  2. Cyclomatic complexity - REASON: complexity makes code harder to understand and reason about. This most often results in wrong assumptions and bugs
  3. Resource usage (CPU, memory, network) per use case or per user - REASON: we want to keep the cost per user as low as possible. Increasing resource usage might literally create debt
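To make the cyclomatic-complexity point concrete, here is a minimal sketch - not a full McCabe implementation, just the common approximation CC = decision points + 1, counting a simplified set of branch node types in Python source via the standard `ast` module (which node types to count is itself a judgment call):

```python
import ast

# Node types treated as decision points (a simplifying assumption;
# real tools also weight boolean operators, match arms, etc.).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # if + elif -> 3
```

This also illustrates the language-specificity problem: the sketch only works for Python, and the same metric would need a different parser (and different judgment calls) per language.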
@gernotstarke
Member

  1. MTTR: great. Add "Mean-time-to-test" or "Mean-time-to-release" to it.

  2. good, but I suggest reading http://www.adamtornhill.com/articles/software-revolution/part1/index.html, as most (cyclomatic) complexity measures are overly language-specific and difficult to apply to polyglot systems (containing, e.g., functional JS)

  3. runtime metrics... it depends... can be useful, but that's technology-specific.

  4. coupling

  5. Developer-time-spent (how long did devs work on this code/component during the last sprint, release, month, year, ever?) How many commits?

That's a HUGE topic, potentially worth writing an article about...

Additionally, I suggest correlating various metrics.

@vanto
Contributor

vanto commented Oct 16, 2017

Probably nitpicking: MTTR is not how long it takes to roll out a new release to production, but rather how long it takes to make a system behave correctly again. (The nitpick is that you may do a new release that still does not fix the problem.) But I totally agree with Alex. Since availability is defined as A = MTBF / (MTBF + MTTR), these three variables correlate and are worth monitoring.
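To make the A = MTBF / (MTBF + MTTR) relationship concrete, a tiny sketch with made-up numbers (500 h between failures and 2 h to repair are purely hypothetical):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability as the fraction of time the system works:
    uptime between failures over uptime plus repair time."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical: the system runs 500 h between failures,
# and repairing it takes 2 h on average.
print(round(availability(500, 2), 4))  # -> 0.996
```

The formula also shows why MTTR is worth attacking first: halving repair time improves availability directly, regardless of how often the system fails.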

@jmewes

jmewes commented Dec 4, 2018

The number of times you excuse yourself for flaws in the system when you introduce a new developer.

@jmewes

jmewes commented Jan 26, 2019

Interesting point of view on this subject from an interview with Robert Martin:

Do you believe it is possible to objectively measure code quality analyzing code structure (number of lines, cyclomatic complexity, afferent and efferent coupling, code coverage)?

You can get some information from these metrics. But you cannot determine code or design quality from them.

Do you believe good design can be measured objectively?

Not with static analysis metrics. But I think good designers can pass appropriate judgements on good or bad design. In the end, there is only one metric that matters: Was the manpower required to build, deploy, and maintain the system appropriately minimized.

https://hashnode.com/post/i-am-robert-c-martin-uncle-bob-ask-me-anything-cjr7pnh8g000k2cs18o5nhulp/answer/cjrb0u282000hc5s24lc126cr

So a good measure for technical debt could be the time required to implement some sort of function point. If this number remains stable, all is fine. If it increases, that might indicate technical debt.
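A minimal sketch of what watching this number could look like - the figures are invented, and a least-squares slope with an arbitrary threshold is just one simple way to flag a rising trend:

```python
def trend_slope(times):
    """Least-squares slope of implementation time per function point
    across successive releases (the release index is the x axis)."""
    n = len(times)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(times) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, times))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical hours per function point over six releases:
stable = [8.1, 7.9, 8.0, 8.2, 7.8, 8.0]
rising = [8.0, 8.6, 9.1, 9.9, 10.8, 11.5]

# Flag a possible debt problem if the cost grows by more than
# half an hour per release (threshold chosen for illustration):
print(trend_slope(stable) > 0.5, trend_slope(rising) > 0.5)  # -> False True
```

Keeping the raw series and looking at the trend, rather than any single release, avoids overreacting to one noisy data point.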

@vanto
Contributor

vanto commented Jan 26, 2019

The problem with this kind of metric, however, is that management is very likely to misuse the numbers to insinuate a lack of productivity or commitment on the part of developers.

@tash

tash commented Mar 17, 2019

There are several aspects to this, like measures of

  • complexity,
  • size,
  • (remediation or mitigation) effort,
  • efficacy, and
  • (test) coverage,

among others.

Technical debt sort of tries to address the first three of the above while remaining management-friendly.
SonarQube, for example, implements the SQALE model, which works fairly well, especially in an agile setting: http://www.sqale.org/wp-content/uploads/2016/08/SQALE-Method-EN-V1-1.pdf

Besides technical debt, there's a plethora of metrics to choose from, while only a handful still see widespread use, like cyclomatic complexity. CC is the only software metric I know of that has been argued in court with respect to software quality - in the class action against Toyota's cruise control system that killed several people, if I remember correctly.

Efficacy is a difficult topic, as the costs incurred by bad quality are hard to quantify exactly. Jones and, as far as I remember, also Fenton argued for an approach based on defect removal efficiency or some derivative thereof.
It has been "common" knowledge in software and nowadays systems engineering that so-called delayed issues are exponentially more expensive to fix. This is based on an old study no one seems to have read or actually studied; even Liggesmeyer put the famous exponential graph, without any source, into the German literature reference on software quality.
Well, it turns out delayed-issue costs are not exponential - at least not always: https://arxiv.org/pdf/1609.04886.pdf

Coverage is an interesting one, as there are still a lot of different opinions on best practices in the industry. Rules of thumb like "Do at least 80% coverage and you are fine" are still popular, despite the fact that Skynet could sit in the remaining 20% and wait for its turn to destroy humankind.
Thus standards like IEC 61508 or ISO 26262 are thankfully more specific: usually MC/DC coverage, depending on the SIL. MC/DC can still be gamed though, which apparently is occasionally a problem in offshoring of aerospace software development.
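A small sketch of what MC/DC demands beyond plain decision coverage, on a toy two-condition decision (the helper function and test sets are made up for illustration):

```python
def decision(a, b):
    """Toy decision with two conditions."""
    return a and b

# Decision coverage only needs the outcome to be both True and False once:
decision_tests = [(True, True), (False, True)]

# MC/DC additionally requires, for every condition, a pair of tests in which
# flipping only that condition flips the outcome:
mcdc_tests = [(True, True), (False, True), (True, False)]

def satisfies_mcdc(tests, fn, n_conditions):
    """Check that each condition is independently shown to affect the outcome."""
    for i in range(n_conditions):
        found = False
        for t1 in tests:
            t2 = list(t1)
            t2[i] = not t2[i]
            t2 = tuple(t2)
            if t2 in tests and fn(*t1) != fn(*t2):
                found = True
        if not found:
            return False
    return True

print(satisfies_mcdc(decision_tests, decision, 2))  # False: b is never shown to matter
print(satisfies_mcdc(mcdc_tests, decision, 2))      # True, with only 3 of 4 combinations
```

This is also where the gaming comes in: a test set can hit every branch (and thus satisfy weaker coverage criteria) without ever demonstrating that each individual condition matters.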

@jmewes

jmewes commented Apr 21, 2019

I haven't bought the book yet, but it looks like a good read on this topic:

Software Design X-Rays

https://pragprog.com/book/atevol/software-design-x-rays

@gernotstarke
Member

I read the book - it's well worth it!! Great ideas.

