
Pattern Idea: Scoring Grid (#351) #352

Open · wants to merge 12 commits into base: main

Conversation

johnbeech

@johnbeech johnbeech commented Sep 9, 2021

Based on a conversation in the InnerSourceCommons slack, from the Sept/2021 community call:

Curious to know if we have a source code score/ranking format, which can tell us how far behind we are from being called an InnerSource project?

I answered with a pattern I've been using titled "Software Project Scoring Grid" - and was prompted to write up this draft pattern based on my own knowledge and experience:

See: #351 for more details.

Looking for feedback from the InnerSource Patterns working group, and anyone else who has a chance to read this PR.

I'm happy to talk publicly about the pattern, as it is something I have developed outside of formal work and use privately, but I will decline to name the specific companies with which I have shared my version of the grid.

@spier
Member

spier commented Sep 9, 2021

@johnbeech I just let some automatic checks run over the markdown file, mostly just a linter, to check for markdown formatting issues. We use this to keep our markdown files somewhat consistent, but sometimes the linter gets ahead of itself and does a bit too much :)

Anyways, you will find some inline comments from the linter in here:
https://github.com/InnerSourceCommons/InnerSourcePatterns/pull/352/files

Hopefully those will be fairly simple to fix.

I hope to have time to review the content of the pattern in the coming days as well. Thanks for the work you put into it already!

@johnbeech
Author

Hopefully those [linting errors] will be fairly simple to fix.

Super easy, barely an inconvenience :)

Running these commands fixed all but one of them:

npm i -g markdownlint-cli
markdownlint --fix "**/*.md"

I hope to have time to review the content of the pattern in the coming days as well. Thanks for the work you put into it already!

Brilliant, and thank you for the writing prompts!

@spier spier added the 📖 Type - Content Work Working on contents is the main focus of this issue / PR label Sep 10, 2021
Collaborator

@robtuley robtuley left a comment


Added a couple of specific in-line comments but also have some general feedback: I'm unclear whether this is a pattern to help improve InnerSource repository setup (i.e. help contributors make better quality contributions), or whether it's a pattern to improve any and all aspects of engineering. I think both are valid things, but an InnerSource setup focus is probably more relevant specifically here?

I think it would benefit from a more specific focus on InnerSource setup in context/forces sections for example, where I see the example scoring grid under solutions already being InnerSource orientated.

We use this pattern in the 'InnerSource setup' context, and there are plenty of examples in our wider org in the more general engineering context (e.g. security scorecards, operational scorecards, engineering or test maturity assessments, delivery health dashboards).

@@ -0,0 +1,114 @@
## Title

Software Project Scoring Grid
Collaborator

@robtuley robtuley Sep 11, 2021


This name suggests to me you are scoring code quality or similar, but in the pattern it's scoring setup to help contributors/users. The best name I could think of was "Repository Scoring Grid", or "Repository Convention Score" (like the "Repository Activity Score" pattern) but it depends what you want to convey.

Member


Rob also mentioned "scorecard" elsewhere. Does "scorecard" have a specific meaning in the industry? I feel like I have heard it a couple of times already, but I might be wrong.

So another alternative could be:

Repository Scorecard

Author


Added a couple of specific in-line comments but also have some general feedback: I'm unclear whether this is a pattern to help improve InnerSource repository setup (i.e. help contributors make better quality contributions), or whether it's a pattern to improve any and all aspects of engineering. I think both are valid things, but an InnerSource setup focus is probably more relevant specifically here?

I think it would benefit from a more specific focus on InnerSource setup in context/forces sections for example, where I see the example scoring grid under solutions already being InnerSource orientated.

So evidently my scorecard comes from a place of Code Quality and Engineering Leadership - the scorecard concept was inspired by work done at a previous employer to roll out InnerSource at the team and repository level. From a Code Quality perspective, the overlap between InnerSource best practice and Community best practice (working with a team) is quite strong. I think it would make sense to rewrite the example scorecard to highlight InnerSource best practice, e.g. from the InnerSource Maturity Model pattern - I can give that a go, since it would be more appropriate to the DNA of the idea.

We use this pattern in the 'InnerSource setup' context, and there are plenty of examples in our wider org in the more general engineering context (e.g. security scorecards, operational scorecards, engineering or test maturity assessments, delivery health dashboards).

Good point; I think it's worth calling out that the Scorecard pattern can be used to measure other things, as illustrated by your list, and that the general pattern can be extended if you want to promote other areas of best practice.

Author


I think the word repository should be used consistently in the pattern - I was struggling between project/service/codebase - but repository ("git repo") seems like the right focal point.

As for title variations:

  • Repository Scoring Grid
  • Repository Score Grid
  • Repository Scorecard

A similar pattern common in the industry would be a Career Path Matrix or Job Levelling Matrix e.g. https://lattice.com/library/what-is-a-job-leveling-matrix

So an alternative name could be:

  • Repository Maturity Matrix

I think a scorecard is an artefact of scoring, based on the grid - so my original verb "Scoring" is correct: the grid is used for scoring, i.e. producing scorecards. My title preference is therefore "Repository Scoring Grid", and the example should be refactored to focus on InnerSource best practice at the repository level, based on practical things that intersect with the Maturity Model.

Member


Naming is hard :)

Adding to the complexity that we already have a pattern called Maturity Model, which we would not want to get confused with this new pattern.

Collaborator

@robtuley robtuley Sep 13, 2021


A thought -- how about "Repository Scoring" ..?

It occurs to me the underlying pattern here is to guide the key stakeholders of a repository to improve in a structured, measurable way by scoring, which also allows a healthy dose of reporting/gamification/prioritisation of effort when there exists a larger portfolio.

The fact you use a 'scoring grid' with dimensions and grades is one way of doing the scoring. There are others: a scorecard I'd consider to be slightly different, in that its primary purpose is to have a single overall grade or score that is easy to communicate, and the 'card' bit means there is additional detail that tells you why, so you can improve. A simple 'number of failed checks' is another common approach (e.g. linter-type semantics).

So the question is really whether the pattern is the scoring grid, or whether the pattern is the scoring. If the latter then allow the grid to be one of a few different examples rather than leading the pattern in the title.
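The "number of failed checks" variant mentioned above could be sketched roughly as follows. The specific checks here (README, license, CI, etc.) are illustrative assumptions on my part, not checks defined by the pattern:

```python
# Sketch of the "number of failed checks" scoring style.
# The individual checks are hypothetical examples, not part of the pattern.

def count_failed_checks(repo):
    """Return how many repository conventions are not met.

    `repo` is a plain dict describing the repository,
    e.g. {"has_readme": True, "has_license": False, ...}.
    Any missing key counts as a failed check.
    """
    checks = [
        repo.get("has_readme", False),
        repo.get("has_license", False),
        repo.get("has_contributing_guide", False),
        repo.get("has_ci", False),
        repo.get("has_pr_template", False),
    ]
    return sum(1 for passed in checks if not passed)

repo = {"has_readme": True, "has_ci": True}
print(count_failed_checks(repo))  # 3: license, contributing guide, PR template
```

This style trades the grid's per-dimension nuance for a single number that is trivial to report and compare across a portfolio.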

Collaborator

@robtuley robtuley Sep 13, 2021


when there exists a larger portfolio

On this topic, I wince a bit as this pattern references "poor engineering leadership", perhaps because I have needed this pattern as an engineering leader chuckle. IMO this pattern becomes necessary (and its use a signal of good engineering leadership) based on the size of the portfolio. As the number of repositories scales up... how do you govern any standards that you have? Well, this.

So it might well be poor engineering leadership, but it's also often a successful org in a growth stage where the growing scale requires previously unwritten rules to be formalised, or previously written rules/principles to be actually governed.


Organisations are littered with repositories that don't have README files, or are outdated, containing obvious errors; projects have no CI integration, no release instructions, badly defined licenses, no PR templates, and so on. Don't let perfect be the enemy of good - software projects get started with little to no thought about their long term maintenance, and a standard plan to refine these projects and make them good is needed for the sanity and health of the engineering teams.

A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
Collaborator


You reference "new project" here, and "greenfield" in L9. In our experience this pattern is just as valuable when transitioning a mature, well maintained project from closed source -> inner source when suddenly a bunch of new things become important so might be worth generalising the problem.

Member


Suggested change
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, InnerSource project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.

spelling InnerSource

Member


Besides the spelling fix, this paragraph looks like it belongs in the Solution section rather than the Problem section?

Author


Agreed, and in fact I'm applying the scoring to a mature 10-year-old repository at the moment - so maybe "new project" can be reworded to "new, poorly maintained, or stale project".
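As a concrete illustration of the grading idea discussed in this thread, here is a minimal sketch that grades a repository by the conventional files present at its root. The file list, grade letters, and thresholds are my own illustrative choices, not the pattern's actual grid:

```python
# Hypothetical sketch: grade a repository by which conventional files exist.
# The expected files and grade thresholds are illustrative assumptions.

EXPECTED_FILES = [
    "README.md",
    "LICENSE",
    "CONTRIBUTING.md",
    "CODEOWNERS",
    ".github/pull_request_template.md",
]

def grade_repository(files_present):
    """Map the count of expected files present to a letter grade (A best)."""
    score = sum(1 for f in EXPECTED_FILES if f in files_present)
    grades = {5: "A", 4: "B", 3: "C", 2: "D"}
    return grades.get(score, "E")

print(grade_repository({"README.md", "LICENSE", "CONTRIBUTING.md"}))  # C
```

In practice each graded area would carry a description of what the next grade up requires, so the score doubles as an improvement roadmap rather than just a ranking.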

Member

@spier spier left a comment


Thanks again for your great pattern and PR!

I left some comments inline, hoping that they will contribute to making the pattern even better.

I am curiously following along on the conversation with @robtuley about whether the things assessed in the scorecard of this pattern are strictly InnerSource-focused, or just good practices in Engineering in general (irrespective of whether an org is practicing InnerSource or not). Btw: an internal struggle that I have had a couple of times when reading and reviewing patterns.

One general question I have about the proposed pattern:

How do you see the relationship and level of overlap of this Software Project Scoring Grid pattern with the "Maturity Matrix" and "Good First Project" patterns? I know that this is asking a lot, given that both of those are pretty long reads. However I think that once we clarify this, we will have an easier time focusing this pattern on the unique ideas it contributes, in comparison to the other patterns.

patterns/1-initial/scoring-grid.md: 3 resolved comment threads (outdated)

## Solutions

Example Scoring Grid for Company B (Introduced January 2021) - should be applicable for any Open Source, Inner Source, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.
Member


Suggested change
Example Scoring Grid for Company B (Introduced January 2021) - should be applicable for any Open Source, Inner Source, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.
Example Scoring Grid for Company B (introduced January 2021) - should be applicable for any Open Source, InnerSource, or private repository using Git tools that support CODEOWNERS, PRs, and PR Templates. Feel free to re-grade based on your company's best practice, or add additional areas based on weak points. Example grading areas might include CODE QUALITY, TESTING, CODE COVERAGE, TEST STRATEGY.

spelling InnerSource


Organisations are littered with repositories that don't have README files, or are outdated, containing obvious errors; projects have no CI integration, no release instructions, badly defined licenses, no PR templates, and so on. Don't let perfect be the enemy of good - software projects get started with little to no thought about their long term maintenance, and a standard plan to refine these projects and make them good is needed for the sanity and health of the engineering teams.

A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
Member


Suggested change
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, inner source project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.
A *Software Project Scoring Grid*, as presented in this pattern, provides a guided pathway from "new project" to a "mature, well maintained, InnerSource project". By grading areas such as Readme, Contribution Guidelines, Licensing, Pull Requests, and Continuous Integration; a scoring grid can objectively highlight positive and negative aspects of a project, and provide tangible guidance to engineering teams on how to make their repositories more InnerSource friendly.

spelling InnerSource


Apply the grid against one or more repositories; deciding on a score. This can be done indiviually using an expert, or as team as a retrospective discussion.

Based on the score, look to the next column, and identify actions that would lead to an improved grade.
Member


Suggested change
Based on the score, look to the next column, and identify actions that would lead to an improved grade.
Based on the score, look to the next column to the right, and identify actions that would lead to an improved grade.


Agree with the team to rescore on a regular basis, say weekly, monthly, quarterly based on available capacity.

Make time for teams to action improvements; ideally make the work part of their normal repsonsibilities, visible on shared work boards as work tickets.
Member


Suggested change
Make time for teams to action improvements; ideally make the work part of their normal repsonsibilities, visible on shared work boards as work tickets.
Make time for teams to action improvements; ideally make the work part of their normal responsibilities, visible on shared work boards as work tickets.
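The "look to the next column and identify actions" step in the Solutions excerpt above could be sketched as a data structure plus a lookup. The dimension names and grade descriptions here are illustrative placeholders, not the pattern's real grid:

```python
# Hypothetical scoring grid: each dimension maps grades (worst to best,
# in insertion order) to what that grade requires. Contents are illustrative.
GRID = {
    "Readme": {
        "E": "No README",
        "C": "README with install and usage instructions",
        "A": "README with badges, contribution links, and support contacts",
    },
    "Continuous Integration": {
        "E": "No CI",
        "C": "CI runs tests on every pull request",
        "A": "CI gates merges on tests, linting, and coverage",
    },
}

def next_grade_action(dimension, current):
    """Return the requirement for the next grade up, i.e. the next column
    to the right, or None if the repository is already at the top grade."""
    grades = list(GRID[dimension])  # dicts preserve insertion order
    i = grades.index(current)
    if i + 1 >= len(grades):
        return None
    return GRID[dimension][grades[i + 1]]

print(next_grade_action("Readme", "E"))
# "README with install and usage instructions"
```

Keeping the grid as data like this also makes the periodic rescoring step easy to automate and to diff between scoring sessions.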


Where has this been seen before?

- Company A - 100 strong co-located engineering team with well established InnerSource community - used as guidance for all teams, and published as part of internal InnerSource website - exact grid not published in this pattern
Member


Suggested change
- Company A - 100 strong co-located engineering team with well established InnerSource community - used as guidance for all teams, and published as part of internal InnerSource website - exact grid not published in this pattern
- Company A (100 strong co-located engineering team with well established InnerSource community): Used as guidance for all teams, and published as part of internal InnerSource website (exact grid not published in this pattern)

Suggesting a slightly different format for this sentence (also using the same format for the next sentence).

Where has this been seen before?

- Company A - 100 strong co-located engineering team with well established InnerSource community - used as guidance for all teams, and published as part of internal InnerSource website - exact grid not published in this pattern
- Company B - 200 disparate engineers spread across multiple timezones, immature processes and practices - used by engineering managers to communicate a vision for "what good looks like", and used to prioritise engineering led initiatives balanced against product led initatives.
Member


Suggested change
- Company B - 200 disparate engineers spread across multiple timezones, immature processes and practices - used by engineering managers to communicate a vision for "what good looks like", and used to prioritise engineering led initiatives balanced against product led initatives.
- Company B (200 disparate engineers spread across multiple timezones, immature processes and practices): Used by engineering managers to communicate a vision for "what good looks like", and to balance engineering-led initiatives against product-led initiatives.

Rephrasing the last part a bit.


Software Project Scoring Grid

## Patlet
Member


The Patlet is meant to allow readers to quickly scan a lot of Problem-Solution pairs, to find the things relevant for their orgs.

If we were to rewrite this Patlet as two fairly short sentences, what would that look like?
1st sentence Problem
2nd sentence Solution

@spier
Member

spier commented Oct 8, 2021

Hi @johnbeech. I hope all the feedback here didn't discourage you from your pattern contribution? 😉

My ambition is to merge this pattern relatively soon, as it is still in an initial state anyways.
That would allow us to share it within the ISC but also outside more widely.
That way we can collect more feedback, and level up the pattern in the future.

Please let me know how we can best help you with the remaining bits.

Also I am curious about your thoughts about the questions below:

One general question I have about the proposed pattern:

How do you see the relationship and level of overlap of this Software Project Scoring Grid pattern with the "Maturity Matrix" and "Good First Project" patterns? I know that this is asking a lot, given that both of those are pretty long reads. However I think that once we clarify this, we will have an easier time focusing this pattern on the unique ideas it contributes, in comparison to the other patterns.

@spier spier added the 1-initial Donuts, Early pattern ideas, ... (Please see our contribution handbook for details) label Jan 24, 2022
@spier
Member

spier commented Nov 16, 2023

@NiallJoeMaher would be awesome to get your feedback on this PR.

If the approach that you are applying with your Score Cards is significantly different, it would be great to capture it in a new pattern instead.

For context for others: Niall shared his experience in the presentation Leveraging scorecards to scale InnerSource.
