This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
[Meta] Versioning policy #2410
Comments
I agree we need clarity. Historically, we intended to follow semver in NUnit 3 and therefore we said so. However, we found many reasons not to do so as we went along. We are not alone in this and SemVer is far from universally accepted. Some interesting reading...
It's important to remember that SemVer is explicitly intended for library APIs. It has nothing to do with other things where we talk about compatibility and breaking changes, like command-line options, the precise operation of attributes or the default value of settings. Before we can get clarity on compatibility and versioning, we need clarity on exactly what things we are talking about breaking and we should probably establish a hierarchy among those things that tells us which we consider the most important. Sometimes we may need to choose between breaking two different things.
I particularly like the second post you linked. I'm familiar with the first.
Having just experienced the need to upgrade to 3.8 to get a performance fix, only to find two breaking changes between 3.7 and 3.8, I was somewhat surprised. Doing some research, I found NUnit had obsoleted some attributes in an earlier version which it appears others (including SpecFlow) were using. I also found a removal of contains constraints (or something like that), so when I upgraded from 3.7 to 3.8 I needed to fix a whole load of tests. Had I been going from 3.7 to 4, I would expect breaking changes, but bundling bug/performance fixes and breaking changes in the same release makes it very hard for consumers who want to keep up to date.
Looking back on the two changes that affected you @BlythMeister, maybe they were poor decisions. Neither was particularly 'necessary' - both were more for code sanity. Both were done with the idea that they should affect minimal users and be very simple to fix - it's clear the SpecFlow scenario wasn't one we thought of.
This, I'm more interested in. This change shouldn't have involved any breaking changes? The only visible change should have been the deprecation of the CollectionContainsConstraint constructor. What issues did you face? My main concern about SemVer is that we make minor breaking changes in pretty much every release. There's something of a precedent with NUnit versioning that 'major versions' are a once-every-x-years type thing. That's not to say we can't change that - but it would be a change from what we've historically done. (Although I do hate the 'always done it that way' excuse! 😄 ) I'm fairly ambivalent about what we choose. I don't think we should base this decision too much on the fact that we may have made a couple of shaky backwards-compat decisions in the 3.8 release, however.
@ChrisMaddock I agree with you in principle. With regard to the example code, we had
The .Using fluent method existed in 3.7, but was removed in 3.8.
That was a simple enough change as a one-off, but we had around 100 usages of that old syntax, with different collection names, expected values and comparers (across different projects).
@BlythMeister This is very unfortunate. Actually, @ChrisMaddock spotted the missing Personally, I think that we should (re)introduce it in 3.9 (and also we should have more tests, so that we do not remove functionality without noticing it).
I like the sound of fixing it ;)
Hotfix?
I've always had this dream of doing something similar to what the corefx team does with public API: generate a pseudo-cs file containing all public (and protected) definitions and compare it to the currently approved pre-generated list committed to source control, and fail the test if they are not identical. That way we are blocked from moving forward without modifying the approved file in the same PR, which necessitates a compatibility discussion. Basically snapshot testing our externally visible API. This is exactly where my mind went when #2392 turned up.
I like the idea. We need to be clear which of our several APIs we are talking about.
I'd say the entirety of the framework API minus the Internal namespace would benefit from this.
The framework presents three APIs, at least as we have talked and written about it up to now...
Number (3) is what the docs actually refer to as the Framework API. It would be good to test all of these, but they have different considerations and priorities. IMO there's no downside in being precise. :-)
We could even make it an MSBuild step in all the framework csprojs so that the API changes show up in git and the feedback loop is quicker for anyone making changes. I've done this type of thing before.
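The snapshot-testing idea above is language-agnostic, so here is a minimal sketch in Python for illustration. The function names (`public_api_surface`, `assert_api_unchanged`) are hypothetical; a real NUnit implementation would reflect over the compiled C# assemblies instead of a Python module, but the mechanics are the same: render the public surface to text, compare against an approved file committed to source control, and fail when they differ.

```python
import inspect

def public_api_surface(module):
    """Return a sorted list of the module's public names (with signatures)."""
    lines = []
    for name, obj in sorted(vars(module).items()):
        if name.startswith("_"):
            continue  # skip non-public names
        if inspect.isfunction(obj) or inspect.isclass(obj):
            try:
                sig = str(inspect.signature(obj))
            except (TypeError, ValueError):
                sig = "(...)"  # some callables expose no signature
            lines.append(f"{name}{sig}")
        else:
            lines.append(name)
    return lines

def assert_api_unchanged(module, approved_path):
    """Fail if the current API surface differs from the approved snapshot.

    Updating the snapshot file has to happen in the same PR as the API
    change, which forces a compatibility discussion during review.
    """
    current = "\n".join(public_api_surface(module)) + "\n"
    with open(approved_path) as f:
        approved = f.read()
    assert current == approved, (
        "Public API changed; update the approved file deliberately."
    )
```

Run as an ordinary test, any addition, removal, or signature change in the public surface shows up as a diff against the approved file in the same commit.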
Dammit - I thought we'd caught everything there. 😞 @BlythMeister @mikkelbu - thanks for the analysis, I created #2412 from it. Afraid I don't have time to fix it myself at the moment - are either of you interested?
I quite like the API reflection idea. For NUnit 4, we should definitely get what should be public vs what should be internal set up right! 😄
@ChrisMaddock any chance the attribute can also make a comeback, so it's no longer breaking on older SpecFlow?
@BlythMeister - I think that should be discussed further, but I'd be in favour of it, yes. I think when we removed it, we mainly considered usages in user code that could be easily updated, and not 3rd-party libraries. Would you make a new issue to do so?
@ChrisMaddock done, thanks.
Eric Lippert's Every public change is a breaking change drives home the point that it is always possible to come up with code that demonstrates a source-level break no matter how innocuous your API addition is. We're not talking black and white here but rather a sliding scale. Hopefully a scale that meets general expectations. That said, the .NET team put thought and experience over the years into this guideline: https://github.com/dotnet/corefx/blob/master/Documentation/coding-guidelines/breaking-change-rules.md I think we should do our best to place very close to that document on the sliding scale I mentioned. Like everything else humans do that's remotely at the complexity of language, assumptions are messy.
@jnm2 Those are two good references that illustrate a point I've been trying to make and perhaps doing poorly. Teams or individuals have to do work to come up with such things. We can't just throw around the term "breaking change" without defining exactly what it means to us. A large portion of my previous consulting/coaching work was helping teams recover from a situation where somebody outside the team (a boss generally) had created a bunch of standards they were required to adopt. Many of those standards were good ones - at least they would have been if they had been generated by the team itself. Human nature being what it is, most of those teams found a way to follow the letter of the standards while violating the spirit. In a smaller number of cases, the team voluntarily adopted somebody else's standard but didn't put any thought into how it applied to them. Those teams were easier to rescue, since there was no outside pressure on them, but they needed recovery all the same. Teams have to have the time and spend the time to figure these things out as it applies to them. In this case, I see both in Eric's article and in the .NET team's guideline how thought went into defining what is meant by a breaking change. We haven't done that. I've been trying to urge us to do it but - and this is because of my interpretation of my own role here now - without actually defining it myself. That's not working so I'll give it a shot... indeed several shots.
My choice would be to start with level 4 and move to level 3 after we master 4. A prerequisite for any of that is to define what the API or APIs are that we are protecting.
By handling case 4, I think NUnit will be in a better place. As demonstrated when I commented to open this issue, NUnit 3.7 to 3.8 had (at least) two changes which I can see are for the benefit of the framework, but which caused a great deal of my code not to compile.
@BlythMeister I agree with your issue. However, in your case, you were broken by public methods that were intended for public consumption. As I've stated before, NUnit historically had many public types and methods not intended for public consumption. This was primarily for testing purposes (before We need to change this, but we need to do it in an orderly way. Unfortunately, many of the changes that will be needed to get there are actually breaking changes. My own view is that any breaking change caused by taking methods that are in our
For what it's worth, other frameworks leave a huge public surface area under a series of Internal namespaces. I don't think it's more than a minor annoyance (for people whose IDEs suggest autocompleting into new namespaces). It's not worth the kludge that I think InternalsVisibleTo is.
I don't mean nested namespaces. That's common. I'm talking about the namespace we decided to call "Internal" figuring people would understand it was subject to change. To be honest, as a guy who types everything, I gave no thought to IDEs that suggest adding new using statements. Let's ban such IDEs! 😈
And in case it wasn't clear, I am in agreement with you. 😁 Haha, but not on this! I'd stop using an IDE if I couldn't Ctrl+. to autocomplete what I think are mindless repetitive tasks 😁 😈
Here's an example of a non-API breaking change: #652 (comment) Do/should we consider that a breaking change? Are we constrained to consider it a breaking change just because that's how it will inevitably be perceived when people's existing tests start failing? Whatever the answer is, I'd like there to be a way to set users' expectations ahead of time as much as possible.
I think we should, yes. In fact - that type of breaking change is less preferable than a compiler breaking change, as it would take more diagnosis.
I had hopes that this thread would lead to a definition of "breaking change" that we could all get behind. We have spent time defining "public API" - although it's not fully worked out yet - but we still are using the term "breaking change" without any clear meaning of what it is, at least that's how it looks to me. The reason it seems important to me to define this is that every bug fix can be seen as a breaking change since it changes behavior. In particular, if some user has come to rely on the behavior exhibited by that bug, it will break that user. This is not far-fetched, since we have seen users complain about certain bug fixes.

A good definition of breaking change would be one that gives us guidance in recognizing where a particular change falls on the continuum from a straight bug fix that nobody can fault to a revolutionary change in a publicly supported and documented API. What are the factors that allow us to place a change along this continuum?

The particular change that we are discussing, by the way, appears to me as a bug fix. The change in order of execution was announced in 3.0. Either we didn't do it or we regressed - I don't know which. At the time of 3.0, it was (would have been) a breaking change, but that was OK because we were doing a major upgrade. At this point, when we have changed the behavior to work as it was intended to (and I'm pretty sure documented) I would call it a straight bug fix. I'm pretty sure there are arguments that go the other way as well. I just think we need to expose those arguments rather than just calling out Breaking Change! without reflection.
It looks like what we all agree on is that the bug fix in 3.7 which broke people's tests should have been announced as such: a thing which has a chance of breaking your tests if you are in a specific spelled-out scenario. I agree that ‘breaking changes’ is an overloaded term. To be very clear and literal: people's code broke. Code breaking means that it begins failing at the task that it used to perform. If code breaks when the only change is that a newer framework version is used, it doesn't matter what we label the phenomenon for ourselves, but we do need to tell them what scenarios are affected and options for dealing with that. People need to see that information before installing the new version: in the NuGet release notes in the IDE, in the GitHub releases page they download from, and least importantly in terms of visibility, the release notes contained in our repository. (Regardless of which version numbers changed from the last release.)
Every fix has a chance of breaking someone's code. What if we sorted them roughly by impact and played it safe by putting them all in a Changes section? My dream from a user perspective would be something like:

NUnit 3.9.0

This release [motivation statement].

New features

Changes
[more info] issue links can be omitted for plain text like the NuGet VS integration (if that thing is still plain text).
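Pulling the pieces of this proposal together, a release-note skeleton might look like the following. This is purely illustrative; every bracketed item is a placeholder, and the "Potentially breaking changes" heading is one possible way to give breaking changes the top-of-page visibility discussed in this thread.

```markdown
## NUnit 3.9.0

This release [motivation statement].

### Potentially breaking changes
- [affected scenario, and how to adapt] ([more info](link))

### New features
- [feature] ([more info](link))

### Changes
- [change, roughly ordered by likely impact] ([more info](link))
```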
I would be willing to help spend time doing this for every merge into master so that it's ready for release. Hey! We could even play with the idea of making changes to the changelog in every behavior-affecting PR (excluding typo fixes, CI test fixes, etc).
@jnm2 I'm surprised at your "we all agree" comment, following immediately after mine, when I obviously didn't agree with you at all. That's not a fair way to make your case. Here is an imaginary scenario... Let's say that we created an assertion Of course, the example is absurd. I intend it as a reductio ad absurdum of the argument that any change that breaks user tests is a breaking change. Of course it is - in the literal sense of the words - but that's not really a very useful definition for us. What we need is a definition of the (kind of) breaking changes that we are promising we won't make. Documenting them after it's all done is closing the barn door after the horse is gone.
Imo, any change that would cause a consumer's code to no longer compile is a breaking change. Secondly, any change which causes tests to no longer run. Thirdly (and controversially), any change which could cause a user's test to yield a different result. However, I would say situations where a "fix" is made due to an incorrect implementation should be fixed, but clearly flagged in the change log.
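Those three tiers, together with the strict-SemVer question running through this thread, could be written down as a simple decision rule. The sketch below is in Python purely for illustration; the tier names, their ordering, and the mapping to version parts are assumptions for discussion, not settled policy.

```python
from enum import Enum

class Impact(Enum):
    """Rough tiers of consumer impact, per the three-tier definition above."""
    FIX = 0            # behavior-preserving bug fix
    ADDITIVE = 1       # new features only; nothing removed or changed
    RESULT_CHANGE = 2  # an existing test could yield a different result
    RUN_BREAK = 3      # existing tests no longer run
    COMPILE_BREAK = 4  # consumer code no longer compiles

def next_version(current, impacts, strict_semver=True):
    """Map the worst impact in a release to a version bump.

    Under strict SemVer any observable break forces a major bump; with
    strict_semver=False only compile breaks do (one possible looser policy).
    """
    major, minor, patch = current
    worst = max(impacts, key=lambda i: i.value)
    floor = Impact.RESULT_CHANGE if strict_semver else Impact.COMPILE_BREAK
    if worst.value >= floor.value:
        return (major + 1, 0, 0)
    if worst is Impact.ADDITIVE:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)
```

For example, under the strict rule a release containing only fixes bumps the patch number, while a release containing a result-changing fix would bump the major number; the looser rule would let that same release ship as a patch, which is exactly the trade-off being debated here.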
Wow, I misread your comment and I apologize. Mea culpa. So what do we agree on? 😜
In the spirit of the robustness principle, can we err on the safe side in warning people of where their code might cease working because of an upgrade? What is to be gained by making sure certain changes aren't announced as potentially breaking?
@jnm2 You started this thread as a discussion of versioning - of semver in particular. In semver, the term breaking change has a particular meaning. If every change becomes "breaking" in the context of semver, then we will start doing constant major releases. The reason we are not at something like NUnit 57.0 is that we use an implicit definition of breaking that is much less rigid than the one you are proposing. Of course we should warn users of the effect of the changes we make. I think we generally do that. But you didn't introduce your proposal as a way to improve documentation but as being about our versioning policy. That's what I'm still talking about, in case you're now on a new topic.
True, this is about versioning. I'm not asking for semver, but I am asking that we end up on the same page. This is really all Rob's decision (no pressure, Rob 😁) and quite honestly the versioning strategy doesn't matter as much to me as the use of the words "potentially breaking" near the top to grab people's attention, and a clear list of scenarios that you can look over to see if you are affected, visible to you before upgrading, wherever you obtain NUnit. For the purposes of versioning, and that purpose only, I'm happy to disregard bugfixes as breaking changes. I think we all agree that there is such a thing as a breaking change and that at a minimum, it includes causing people's code to stop compiling if they rely on non-obsolete API without
Since our change logs will be so impeccable from any consumer's point of view 😀, I'm not really sure I have an opinion on this.
Starting point for defining breaking change solely in the context of deciding how to version and choosing which release to merge the changes in:
What would you modify in the above draft? |
At one point, I started a draft document on versioning under governance. When it became clear that versioning was a project rather than an organization issue I stopped working on it. Now I'm wondering. Maybe it would help if we had a working document that described how we have been doing versioning in the past. I'm not sure if we should publish it, however, since it might confuse users. Any suggestions as to how/where to do that? Would it be useful?

@jnm2 I'm kind of a stickler for sticking to the issue in github issues. This is not just a personality quirk on my part, it's a real problem. At least IMO, every issue should deal with one thing so we will know how to tell when it is done. From your last post, I'm gathering that you want to pull us back to versioning in this issue. I am in favor of that. If you want to start a new discussion somewhere about how we document changes in the release notes or the changes file, let's figure out where to do that.

Which is a good question, I think... Where do we have "meta-discussions" about this project? (I guess I'm now starting a meta-meta-discussion.) We could do it under governance if we created per-project labels, but I think governance is better suited for things that need to be formalized. Ideally, an issue about how ProjectX is versioned should be part of ProjectX I would think. Maybe we just need the equivalent of [meta] as a label for the project. I could see something like

More importantly, I guess, we need to decide how to document decisions we make about things like versioning. Should there be a versioning.md file? Should it go in the wiki? I don't have any favorite answer myself but there should be some way we can actually save the results of what promises to be a long and complex discussion so we don't have to do it over again in six months. End of meta-meta!
Thoughts on our current versioning practice...
This has been a look back. I'll comment in a bit on @jnm2's proposal.
@jnm2 Your outline seems like a definition of breaking change without saying how we would use it for versioning. By that I mean, it's something we need, but we need more as well. Comments on your points...
In our docs, the "Framework API" means the classes and methods used by the engine or another runner to execute tests. Here, I believe you are talking about the methods used by people who write tests. That's also an API, we just haven't called it that before. You may be tempted to set rules for all the APIs at once, but I don't think that's a good idea. Different people use them for different reasons, so they may need different rules. For example, what we have up to now called the Framework API simply can't be changed, except for fixing bugs or adding new features without changing existing ones. There's no judgement allowed, since any other change would break our own engine and any third-party runners. The test-writing "API" can have rules that are a bit looser if we want it to.
However, as stated at the beginning of this comment, it's really hard to evaluate the definition without knowing how it will be used. If we are going to follow a rule that says "all breaking changes require a major version upgrade" then I would want a different definition of breaking change than one that said "all breaking changes require either a major or a minor upgrade." In the latter case, I'd want some definition to tell me how to decide between major and minor. So I think we have to do this top down by first deciding on our versioning policy. FWIW I'm not in favor of strict SemVer either.
SemVer sounds good. Many people have used it, of course, while many others have run into problems. So far, we are in the latter group, having made a tentative decision to go with SemVer and then realizing we couldn't do it unless we started releasing a new major release every six months or so. Here are the problems I know about that stood (and probably still stand) in our way:
I continued to work the same way on NUnit 3 in 2007, simply because it was how things had always been done and also because I was supporting older platforms. After 2010 or so, other folks joined the team and some of them had different ideas about what "breaking" meant. At the extreme, it could be taken to mean "any class or method that a user might have used, whether we intended it to be used or not." As a compromise, we came up with the Internal namespace, which was intended to tell people that they used those methods at their own risk. I have to say that I have been surprised at how many programmers, even those making rather sophisticated use of the "Internal" methods, appear not to have noticed that they were considered "Internal" and therefore subject to change. With that background, my own conclusion is that we should only make methods visible in future that we intend to support. However, that's not the end of the story.
Should a break in any of those APIs trigger a major release? I don't know, but I doubt it. To be certain, I think we would have to more clearly define what is included in each of them. If we want the triggering to be different for each, we have to state it clearly, since visibility, in and of itself, is no longer sufficient because each of these needs to be visible in some way.
These are just my thoughts. I'm personally trying to do better in defining what is public and what isn't in my work on TestCentric, but NUnit has history and it's hard to get away from it!
Motivation
It has seemed to me that every time one of us alludes to the fact that we follow semver, another of us reminds us of the fact that we don't. I've wanted to understand this better for some time.
We recently decided to make some API-breaking changes in 3.8 which caused @BlythMeister some issues:
#1324, #2239, #2360
I was unable to locate a definitive policy at https://github.com/nunit/docs/wiki so I am sympathetic with @BlythMeister's expectation that we followed semver.
In this particular case I think we were clear enough about the breaking changes in our GitHub release description but our NuGet package does not contain release notes or any hint that 3.8 brings breaking changes with it.
History
In #981, @rprouse suggests:
@CharliePoole replies that this is what we agree on:
#932 assumes we do semver
#1054 (comment) implies we do semver
nunit/nunit-console#23 (comment) says we have never followed semver
#2219 (comment) says we're not sure we do semver
#2338 (comment) says we don't follow semver
nunit/nunit3-vs-adapter#313 (comment) assumes we do semver
#2397 (comment) says we clearly don't follow semver any longer
To say nothing of the numerous times we talk about saving breaking changes for v4, such as
#2397 (comment).
Here we have a conversation discussing breaking changes vs Breaking Changes:
nunit/nunit-console#23 (comment) Something we should explicitly agree on and document.
(Oddly all the issues in nunit that contain the words "breaking change" are also issues that "involves:jnm2"... o.O)
Proposal
The prevalence of semver should cause us to reconsider and respect the mental model of most users.
Whatever we do, particularly if it is counter to the mainstream, crystal clear communication is essential.
Even if our policy is that we never put breaking changes in release notes and that we flip a coin to decide which part to increment and when, everyone that depends on NUnit needs to know precisely what we are committing to. This includes the API contract and the behavioral contract, for each key project in isolation as well as in combination.
We should catch people's attention with a link to the versioning policy at the top of every release notes and GitHub release page and readme and wiki and documentation page where it could possibly be relevant. Ideally we should also commit to calling out breaking changes at the top separately from the rest of the release notes and on the GitHub release page, as Rob has been doing very accurately, but in the NuGet package description we should also add a note at the top that there are breaking changes in the release. Most of the time, this is all people see.
Opinion: It's not possible to be too clear. The scope of the versioning policy should either be for all key projects, or one for each key project individually.