
Snapshot Testing ala Jest #2392

Closed
lilasquared opened this issue Aug 28, 2017 · 20 comments

Comments

@lilasquared

Hello,
I was wondering if there was any desire from the community to introduce a new feature to NUnit: snapshot testing. This type of testing has gained traction through the Jest framework, and you can read about it here. Essentially, whatever blob you want to ensure does not change as part of a unit test, you can take a "snapshot" of through some manner of serialization. This snapshot is then added to your source code. When the underlying blob changes, the snapshot test fails and you have the option to update the snapshot or fix whatever broke. One of the main advantages is a nice visual diff when something in the blob changes, since the snapshot is part of source control. Thoughts?

@jnm2
Contributor

jnm2 commented Aug 29, 2017

I've wanted to generate and diff public API files and fail on unapproved changes. Seems like this would be along the same lines.

It would help to have examples of what it looks like to manually write snapshot tests today in NUnit for the kinds of snapshot testing you have in mind, along with what you'd prefer to write. It seems like the kind of thing that should start out as a community extension perhaps.

@jnm2 jnm2 added the is:idea label Aug 29, 2017
@ChrisMaddock
Member

I agree with Joseph - I think this is a great possibility for an extension. On the NUnit side, we should make sure our interfaces could support such a tool.

@rprouse
Member

rprouse commented Aug 29, 2017

This is an interesting idea, but like the others, I would like to see examples of what it would look like in the current NUnit and what you propose it would look like with this feature.

@rprouse
Member

rprouse commented Aug 29, 2017

So, the first time you check a snapshot, if it doesn't exist, one is taken and written to a file that your tests use on subsequent runs. We would have trouble adding those snapshots to the project file, so they wouldn't be automatically added to source control for people using TFS, but git would not be a problem.

We would also have to decide how to serialize the snapshot and what data types it supports. The obvious choice would be JSON, but we don't want to take a dependency on an external library for JSON serialization. @lilasquared what would you propose?

@lilasquared
Author

@rprouse I will try and create an example to share.

@jnm2 I thought about the idea of it being an extension initially, but creating/updating the snapshot files might be tricky from an extension standpoint. Jest is kicked off from the command line, and when a test fails it gives you the option to update snapshots one by one or to bulk-update on the next run. I don't know how this type of functionality would work from within the NUnit runners and Visual Studio. Because of that, I don't know a good way to make this into an extension.

As for the serialization - I haven't put much thought into it, as JSON was the obvious choice, although I can see why having an external dependency might be an issue. It could be implemented to accept an already-serialized form of the data for the snapshot and then rely on the consumer to determine how they want to serialize (XML, JSON, a .ToString() method, etc.). I could then see an NUnit.Snapshot.Json extension existing which adds the JSON dependency and implements a .MatchesSnapshotJson() method or something of that nature.
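A sketch of the split described above: a serializer-agnostic core that only compares strings, with JSON layered on as a separate extension. All type and method names here are illustrative, and the extension assumes Newtonsoft.Json as its serializer:

```csharp
using System.IO;
using NUnit.Framework;
using Newtonsoft.Json;  // dependency lives only in the extension package

// Serializer-agnostic core: it only ever compares strings, so NUnit
// itself takes no JSON dependency.
public static class SnapshotAssert
{
    public static void Matches(string serialized, string snapshotPath)
    {
        if (!File.Exists(snapshotPath))
        {
            // First run: record the snapshot for subsequent runs.
            File.WriteAllText(snapshotPath, serialized);
            return;
        }

        Assert.That(serialized, Is.EqualTo(File.ReadAllText(snapshotPath)));
    }
}

// What a hypothetical NUnit.Snapshot.Json extension might layer on top.
public static class JsonSnapshotAssert
{
    public static void MatchesJson(object value, string snapshotPath)
    {
        string json = JsonConvert.SerializeObject(value, Formatting.Indented);
        SnapshotAssert.Matches(json, snapshotPath);
    }
}
```

The consumer stays in control of serialization; the core never needs to know whether the string came from JSON, XML, or a plain .ToString().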

@CharliePoole
Contributor

An example would really help here. I'm not even clear whether this is a feature for the framework, the engine, the runners or some combination. 😃

@lilasquared
Author

lilasquared commented Aug 29, 2017

@CharliePoole it may require a combination! The runners would need the ability to update the snapshots that were generated by the tests themselves.

It would be a good idea to check out the examples from Jest as well. I will try and come up with one that makes sense, but their examples and documentation are pretty good. One of the benefits is viewing the rendered result of a React component: when the component changes, you get a really nice visual from the snapshot showing exactly what is changing.

@CharliePoole
Contributor

Would it be correct to say that this only applies to interactive runners?

@lilasquared
Author

@CharliePoole if I understand your question then yes. I would not expect some automated system running tests to be updating the snapshots. The snapshots would be updated during development by the developer.

@CharliePoole
Contributor

@lilasquared In that case, maybe there are two features here: snapshot testing and snapshot updating.

@lilasquared
Author

It could be split that way, but it would be impossible to do snapshot testing without updating. The testing portion would be part of the framework and the updating would be part of the runner. If that's what you're talking about, then yeah, it's two features.

@CharliePoole
Contributor

Yes, that's what I meant. Divide and conquer. It's possible that some of the runner part could be in the engine, making runner support that much easier.

@lilasquared
Author

lilasquared commented Aug 29, 2017

@rprouse @CharliePoole I have created some super simple examples as well as a very naive implementation using a SnapshotConstraint that does some basic work to generate a snapshot. Check it out and let me know if you would like some more complex examples.

edit: and I put them here https://github.com/lilasquared/NUnitSnapshotExample

@CharliePoole
Contributor

@lilasquared Cool! It's a nice encapsulation of what we used to call the "Gold Master". Do people still say that? 😄

The technique obviously has three aspects.

  1. Comparing against the master. NUnit framework could do that via a constraint, as you have done.
  2. Saving the master if it doesn't already exist. That has to be done in the framework as well.
  3. Updating the master if the expected result has changed. That's the interactive part. It needs to be in a runner and that runner has two ways to go:
    3.1 Rerun the test, passing an argument that tells it to save the new master, even if it doesn't match.
    3.2 Save the new master directly, which requires it to know how the framework works and how it saves such masters.

My preference in the third step is 3.1, because it would allow the runner to work with different frameworks without knowing exactly how or where they save snapshots. NUnit already does this quite a bit, passing framework settings in the TestPackage that are honored by the NUnit 3 framework but ignored by any other.
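Aspects 1 and 2 could be sketched as a custom constraint following NUnit 3's Constraint/ConstraintResult pattern, with a flag that a runner could set to cover option 3.1. The snapshot-specific names and the .ToString() serialization are illustrative assumptions, not a committed design:

```csharp
using System.IO;
using NUnit.Framework.Constraints;

public class SnapshotConstraint : Constraint
{
    private readonly string _path;
    private readonly bool _update;  // a runner could set this for option 3.1

    public SnapshotConstraint(string path, bool update = false)
    {
        _path = path;
        _update = update;
        Description = $"matches snapshot '{path}'";
    }

    public override ConstraintResult ApplyTo<TActual>(TActual actual)
    {
        object boxed = actual;
        string serialized = boxed?.ToString() ?? "<null>";

        // Aspect 2 (save the master if missing) and 3.1 (overwrite on request).
        if (_update || !File.Exists(_path))
        {
            File.WriteAllText(_path, serialized);
            return new ConstraintResult(this, actual, true);
        }

        // Aspect 1: compare against the stored master.
        string expected = File.ReadAllText(_path);
        return new ConstraintResult(this, actual, serialized == expected);
    }
}
```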

@lilasquared
Author

lilasquared commented Aug 31, 2017

@CharliePoole Thanks for the feedback! I think for parts 1 and 2 I have a good idea of what needs to be done and how to do it. I've updated the examples with more robust and extensible functionality, and separated out how I imagine the JSON serialization might work as an extension. It would take me some time to figure out how to integrate it into NUnit.Framework and make it consistent with the design patterns in use there.
Part 3 is going to be tricky for me to do as well, unless someone more familiar with it has an interest in helping. One question: you mentioned the TestPackage settings, but I don't see how I can access those settings from the context of a constraint. I assume part 3.1 implies that the constraint, which is responsible for saving new snapshots, would also be responsible for updating them given the proper command-line argument from the runner.

We also talked about this as an extension, and that is essentially what I created in the example. It is functional; however, the snapshots have to be deleted manually each time and the test re-run to save new ones. I think integrating it into the runner somehow makes more sense.
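One plausible channel for the runner-to-constraint signal is NUnit 3's test parameters, which runners pass through the TestPackage and the framework exposes via TestContext.Parameters. Whether this is the right mechanism for snapshot updating is exactly the open question here; the setting name is a made-up example:

```csharp
using NUnit.Framework;

public static class SnapshotSettings
{
    // Test parameters can be supplied on the command line, e.g.:
    //   nunit3-console MyTests.dll --params=UpdateSnapshots=true
    // A constraint could consult this flag to decide whether to
    // overwrite an existing snapshot instead of comparing against it.
    public static bool UpdateRequested =>
        TestContext.Parameters.Get("UpdateSnapshots", "false") == "true";
}
```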

@jnm2
Contributor

jnm2 commented Sep 1, 2017

This is really pushing it but it would be cool if I had an option to store the snapshot in a C# file alongside code. The extension would have to take a dependency on Roslyn in order to update the snapshot. For now, a pipe dream. =)

@lilasquared
Author

lilasquared commented Sep 1, 2017

@jnm2 That is more or less how the Jest implementation works, but with a JavaScript file. JavaScript makes it super easy to do this and export the snapshots from the auto-generated file. It definitely would be super cool. It could be an extension much like the JSON extension that I have shown, which just implements its own ISnapshotCache.

@DarynHolmes

DarynHolmes commented Mar 15, 2018

Just noticed the links, will follow those...

@fgather

fgather commented Jan 19, 2019

Hi,
Since this ticket went stale, I recently added NUnit support to https://github.com/theramis/Snapper

@rprouse
Member

rprouse commented May 18, 2020

I am closing old Idea issues that have not had comments or made progress in several years. If anyone comes back with a compelling argument for these issues, we can reopen.

@rprouse rprouse closed this as completed May 18, 2020