E2e tests overhaul #2024 (Draft)

wants to merge 1 commit into base: v1.x-2022-07

Conversation

@cartogram (Contributor) commented Aug 22, 2022

This PR suggests a new approach to our e2e tests for the Hydrogen framework and showcases a few basic test suites. This is a proof of concept; I want feedback before devoting too much effort to it. While the broad characteristics of this overhaul are defined here and outlined below, more methods and functions will be added as we move all of the e2e tests over.

Dynamic fixtures

Each app runs in an isolated sandbox that is created/removed before/after each test is run. This is different from our current setup, where all fixtures are checked into git and live in the packages/playground directory.

[Screenshots: before (static fixtures checked into packages/playground) vs. after (dynamic fixtures generated per test)]

Our current setup has been problematic because:

  • The packages folder should only contain actual packages from our monorepo, and the additional files add unnecessary bloat; more files tend to lead to more confusion when trying to understand a single e2e test. Dynamic fixtures solve this because these files will not exist in the repo at all unless a test is running or the {persist: true} config is given (which is for debugging only).
  • It is almost impossible to know instinctively which e2e tests correspond to which fixture. Dynamic fixtures solve this because only the relevant fixtures are generated, in the same folder as the tests that use them.

Generating the fixtures dynamically also follows the pattern of other frameworks (Next.js, Remix, Astro, etc.).
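
To make the idea concrete, here is a rough sketch of what a dynamically generated fixture could look like inside a test file. The createSandbox helper, its options, and the Sandbox type are hypothetical names used only for illustration; they are not the actual API in this PR.

import {describe, it, expect, beforeAll, afterAll} from 'vitest';
// Hypothetical helper for illustration only; the real API in this PR may differ.
import {createSandbox, type Sandbox} from './test-utils';

describe('hello-world fixture', () => {
  let sandbox: Sandbox;

  beforeAll(async () => {
    // Fixture files are written to a temporary directory before the suite runs
    // and removed afterwards, unless {persist: true} is set for debugging.
    sandbox = await createSandbox({
      persist: false,
      files: {
        'src/routes/index.server.jsx': `
          export default function Index() {
            return <h1>Hello world</h1>;
          }
        `,
      },
    });
  });

  afterAll(async () => {
    await sandbox.destroy();
  });

  it('renders the index route', async () => {
    const response = await fetch(sandbox.url('/'));
    expect(await response.text()).toContain('Hello world');
  });
});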

In-source testing for routes and components

Extending the point above: many of our e2e tests are route- or component-specific, and now we can define tests directly beneath the code they are testing. This is different from our current setup, where all the tests live in a single suite for each fixture and all the routes are defined within that fixture's routes folder.

[Screenshot: a test defined in the same file as the route component it covers]

Our current setup has been problematic because:

  • It is difficult to understand what code an individual test is referring to (there are dozens of routes). Defining a test in the same file as the source code makes this a no-brainer.

Additionally, the source and its tests can even share variables. This also follows a trend in Remix's Route Module API, where a route-level component has multiple exports (meta, links, etc.); a tests export falls right in line with this way of colocating everything in a top-level route file.
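
As a rough sketch (assuming a tests export and a small injected test context, neither of which is the confirmed final API), a route file such as src/routes/index.server.jsx with a colocated test might look like this:

export default function Index() {
  return <h1>Hello world</h1>;
}

// Hypothetical colocated test export; the injected helpers are assumptions.
export const tests = ({it, expect, fetch}) => {
  it('renders the heading', async () => {
    const response = await fetch('/');
    expect(await response.text()).toContain('Hello world');
  });
};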

Contained setup and teardown

We export our own describe function that defines each suite and takes care of the nitty-gritty setup/teardown of each test. This is different from our current setup, where we reference at least 4 different files for building code, starting servers, and other tasks required for each e2e test suite to run properly.

The files below are not easy to reason about (this has come up in conversations with many on the team). The changes in this PR would remove all of them.

[Screenshot: the shared setup/teardown utility files that this PR would remove]

Our current setup has been problematic because:

  • It is very difficult to follow how the full environment is constructed. By keeping this logic contained in a single function block that sets up the different primitives and injects them into the suite, we can more easily understand and debug the full e2e environment.

The one drawback to this approach is that we are essentially "wrapping" vitest primitives, and it may become unclear where that line is drawn. I would suggest we go all in on this approach and essentially re-export * from vitest inside our test framework so that a developer doesn't need to think about it. We could consider a "hydrogen/testing" package that sets this all up; a user test might look like:

import {describe, it, vi, beforeEach} from '@shopify/hydrogen-testing';

describe('...', () => {
  // ...
});
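As a sketch of that idea (all names below are illustrative, not the package's actual internals), the entry point could re-export vitest wholesale and layer a sandbox-aware describe on top:

import {describe as vitestDescribe, beforeAll, afterAll} from 'vitest';

export * from 'vitest';

// Hypothetical shape of what gets injected into each suite.
interface SuiteContext {
  url: (path: string) => string;
  teardown: () => Promise<void>;
}

// Hypothetical helper: builds the fixture and starts the relevant server(s).
declare function setUpEnvironment(): Promise<SuiteContext>;

// Our describe shadows vitest's and owns the full setup/teardown lifecycle.
export function describe(name: string, suite: (ctx: SuiteContext) => void) {
  vitestDescribe(name, () => {
    const ctx = {} as SuiteContext;

    beforeAll(async () => {
      Object.assign(ctx, await setUpEnvironment());
    });

    afterAll(async () => {
      await ctx.teardown();
    });

    suite(ctx);
  });
}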

Support for testing in multiple environments

Each suite runs in 3 environments: Node development using a Vite dev server, Node production using our platform entry, and Worker production using our platform entry and the worker runtime. We can opt out of any of these at the suite level. This is different from our current setup, where we pass a config into each test and use conditional blocks to return early if we don't want a test to run in a specific environment.

Our current setup has been problematic because:

  • Having control flow inside of a test is a bad practice.
  • Passing a config into a separate shared file of test cases is indirect and makes tests harder to follow.

[Screenshot: conditional environment checks inside the current shared test cases]

The conditional logic above can be omitted if we can declaratively tell a test suite how and where to run, as in the sketch below.
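
For example, a suite could declare the environments it supports up front. The environments option, its values, and this describe signature are hypothetical names used only to illustrate the idea:

import {describe} from '@shopify/hydrogen-testing';

// Runs every test in the suite once per listed environment; 'worker-prod'
// is simply omitted here instead of being skipped with an early return.
describe('streaming SSR', {environments: ['node-dev', 'node-prod']}, (ctx) => {
  // ...tests use ctx (server URL, etc.) for whichever environment is active
});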

Vitest

We have been moving away from Jest entirely in our repo, in favour of Vitest. There is not much to add on this point because it is a hard requirement for Vite 3, among other reasons (ESM support, etc.).

cc @pepicrft @lemonmade @heimidal

Running these new tests locally

Check out this PR and run yarn test-e2e-new.

[Screenshot: terminal output of yarn test-e2e-new]

@cartogram cartogram requested a review from a team August 22, 2022 17:34
@@ -0,0 +1,639 @@
import {paramCase} from 'change-case';
@cartogram (PR author) commented:

I've added everything in a single file for easier reviewing, but I think it should eventually be broken up.

}

async function createWorkerProdServer({root, port}: ServerConfig) {
const mf = new Miniflare({
@cartogram (PR author) commented:

Will replace with mini-oxygen in a follow-up.

@frehner (Contributor) left a comment:

Some thoughts, but not strongly held.


expect(text).toBe('Hello world');
});
};
Contributor:

Seeing this now in isolation - I like the colocation of component and test, but I don't like how it feels a little unclear (and magical?) how things are getting rendered and where it's coming from.

For example, it's not very clear that the default export above is getting rendered, or even how it's getting rendered.

Maybe this is an area where I wouldn't mind having to be more explicit, even if it means writing more code?

Contributor:

I'm not sure I follow here; can you show an example of how you would prefer it to work?
The way I understand this, we render route X and we include the tests for route X in the same file... what's missing?

@frehner (Contributor) commented Aug 24, 2022:

I guess I'm just thinking about other test frameworks, where you call something like render(<Home/>), and was thinking that I like the explicitness of that.

Somewhat related: in the current situation, how would you change the props passed to <Home/> to test different code paths? Is that possible?

Contributor:

I don't think that's possible, since these are e2e tests and it depends on what the browser sends 🤔 -- might be wrong though.

</>
);
}
`
Contributor:

It kind of stinks that these have to be written as strings, as you lose features such as linting, formatting, typechecking, etc. (especially as part of the CI, which is helpful for keeping external contributions cleaner too)

I guess you don't have to create them as strings, though, right? They can be files in the filesystem? These were just done for ease of use?

@cartogram (PR author) commented Aug 23, 2022:

Yeah, these can be files. I also think there is probably a tagged template literal (similar to gql) that may solve some of the issues you identified; either one already exists or we could create something fairly easily.
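
For reference, a gql-style tag can be close to a no-op at runtime; its value is mostly that editors and formatters can hook into it for highlighting and formatting inside the string fixture. A minimal sketch (the jsx name is just an example, not an existing package):

// Identity tag: concatenates the template back together unchanged.
export function jsx(strings: TemplateStringsArray, ...values: unknown[]): string {
  return strings.reduce(
    (out, chunk, i) => out + chunk + (i < values.length ? String(values[i]) : ''),
    '',
  );
}

const route = jsx`
  export default function Index() {
    return <h1>Hello world</h1>;
  }
`;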

@frandiox (Contributor) commented Aug 24, 2022:

This is my main question as well... what's the benefit of having this as strings over having files? Do we have an example or use-case where this approach is clearly better? 🤔

Maybe it's for testing multiple routes, so that we can colocate tests with the content of the files?

@frandiox (Contributor) left a comment:

Great job @cartogram! I think colocating tests in routes will make 90% of the tests easier to follow.

I have some questions about the whole meta framework:

  • It looks like this initializes the SandboxInstance once per describe and each environment, and the initialization is basically building or creating a dev server. I guess this means we need to be cautious when adding new describes because it can change the performance of the tests, and we should rather use it directly as much as possible. Is this correct?
  • Related to the point above, what would be the "recommended" way to organize tests now (maximizing perf, I guess)? At first glance, it's not clear to me because now we have much more flexibility than before. For example, do we have a project test/css that has 1 describe and includes tests for everything related to CSS, including Pure CSS and CSS Modules? Should those two be split in different describes or in different projects?
  • When do we write tests directly in test files instead of routes? When the tests are related to more than 1 single route? Any other use-case?
  • I guess this is unknown yet, but do you have any idea how this compares with the previous approach in terms of performance? Before, we had 2 big builds and 1 dev process (per project). I guess it's different now, but perhaps the builds are faster?
  • Is it possible to run these test projects locally for development? A common way to debug tests before was basically running the projects in dev, but I'm not sure how to do that now that most of the files are inlined strings in test files.
  • Do you think that it will be possible at some point to run each describe/environment in a different process?

Comment on lines +159 to +160
react: '^18.2.0',
'react-dom': '^18.2.0',
Contributor:

I guess we could grab these from Hydrogen's package.json#peerDependencies?
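
A rough sketch of that suggestion, assuming the fixture generator runs in Node and that the relative path to Hydrogen's package.json is resolved elsewhere (the path below is illustrative):

import {readFileSync} from 'node:fs';

// Read the React versions from Hydrogen's own peerDependencies instead of hardcoding them.
const {peerDependencies} = JSON.parse(
  readFileSync(new URL('../package.json', import.meta.url), 'utf8'),
);

const dependencies = {
  react: peerDependencies.react,
  'react-dom': peerDependencies['react-dom'],
};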

@cartogram (PR author) commented Aug 29, 2022

Thanks for all the great questions, @frandiox; some answers below:

It looks like this initializes the SandboxInstance once per describe and each environment, and the initialization is basically building or creating a dev server. I guess this means we need to be cautious when adding new describes because it can change the performance of the tests, and we should rather use it directly as much as possible. Is this correct?

Yes, that's correct. There will be a balance between when we share a test suite (for performance) and when we establish a new one (for isolation). I don't have an answer to this yet; it will have to come out in practice. But it's important to remember that e2e testing is unlike unit testing in this way.

Related to the point above, what would be the "recommended" way to organize tests now (maximizing perf, I guess)? At first glance, it's not clear to me because now we have much more flexibility than before. For example, do we have a project test/css that has 1 describe and includes tests for everything related to CSS, including Pure CSS and CSS Modules? Should those two be split in different describes or in different projects?

Yes, this is a great observation. I've considered a few options to make this easy (i.e., new Hydrogen config === new test suite), but haven't come up with a clear answer just yet.

When do we write tests directly in test files instead of routes? When the tests are related to more than 1 single route? Any other use-case?

I think it would be great if we did as much as possible within the route files since that is a nice API, and I am trying to come up with a way to support this across the board (inclusive of multiple routes). Again, not a clear answer but it is an improvement I would like to make to this whole framework design.

Do you think that it will be possible at some point to run each describe/environment in a different process?

I do think this should be possible, and it would be a big performance win.

@frandiox (Contributor):

@cartogram Would it make sense to optionally pass the hydrogen.config.js path to describe? Or perhaps we should just have different projects when testing things like "async config"?

@cartogram (PR author):

@cartogram Would it make sense to optionally pass the hydrogen.config.js path to describe? Or perhaps we should just have different projects when testing things like "async config"?

Personally, I think that given this model, things like async config would be a different fixture.
