file based integration tests #5067

Merged · @Geal merged 20 commits into dev from geal/file-based-tests on May 16, 2024

Conversation

@Geal (Contributor) commented May 3, 2024

This defines a way to write integration tests using JSON files, to avoid the long linking times we suffer when writing unit or integration tests inside the Router. Writing and refining a test plan is much faster and lends itself better to a highly iterative way of working.

The way it works right now:

  • you define a plan.json file that contains a series of actions (a hypothetical sketch follows this list)
  • actions available:
    • start a router with a configuration, a schema, and some mocked subgraphs
    • stop the router
    • load a new configuration
    • load a new schema
    • make a request and verify that we get the expected response
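For illustration, a plan.json could look like the sketch below. The action names ("Start", "Request", "ReloadSchema", "ReloadConfiguration", "Stop") and all field names here are hypothetical, made up to mirror the action list above; they are not the actual schema introduced by this PR (JSON has no comments, so the hedge lives in a "_comment" field):

{
  "_comment": "hypothetical sketch, not the actual plan.json schema from this PR",
  "actions": [
    {
      "type": "Start",
      "configuration_path": "configuration.yaml",
      "schema_path": "supergraph.graphql",
      "mocked_subgraphs": {
        "accounts": {
          "{ me { name } }": { "data": { "me": { "name": "Ada Lovelace" } } }
        }
      }
    },
    {
      "type": "Request",
      "request": { "query": "{ me { name } }" },
      "expected_response": { "data": { "me": { "name": "Ada Lovelace" } } }
    },
    { "type": "ReloadSchema", "schema_path": "supergraph_updated.graphql" },
    { "type": "ReloadConfiguration", "configuration_path": "configuration_updated.yaml" },
    { "type": "Stop" }
  ]
}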

This is currently an experiment (with all tests failing on purpose) that we should adapt to our needs now that we can see it works. We could very well start using it ASAP and evolve it as needed, as we've done with other testing systems.

Things to decide on:

  • we expect every test to be a folder under apollo-router/tests/samples. Do we want to support grouping under subfolders (per feature, etc.)?
  • this does not support Enterprise features for now. We could easily require an API key in the environment when we run our local tests (I think most of us already do?) and have it available in CI
  • for now this is done as an integration test under apollo-router/tests. I think this could work as a separate executable, usable outside of the Router's CI, for example to run a set of requests and responses for a customer in a non-public environment

Checklist

Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.

  • Changes are compatible¹
  • Documentation² completed
  • Performance impact assessed and acceptable
  • Tests added and passing³
    • Unit Tests
    • Integration Tests
    • Manual Tests

Exceptions

Note any exceptions here

Notes

Footnotes

  1. It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.

  2. Configuration is an important part of many changes. Where applicable please try to document configuration examples.

  3. Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.

github-actions bot commented May 3, 2024

@Geal, please consider creating a changeset entry in /.changesets/. These instructions describe the process and tooling.

@router-perf bot commented May 3, 2024

CI performance tests

  • step - Basic stress test that steps up the number of users over time
  • events_big_cap_high_rate_callback - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity using callback mode
  • large-request - Stress test with a 1 MB request payload
  • events - Stress test for events with a lot of users and deduplication ENABLED
  • xxlarge-request - Stress test with 100 MB request payload
  • events_without_dedup - Stress test for events with a lot of users and deduplication DISABLED
  • xlarge-request - Stress test with 10 MB request payload
  • step-jemalloc-tuning - Clone of the basic stress test for jemalloc tuning
  • events_callback - Stress test for events with a lot of users and deduplication ENABLED in callback mode
  • no-graphos - Basic stress test, no GraphOS.
  • reload - Reload test over a long period of time at a constant rate of users
  • events_big_cap_high_rate - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity
  • events_without_dedup_callback - Stress test for events with a lot of users and deduplication DISABLED using callback mode
  • const - Basic stress test that runs with a constant number of users

@Geal changed the title from "experiment: file based tests" to "file based integration tests" on May 7, 2024
@lrlna (Member) commented May 8, 2024

Love this!

I do like the idea of grouping things by feature and/or functionality. I'd love to see a separation of ok and err scenarios too, so we can easily add the cases we run into to either bucket. Something like this:

apollo-router/
├─ tests/
│  ├─ query_planner_cache/
│  │  ├─ ok/
│  │  │  ├─ README.md
│  │  │  ├─ configuration.yaml
│  │  │  ├─ supergraph.graphql
│  │  │  ├─ plan.json
│  │  ├─ err/
│  │  │  ├─ README.md
│  │  │  ├─ configuration.yaml
│  │  │  ├─ supergraph.graphql
│  │  │  ├─ plan.json
│  ├─ subscriptions/

Maybe in the future we can take it even further, and use snapshots to also store possible responses. And each "scenario" could have files for operations and responses that we can add and tweak as the router changes.

@Geal (Contributor, Author) commented May 13, 2024

Having subfolders to separate different kinds of tests sounds nice.

@Geal marked this pull request as ready for review May 14, 2024 16:23
@Geal (Contributor, Author) commented May 14, 2024

I added subfolder support, because adding other files like the README interacted weirdly with the test runner, and subfolder-based test names read very nicely in the runner output:

        FAIL [   0.256s] apollo-router::samples /basic/query1
   Canceling due to test failure: 1 test still running
        PASS [   2.575s] apollo-router::samples /basic/query2
------------
     Summary [   2.576s] 2 tests run: 1 passed, 1 failed, 0 skipped

I added a README.md file to explain what to expect and how to write tests. The tests are now passing (I made them explicitly fail before, to check the output locally and in CI), so this is good to review and merge.

@o0Ignition0o (Contributor) left a comment

Great start! Love it!

@Geal enabled auto-merge (squash) May 16, 2024 09:24
@Geal merged commit 10a76bd into dev May 16, 2024 · 13 of 14 checks passed
@Geal deleted the geal/file-based-tests branch May 16, 2024 17:20