
RFC Support async customHandler #3

Open · jnak opened this issue Dec 27, 2018 · 17 comments

jnak commented Dec 27, 2018

Hello,

I've started using this library in our tests and I really like it so far. My main concern at this point is that we have to repeatedly mock data that is non-essential for a given test case. It is quite verbose, and it's challenging to keep all the nested fragment queries in sync with all the test cases across different files.

I'd like to be able to only specify core data for a given test and generate realistic mocks for the rest of the data. You can read more about this approach here.

To achieve this, I was thinking of adding support for an async customHandler function so we can potentially call a GraphQL schema resolver function. Do you think that's the right approach? If so, I'd be very happy to send a PR.

Thanks!
J

zth (Owner) commented Dec 27, 2018

Hi,

I think that sounds great! It'd be nice to integrate this with as little intervention as possible, as you're also onto in your comment. Building on your thought about customHandler, one idea is to add a new prop on queryMock, autoMock, and then resolve the mock, if wanted, here:

const nockReturnVal = queryMockConfig.customHandler
? queryMockConfig.customHandler(this.req)
: [queryMockConfig.status || 200, serverResponse];

That'd work I think, provided that the mock can resolve synchronously.

What do you think about that? A PR would be great, sure!

jnak (Author) commented Dec 27, 2018

Unfortunately that would not work because the function needs to resolve asynchronously. This is because I'm going to be calling an embedded graphql server that returns a promise:

graphql(mockSchema, query).then((result) =>
  console.log('Got result', result));

From a code organization perspective, I think adding async support to customHandler is going to be as disruptive as adding a new autoMock async property. Assuming that's true, my preference would be to add async support to customHandler so we don't increase the API surface for a feature that we haven't experimented with yet. Adding async support to customHandler should also open up other interesting use cases that involve async logic. But that's only my 2 cents since you're the maintainer of this library.
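
To illustrate, here's a rough sketch of how the snippet you quoted could look with a promise-returning customHandler (just a sketch, not a proposal for the final implementation; since nock accepts a promise from its reply function, the surrounding handler would simply become async):

// Rough sketch only: awaiting customHandler lets it return either a plain
// value or a Promise resolving to [status, body].
const nockReturnVal = queryMockConfig.customHandler
  ? await queryMockConfig.customHandler(this.req)
  : [queryMockConfig.status || 200, serverResponse];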

You can count on me to do my best to refactor getNockRequestHandlerFn so it's easy to read, maintain and extend. I'll probably send an early PR to see if you're comfortable with the overall approach.

Btw, would you prefer the async function to be a function that takes a node-style error-first callback, or a function that returns a promise?

zth (Owner) commented Dec 28, 2018

Hey again,

Right! I do agree with you about making customHandler async, and I think that should be done as well, but I think automocking is a great feature and I'd like to experiment with adding it to core to make it easy to use. I pushed a branch with a few minor changes showing where I thought it'd make sense to integrate automocking:

d2fc003

What do you think about that approach? Please feel free to play around with that. Like I said, we'll make customHandler async too eventually of course, that's a great idea.

I guess we'll want to add a way of adding custom mock resolvers as well when wanted, which I saw graphql-tools supported.
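
For reference, I'm thinking of roughly the kind of per-type mock resolvers graphql-tools lets you attach to a schema (a quick sketch from memory, so the details may be off):

import { makeExecutableSchema, addMockFunctionsToSchema } from 'graphql-tools';

// schemaDefinition stands for whatever SDL / typeDefs string you already have.
const schema = makeExecutableSchema({ typeDefs: schemaDefinition });

// Per-type mock resolvers; anything not covered falls back to the defaults.
addMockFunctionsToSchema({
  schema,
  mocks: {
    String: () => 'Hello World',
    Int: () => 42,
  },
});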

jnak (Author) commented Dec 28, 2018

Sure. I completely agree automocking would be a great feature. I'm not sure yet what the right API would look like for it, since I haven't had a chance to experiment with it yet.

Here are my current assumptions of what I think I need in order to start using this feature:

  1. Being able to define automock resolvers shared across all mocks
  2. Being able to extend the shared automock resolver in a given module or test.
  3. Being able to return a specific set of data for a specific query that gets merged in with the mock resolvers above.

Some other considerations / open questions I have:

  • Do we want to couple graphql-query-test-mock with graphql-tools?
  • What does it mean to extend a resolver? Do they somehow get merged at the field level? Does one replace the other?
  • How do we actually merge query data with a mock resolver? The gist used in the article I shared is one way to go about it. But it seems to have limitations.
  • How do we control / configure the error payloads returned when some fields are set to null?

I would love to hear your thoughts on that before we go more into the specifics of the implementation.

Re customHandler, one benefit of adding async support now is that we could both start experimenting with this feature without having to make any of those decisions yet. For example, I could start doing this in my tests:

mock.mockQuery({
  ...
  // automock is a function defined by me, so I can freely experiment with different APIs
  customHandler: automock({
    Query: () => ({
      todoItems: [
        { completed: true },
        { completed: false },
      ]
    })
  })
})

My automock would be quite simple at first. I would probably use the same logic as in the article I shared above since it supports my requirements (1, 2 and 3). But I'm sure it would evolve after a few weeks of experimentation. We could then regroup here and share the patterns that worked or didn't work for us.

What do you think?

zth (Owner) commented Dec 30, 2018

Hi again!

Sorry for not getting back to you sooner, holidays ;). I think making customHandler async as a start is very reasonable, so I've done that, pushed it to master, and released 0.10.0 with that feature so we can both start experimenting.

customHandler now receives the actual query text, operation name and variables, in addition to the full nock request. I figured that'd be enough to get started experimenting, right?
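
So roughly something like this should be possible now (sketch only, written off the top of my head and untested):

import { graphql } from 'graphql';

queryMock.mockQuery({
  name: 'SomeQuery',
  // customHandler may now return a promise. Besides the raw nock request it
  // also gets the parsed query text, operation name and variables.
  customHandler: async (req, { query, variables }) => {
    // mockSchema stands for whatever (auto)mocked schema you've prepared.
    const result = await graphql(mockSchema, query, undefined, undefined, variables);
    return [200, result];
  },
});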

I look forward to seeing what you come up with in your experiments! Then we can spend some more time thinking about how to scale this approach, how it'd make the most sense, and if it needs to be implemented in another way.

jnak (Author) commented Jan 3, 2019

That's great! I've already started using your changes and it's been helpful to have access to the query text and the variables.

Btw it looks very promising. I was able to shrink a test file by 70% without losing any confidence in my tests. The mergeResolver script works ok for now but it could be better. I will ping this thread by the end of January to share my findings.

Happy new year!

zth (Owner) commented Feb 2, 2019

Hey @jnak and a happy new year even though I missed it by a month ;)! I have been experimenting with this a bit lately, and I thought it'd be interesting to regroup and see where we're at. How has your experimenting gone? I was quite inspired by the things you've outlined here for the mocking, so I've built on those ideas when testing things.

I'll explain what I've tried by this snippet of code:

      
The query:

query SomeQuery {
  viewer {
    id
    firstName
    lastName
    allDimensions(first: 2) {
      pageInfo {
        hasNextPage
        endCursor
      }
      edges {
        cursor
        node {
          id
          name
          active
        }
      }
    }
  }
}

The mock:

queryMock.mockQuery(
  autoMock({
    name: 'SomeQuery',
    data: {
      viewer: {
        firstName: 'Gabriel',
        allDimensions: {
          pageInfo: {
            endCursor: 'some-end-cursor'
          },
          $alterNode: allDimensions => {
            mockHelpers.changeConnectionNode(allDimensions, 0, {
              name: 'First node',
              active: false
            });
          }
        }
      }
    }
  })
)

The resulting response:

{
  "viewer": {
    "id": "VXNlcjoyOTQ5OTIxOTkwNjU3NDQy",
    "firstName": "Gabriel",
    "lastName": "Hello World",
    "allDimensions": {
      "pageInfo": {
        "hasNextPage": false,
        "endCursor": "some-end-cursor"
      },
      "edges": [
        {
          "cursor": "Hello World",
          "node": {
            "id": "RGltZW5zaW9uOjc2NzQwOTc3MDQxMDM4Mg==",
            "name": "First node",
            "active": false
          }
        },
        {
          "cursor": "Hello World",
          "node": {
            "id": "RGltZW5zaW9uOjgxOTk3NDEyODk3ODU2MzI=",
            "name": "Hello World",
            "active": true
          }
        }
      ]
    }
  }
}

This resolves everything in SomeQuery with automatically mocked data, except for firstName on viewer and endCursor on pageInfo, which we set manually. So far so good: that's a simple deep merge of the mocked data and the custom data. The problems start to appear when you want to change things in lists, or other things that cannot easily be merged through a simple deep merge.

As you can see, I'm experimenting with an $alterNode prop that takes a function that lets you alter that entire object (and whatever's nested on it) imperatively. This lets us do things like changing just a few of the automocked props on a connection node without having to re-mock or provide the entire mock ourselves. Together with a few helpers for things like connections (like the changeConnectionNode helper in the example above), I think it could work.
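
The connection helper itself is nothing fancy; roughly something along these lines (a simplified sketch, not the exact implementation):

// Simplified sketch of the idea behind changeConnectionNode: patch the node at
// a given edge index in a Relay-style connection, leaving the rest of the
// automocked data untouched.
function changeConnectionNode(connection, index, values) {
  const edge = connection.edges[index];
  if (edge && edge.node) {
    Object.assign(edge.node, values);
  }
}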

What are your thoughts on something like this? Is it close to what you've tried? Naturally there are quite a few things left to solve (I've basically had to fork and alter addMockFunctionsToSchema from graphql-tools quite a lot for this, plus a few other things I haven't shown yet), but it might be a start.

jnak (Author) commented Feb 9, 2019

Hey @zth,

Apologies for being silent this week. I wanted to spend more time experimenting with auto-mocking before replying.

I agree that there are still things to be figured out regarding merging arrays. In your example, it seems that you are able to match the size of the edges array to the first parameter of allDimensions. How are you able to do this?

On my end, I currently solved this issue by tweaking MockList from graphql-tools. The mock function is passed the arrayIndex via the args parameter:

queryMock.mockQuery({
  name: 'SomeQuery',
  customHandler: autoMock({
    viewer: {
      firstName: 'Gabriel',
      allDimensions: (obj, { first }) => ({
        pageInfo: {
          endCursor: 'some-end-cursor'
        },
        edges: new MockList(first, (obj, { arrayIndex }) => ({
          name: arrayIndex === 0 ? 'First node' : undefined,
          active: arrayIndex === 0,
        }))
      })
    }
  })
})

The main benefit of this pattern is that we keep the mocking API consistent, hence simple. All functions are resolve functions.

In case we don't need to make edges size parameterizable, it's even simpler:

queryMock.mockQuery({
  name: 'SomeQuery',
  customHandler: autoMock({
    viewer: {
      firstName: 'Gabriel',
      allDimensions: {
        pageInfo: {
          endCursor: 'some-end-cursor'
        },
        edges: [
          {
            name: 'First node',
            active: true,
          },
          {}
        ]
      }
    }
  })
})

Objects in arrays are merged with the base mock resolvers like any other regular types. You only specify the properties you need and the rest will be provided by the base mock.

Btw I ended up rebuilding a mergeResolverObjects function because the mergeResolver implementation from the article did not do a deep merge of the resolver maps.
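
Conceptually it's just a recursive merge where leaves (including resolver functions) from the override win and plain objects get merged; roughly like this (simplified sketch, the real implementation handles a few more cases):

// Simplified sketch of a deep merge of two resolver/mock maps. Leaf values
// (including functions) from `override` win; plain objects are merged recursively.
function mergeResolverObjects(base, override) {
  const result = { ...base };
  for (const key of Object.keys(override)) {
    const baseValue = result[key];
    const overrideValue = override[key];
    const bothPlainObjects =
      baseValue != null &&
      overrideValue != null &&
      typeof baseValue === 'object' &&
      typeof overrideValue === 'object' &&
      !Array.isArray(baseValue) &&
      !Array.isArray(overrideValue);
    result[key] = bothPlainObjects
      ? mergeResolverObjects(baseValue, overrideValue)
      : overrideValue;
  }
  return result;
}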

So far I've been trying to tweak addMockFunctionsToSchema as little as possible because I didn't feel like maintaining a fork, and I don't think they will be interested in merging in large changes. That being said, I'm debating whether I should wrap their implementation or start with a clean slate, because I've found a few minor issues, inconsistencies and footguns in their API / implementation. I'm hoping to decide this early next week.

zth (Owner) commented Feb 13, 2019

Hey @jnak!

That's great! If we can deep merge the objects even in arrays (and not replace them), I like your idea a whole lot better than that awkward $alterNode.

I actually wasn't able to make it return the correct number of items in the list; it's just a coincidence that the query says 2 and I return 2. I worked on that some, but I needed to change quite a lot in addMockFunctionsToSchema and didn't really have time to continue.

I look forward to hearing what you come up with! I think this feature is really great. Maintaining real mocks at scale really isn't feasible, sadly, so I'm very interested in seeing this work out. I'll see when I get the time to continue experimenting myself too.

jnak (Author) commented Feb 25, 2019

Hey @zth,

After careful consideration I decided to rebuild the mocking functionality from scratch. I'm planning to write a detailed explanation of why I didn't use graphql-tools for this.

For now, I can share the main design decisions behind my implementation:

  • Make it completely obvious how to write a mock function
  • Encourage teams to write sensible base mocks
  • Allow deep merging of mock values, objects and lists in a completely unambiguous manner
  • Override the base mocks with a straightforward query object
  • Raise helpful validation errors when writing tests

I haven't written documentation yet so for now you can look at the 40 or so test cases. I tried to cover all the different functionalities of the library. There are also detailed flow types that you might find useful.

The merging logic is the following:

  • Deep merge the mockQuery object with the base mock for the Query type
  • Deep merge each nested object in the result with the base mock for its type
  • Continue this recursively down the query tree

We're going to be dogfooding this internally at my company from my personal GitHub branch. Once we feel comfortable with it, we'll probably publish it as a separate npm package in case other people want to use it with a different testing library.

I realize some of my decisions are quite opinionated and might not be a good fit for everyone. That being said, I think it's a big step forward from graphql-tools for our use case, and I would love to have your feedback on it.

You can start using it by wrapping mockServer like this:

const server = mockServer(schemaDefinition, baseMocks);

...

function autoMock(queryMock) {
  return async function autoMockCustomHandler(request, config) {
    const results = await server(config.query, config.variables, queryMock);
    return [200, results];
  };
}

...

mock.mockQuery({
  ...
  customHandler: autoMock({...})
})

Here is the link to my branch.

Let me know what you think!

zth (Owner) commented Mar 6, 2019

I think this sounds just great, and rewriting the mocking functionality is the right way to go. I haven't had time to test it yet, but I very much look forward to it. I'll get back to you here as soon as I do.

Great work, looking forward to continuing this!

jnak (Author) commented Mar 27, 2019

Quick update. I've spent a couple more days polishing the library, fixing some edge cases, adding docs and setting up a new repo for it. I'll make the repo public once my company's legal department figures out the license (most likely MIT). Hopefully early next week.

zth (Owner) commented Mar 29, 2019

Sounds great! I haven't had a chance to test your branch properly yet, unfortunately, but I look forward to testing the lib.

zth (Owner) commented May 12, 2019

Hey @jnak! Just checking in, did you move forward with the lib?

jnak (Author) commented May 12, 2019

Hey @zth, yes, my company finally figured out the licenses for open source projects. I was planning to publish it and add proper documentation next week, but I just made it public on GitHub (repo) in case you want to take a sneak peek today. There are no docs yet, but the tests are pretty thorough. Let me know if you run into any issues.

jnak (Author) commented May 17, 2019

Just a heads up that I pushed the library to npm and that I started adding some documentation. The doc is still pretty sparse but should be enough to get started. Cheers.

zth (Owner) commented May 22, 2019

Hey @jnak! This looks really cool. Let's continue the mocking-related discussion in your repo.
