
[RFC] Grabbing race artifacts programmatically #10

Open
fzaninotto opened this issue May 15, 2017 · 5 comments

@fzaninotto

Currently, race creates two artifacts (a trace and a report) as files, but it's not possible to associate the race and the artifacts programmatically.

In order to automate perf testing (mostly based on reports), I'd love for the artifacts to be passed as the result of the race function (in a Promise). Something like:

describe('Prime number finder', () => {
  it('should run under 10s for a 6 digits prime', () => {
    return race('prime race', () => prime(6))
        .then((report, trace) => {
            assert(report.profiling.functions.prime < 10000)
        })
  });
}) 

Note: in this use case, the first argument of the race() function is not useful, so my 2c would be to move it to the second position and make it optional. Also, in this use case, I don't need the artifacts as files.

What do you think of this use case / syntax?

@lneicelis

Interesting thought, I like that; it could be useful for quite a few use cases. However, this syntax is invalid, since a promise fulfillment callback can receive only one argument. Maybe it should receive a result object with report and trace as properties instead.

@fzaninotto
Author

Yep, my bad. So it should be:

describe('Prime number finder', () => {
  it('should run under 10s for a 6 digits prime', () => {
    return race('prime race', () => prime(6))
        .then(({ report, trace }) => {
            assert(report.profiling.functions.prime < 10000)
        })
  });
}) 

@ngryman
Collaborator

ngryman commented May 15, 2017

@fzaninotto What you are asking (perf testing) is what's coming next:

Speed Racer will create snapshots of your reports, compare them and tell you what's slower/same/better. Do you think it would fulfil your needs?

To answer your question: for now SR is meant to be a CLI tool, but 0.4.0 will expose an API so you can customize things the way you want :)

@fzaninotto
Author

I'm not a fan of snapshot testing. It's like saying: "I don't know precisely why, but it seemed to work in the past". I prefer giving precise conditions (like "it should run under 10s" in my example), which are also meaningful when dumped as an error message. Besides, snapshot testing forces the developer to commit large data files (the results of previous tests) to the code repository, just for tests... Not my cup of tea.

@ngryman
Collaborator

ngryman commented May 15, 2017

@fzaninotto Well, that's a point of view on general-purpose snapshot testing.

In SR context, the way I see it is more like a reference than a source of truth: "it should be around this value" (aka. report.profiling.functions.prime < 10000). Perhaps the term snapshot is misleading...

I think it makes sense for perf testing, as a perf test is like a regression test: you want to make sure new features / updates do not degrade performance. And the way to test it is ... always the same. So instead of writing the same imperative code again and again, you take a snapshot as a reference and let SR test it for you.

That said, if you want to go further, you could do it in 2 phases:

  • run SR and output reports in a temporary directory.
  • use your favorite test runner to load reports and test them with an assertion library.

0.4.0 will offer an API that lets you run races from anywhere, so it will be even simpler and close to what you are proposing.
