[RFC] Grabbing race artifacts programmatically #10
Comments
Interesting thought, I like it; it could be useful for quite a few use cases. However, this syntax is invalid, since a promise fulfillment callback can receive only one argument, so maybe it should receive a result object with `report` and `trace` as properties.
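To make the single-argument point concrete, here is a minimal standalone snippet (plain JavaScript, not Speed Racer code) showing that a second fulfillment parameter is never populated:

```javascript
// A fulfillment callback receives exactly one value; a second parameter
// silently stays undefined, which is why report and trace must be
// bundled into one result object.
Promise.resolve({ report: 'R', trace: 'T' }).then((report, trace) => {
  console.log(report); // { report: 'R', trace: 'T' }
  console.log(trace);  // undefined
});
```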
Yep, my bad. So it should be:

```js
describe('Prime number finder', () => {
  it('should run under 10s for a 6-digit prime', () => {
    return race('prime race', () => prime(6))
      .then(({ report, trace }) => {
        assert(report.profiling.functions.prime < 10000);
      });
  });
});
```
@fzaninotto What you are asking for (perf testing) is what's coming next: Speed Racer will create snapshots of your reports, compare them, and tell you what's slower/same/better. Do you think that would fulfil your needs? To answer your question: for now, SR is aimed at being a CLI tool, but
I'm not a fan of snapshot testing. It's like saying: "I don't know precisely why, but it seemed to work in the past." I prefer giving precise conditions (like "it should run under 10s" in my example), which are also meaningful when dumped as an error. Besides, snapshot testing forces the developer to commit large data files (the results of previous tests) to the code repository, just for tests... Not my cup of tea.
@fzaninotto Well, that's a point of view on general-purpose snapshot testing. In the SR context, the way I see it, a snapshot is more like a reference than a source of truth: "it should be around this value". I think it makes sense in perf testing, since a perf test is like a regression test: you want to make sure new features / updates do not degrade performance. And the way to test that is... always the same. So instead of having to write imperative code again and again, you take a snapshot as a reference and let SR test it for you. That said, if you want to go further, you could do it in 2 phases:
Currently, `race` creates two artifacts (a trace and a report) as files, but it's not possible to associate the race and the artifacts programmatically.

In order to automate perf testing (mostly based on reports), I'd love for the artifacts to be passed as the result of the `race` function (in a Promise). Something like:

Note: in this use case, the first argument of the `race()` function is not useful, so my 2c would be to move it to second position and make it optional. Also, in this use case, I don't need the artifacts as files.

What do you think of this use case / syntax?