
generate benchmark for functions #73

Open
mindlesslcc wants to merge 1 commit into cweill:master from mindlesslcc:master

Conversation

mindlesslcc

No description provided.

@coveralls

coveralls commented Jun 16, 2018

Coverage Status

Coverage decreased (-0.2%) to 95.44% when pulling 7136015 on mindlesslcc:master into 97d0977 on cweill:master.

@cweill cweill self-requested a review June 20, 2018 04:24
@cweill
Owner

cweill commented Jun 20, 2018

Hi @mindlesslcc, this is an impressive PR. You'll have to give me until this weekend to review this, as I'm swamped this week.

In the meantime, perhaps you can help me answer a more philosophical question: should generating benchmarks be a feature in a binary designed for generating test code?

@mindlesslcc
Author

Actually, benchmarks are part of testing, and sometimes I want to write both tests and benchmarks, so I think we should give the user the ability to generate benchmarks. For example, the user could pass a -b flag to generate benchmarks; without it, no benchmarks would be generated by default.
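
A minimal sketch of what such an opt-in flag could look like, using only the standard library flag package (a hypothetical illustration, not the actual gotests CLI code):

```go
// Hypothetical sketch of an opt-in -b flag; off by default, so the existing
// behavior (tests only) is unchanged. Not taken from gotests' actual CLI.
package main

import (
	"flag"
	"fmt"
)

func main() {
	genBenchmarks := flag.Bool("b", false, "also generate benchmarks for the selected functions")
	flag.Parse()

	if *genBenchmarks {
		fmt.Println("generating tests and benchmarks")
	} else {
		fmt.Println("generating tests only")
	}
}
```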

@cbednarski

cbednarski commented May 25, 2019

@mindlesslcc I stumbled across this project randomly but I spent some time looking at this PR.

I suppose this is ironic considering that this project is about generating tests, but it seems that many of the tests added in this PR are redundant. I spot-checked about four of the golden files by hand, and the benchmark goldens are nearly identical to the test goldens, differing by little more than s/t/b/ and s/test/benchmark/.

I would speculate these could be tested against the original goldens by reading them into a buffer and doing a string replacement (or perhaps they could be re-generated using replacement tokens). Then you would not need two copies of each file, and test output would only need to be updated in one place.
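
For example, a rough sketch of that derivation (the replacement rules and golden path are illustrative only, not taken from this PR):

```go
// Sketch of deriving the expected benchmark golden from an existing test
// golden by token replacement, so each fixture only lives in one place.
// The replacement rules and paths here are illustrative, not from this PR.
package render

import (
	"os"
	"strings"
)

// benchGoldenFromTestGolden rewrites a test golden into the benchmark form
// the generator would be expected to produce for the same function.
func benchGoldenFromTestGolden(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.NewReplacer(
		"func Test", "func Benchmark",
		"t *testing.T", "b *testing.B",
	).Replace(string(b)), nil
}
```

A golden-based test could then compare the generator's benchmark output against benchGoldenFromTestGolden("testdata/…") instead of against a second checked-in copy of the file.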

The same appears to be true for internal/render/templates/function.tmpl and internal/render/templates/benchmark.tmpl, which are nearly identical. They could likely be squashed into a single template parameterized by the variable names (b, bb) and the output type (test or benchmark) for simplicity.
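
Roughly, a merged template could switch on the kind of output it is producing. The sketch below uses invented field names (Name, Bench) rather than gotests' real template data, just to show the shape of the idea:

```go
// Rough sketch (not gotests' actual templates or data model) of collapsing
// function.tmpl and benchmark.tmpl into one template that switches on the
// output kind. Field names here are invented for illustration.
package main

import (
	"os"
	"text/template"
)

const merged = `func {{if .Bench}}Benchmark{{else}}Test{{end}}{{.Name}}({{if .Bench}}b *testing.B{{else}}t *testing.T{{end}}) {
{{if .Bench}}	for i := 0; i < b.N; i++ {
		{{.Name}}()
	}
{{else}}	// TODO: add test cases.
	{{.Name}}()
{{end}}}
`

func main() {
	tmpl := template.Must(template.New("merged").Parse(merged))
	// Render both variants from the same template and the same data shape.
	for _, bench := range []bool{false, true} {
		if err := tmpl.Execute(os.Stdout, struct {
			Name  string
			Bench bool
		}{Name: "Sum", Bench: bench}); err != nil {
			panic(err)
		}
	}
}
```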

If the two templates are merged, I believe the majority of these tests would be obviated, since they are simply supplying different variables to the same code path. The variable functionality could be verified once, and the 38 or so permutations could be skipped.

The size of the PR would shrink to maybe 10 files with straightforward changes, and @cweill would have a much easier time reviewing it. Thoughts?
