Provide a mechanism to filter test data to implement kaocha-like ":focus" and ":skip" support #964
Perhaps the place to check for this is the default test runner from …
That could certainly be put in a shared ns that targets can re-use in some way where it makes sense. Adjusting …

I also disagree with doing this in the build config or some hook. The goal of …

I personally don't use …

I also do have plans for some testing support in the shadow-cljs UI directly. The idea is to just select some tests via either a regexp, single ns, single var or some other conditions (e.g. metadata) and then have the option to run them directly in either the browser, node or whatever else may be possible. Not sure karma can work that way though.

That is what the …
Also note that the limiting factor here seems to be …

Otherwise you can just run a single test in any REPL directly without any test build or runner whatsoever. Can't remember if …
Calling … in …

Passing the filter by command-line args may work for node-test, but it is not going to work for browser-test.

Having support for filtering tests in the browser UI could be (more than) great!
This makes total sense. Another idea is to provide a way to tell …
The problem that needs solving, however, is how you get the params there. Filtering is already possible beyond that.
If I understand correctly, …

```
$ git grep -C1 run-test-vars
src/main/shadow/test.cljs-97-
src/main/shadow/test.cljs:98:(defn run-test-vars
src/main/shadow/test.cljs-99-  "tests all vars grouped by namespace, expects seq of test vars, can be obtained from env"
src/main/shadow/test.cljs-100-  ([test-vars]
src/main/shadow/test.cljs:101:   (run-test-vars (ct/empty-env) test-vars))
src/main/shadow/test.cljs-102-  ([env vars]
--
src/main/shadow/test/node.cljs-85-  (let [test-vars (find-matching-test-vars test-syms)]
src/main/shadow/test/node.cljs:86:    (st/run-test-vars test-env test-vars))
src/main/shadow/test/node.cljs-87-
```
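For illustration, a metadata-based filter along these lines could sit in front of `run-test-vars`. This is only a sketch: `env/get-test-vars` is assumed from the docstring above ("can be obtained from env") and `my.focused-runner` is a hypothetical namespace, not part of shadow-cljs.

```clojure
;; Sketch only: run just the vars tagged ^:focus, falling back to all vars.
;; ASSUMPTION: shadow.test.env/get-test-vars exists as hinted by the
;; run-test-vars docstring in the grep output above.
(ns my.focused-runner
  (:require [shadow.test :as st]
            [shadow.test.env :as env]))

(defn focused-vars
  "Keep only vars whose metadata contains :focus; if none are tagged, keep all."
  [vars]
  (let [marked (filter #(:focus (meta %)) vars)]
    (if (seq marked) marked vars)))

(defn run-focused []
  (st/run-test-vars (focused-vars (env/get-test-vars))))
```

The fallback in `focused-vars` mirrors kaocha's behavior: when nothing is focused, the whole suite runs.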
Yes, you are missing my argument here. I'm not going to modify …

To challenge your entire issue premise here a little: you are focusing far too much on the implementation side of things without first explaining the why and how. The why I get, but please expand on the how. How do users provide the additional info of "hey, please only run tests marked by …"?

I don't do a whole lot of testing, so I don't have answers to those questions. My testing workflow is running tests from the REPL dynamically when working on them, then running them all automated in some kind of CI setup. I have never done any testing using a test target in …
Sorry, I could have provided more information to make things easier to communicate.
In kaocha it's specified in the config file (typically named `tests.edn`):

```clojure
#kaocha/v1
{:tests [{:id :watch
          :kaocha.filter/focus-meta [:focus]
          :kaocha.filter/skip-meta [:skip]
          :ns-patterns [".*-test$"]}]}
```

And kaocha would load this file and do proper filtering based on the keys specified as `:kaocha.filter/focus-meta` and `:kaocha.filter/skip-meta`.
Please explain what you actually intend to happen, without using …
And what is the rest of the workflow? Is it running in watch mode, compiling all namespaces but only running marked tests, or do you trigger the test execution manually? How does karma factor into any of this? Does it have a watch mode too?
Yeah, only marked tests are being executed. Watching or not is orthogonal to the filtering of test cases or test namespaces.
Well, if I have a flaky test that I want to skip, e.g. …
During TDD people very frequently want to: …

Only the first is supported by the default shadow-cljs test runner, via the `:ns-regexp` option. Luckily we can achieve the others by using a custom `:runner-ns` and doing all the tweaks there, so I can write a custom runner-ns that only runs a single test case by inspecting some metadata on the test vars (as inspired by kaocha).

However, one issue emerges when I try to do this in a project that has multiple test targets (`:browser-test` and `:karma`): I have to copy the default runner for each target (`shadow.test.browser`, `shadow.test.karma`), even though all I need to do is add a single line to the default runner code. This is tedious, and I have to rebase the copied version of the runner if the default runner is updated in the future.

```diff
 (ns shadow.test.browser
   ...)

 (defn start []
   (-> (env/get-test-data)
+      (tweak-test-data)
       (env/reset-test-data!))
   ...)
```
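A `tweak-test-data` of the kind sketched in this issue might look like the following. This is purely illustrative: it assumes the test data is a map from namespace symbol to a map holding a `:vars` seq, which is not verified against `shadow.test.env`.

```clojure
;; Hypothetical tweak-test-data: drop vars tagged ^:skip from every namespace.
;; ASSUMPTION: test data has the shape {ns-sym {:vars [...] ...}};
;; the real shadow.test.env structure may differ.
(defn tweak-test-data [test-data]
  (reduce-kv
    (fn [acc test-ns ns-info]
      (assoc acc test-ns
             (update ns-info :vars
                     (fn [vars] (remove #(:skip (meta %)) vars)))))
    {}
    test-data))
```

The same shape of function could instead implement `:focus` semantics by keeping only tagged vars when any exist.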
(btw, the `:runner-ns` support for karma is broken at this moment; when I run karma it throws something like "shadow is undefined".)

Another option is to copy the `src/main/shadow/test/env.cljs` file into the project's src path and do the tweak there. This would work for any `:target`. However, this solution is awkward in that, if I have multiple projects, I'll place one copy of the env.cljs file in each project, and in the future I may also have to update the env.cljs file in every such project to pick up the latest updates of the original env.cljs.
So my question is: could shadow-cljs provide a hook-like function that can be called after `env/get-test-data`? This way I can place the test-tweaking code in a common library to be used by different projects, even by others in the community.
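The requested hook could look something like this. Purely illustrative: `filter-test-data` and the `my.common.test-lib` namespace are hypothetical names for the shared library, not an existing shadow-cljs API.

```clojure
;; Purely illustrative: a hook point applied after env/get-test-data.
;; my.common.test-lib/filter-test-data is a hypothetical shared-library fn;
;; shadow-cljs provides no such hook today (that is what this issue asks for).
(ns shadow.test.browser
  (:require [shadow.test.env :as env]
            [my.common.test-lib :as lib]))

(defn start []
  (-> (env/get-test-data)
      (lib/filter-test-data {:focus-meta :focus
                             :skip-meta  :skip})
      (env/reset-test-data!)))
```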
Any other suggestions to implement this without the hassles I mentioned above would be much appreciated!
I have made a repro repository here: