
Provide a mechanism to filter test data to implement kaocha-like ":focus" and ":skip" support #964

Open
lucywang000 opened this issue Dec 3, 2021 · 12 comments


@lucywang000
Contributor

lucywang000 commented Dec 3, 2021

During TDD people very frequently want to:

  • only run a specific test namespace
  • skip a specific test namespace
  • only run a specific test case
  • skip a specific test case

Only the first is supported by the default shadow-cljs test runner, via the :ns-regexp option. Luckily we can achieve the others by using a custom :runner-ns and doing all the tweaks there, so I can write a custom runner-ns that only runs a single test case by inspecting some metadata on the test vars (inspired by kaocha):

(deftest ^:focus test-foo
  ...)
(deftest test-bar ;; this one would be ignored
  ...)

However, an issue emerges when I try to do this in a project that has multiple test targets (a :browser-test build and a :karma build):

  • I have to copy all of the default runner code (i.e. shadow.test.browser, shadow.test.karma), even though all I need is to add a single line to it. This is tedious, and I would have to rebase the copied runner whenever the default runner is updated in the future. For example:
 (ns shadow.test.browser
   ...)

 (defn start []
   (-> (env/get-test-data)
+      (tweak-test-data)
       (env/reset-test-data!)))

(BTW the :runner-ns support for karma seems broken at the moment; when I run karma it throws something like "shadow is undefined".)

Alternatively, I can place a copy of shadow.test.env in my own source path (so it shadows the original) and apply the tweak inside reset-test-data!:

 (ns shadow.test.env
   ...)

 (defn reset-test-data! [test-data]
-  (swap! tests-ref assoc :namespaces test-data))
+  (swap! tests-ref assoc :namespaces (tweak-test-data test-data)))

This would work for any :target. However, this solution is awkward: if I have multiple projects I would have to place one copy of the env.cljs file in each project, and in the future I may also have to update that copy in every project to pick up the latest changes to the original env.cljs.
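
For reference, the kind of filtering I mean in tweak-test-data, sketched at the level of plain test vars (the function name is made up, and this works on a seq of vars rather than whatever shape env/get-test-data actually returns):

(defn focus-or-skip
  "Drops vars marked ^:skip; if any remaining var is marked ^:focus, keeps only those."
  [test-vars]
  (let [not-skipped (remove #(:skip (meta %)) test-vars)
        focused     (filter #(:focus (meta %)) not-skipped)]
    (if (seq focused)
      focused
      not-skipped)))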

So my question is: Could shadow-cljs provide a hook-like function that can be called after env/get-test-data?

{:builds {:test {:target :browser
                 :test-data-hook my-test-util/my-hook
                 ...}}}

This way I can place the tests tweaking code in a common library to be used by different projects, even by others in the community.

Any other suggestions to implement this without the hassles I mentioned above would be much appreciated!

I have made a repro repository here:

@thheller
Owner

thheller commented Dec 3, 2021

Perhaps the place to check for this is the default test runner from :node-test. It already does some test filtering based on some command line arguments.

env/get-test-data is supposed to return all available test information, so filtering before that point is not intended. Each target can however filter after that, just like this fn does:

(defn find-matching-test-vars [test-syms]

That could certainly be put in a shared ns that targets can re-use where it makes sense. Adjusting env is not something I'm open to for this.

I also disagree with doing this in the build config or some hook. The goal of env/get-test-data returning tests as data is escaping the macro heavy world of cljs.test and actually being able to decide what tests to run at runtime.

I personally don't use :karma at all, so I'm unsure whether it is possible to somehow pass arguments to the test runner that end up being available at runtime in the JS. For :browser-test I could imagine some kind of UI that lets you filter stuff (optionally with some sort of query args to make it bookmarkable, e.g. http://localhost:12345?only=focus).
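
Roughly this on the runtime side, just as a sketch of what reading such a query arg could look like (not implemented anywhere):

(defn query-param
  "Reads a single query parameter from the current page URL, or nil."
  [param-name]
  (-> (js/URLSearchParams. (.. js/window -location -search))
      (.get param-name)))

;; (query-param "only") => "focus" when the page was loaded via
;; http://localhost:12345?only=focus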

I also have plans for some testing support in the shadow-cljs UI directly. The idea is to select some tests via either a regexp, a single ns, a single var or some other condition (e.g. metadata) and then have the option to run them directly in either the browser, node or whatever else may be possible. Not sure karma can work that way though. That is what the UI-DRIVEN reference in some of those runners is about. I just never got around to actually implementing it, since running the tests themselves is not a problem but capturing useful output is rather difficult (e.g. diffs for expected/actual).

@thheller
Owner

thheller commented Dec 3, 2021

Also note that the limiting factor here seems to be karma?

Otherwise you can just run a single test in any REPL directly without any test build or runner whatsoever. I can't remember if cljs.test directly has a function/macro for that, but shadow.test has (shadow.test/run-test-vars [#'foo.bar/some-test]). Technically you can also run tests by calling them as functions, e.g. (foo.bar/some-test), but I believe that does very little proper reporting.
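
For example, from a CLJS REPL connected to the running build (using the made-up foo.bar/some-test from above):

;; make sure the test ns is loaded, then run just that var with proper reporting
(require '[shadow.test :as st]
         '[foo.bar])

(st/run-test-vars [#'foo.bar/some-test])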

@lucywang000
Contributor Author

Calling (shadow.test/run-test-vars [#'foo.bar/some-test]) sounds great, and I believe with some editor integration this could be bound to a keyboard shortcut that locates the test case around the cursor, formats a run-test-vars call, and sends it to the REPL.

Passing the filter via command line args may work for :node-test, but it's not going to work for browser tests.

Having browser UI support for filtering tests would be (more than) great!

@lucywang000
Contributor Author

I also disagree with doing this in the build config or some hook. The goal of env/get-test-data returning tests as data is escaping the macro heavy world of cljs.test and actually being able to decide what tests to run at runtime.

This makes total sense.

Another idea is to provide a way to tell shadow.test/run-tests that some ns or test-var should be ignored.
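
Roughly what I have in mind (hypothetical option names, nothing like this exists in shadow.test today):

;; hypothetical options map, purely to illustrate the idea
(shadow.test/run-tests
  {:skip-namespaces '#{my.app.flaky-test}
   :skip-vars       '#{my.app.core-test/some-flaky-test}})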

@thheller
Owner

thheller commented Dec 3, 2021

shadow.test/run-test-vars is the only relevant function here. run-tests does something different and will not be adjusted to do something else. However, I'm totally open to adding additional helper functions that do some common filtering on the vars before passing them to run-test-vars (e.g. just like shadow.test.node does).

The problem that needs solving, however, is how you get the params there. Filtering is already possible beyond that: :node-test already can, and :browser-test could get some UI as mentioned. If you are specifically looking at :karma, you'd need to figure out how to pass arguments to it.

@lucywang000
Contributor Author

If I understand correctly, run-test-vars is only called by the :node-test target. The other targets like browser or karma call run-all-tests, which itself calls run-tests.

$ git grep -C1 run-test-vars
src/main/shadow/test.cljs-97-
src/main/shadow/test.cljs:98:(defn run-test-vars
src/main/shadow/test.cljs-99-  "tests all vars grouped by namespace, expects seq of test vars, can be obtained from env"
src/main/shadow/test.cljs-100-  ([test-vars]
src/main/shadow/test.cljs:101:   (run-test-vars (ct/empty-env) test-vars))
src/main/shadow/test.cljs-102-  ([env vars]
--
src/main/shadow/test/node.cljs-85-      (let [test-vars (find-matching-test-vars test-syms)]
src/main/shadow/test/node.cljs:86:        (st/run-test-vars test-env test-vars))
src/main/shadow/test/node.cljs-87-

@thheller
Owner

thheller commented Dec 3, 2021

Yes, you are missing my argument here. I'm not going to modify shadow.test/run-all-tests because all default test runners use it. I'm willing to add a different mode that uses a different function in those runners instead.

To challenge your entire issue premise here a little: you are focusing far too much on the implementation side of things without first explaining the why and the how. The why I get, but please expand on the how. How do users provide the additional info of "hey, please only run tests marked :focus" or "hey, please only run tests in this namespace"? I don't know if :skip is a convention other test runners use, or is it kaocha specific?

I don't do a whole lot of testing so I don't have answers to those questions. My testing workflow is running tests from the REPL dynamically while working on them, then running them all automated in some kind of CI setup. I have never personally done any testing using a test target in watch mode. I don't have a clue what other testing workflows look like. Please provide more info in this area instead of focusing on what the code currently does or should do.

@lucywang000
Contributor Author

Sorry, I should have provided more information to make this easier to communicate.

The why I get, but please expand on the how. How do users provide the additional info of "hey, please only run tests marked :focus" or "hey, please only run tests in this namespace"? I don't know if :skip is a convention other test runners use, or is it kaocha specific?

In kaocha it's specified in the config file (typically named tests.edn) like this:

#kaocha/v1
 {:tests [{:id :watch
           :kaocha.filter/focus-meta [:focus]
           :kaocha.filter/skip-meta [:skip]
           :ns-patterns [".*-test$"]}]}

And kaocha loads this file and does the filtering based on the keys specified as focus-meta and skip-meta.

@thheller
Owner

thheller commented Dec 3, 2021

Please explain what you actually intend to happen without using kaocha. I don't know how it operates or what effects those config settings have. Like what is :focus actually doing? What is :skip doing?

@lucywang000
Contributor Author

  • If any tests are annotated with ^:focus metadata, then only those tests are executed. This is useful for running only the tests relevant to the piece of code currently being developed.
  • If any tests are marked with ^:skip metadata, then those tests are ignored. This is useful for e.g. not executing flaky tests without commenting out the whole test body, which would otherwise produce an unnecessarily huge git diff.

(deftest ^:focus foo-test ...) is like it.only/test.only in mocha/jest; (deftest ^:skip foo-test ...) is like it.skip/test.skip in mocha/jest.

:focus/:skip metadata can also be used at the namespace level with a similar effect, in which case the whole namespace is executed or ignored as a unit.
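
For example (hypothetical namespaces, each in its own file in practice):

;; skip every test in this namespace
(ns ^:skip my.app.flaky-test
  (:require [cljs.test :refer [deftest is]]))

;; or: while working on the parser, run only this namespace's tests
(ns ^:focus my.app.parser-test
  (:require [cljs.test :refer [deftest is]]))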

@thheller
Owner

thheller commented Dec 4, 2021

And what is the rest of the workflow? Is it running in watch mode compiling all namespaces but only running marked tests or do you trigger the test execution manually? How does karma factor into any of this? Does it have a watch mode too?

@lucywang000
Contributor Author

Is it running in watch mode compiling all namespaces but only running marked tests or do you trigger the test execution manually?

Yeah, only the marked tests are executed. Watching or not is orthogonal to the filtering of test cases or test namespaces.

How does karma factor into any of this? Does it have a watch mode too?

Well, if I have a flaky test that I want to skip, e.g. (deftest ^:skip flaky-test ...), I want it to be ignored both in the browser test (during interactive development) and in the karma test during CI.
