
Increase coverage #60

Open
25 tasks
jlaehne opened this issue Nov 7, 2022 · 3 comments
Comments

@jlaehne
Contributor

jlaehne commented Nov 7, 2022

This issue is a progress tracker on the potential to improve the test coverage in RosettaSciIO:

Formats missing test files:

  • netCDF

Formats with coverage below 80%:

  • digital-micrograph [specific exceptions not covered as well]
  • digitalsurf (sur)
  • fei
  • image
  • ripple
  • tia

Formats with coverage below 90%:

  • bruker
  • dens
  • emd
  • mrc
  • msa
  • pantarhei (prz)
  • phenom
  • renishaw

Formats with coverage below 95%:

  • edax
  • empad
  • hspy
  • mrcz
  • nexus
  • semper
  • tiff
  • usid

Other files with low coverage:

  • utils/exceptions.py
  • utils/readfile.py

See https://app.codecov.io/gh/hyperspy/rosettasciio/tree/main/rsciio for coverage overview and to identify uncovered code in the plugins.
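The uncovered lines can also be inspected locally; a sketch of the invocation, assuming `pytest-cov` is installed in the environment (the exact test selection is illustrative):

```shell
# Print a per-file coverage summary including the line numbers
# that were never executed ("Missing" column).
pytest --cov=rsciio --cov-report=term-missing
```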

[Edit: Updated list on Dec. 2, 2023.]

@sem-geologist
Contributor

As for Bruker, I have looked countless times; there is no sensible way to increase it any more. Most of the missing coverage there is in try-except constructions, which are placed at crucial points so that if something goes south it is much clearer what exactly is wrong with the file. I think that is much better than getting a cryptic message and then digging through a hex editor or other means to find what is wrong with the file. I have had corrupted bcfs a few times, so these checks are inspired by real-life examples, albeit the files are much too big to use for tests. Generating small bcfs corrupted at particular points is beyond any realistic option, and corrupting files by hex editing is, I think, also overdoing it.

To get to the core of the problem: coverage should not be pursued so zealously. In particular, I think it is overdoing it to try to trigger exceptions that were placed there for easier debugging or to keep the user informed about what is wrong with his/her file. What next, should we test whether 2 + 2 is still 4? Or assert that `assert False` raises AssertionError correctly? Currently there are tons of tests covering the obvious, which just take testing-server time and resources. I think it is OK to ask for coverage when an exception does not stop the function completely but is used like an if/else clause. However, when raising an exception is used to stop the function from progressing further, what is the point of testing such a construct? Would `# pragma: no cover` not be better suited for that?
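The pattern being described might look like this minimal sketch (the format name, magic bytes, and field layout are invented for illustration, not RosettaSciIO code): a defensive check whose error branch is excluded from coverage because shipping a realistically corrupted test file is impractical.

```python
def read_block_length(data: bytes) -> int:
    """Read a length field from a (hypothetical) container header."""
    if data[:4] != b"XBCF":  # hypothetical magic bytes
        # Defensive abort for corrupted files; excluded from coverage
        # because realistic corrupted inputs are too large to ship as tests.
        raise ValueError("unexpected magic bytes; file appears corrupted")  # pragma: no cover
    return int.from_bytes(data[4:8], "little")
```

With the pragma in place, the happy path counts toward coverage while the hard-to-trigger abort does not drag the percentage down.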

@hakonanes
Contributor

I agree that one should not strive for high coverage for coverage's sake.

I still like to keep e.g. kikuchipy's coverage at 100%, though. This gives us a certain confidence that if we break existing functionality by making a change in the code, the tests catch this. The more far-reaching the change, the less clear it can be where a break might occur... Another good reason for high coverage is that a new version of an upstream package might break existing functionality, even when we make no change. Tests should catch this as well.

I think # pragma: no cover should be used on any line where there is no intention of covering it or if it is not possible to. Like in the cases you mention, @sem-geologist. Any dip below 100% then signifies a line that can be covered, and thus should for the reasons mentioned above.
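Where an error path is cheap to trigger, covering it explicitly is straightforward; a generic pytest sketch (the function and its version-field format are illustrative, not RosettaSciIO API):

```python
import pytest


def parse_version(field: str) -> tuple:
    """Parse a (hypothetical) 'major.minor' version field."""
    parts = field.split(".")
    if len(parts) != 2:
        raise ValueError(f"malformed version field: {field!r}")
    return tuple(int(p) for p in parts)


def test_parse_version_rejects_malformed_field():
    # pytest.raises executes the raise line, so it counts as covered.
    with pytest.raises(ValueError, match="malformed"):
        parse_version("1")
```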

This is the ideal scenario in my view. I'm not asking people to spend time on this now, though, as I myself cannot... However, I have distant plans on upstreaming kikuchipy's plugins to this package. So I like to stay in the loop.

@jlaehne
Contributor Author

jlaehne commented May 4, 2023

Very valid points. This issue was not intended to force us to reach 100% coverage, but to get an idea of where there is particular potential for improvement and to identify plugins that might break in the future due to insufficient testing. Indeed, the priority can neither be to get everything covered regardless of whether it makes sense, nor do we have the capacity to systematically go through every line of code to check. But maybe it can help to identify where improvements do make sense.

@ericpre mentioned this issue Jun 1, 2023