TypeError: dcmread: Expected a file ... but got NoneType
test failures on =pydicom-2.3.1
#1800
At first glance it looks like the test data cannot be accessed - maybe this warning:
has something to do with it.
@mrbean-bremen yes, I'm packaging it for Gentoo Linux. Is there perhaps a static test data archive somewhere? The log above is using the PyPI tarball; I also tried it with the GH one, but I get the same errors.
Yes, there is both test data in the package (the one mentioned in the warning) and a static test data archive in a separate repository (
Instead of trying to completely understand what is going on here, I have been wondering anyway about being more explicit about optional packages in pydicom's new pyproject.toml, which might help solve this. Specifically, I'm thinking of adding options so one could opt in to them explicitly.

I'm hoping to do a pydicom 2.4 release in the next week or so. If you could wait for that, we could ensure that it is configured correctly to work for your build.
I'll try to have a look at that too later today if @mrbean-bremen doesn't get to it first.
@darcymason yes, I can temporarily disable the tests. Can you ping me on this with the new release?
I've had a look at this and there are classes of tests of the data manager code itself that skip if
@TheChymera, sure, we will likely enlist your help pre-release, to make sure it is okay before publishing. |
Thanks @darcymason - I got distracted with other, non-pydicom-related stuff and didn't get to it.
Yes, you are right. It is probably best to also skip all tests relying on this data, maybe using an auto-use fixture that detects if the files are available.
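An auto-use fixture along those lines might look like the sketch below. The data directory path, helper name, and fixture name are assumptions for illustration, not pydicom's actual layout:

```python
# Sketch: an auto-use pytest fixture that skips any test in its scope
# when the external test data is not present locally.
# "tests/data/external" is a hypothetical location.
import os
import pytest

EXTERNAL_DATA_DIR = os.path.join("tests", "data", "external")

def external_data_available(path=EXTERNAL_DATA_DIR):
    """Return True if the external test data directory exists and is non-empty."""
    return os.path.isdir(path) and bool(os.listdir(path))

@pytest.fixture(autouse=True)
def _require_external_data():
    # Runs before every test in the conftest's scope; skips rather than fails.
    if not external_data_available():
        pytest.skip("external test data not available")
```

Placed in a conftest.py, the fixture applies automatically to every test in that directory tree without touching individual test functions.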
@darcymason - I've started to try this out locally, depending on an environment variable. I will try to put something together over the weekend.
Sure, if you are willing to come up with something, that would be great.
I'm not sure how this should work -- it seems making the default skip a large number of tests might be awkward. How do we set that for our CI, etc.? Does that mean adding an environment variable change to all the workflows? How does that work for a user who just installed and wants to run the tests?

In partial answer to my own question, I think we could have a [dev] extras option in the install, or [test], and we could add that to the contributing guidelines (and/or installation documentation).
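A [test] extra of the kind suggested above could be declared in pyproject.toml roughly like this; the dependency names and versions are illustrative assumptions, not pydicom's actual metadata:

```toml
# Hypothetical sketch of a test extra in pyproject.toml.
# "pydicom-data" stands in for whatever package provides the external test data.
[project.optional-dependencies]
test = [
    "pytest",
    "pydicom-data",
]
```

A user who just installed and wants to run the tests could then do `pip install pydicom[test]`, which could be documented in the contributing guidelines and/or installation documentation.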
Forgot to mention this in my previous comment. I did wonder whether we should simplify that anyway. I don't see the point of setting a global variable, e.g.
So, another thought... what if the tests simply required it? I'm not sure if that can work in this issue's build-from-a-source-tarball scenario... but if there were a way to do something parallel to
Agreed.
Yes, I ended up doing exactly this for many cases (specifically the pixel handler tests, as they use the names to parametrize the tests). I started by adding fixtures for the common cases, but I'm now not so sure if this is even needed.
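For name-parametrized tests like the pixel handler ones, missing files can be turned into per-case skip marks at collection time. A minimal sketch, assuming hypothetical file names and a tests/data directory:

```python
# Sketch: parametrize over data file names, marking absent files as skipped
# instead of failing the whole parametrized test.
import os
import pytest

DATA_DIR = os.path.join("tests", "data")
FILE_NAMES = ["CT_small.dcm", "MR_small.dcm"]  # illustrative names only

def data_params():
    """Build pytest params; cases whose file is missing get a skip mark."""
    params = []
    for name in FILE_NAMES:
        path = os.path.join(DATA_DIR, name)
        marks = [] if os.path.exists(path) else [
            pytest.mark.skip(reason=f"{name} not available")
        ]
        params.append(pytest.param(path, id=name, marks=marks))
    return params

@pytest.mark.parametrize("path", data_params())
def test_can_read(path):
    assert os.path.exists(path)
```

This keeps the test IDs derived from the file names while letting each case skip independently, so a partial data set still runs whatever it can.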
You mean no tests at all are run? This would make some sense, as with my current implementation only the tests with external data are skipped, which is a rather arbitrary collection of tests. I'm inclined to throw away my local changes in favour of an easier way that will just skip all tests if
Yes, that is the suggestion. Throw an error message stating that testing requires |
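One way to fail fast with such a message is a small guard called from conftest.py; the package name "pydicom_data" and the wording are assumptions for illustration:

```python
# Sketch: error out with a clear message when the test-data package
# is not installed, instead of skipping an arbitrary subset of tests.
import importlib.util

def require_test_data(package="pydicom_data"):
    """Raise a clear error if the test-data package cannot be imported."""
    if importlib.util.find_spec(package) is None:
        raise RuntimeError(
            f"Running the test suite requires the '{package}' package; "
            f"install the test dependencies first."
        )
```

Calling this once at collection time (e.g. from a session-scoped hook) gives downstream packagers an explicit, actionable failure rather than a wall of TypeErrors.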
@TheChymera, I'm attaching a source tarball from my latest attempts with the
Full build log: https://ppb.chymera.eu/084827.log
Any ideas what's going on?