ENH: improve test coverage and speed #9
Coverage is at 58 percent or so. Clearly room for improvement.
One important goal is to figure out how to increase coverage while downloading less data ... currently we risk hitting disk space limits. I therefore turned off the Python 2 build and downloaded less raw data, since testing the raw IO does not depend much on the …
@Eric89GXL I created https://github.com/dengemann/make-mne-hcp-testing-data.git to make light-weight testing data. With the parameters found there we can shrink the data from 23GB to 500MB. However, I'm fighting with some failing tests. There is a section on "weird bug" for reconstructed times that exposes some of the issue. It seems we run into problems with the current way mne-python takes care of constructing epochs.times from tmin, data.shape[2] and sfreq. Everything is converted to the sample domain via rounding and then converted back to times. With the data we find in the matfile we would expect something like np.linspace(tmin, tmax, data.shape[2]), which works reasonably well. Is this an MNE issue, or do you see any other obvious issue here?
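The rounding round-trip described above can be sketched without MNE. The sampling rate and epoch limits below are hypothetical stand-ins, not values taken from the HCP files:

```python
import numpy as np

# Hypothetical numbers for illustration only
sfreq = 508.63           # e.g. an original rate decimated by 4
tmin, tmax = -1.5, 2.5
n_times = int(round((tmax - tmin) * sfreq)) + 1

# Rounding-based reconstruction: snap tmin onto the sample grid,
# then step forward in units of 1 / sfreq
first_samp = int(round(tmin * sfreq))
times_rounded = (first_samp + np.arange(n_times)) / sfreq

# Direct reconstruction from the endpoints stored in the matfile
times_linspace = np.linspace(tmin, tmax, n_times)

# The first grid starts at first_samp / sfreq, which differs from tmin
# by a small but nonzero rounding offset; the second hits tmin exactly
offset = abs(times_rounded[0] - tmin)
```

With a sampling rate that does not divide tmin evenly, `offset` is on the order of a fraction of a sample period, which is enough to make times vectors built the two ways disagree.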
@Eric89GXL the current "fix" would be to overwrite epochs.times with the times from the matfile after construction of the Epochs.
... and doing the …
... and if you actually did it that way, you'd probably end up needing to do a …
@Eric89GXL any idea what could be the source of the trouble here? Maybe it is indeed the way HCP decimated their data (factor 4).
They must decimate using some other method. Is their code available for you to look at? |
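If HCP had decimated by simply keeping every 4th sample, the decimated points would still lie on the original time grid; a filter-based resampler can shift or redefine the sample times, which would explain the mismatch. A minimal sketch of plain subsampling, with a made-up rate:

```python
import numpy as np

sfreq = 2034.51          # made-up original rate for illustration
decim = 4
times = np.arange(2000) / sfreq

# Plain subsampling: keep every 4th sample and divide the rate by 4
times_dec = times[::decim]
sfreq_dec = sfreq / decim

# The subsampled points stay on the original grid, with step 1 / sfreq_dec
step = times_dec[1] - times_dec[0]
```

If the grid recovered from the HCP matfiles does not satisfy this property, that would point at a different decimation method.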
@dengemann I'm surprised that the hack I suggested did not work. Just looking at the private repo we have on github (for autoreject) I can find something like this:

```python
tmin = -1.5
tmin = tmin + 1 / evoked.info['sfreq']
evoked.crop(tmin=tmin)
```

If the … Once this works, we can look into the decimation problem. I understand that there are two confounding problems - decimation and …
Thanks @jasmainak, I'll move this to another discussion. In the meantime I forced our readers to use the MATLAB time vectors to be consistent with the HCP data.
So the good news is: we've got fast Travis builds (30 seconds). This means dev should be fun now!
Now that things go fast, we can get back to the remaining aspect of this issue. Our test coverage has decreased as far as I can see, as a result of changing the tests. I think it's because I declared the anatomy tests as slow; they should add some 20 seconds or so. Maybe we can now live with that. The other good news is that we could again spawn different builds for different versions of Python etc. Contributions welcome.
Currently, adding CIs may be difficult, but a first step would be to have unit tests that assume consistent local data paths. My suggestion would be to test with one subject that has all relevant data and assume that all data are unpacked correctly.
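A locally-run test of this kind could be gated on the data actually being present, so it passes on CI (by skipping) and exercises the readers on a developer machine. The path and subject ID below are hypothetical placeholders, not fixed project conventions:

```python
import os
import unittest

HCP_PATH = os.path.expanduser('~/mne-hcp-data')   # hypothetical local path
SUBJECT = '105923'                                 # hypothetical subject ID

def has_local_data():
    """Return True only when the one fully-unpacked test subject is present."""
    return os.path.isdir(os.path.join(HCP_PATH, SUBJECT))

# Tests can then opt out cleanly when the data are not on disk:
requires_data = unittest.skipUnless(has_local_data(),
                                    'local HCP test data not found')
```

Decorating each data-dependent test with `requires_data` keeps the suite green everywhere while still running the full checks wherever the single test subject is unpacked.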