
Failing test_dust_pysm3 test test_d10_vs_d11 on Mac OS #121

Open
MonicaHicks opened this issue Jun 29, 2022 · 12 comments

@MonicaHicks

I am failing test_d10_vs_d11 when the seeds and synalm_lmax parameters are passed in test_dust_pysm3.py. However, when I followed the instructions to modify data/presets.cfg and commented out the seeds and synalm_lmax parameters in the call to ModifiedBlackBodyRealization, the test passed the first time and then began failing sporadically.
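For context, the two configurations being compared are roughly the following (a sketch based on the test code quoted later in this thread; only one variant applies at a time, depending on whether seeds and synalm_lmax are set in data/presets.cfg):

```python
import pysm3
from pysm3 import units as u

nside = 2048

d11_configuration = pysm3.sky.PRESET_MODELS["d11"].copy()
del d11_configuration["class"]

# Variant 1: seeds and synalm_lmax passed explicitly, as in test_dust_pysm3.py
# (assumes they are commented out in data/presets.cfg)
d11 = pysm3.models.ModifiedBlackBodyRealization(
    nside=nside, seeds=[8192, 777, 888], synalm_lmax=16384, **d11_configuration
)

# Variant 2: seeds and synalm_lmax omitted here and taken from data/presets.cfg instead
# d11 = pysm3.models.ModifiedBlackBodyRealization(nside=nside, **d11_configuration)

output_d11 = d11.get_emission(857 * u.GHz)
```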

I am on a MacBook Pro running Python 3.9.1 on Big Sur.

I sometimes use Ctrl-C as a keyboard interrupt to abort the run early once test_dust_pysm3 has completed; I don't know if this has any effect on the tests.

Before changing the configuration, the most common failure result was:
E AssertionError:
E Not equal to tolerance rtol=1e-05, atol=0.05
E
E x and y nan location mismatch:
E x: array([[66.72681 , 66.40845 , 66.58324 , ..., 77.87121 , 78.91874 ,
E 79.840225],
E [ 0.893474, -0.87911 , 0.884476, ..., 1.52126 , -1.485077,...
E y: array([[72.38888 , 70.338099, 73.356715, ..., 61.686455, 69.655437,
E 73.51729 ],
E [ 0.96929 , -0.931131, 0.974453, ..., 1.205081, -1.310762,...

After switching the configuration, the values of the matrices upon failure varied widely (sometimes all nan, sometimes with a value y[0][0] > 200).
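For anyone trying to reproduce this, a minimal sketch to quantify the mismatch (assuming output_d10 and output_d11 are the Quantity maps from the test) might look like:

```python
import numpy as np

x = output_d10.value  # strip astropy units to get plain ndarrays
y = output_d11.value

# How many pixels are NaN in each output, and do the NaN patterns agree?
print("NaN fraction d10:", np.isnan(x).mean())
print("NaN fraction d11:", np.isnan(y).mean())
print("NaN locations match:", np.array_equal(np.isnan(x), np.isnan(y)))

# Largest relative difference over pixels that are finite in both maps
ok = np.isfinite(x) & np.isfinite(y)
rel = np.abs(x[ok] - y[ok]) / np.maximum(np.abs(y[ok]), 1e-30)
print("max relative difference:", rel.max())
```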

Before switching the configuration, I tried:

  • setting output_d11 = output_d10 (this was an early sanity check to make sure it could pass)
  • changing from .. import units as u to from astropy import units as u (to address a warning associated with u.GHz)
  • reverting to python 3.7
  • running tests in Conda using tox -e test
  • combining the assignment of d11 and output_d11 into a single expression:
    output_d11 = pysm3.models.ModifiedBlackBodyRealization(
        nside=nside, seeds=[8192, 777, 888], synalm_lmax=16384, **d11_configuration
    ).get_emission(freq)

The above list is not exhaustive; these are just examples of the types of changes I tried.

@zonca
Member

zonca commented Jun 29, 2022

I assume you are using the main branch in its current state.

do the tests pass on your machine with no modification of the code?

@MonicaHicks
Author

MonicaHicks commented Jun 29, 2022

I'm currently running the tests with the values for seeds and synalm_lmax commented out in the test_dust_pysm3.py file, and I have seeds and synalm_lmax uncommented in data/presets.cfg. Other than that, I have not modified the code.

Before I made any changes to the code, the tests were not all passing. I reverted each modification before trying something different. I followed the Development Install steps at https://pysm3.readthedocs.io/en/latest/index.html#installation.

I was initially failing almost all of the tests after installing with pip install pysm3; however, after installing with conda install -c conda-forge pysm3, I began passing most of them.

Without changing anything, I now consistently pass 84 tests (2 are skipped; test_dust_pysm3/test_d10_vs_d11 fails pretty regularly, and test_synch_pysm3/test_s6_vs_s5 fails sporadically, but much less frequently than test_d10_vs_d11).

@zonca
Member

zonca commented Jun 29, 2022

ok, let's focus for now exclusively on running without any modification of the code.
can you paste all the errors you get?

@MonicaHicks
Author

monicahicks@DNa80dc9e pysm % pytest
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.9.1, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/monicahicks/pysm, configfile: pyproject.toml
collected 86 items / 2 skipped

pysm3/tests/test_ame.py .. [ 2%]
pysm3/tests/test_ame2.py .. [ 4%]
pysm3/tests/test_bandpass_convert_units.py ... [ 8%]
pysm3/tests/test_cmb.py ..... [ 13%]
pysm3/tests/test_co.py .... [ 18%]
pysm3/tests/test_dust.py ................ [ 37%]
pysm3/tests/test_dust_layers.py .... [ 41%]
pysm3/tests/test_dust_pysm3.py ....F [ 47%]
pysm3/tests/test_freefree.py ... [ 51%]
pysm3/tests/test_hd17.py ......... [ 61%]
pysm3/tests/test_interpolating.py . [ 62%]
pysm3/tests/test_read_map.py .. [ 65%]
pysm3/tests/test_synch_pysm3.py ....... [ 73%]
pysm3/tests/test_synchrotron.py ............ [ 87%]
pysm3/tests/test_unit_conversion_pysm2.py . [ 88%]
pysm3/tests/test_units.py .. [ 90%]
pysm3/tests/test_utils.py ........ [100%]

================================================================================================ FAILURES ================================================================================================
____________________________________________________________________________________________ test_d10_vs_d11 _____________________________________________________________________________________________

@pytest.mark.skipif(
    psutil.virtual_memory().total * u.byte < 20 * u.GB,
    reason="Running d11 at high lmax requires 20 GB of RAM",
)
def test_d10_vs_d11():
    nside = 2048

    freq = 857 * u.GHz

    output_d10 = pysm3.Sky(preset_strings=["d10"], nside=nside).get_emission(freq)
    d11_configuration = pysm3.sky.PRESET_MODELS["d11"].copy()
    del d11_configuration["class"]
    d11 = pysm3.models.ModifiedBlackBodyRealization(
        nside=nside, seeds = [8192, 777, 888], synalm_lmax = 16384, **d11_configuration
    )
    output_d11 = d11.get_emission(freq)

    rtol = 1e-5
  assert_quantity_allclose(output_d10, output_d11, rtol=rtol, atol=0.05 * u.uK_RJ)

pysm3/tests/test_dust_pysm3.py:101:


actual = <Quantity [[66.72681 , 66.40845 , 66.58324 , ..., 77.87121 ,
78.91874 , 79.840225 ],
...5 ],
[-0.55153984, 0.5485058 , -0.47438237, ..., -1.6623421 ,
1.7120881 , -1.7287322 ]] uK_RJ>
desired = <Quantity [[145.00563259, 124.27271805, 111.23658072, ..., 38.08428492,
33.15839255, 40.08294761],
... [ -1.19856448, 1.02644026, -0.79252189, ..., -0.81299763,
0.71934864, -0.8678919 ]] uK_RJ>
rtol = 1e-05, atol = <Quantity 0.05 uK_RJ>, kwargs = {}, np = <module 'numpy' from '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/numpy/__init__.py'>
_unquantify_allclose_arguments = <function _unquantify_allclose_arguments at 0x7ff4b1517dc0>

def assert_quantity_allclose(actual, desired, rtol=1.e-7, atol=None,
                             **kwargs):
    """
    Raise an assertion if two objects are not equal up to desired tolerance.

    This is a :class:`~astropy.units.Quantity`-aware version of
    :func:`numpy.testing.assert_allclose`.
    """
    import numpy as np
    from astropy.units.quantity import _unquantify_allclose_arguments
  np.testing.assert_allclose(*_unquantify_allclose_arguments(
        actual, desired, rtol, atol), **kwargs)

E AssertionError:
E Not equal to tolerance rtol=1e-05, atol=0.05
E
E x and y nan location mismatch:
E x: array([[66.72681 , 66.40845 , 66.58324 , ..., 77.87121 , 78.91874 ,
E 79.840225],
E [ 0.893474, -0.87911 , 0.884476, ..., 1.52126 , -1.485077,...
E y: array([[145.005633, 124.272718, 111.236581, ..., 38.084285, 33.158393,
E 40.082948],
E [ 1.94163 , -1.645113, 1.47764 , ..., 0.743999, -0.623968,...

/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/astropy/tests/helper.py:457: AssertionError
------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------
WARNING pysm3:template.py:205 No physical unit associated with file /Users/monicahicks/.astropy/cache/download/url/437b2626b1f29ce20de57ce934c44436/contents
============================================================================================ warnings summary ============================================================================================
pysm3/tests/test_co.py::test_co[False]
pysm3/tests/test_co.py::test_co[False]
pysm3/tests/test_co.py::test_co[True]
pysm3/tests/test_co.py::test_co[True]
pysm3/tests/test_utils.py::test_bandpass_integration_tophat
/Users/monicahicks/pysm/pysm3/utils/__init__.py:68: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
weights = np.ones(len(freqs), dtype=np.float)

pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11
pysm3/tests/test_synch_pysm3.py::test_s6_vs_s5
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/healpy/fitsfunc.py:643: ComplexWarning: Casting complex values to real discards the imaginary part
alm.real[i] = almr

pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11
pysm3/tests/test_synch_pysm3.py::test_s6_vs_s5
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/healpy/fitsfunc.py:644: ComplexWarning: Casting complex values to real discards the imaginary part
alm.imag[i] = almi

pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11
pysm3/tests/test_synch_pysm3.py::test_s6_vs_s5
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/healpy/sphtfunc.py:492: ComplexWarning: Casting complex values to real discards the imaginary part
(np.asarray(cl, dtype=np.float64) if cl is not None else None)

pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11
pysm3/tests/test_synch_pysm3.py::test_s6_vs_s5
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/healpy/sphtfunc.py:448: ComplexWarning: Casting complex values to real discards the imaginary part
cls_list = [np.asarray(cls, dtype=np.float64)]

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================================================================================== short test summary info =========================================================================================
FAILED pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11 - AssertionError:
=================================================================== 1 failed, 85 passed, 2 skipped, 13 warnings in 1024.52s (0:17:04) ====================================================================

@zonca
Member

zonca commented Jun 29, 2022

the test is pysm3/tests/test_dust_pysm3.py; execute it with:

pytest pysm3/tests/test_dust_pysm3.py
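
To narrow it down further, the single failing test can also be selected directly with pytest's node-id syntax (standard pytest usage, for reference):

pytest pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11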

@zonca
Member

zonca commented Jun 29, 2022

@bthorne93, it would be useful if you could try on your Mac OS machine, either pip-installing the pre-release or pip-installing from a git checkout.

@zonca
Member

zonca commented Jul 2, 2022

I activated testing on Mac OS in GitHub Actions (see #122); tests are passing there: https://github.com/galsci/pysm/runs/7158561564?check_suite_focus=true

[screenshot: Mac OS test run passing in GitHub Actions]

@MonicaHicks does your machine have the M2 CPU?

@MonicaHicks
Author

MonicaHicks commented Jul 2, 2022

I have an Intel i9. I can replicate the above results on my other MacBook, but I believe the run you linked skipped test_d10_vs_d11(). On my other Mac, the ....s after test_dust_pysm3.py appears because it has less than 20 GB of RAM. It also looks like it skipped test_s6_vs_s5() in the test_synch_pysm3 tests, so I think it is an issue with available memory.
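For reference, the memory threshold can be checked locally with the same expression the skipif decorator uses (a small sketch mirroring the test code in the traceback above):

```python
import psutil
from pysm3 import units as u

total = psutil.virtual_memory().total * u.byte
print("Total RAM:", total.to(u.GB))
# test_d10_vs_d11 is skipped when this is False
print("Enough RAM for d10 vs d11:", total >= 20 * u.GB)
```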

What I was planning to try next is to ssh into the Sherlock research cluster and see if I have any success running on the virtual machines. I am going to attend a workshop to get set up on the machines this Wednesday.

@zonca
Member

zonca commented Jul 2, 2022

You're right. I forgot I implemented that.
I think I specifically implemented it because it required too much memory to run on GitHub Actions. I needed higher resolution to make a meaningful test.

@zonca
Member

zonca commented Jul 3, 2022

Just double-checked that running it on Linux (on Jupyter@NERSC) works fine; I was worried I could have introduced a bug affecting Linux as well without noticing:

platform linux -- Python 3.7.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- /global/homes/z/zonca/condajupynersc/bin/python
cachedir: .pytest_cache
rootdir: /global/u2/z/zonca/p/software/pysm, configfile: pyproject.toml
plugins: cov-2.10.0, anyio-2.2.0, doctestplus-0.8.0
collected 5 items / 4 deselected / 1 selected                                                                                              

pysm3/tests/test_dust_pysm3.py::test_d10_vs_d11         PASSED                                                                               [100%]

@MonicaHicks
Author

Okay, great. I'm confident I can get Linux running on my Mac.

@zonca
Member

zonca commented Oct 5, 2022

Unfortunately I had to disable the GitHub Actions tests on Mac OS due to some issues with numba after adding the dependency on pixell in #125.

Anyway, let's keep this open for reference. If someone with a Mac OS machine with lots of RAM would like to investigate, please do and provide feedback.

@zonca changed the title from "Failing test_dust_pysm3 test test_d10_vs_d11" to "Failing test_dust_pysm3 test test_d10_vs_d11 on Mac OS" on Oct 6, 2022