Comparison tools for UnitScalars/UnitArrays #85

Open — wants to merge 6 commits into main
Conversation

@jonathanrocher (Contributor) commented Apr 17, 2019

Add assertion functions, and functions returning booleans, to test whether two unitted objects (scalars or arrays) are "almost equal".

from scimath.units.testing.assertion_utils import assert_unit_scalar_almost_equal

Feedback welcome! Maybe that extra nesting isn't all that useful after all...
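For illustration, a minimal usage sketch, assuming the signatures discussed in this thread (the tolerance handling changes over the course of the review, so the exact call may differ from the final diff):

from scimath.units.api import UnitScalar
from scimath.units.testing.assertion_utils import assert_unit_scalar_almost_equal

x = UnitScalar(1.0, units="m")
y = UnitScalar(1.0 + 1e-12, units="m")

# Raises an AssertionError if the two unitted scalars are not "almost equal".
assert_unit_scalar_almost_equal(x, y)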

@codecov-io

Codecov Report

Merging #85 into master will decrease coverage by 0.03%.
The diff coverage is 58.92%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #85      +/-   ##
==========================================
- Coverage   60.87%   60.84%   -0.04%     
==========================================
  Files          75       77       +2     
  Lines        3208     3264      +56     
  Branches      368      376       +8     
==========================================
+ Hits         1953     1986      +33     
- Misses       1163     1186      +23     
  Partials       92       92
Impacted Files                              Coverage Δ
scimath/units/testing/assertion_utils.py    0% <0%> (ø)
scimath/units/compare_units.py              100% <100%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a493187...a609341.

@codecov-io commented Apr 17, 2019

Codecov Report

Merging #85 into master will increase coverage by 0.44%.
The diff coverage is 86.2%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master      #85      +/-   ##
==========================================
+ Coverage   60.87%   61.32%   +0.44%     
==========================================
  Files          75       77       +2     
  Lines        3208     3266      +58     
  Branches      368      377       +9     
==========================================
+ Hits         1953     2003      +50     
- Misses       1163     1171       +8     
  Partials       92       92
Impacted Files                              Coverage Δ
scimath/units/compare_units.py              100% <100%> (ø)
scimath/units/assertion_utils.py            65.21% <65.21%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a493187...fbbc66e.

@jonathanrocher (Contributor Author):

CI is happy (though coverage is down; I will fix that).

Ready for feedback. cc @JCorson

from scimath.units.unit import InvalidConversion


def unit_scalars_almost_equal(x1, x2, eps=1e-9):
@mdickinson (Member):

I think we really want eps to be a quantity of the same unit class as x1 and x2 here, and I'd suggest removing the default value and making it a required argument. It just doesn't make sense conceptually to ask whether two lengths (for example) are equal up to two decimal places, while it absolutely makes sense to ask whether two lengths are equal to within a margin of 2cm, or 20nm, or 1/8 inch, or ...

It's particularly unclear what comparing to 1e-9 would mean in the case that x1 and x2 are comparable, but have different actual units, and the implementation here looks as though it would give asymmetric results:

>>> x = UnitScalar(1.0, units="um")
>>> y = UnitScalar(1002, units="nm")
>>> unit_scalars_almost_equal(x, y, eps=1e-2)
True
>>> unit_scalars_almost_equal(y, x, eps=1e-2)
False
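For context, a minimal sketch (not part of the review) of how a unitted eps could make the result independent of operand order; it assumes scimath.units.convert.convert(value, from_unit, to_unit), and the helper name is hypothetical:

from scimath.units.api import UnitScalar
from scimath.units.convert import convert

def almost_equal_unitted_eps(x1, x2, eps):
    # Hypothetical helper: True if |x1 - x2| <= eps, with eps a UnitScalar.
    # Express x2 and eps in x1's units so plain floats measured in a single
    # reference unit are compared.
    x2_val = convert(float(x2), x2.units, x1.units)
    eps_val = convert(float(eps), eps.units, x1.units)
    return abs(float(x1) - x2_val) <= eps_val

x = UnitScalar(1.0, units="um")
y = UnitScalar(1002, units="nm")
tol = UnitScalar(5, units="nm")
# The same physical tolerance gives the same answer in either order
# (up to floating-point rounding in the conversions).
assert almost_equal_unitted_eps(x, y, tol) == almost_equal_unitted_eps(y, x, tol)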

@jonathanrocher (Contributor Author):

Thanks for the valuable feedback @mdickinson. I totally agree with you on eps needing to be unitted and on making it a required argument. I would only suggest allowing eps to be a float if the two values have the same unit: does that seem reasonable to you? I will push an update...

@mdickinson (Member):

I'd go further, and only allow use of a float eps if both quantities being compared are actually unitless. Otherwise you're again comparing a unitful quantity with a unitless quantity, which seems like a category error to me. It would seem wrong to me if a given unit_scalars_almost_equal call passes for some length length1 and fails for a length2 that represents exactly the same length (so length1 == length2 gives True), but happens to use different units.

@mdickinson (Member):

IOW, I'd want the check to be checking a property of the length itself, not a property of the representation of that length.

@jonathanrocher (Contributor Author):

Thanks, I see your point. At the same time, I would like to keep this simple to use. What you propose would lead to usage like:

from scimath.units.testing.assertion_utils import assert_unit_scalar_almost_equal
from scimath.units.api import UnitScalar
assert_unit_scalar_almost_equal(x, y, eps=UnitScalar(1e-12, units="SOME UNIT HERE"))

which is quite verbose. What I meant to support with the current implementation is:

assert_unit_scalar_almost_equal(x, y, eps=UnitScalar(1e-12, units=x.units))

What about automatically treating a float eps like UnitScalar(eps, units=x.units) inside unit_scalars_almost_equal? Or change eps to a non-dimensional "rtol" which would be compared to float(abs(x1-x2)/abs(x2)) (after unit conversion) similar to https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.allclose.html ?
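A sketch of the first option (hypothetical, not part of the PR): a bare float tolerance could be promoted to a UnitScalar in the first operand's units before the comparison:

from scimath.units.api import UnitScalar

def _coerce_eps(eps, x1):
    # Hypothetical helper: interpret a bare float tolerance in x1's units.
    if isinstance(eps, UnitScalar):
        return eps
    return UnitScalar(eps, units=x1.units)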

@jonathanrocher (Contributor Author) commented Apr 28, 2019

Following my last proposal, I have converted the (meaningless) absolute comparison into a relative comparison, and renamed the eps comparison precision to rtol. What is being tested is now:

abs(x1-x2) < abs(rtol * x2)

similar to https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.allclose.html
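A minimal sketch of that check, assuming scimath.units.convert.convert and the function name from this PR (not the actual diff):

from scimath.units.convert import convert

def unit_scalars_almost_equal(x1, x2, rtol=1e-9):
    # Convert x2 into x1's units so the difference is taken between values
    # expressed in the same unit, then apply the relative-tolerance test above.
    x2_val = convert(float(x2), x2.units, x1.units)
    return abs(float(x1) - x2_val) < abs(rtol * x2_val)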

Feedback welcome @mdickinson .

@mdickinson mdickinson self-requested a review April 29, 2019 10:32
@jonathanrocher (Contributor Author):

Bumping this back up...

1 similar comment

@mdickinson (Member):

I just noticed that this PR was still on my to-review list. Unfortunately I don't have time to spend on this in the foreseeable future, so I'll unassign myself. But here are a few thoughts in passing:

  • We've moved away from nose generally in ETS, so the test functions shouldn't depend on nose.
  • More generally, there's a potential issue here with tying ourselves to a particular test framework. To avoid that issue, I'd recommend simply dropping the test helpers and just keeping the utility functions that return a bool. Those functions can then be used in either assert statements (for Pytest / nose-style testing) or self.assertTrue(...) calls (for unittest-style testing); see the sketch after this list.
  • Do we need both unit_scalars_almost_equal and unit_arrays_almost_equal, or can they be combined into a single function that would also allow mixed-type scalar-to-array comparisons?
  • Should we also support an absolute tolerance? (To make sense, that absolute tolerance would of course need to have units compatible with the units of the two things being compared.)
  • Given that what we're implementing here (especially if we have an absolute tolerance in addition to the relative one) is essentially just a unit-aware version of numpy.allclose, can we find a name that conveys that? Perhaps we could simply re-use the name allclose, relying on the context to disambiguate.
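As a sketch of the second point above (module path taken from the Codecov report, function name and rtol argument from this thread; both are assumptions about the final API):

import unittest

from scimath.units.api import UnitScalar
from scimath.units.compare_units import unit_scalars_almost_equal

x = UnitScalar(1.0, units="m")
y = UnitScalar(100.1, units="cm")

# pytest / nose style: a plain assert on the boolean result.
assert unit_scalars_almost_equal(x, y, rtol=1e-2)

# unittest style: the same function drops into assertTrue.
class ComparisonTestCase(unittest.TestCase):
    def test_lengths_close(self):
        self.assertTrue(unit_scalars_almost_equal(x, y, rtol=1e-2))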

@mdickinson mdickinson removed their request for review September 6, 2022 16:18