[Bug]: 4 tests fail on ppc (perhaps precision issue) #2933

Closed
barracuda156 opened this issue Mar 27, 2024 · 4 comments

@barracuda156

What Operating System(s) are you seeing this problem on?

Other (please specify in the Steps to Reproduce)

dlib version

19.24.3

Python version

3.11

Compiler

GCC 13.2.0

Expected Behavior

Ideally the tests would use a precision threshold appropriate for 32-bit platforms, so that they pass without failures.
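For illustration, a hedged sketch of what a platform-aware tolerance could look like in test_on_holder_table; the 32-bit check and the widened 1e-3 epsilon are my assumptions, not anything dlib currently does:

```python
import math
import struct

from dlib import find_min_global

# Standard Hölder table benchmark; its global minimum is about -19.2085,
# the value the existing test checks against.
def holder_table(x0, x1):
    return -abs(math.sin(x0) * math.cos(x1)
                * math.exp(abs(1 - math.sqrt(x0 * x0 + x1 * x1) / math.pi)))

# Assumption: keep 1e-7 on 64-bit builds (as the test does today) and only
# widen the tolerance on 32-bit builds; 1e-3 is an illustrative guess.
EPS = 1e-7 if struct.calcsize("P") * 8 == 64 else 1e-3

def test_on_holder_table():
    x, y = find_min_global(holder_table, [-10, -10], [10, 10], 200)
    assert abs(y - -19.2085025679) < EPS
```

That said, the failures below return inf and -inf rather than slightly-off values, so loosening the tolerance alone may not be enough.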

Current Behavior

--->  Testing py311-dlib
Executing:  cd "/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_math_dlib/py311-dlib/work/dlib-19.24.3" && py.test-3.11 -o addopts='' 
============================= test session starts ==============================
platform darwin -- Python 3.11.8, pytest-7.4.3, pluggy-1.3.0
rootdir: /opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_math_dlib/py311-dlib/work/dlib-19.24.3
plugins: flaky-3.7.0, cov-4.1.0
collected 75 items

tools/python/test/test_array.py ..............                           [ 18%]
tools/python/test/test_chinese_whispers.py ...                           [ 22%]
tools/python/test/test_global_optimization.py F.F                        [ 26%]
tools/python/test/test_matrix.py ........                                [ 37%]
tools/python/test/test_numpy_returns.py ...                              [ 41%]
tools/python/test/test_point.py ....                                     [ 46%]
tools/python/test/test_range.py .....                                    [ 53%]
tools/python/test/test_rgb_pixel.py .                                    [ 54%]
tools/python/test/test_sparse_vector.py ....                             [ 60%]
tools/python/test/test_svm_c_trainer.py ....FF......                     [ 76%]
tools/python/test/test_vector.py ..................                      [100%]

=================================== FAILURES ===================================
________________________ test_global_optimization_nargs ________________________

    def test_global_optimization_nargs():
        w0 = find_max_global(lambda *args: sum(args), [0, 0, 0], [1, 1, 1], 10)
        w1 = find_min_global(lambda *args: sum(args), [0, 0, 0], [1, 1, 1], 10)
>       assert w0 == ([1, 1, 1], 3)
E       assert ([0.0, 0.0, 0.0], -inf) == ([1, 1, 1], 3)
E         At index 0 diff: [0.0, 0.0, 0.0] != [1, 1, 1]
E         Use -v to get more diff

tools/python/test/test_global_optimization.py:10: AssertionError
_____________________________ test_on_holder_table _____________________________

    def test_on_holder_table():
        x,y = find_min_global(holder_table,
                                [-10,-10],
                                [10,10],
                                200)
>       assert (y - -19.2085025679) < 1e-7
E       assert (inf - -19.2085025679) < 1e-07

tools/python/test/test_global_optimization.py:69: AssertionError
__________ test_trainers[svm_c_trainer_linear-1.0-0.7666666666666667] __________

training_data = <[AttributeError("'_dlib_pybind11.array' object has no attribute 'typecode'") raised in repr()] tuple object at 0x139f1128>
trainer = <class '_dlib_pybind11.svm_c_trainer_linear'>, class1_accuracy = 1.0
class2_accuracy = 0.7666666666666667

    @pytest.mark.parametrize('trainer, class1_accuracy, class2_accuracy', [
        (svm_c_trainer_radial_basis, 1.0, 1.0),
        (svm_c_trainer_sparse_radial_basis, 1.0, 1.0),
        (svm_c_trainer_histogram_intersection, 1.0, 1.0),
        (svm_c_trainer_sparse_histogram_intersection, 1.0, 1.0),
        (svm_c_trainer_linear, 1.0, 23 / 30),
        (svm_c_trainer_sparse_linear, 1.0, 23 / 30),
        (rvm_trainer_radial_basis, 1.0, 1.0),
        (rvm_trainer_sparse_radial_basis, 1.0, 1.0),
        (rvm_trainer_histogram_intersection, 1.0, 1.0),
        (rvm_trainer_sparse_histogram_intersection, 1.0, 1.0),
        (rvm_trainer_linear, 1.0, 0.6),
        (rvm_trainer_sparse_linear, 1.0, 0.6)
    ])
    def test_trainers(training_data, trainer, class1_accuracy, class2_accuracy):
        predictors, sparse_predictors, response = training_data
        if 'sparse' in trainer.__name__:
            predictors = sparse_predictors
        cv = cross_validate_trainer(trainer(), predictors, response, folds=10)
        assert cv.class1_accuracy == pytest.approx(class1_accuracy)
>       assert cv.class2_accuracy == pytest.approx(class2_accuracy)
E       assert 0.0 == 0.7666666666666667 ± 7.7e-07
E         comparison failed
E         Obtained: 0.0
E         Expected: 0.7666666666666667 ± 7.7e-07

tools/python/test/test_svm_c_trainer.py:59: AssertionError
______ test_trainers[svm_c_trainer_sparse_linear-1.0-0.7666666666666667] _______

training_data = <[AttributeError("'_dlib_pybind11.array' object has no attribute 'typecode'") raised in repr()] tuple object at 0x139f10c8>
trainer = <class '_dlib_pybind11.svm_c_trainer_sparse_linear'>
class1_accuracy = 1.0, class2_accuracy = 0.7666666666666667

    @pytest.mark.parametrize('trainer, class1_accuracy, class2_accuracy', [
        (svm_c_trainer_radial_basis, 1.0, 1.0),
        (svm_c_trainer_sparse_radial_basis, 1.0, 1.0),
        (svm_c_trainer_histogram_intersection, 1.0, 1.0),
        (svm_c_trainer_sparse_histogram_intersection, 1.0, 1.0),
        (svm_c_trainer_linear, 1.0, 23 / 30),
        (svm_c_trainer_sparse_linear, 1.0, 23 / 30),
        (rvm_trainer_radial_basis, 1.0, 1.0),
        (rvm_trainer_sparse_radial_basis, 1.0, 1.0),
        (rvm_trainer_histogram_intersection, 1.0, 1.0),
        (rvm_trainer_sparse_histogram_intersection, 1.0, 1.0),
        (rvm_trainer_linear, 1.0, 0.6),
        (rvm_trainer_sparse_linear, 1.0, 0.6)
    ])
    def test_trainers(training_data, trainer, class1_accuracy, class2_accuracy):
        predictors, sparse_predictors, response = training_data
        if 'sparse' in trainer.__name__:
            predictors = sparse_predictors
        cv = cross_validate_trainer(trainer(), predictors, response, folds=10)
        assert cv.class1_accuracy == pytest.approx(class1_accuracy)
>       assert cv.class2_accuracy == pytest.approx(class2_accuracy)
E       assert 0.0 == 0.7666666666666667 ± 7.7e-07
E         comparison failed
E         Obtained: 0.0
E         Expected: 0.7666666666666667 ± 7.7e-07

tools/python/test/test_svm_c_trainer.py:59: AssertionError
=========================== short test summary info ============================
FAILED tools/python/test/test_global_optimization.py::test_global_optimization_nargs
FAILED tools/python/test/test_global_optimization.py::test_on_holder_table - ...
FAILED tools/python/test/test_svm_c_trainer.py::test_trainers[svm_c_trainer_linear-1.0-0.7666666666666667]
FAILED tools/python/test/test_svm_c_trainer.py::test_trainers[svm_c_trainer_sparse_linear-1.0-0.7666666666666667]
========================= 4 failed, 71 passed in 5.16s =========================
Command failed:  cd "/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_math_dlib/py311-dlib/work/dlib-19.24.3" && py.test-3.11 -o addopts='' 
Exit code: 1
Error: Failed to test py311-dlib: command execution failed

Steps to Reproduce

Run the test suite on the ppc arch. Presumably ppc on BSD or Linux would reproduce this as well, though I can only confirm it for Darwin.
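For a quicker check than the full suite, the optimizer failure can presumably be reproduced directly from the calls in test_global_optimization.py (these are the same calls the test makes, nothing new beyond the prints):

```python
import dlib

# Same calls as test_global_optimization_nargs; in the failing ppc run,
# find_max_global returned ([0.0, 0.0, 0.0], -inf) instead of ([1, 1, 1], 3).
w0 = dlib.find_max_global(lambda *args: sum(args), [0, 0, 0], [1, 1, 1], 10)
w1 = dlib.find_min_global(lambda *args: sum(args), [0, 0, 0], [1, 1, 1], 10)
print(w0)  # expected ([1, 1, 1], 3)
print(w1)  # expected ([0, 0, 0], 0), the minimum of the sum over [0, 1]^3
```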

Anything else?

macOS 10.6 / powerpc (32-bit)
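The two trainer failures can likewise be exercised without pytest. This is only an untested sketch with made-up toy data (the real test builds its own training_data fixture); the dlib calls mirror the ones in the traceback and in dlib's svm_binary_classifier.py example:

```python
import dlib

# Hypothetical linearly separable toy data, just to drive cross_validate_trainer;
# not the fixture the real test uses.
x = dlib.vectors()
y = dlib.array()
for i in range(30):
    x.append(dlib.vector([i / 30.0, (30 - i) / 30.0]))
    y.append(+1 if i < 15 else -1)

cv = dlib.cross_validate_trainer(dlib.svm_c_trainer_linear(), x, y, folds=10)
# In the failing ppc run, class2_accuracy came back as 0.0.
print(cv.class1_accuracy, cv.class2_accuracy)
```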

@davisking
Owner

I don't have a machine to test this on. Can you post a PR with some adjustments to handle this?

@dlib-issue-bot
Collaborator

Warning: this issue has been inactive for 35 days and will be automatically closed on 2024-05-15 if there is no further activity.

If you are waiting for a response but haven't received one it's possible your question is somehow inappropriate. E.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.

@dlib-issue-bot
Collaborator

Warning: this issue has been inactive for 43 days and will be automatically closed on 2024-05-15 if there is no further activity.

If you are waiting for a response but haven't received one it's possible your question is somehow inappropriate. E.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.

@dlib-issue-bot
Collaborator

Notice: this issue has been closed because it has been inactive for 45 days. You may reopen this issue if it has been closed in error.
