I found the following tests randomly failing in GitHub Actions:
TestLightGBMTuner.test_tune_best_score_reproducibility
TestLightGBMTunerCV.test_tune_best_score_reproducibility
test_optimize_parallel_timeout
Expected behavior

Tests should be deterministic.

Suggestion:

We can fix assertions like the one at line 766 of optuna/tests/integration_tests/lightgbm_tuner_tests/test_optimize.py (as of commit 073abfc) by using pytest.approx, which compares numbers with a tolerance (default relative tolerance: 1e-6):

assert best_score_second_try == pytest.approx(best_score_first_try)
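For illustration, here is a minimal standalone sketch (not the Optuna test itself; the function names are made up, and the two values are copied from the failing CI log below) of how pytest.approx handles such last-digit differences, and how a stricter relative tolerance could be requested if the default 1e-6 is considered too loose for this test:

import pytest

def test_scores_match_with_default_tolerance():
    # The two best scores reported in the failing run; they differ only in the last digit.
    best_score_first_try = 0.21086425862654534
    best_score_second_try = 0.21086425862654531
    # Passes under pytest.approx's default relative tolerance of 1e-6.
    assert best_score_second_try == pytest.approx(best_score_first_try)

def test_scores_match_with_strict_tolerance():
    # Even a far stricter relative tolerance passes, since the relative difference
    # between these two values is on the order of 1e-16.
    best_score_first_try = 0.21086425862654534
    best_score_second_try = 0.21086425862654531
    assert best_score_second_try == pytest.approx(best_score_first_try, rel=1e-12)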
Environment

Optuna version: 3.6.0.dev
Python version: 3.10.11
OS: macOS-14.2.1-arm64-arm-64bit
Error messages, stack traces, or logs

>       assert first_trial.value == second_trial.value
E       AssertionError: assert 0.21086425862654534 == 0.21086425862654531
E        +  where 0.21086425862654534 = FrozenTrial(number=27, state=1, values=[0.21086425862654534], datetime_start=datetime.datetime(2024, 1, 27, 23, 47, 9,...alse, low=0.4, step=None), 'bagging_freq': IntDistribution(high=7, log=False, low=1, step=1)}, trial_id=27, value=None).value
E        +  and   0.21086425862654531 = FrozenTrial(number=27, state=1, values=[0.21086425862654531], datetime_start=datetime.datetime(2024, 1, 27, 23, 47, 10...alse, low=0.4, step=None), 'bagging_freq': IntDistribution(high=7, log=False, low=1, step=1)}, trial_id=27, value=None).value
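The two values above differ only in the final digit. As general background (a generic sketch, not Optuna or LightGBM code): floating-point addition is not associative, so merely accumulating the same numbers in a different order, as can happen with multithreaded training, produces discrepancies of this size, which is why an exact equality check on a learned score is fragile:

import random

# Summing the same numbers in two different orders usually changes the last bits
# of the result, because floating-point addition is not associative.
random.seed(0)  # fixes the inputs; the point is the effect of ordering
values = [random.random() for _ in range(10_000)]

forward = sum(values)
backward = sum(reversed(values))

print(forward == backward)       # usually False
print(abs(forward - backward))   # tiny compared to the sum itself (about 5e3)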
Steps to reproduce

By the nature of the problem, there is no deterministic way to reproduce it.
Please take a look at this job log for an example of a failed run.

Additional context (optional)

No response