
LightGBMTuner does not reproduce with a machine epsilon error #5216

Open
nabenabe0928 opened this issue Jan 31, 2024 · 0 comments
Labels
bug: Issue/PR about behavior that is broken. Not for typos/examples/CI/test but for Optuna itself.
CI: Continuous integration.

nabenabe0928 commented Jan 31, 2024

Expected behavior

The following test, which will be skipped once PR#5214 is merged, fails due to a machine-epsilon-level discrepancy.

$ python -m pytest tests/integration_tests/lightgbm_tuner_tests/test_optimize.py::TestLightGBMTuner::test_tune_best_score_reproducibility

I initially suspected the LightGBM version change (v4.2.0 → v4.3.0, released on 26 Jan 2024), but that is not the cause: the same test fails even with v4.2.0.

The primary problem is that LightGBM does not produce the same predictive values even with the same random seed.
This did not happen before, so the regression appears to come from a third-party dependency.
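As a minimal illustration of how such last-bit discrepancies can arise (a pure-Python sketch, no lightgbm involved; the values are illustrative, not taken from the tuner): floating-point addition is not associative, so a change in summation order inside a dependency can shift a result by one unit in the last place even when every seed is fixed.

```python
# Floating-point addition is not associative: reordering a sum can change
# the result in the last bit, which is enough to break exact-equality checks.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)      # False on IEEE-754 doubles
print(abs(left - right))  # about 1.1e-16, one ulp of 0.6
```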

Note that the latest master branch, which does not exhibit this problem, is here, and nothing was changed on our side after that state.

Important

LightGBMTuner cannot be made reproducible unless lightgbm overcomes the machine-epsilon issue, because both the search space of LightGBMTuner and the early stopping in lightgbm depend on the reproducibility of lightgbm.
As far as we can tell, this issue has been occurring since 28 Jan 2024 at the latest.
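If exact reproducibility turns out to be unattainable on the lightgbm side, one possible mitigation on the test side (a sketch, not what any PR actually does) is a tolerance-based comparison such as `math.isclose` or `pytest.approx`. The two best scores from the failing assertion differ only in the final bit of the double representation:

```python
import math

# The two best scores from the failing assertion differ only in the
# last bit of their IEEE-754 double representations.
best_score_first_try = 0.19284783636513833
best_score_second_try = 0.1928478363651383

print(best_score_first_try == best_score_second_try)  # False: exact equality fails
print(math.isclose(best_score_first_try, best_score_second_try, rel_tol=1e-12))  # True
```

Note, however, that a tolerant assertion only hides the symptom: the tuner's search trajectory can still diverge once early stopping picks a different iteration.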

Environment

Observed in the CI environment.

Error messages, stack traces, or logs

Log File of the Failed Test
platform linux -- Python 3.9.13, pytest-7.4.3, pluggy-1.3.0
rootdir: /home/shuhei/pfn-work/optuna
configfile: pyproject.toml
plugins: anyio-4.1.0
collected 1 item                                                                                                                                                                                     

tests/integration_tests/lightgbm_tuner_tests/test_optimize.py F                                                                                                                                [100%]

============================================================================================== FAILURES ==============================================================================================
_______________________________________________________________________ TestLightGBMTuner.test_tune_best_score_reproducibility _______________________________________________________________________

self = <tests.integration_tests.lightgbm_tuner_tests.test_optimize.TestLightGBMTuner object at 0x7f315ed2c250>

    def test_tune_best_score_reproducibility(self) -> None:
        iris = sklearn.datasets.load_iris()
        X_trainval, X_test, y_trainval, y_test = train_test_split(
            iris.data, iris.target, random_state=0
        )
    
        train = lgb.Dataset(X_trainval, y_trainval)
        valid = lgb.Dataset(X_test, y_test)
        params = {
            "objective": "regression",
            "metric": "rmse",
            "random_seed": 0,
            "deterministic": True,
            "force_col_wise": True,
            "verbosity": -1,
        }
    
        tuner_first_try = lgb.LightGBMTuner(
            params,
            train,
            valid_sets=valid,
            callbacks=[early_stopping(stopping_rounds=3), log_evaluation(-1)],
            optuna_seed=10,
        )
        tuner_first_try.run()
        best_score_first_try = tuner_first_try.best_score
    
        tuner_second_try = lgb.LightGBMTuner(
            params,
            train,
            valid_sets=valid,
            callbacks=[early_stopping(stopping_rounds=3), log_evaluation(-1)],
            optuna_seed=10,
        )
        tuner_second_try.run()
        best_score_second_try = tuner_second_try.best_score
    
>       assert best_score_second_try == best_score_first_try
E       assert 0.1928478363651383 == 0.19284783636513833

tests/integration_tests/lightgbm_tuner_tests/test_optimize.py:766: AssertionError
---------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.201387
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.201387
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.210864
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.199213
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[51]    valid_0's rmse: 0.304719
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[55]    valid_0's rmse: 0.295272
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.213602
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[21]    valid_0's rmse: 0.258069
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.203599
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[65]    valid_0's rmse: 0.242412
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.193507
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[58]    valid_0's rmse: 0.240853
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[66]    valid_0's rmse: 0.278094
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192877
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.193732
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.194742
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192867
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[46]    valid_0's rmse: 0.194857
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192866
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[50]    valid_0's rmse: 0.198015
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192873
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.19301
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[52]    valid_0's rmse: 0.195846
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[40]    valid_0's rmse: 0.19441
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[131]   valid_0's rmse: 0.198726
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192925
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.194926
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[46]    valid_0's rmse: 0.199631
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[32]    valid_0's rmse: 0.20704
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[29]    valid_0's rmse: 0.443489
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[41]    valid_0's rmse: 0.19451
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[1]     valid_0's rmse: 0.766643
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.201387
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.201387
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.210864
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.199213
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[51]    valid_0's rmse: 0.304719
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[55]    valid_0's rmse: 0.295272
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.213602
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[21]    valid_0's rmse: 0.258069
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[38]    valid_0's rmse: 0.203599
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[65]    valid_0's rmse: 0.242412
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.193507
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[34]    valid_0's rmse: 0.198688
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[66]    valid_0's rmse: 0.240036
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[66]    valid_0's rmse: 0.278094
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192877
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[49]    valid_0's rmse: 0.193732
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[42]    valid_0's rmse: 0.194742
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192867
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[46]    valid_0's rmse: 0.194857
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192866
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192848
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[50]    valid_0's rmse: 0.198015
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192873
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.19301
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[52]    valid_0's rmse: 0.195846
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[40]    valid_0's rmse: 0.19441
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[131]   valid_0's rmse: 0.198726
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.192925
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[44]    valid_0's rmse: 0.194926
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[46]    valid_0's rmse: 0.199631
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[32]    valid_0's rmse: 0.20704
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[29]    valid_0's rmse: 0.443489
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[41]    valid_0's rmse: 0.19451
Training until validation scores don't improve for 3 rounds
Early stopping, best iteration is:
[2]     valid_0's rmse: 0.766643
---------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------
[I 2024-01-31 06:34:09,955] A new study created in memory with name: no-name-52c8ea93-d9c0-41c4-ad14-8b5de9b64c88
feature_fraction, val_score: 0.192848:   0%|          | 0/7 [00:00<?, ?it/s][I 2024-01-31 06:34:09,978] Trial 0 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.6}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  14%|#4        | 1/7 [00:00<00:00, 30.45it/s][I 2024-01-31 06:34:09,989] Trial 1 finished with value: 0.20138738585980612 and parameters: {'feature_fraction': 1.0}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  29%|##8       | 2/7 [00:00<00:00, 52.86it/s][I 2024-01-31 06:34:09,994] Trial 2 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.4}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  43%|####2     | 3/7 [00:00<00:00, 72.44it/s][I 2024-01-31 06:34:09,997] Trial 3 finished with value: 0.1986880136279893 and parameters: {'feature_fraction': 0.7}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  57%|#####7    | 4/7 [00:00<00:00, 80.41it/s][I 2024-01-31 06:34:10,006] Trial 4 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.8}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  71%|#######1  | 5/7 [00:00<00:00, 92.51it/s][I 2024-01-31 06:34:10,010] Trial 5 finished with value: 0.20138738585980615 and parameters: {'feature_fraction': 0.8999999999999999}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848:  86%|########5 | 6/7 [00:00<00:00, 103.73it/s][I 2024-01-31 06:34:10,014] Trial 6 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.5}. Best is trial 0 with value: 0.19284783636513833.
feature_fraction, val_score: 0.192848: 100%|##########| 7/7 [00:00<00:00, 119.73it/s]
num_leaves, val_score: 0.192848:   0%|          | 0/20 [00:00<?, ?it/s][I 2024-01-31 06:34:10,022] Trial 7 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 198}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:   5%|5         | 1/20 [00:00<00:00, 82.51it/s] [I 2024-01-31 06:34:10,027] Trial 8 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 7}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  10%|#         | 2/20 [00:00<00:00, 123.16it/s][I 2024-01-31 06:34:10,031] Trial 9 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 163}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  15%|#5        | 3/20 [00:00<00:00, 121.65it/s][I 2024-01-31 06:34:10,040] Trial 10 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 32}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  20%|##        | 4/20 [00:00<00:00, 127.12it/s][I 2024-01-31 06:34:10,046] Trial 11 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 251}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  25%|##5       | 5/20 [00:00<00:00, 123.48it/s][I 2024-01-31 06:34:10,055] Trial 12 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 100}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  30%|###       | 6/20 [00:00<00:00, 126.94it/s][I 2024-01-31 06:34:10,062] Trial 13 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 89}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  35%|###5      | 7/20 [00:00<00:00, 117.28it/s][I 2024-01-31 06:34:10,075] Trial 14 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 233}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  40%|####      | 8/20 [00:00<00:00, 118.46it/s][I 2024-01-31 06:34:10,082] Trial 15 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 140}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  45%|####5     | 9/20 [00:00<00:00, 120.60it/s][I 2024-01-31 06:34:10,089] Trial 16 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 57}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  50%|#####     | 10/20 [00:00<00:00, 123.09it/s][I 2024-01-31 06:34:10,096] Trial 17 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 186}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 125.51it/s][I 2024-01-31 06:34:10,102] Trial 18 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 112}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  60%|######    | 12/20 [00:00<00:00, 127.54it/s][I 2024-01-31 06:34:10,109] Trial 19 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 66}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  70%|#######   | 14/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,116] Trial 20 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 214}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  70%|#######   | 14/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,122] Trial 21 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 134}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  75%|#######5  | 15/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,129] Trial 22 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 162}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  80%|########  | 16/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,135] Trial 23 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 8}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  85%|########5 | 17/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,142] Trial 24 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 256}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  90%|######### | 18/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,148] Trial 25 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 57}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848:  95%|#########5| 19/20 [00:00<00:00, 138.89it/s][I 2024-01-31 06:34:10,155] Trial 26 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 82}. Best is trial 7 with value: 0.19284783636513833.
num_leaves, val_score: 0.192848: 100%|##########| 20/20 [00:00<00:00, 141.98it/s]
bagging, val_score: 0.192848:   0%|          | 0/10 [00:00<?, ?it/s][I 2024-01-31 06:34:10,182] Trial 27 finished with value: 0.21086425862654531 and parameters: {'bagging_fraction': 0.666483705678306, 'bagging_freq': 3}. Best is trial 27 with value: 0.21086425862654531.
bagging, val_score: 0.192848:  10%|#         | 1/10 [00:00<00:00, 27.49it/s][I 2024-01-31 06:34:10,193] Trial 28 finished with value: 0.19921306576707654 and parameters: {'bagging_fraction': 0.9856618645841627, 'bagging_freq': 6}. Best is trial 28 with value: 0.19921306576707654.
bagging, val_score: 0.192848:  20%|##        | 2/10 [00:00<00:00, 38.94it/s][I 2024-01-31 06:34:10,207] Trial 29 finished with value: 0.30471902456921146 and parameters: {'bagging_fraction': 0.41584137907691615, 'bagging_freq': 1}. Best is trial 28 with value: 0.19921306576707654.
bagging, val_score: 0.192848:  30%|###       | 3/10 [00:00<00:00, 50.03it/s][I 2024-01-31 06:34:10,216] Trial 30 finished with value: 0.19284783636513833 and parameters: {'bagging_fraction': 0.996909890130344, 'bagging_freq': 6}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  40%|####      | 4/10 [00:00<00:00, 58.69it/s][I 2024-01-31 06:34:10,224] Trial 31 finished with value: 0.29527229561662555 and parameters: {'bagging_fraction': 0.43785600833623395, 'bagging_freq': 1}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  50%|#####     | 5/10 [00:00<00:00, 65.57it/s][I 2024-01-31 06:34:10,232] Trial 32 finished with value: 0.21360191373048099 and parameters: {'bagging_fraction': 0.7566230664702958, 'bagging_freq': 4}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  60%|######    | 6/10 [00:00<00:00, 72.19it/s][I 2024-01-31 06:34:10,239] Trial 33 finished with value: 0.25806932265579285 and parameters: {'bagging_fraction': 0.6533369354119082, 'bagging_freq': 7}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  70%|#######   | 7/10 [00:00<00:00, 77.03it/s][I 2024-01-31 06:34:10,247] Trial 34 finished with value: 0.20359911028409614 and parameters: {'bagging_fraction': 0.8183561694298473, 'bagging_freq': 3}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  80%|########  | 8/10 [00:00<00:00, 80.38it/s][I 2024-01-31 06:34:10,256] Trial 35 finished with value: 0.2424119647336161 and parameters: {'bagging_fraction': 0.5424582609460753, 'bagging_freq': 5}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848: 100%|##########| 10/10 [00:00<00:00, 92.36it/s][I 2024-01-31 06:34:10,264] Trial 36 finished with value: 0.19350692822046836 and parameters: {'bagging_fraction': 0.8660871048039956, 'bagging_freq': 2}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848: 100%|##########| 10/10 [00:00<00:00, 92.02it/s]
feature_fraction_stage2, val_score: 0.192848:   0%|          | 0/6 [00:00<?, ?it/s][I 2024-01-31 06:34:10,270] Trial 37 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.616}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848:  17%|#6        | 1/6 [00:00<00:00, 118.83it/s][I 2024-01-31 06:34:10,274] Trial 38 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.552}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848:  33%|###3      | 2/6 [00:00<00:00, 171.83it/s][I 2024-01-31 06:34:10,277] Trial 39 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.6799999999999999}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848:  50%|#####     | 3/6 [00:00<00:00, 195.09it/s][I 2024-01-31 06:34:10,281] Trial 40 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.52}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848:  67%|######6   | 4/6 [00:00<00:00, 208.49it/s][I 2024-01-31 06:34:10,285] Trial 41 finished with value: 0.19284783636513833 and parameters: {'feature_fraction': 0.584}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848:  83%|########3 | 5/6 [00:00<00:00, 219.75it/s][I 2024-01-31 06:34:10,288] Trial 42 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.6479999999999999}. Best is trial 37 with value: 0.19284783636513833.
feature_fraction_stage2, val_score: 0.192848: 100%|##########| 6/6 [00:00<00:00, 258.08it/s]
regularization_factors, val_score: 0.192848:   5%|5         | 1/20 [00:00<00:02,  8.64it/s][I 2024-01-31 06:34:10,405] Trial 43 finished with value: 0.1928484729558045 and parameters: {'lambda_l1': 9.93700675779319e-05, 'lambda_l2': 7.811139080736277e-06}. Best is trial 43 with value: 0.1928484729558045.
regularization_factors, val_score: 0.192848:   5%|5         | 1/20 [00:00<00:02,  8.64it/s][I 2024-01-31 06:34:10,426] Trial 44 finished with value: 0.24085262361853782 and parameters: {'lambda_l1': 6.094358089994864, 'lambda_l2': 0.3727733298737283}. Best is trial 43 with value: 0.1928484729558045.
regularization_factors, val_score: 0.192848:  10%|#         | 2/20 [00:00<00:02,  8.64it/s][I 2024-01-31 06:34:10,436] Trial 45 finished with value: 0.1928478364226583 and parameters: {'lambda_l1': 1.7283061930274425e-08, 'lambda_l2': 1.230382552806005e-08}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  15%|#5        | 3/20 [00:00<00:01,  8.64it/s][I 2024-01-31 06:34:10,446] Trial 46 finished with value: 0.2780936731243131 and parameters: {'lambda_l1': 8.98769553497471, 'lambda_l2': 0.29667699871423225}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  20%|##        | 4/20 [00:00<00:01,  8.64it/s][I 2024-01-31 06:34:10,456] Trial 47 finished with value: 0.19284794722266166 and parameters: {'lambda_l1': 3.696920615808587e-08, 'lambda_l2': 0.00021535901169610452}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  25%|##5       | 5/20 [00:00<00:01,  8.64it/s][I 2024-01-31 06:34:10,477] Trial 48 finished with value: 0.19287740769559633 and parameters: {'lambda_l1': 0.002235352391236717, 'lambda_l2': 1.0997655205230201e-08}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  30%|###       | 6/20 [00:00<00:01,  8.64it/s][I 2024-01-31 06:34:10,488] Trial 49 finished with value: 0.19373223783354168 and parameters: {'lambda_l1': 6.310358476322484e-05, 'lambda_l2': 4.780969386107899}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  40%|####      | 8/20 [00:00<00:00, 30.51it/s][I 2024-01-31 06:34:10,576] Trial 50 finished with value: 0.19474184615128642 and parameters: {'lambda_l1': 0.018851353676161593, 'lambda_l2': 1.042593430695046e-05}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  40%|####      | 8/20 [00:00<00:00, 30.51it/s][I 2024-01-31 06:34:10,946] Trial 51 finished with value: 0.19286715995944217 and parameters: {'lambda_l1': 1.3704838249985267e-06, 'lambda_l2': 0.00905088815474374}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  45%|####5     | 9/20 [00:00<00:00, 30.51it/s][I 2024-01-31 06:34:10,956] Trial 52 finished with value: 0.19485719436488125 and parameters: {'lambda_l1': 0.09801816595494481, 'lambda_l2': 8.299352208122134e-07}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:10,967] Trial 53 finished with value: 0.192866459599029 and parameters: {'lambda_l1': 1.3573659312706606e-06, 'lambda_l2': 0.007349156185467266}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:10,977] Trial 54 finished with value: 0.19284848060983303 and parameters: {'lambda_l1': 2.312997501277467e-06, 'lambda_l2': 0.0012260614432908898}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  60%|######    | 12/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:10,987] Trial 55 finished with value: 0.19801464617122438 and parameters: {'lambda_l1': 0.5382322606043406, 'lambda_l2': 2.13990284021344e-07}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  65%|######5   | 13/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:10,998] Trial 56 finished with value: 0.19287287255658958 and parameters: {'lambda_l1': 0.0015251362984950996, 'lambda_l2': 6.727200615275088e-05}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  70%|#######   | 14/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:11,008] Trial 57 finished with value: 0.19300962952595982 and parameters: {'lambda_l1': 1.064908681463901e-05, 'lambda_l2': 0.0854951213123306}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  75%|#######5  | 15/20 [00:00<00:00, 14.79it/s][I 2024-01-31 06:34:11,028] Trial 58 finished with value: 0.19584593785821336 and parameters: {'lambda_l1': 1.1856834273439001e-07, 'lambda_l2': 8.38916156482854}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  85%|########5 | 17/20 [00:00<00:00, 23.16it/s][I 2024-01-31 06:34:11,082] Trial 59 finished with value: 0.1944099347070415 and parameters: {'lambda_l1': 0.012553983402332158, 'lambda_l2': 2.164485723165938e-07}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  85%|########5 | 17/20 [00:00<00:00, 23.16it/s][I 2024-01-31 06:34:11,157] Trial 60 finished with value: 0.19872604315486722 and parameters: {'lambda_l1': 1.3554891296507747, 'lambda_l2': 0.0035982736075678905}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  90%|######### | 18/20 [00:00<00:00, 23.16it/s][I 2024-01-31 06:34:11,184] Trial 61 finished with value: 0.1929246815592086 and parameters: {'lambda_l1': 0.0003667994109844849, 'lambda_l2': 0.038895327598350946}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848:  95%|#########5| 19/20 [00:00<00:00, 23.16it/s][I 2024-01-31 06:34:11,221] Trial 62 finished with value: 0.19492601769082882 and parameters: {'lambda_l1': 0.10844471995802324, 'lambda_l2': 2.225356496667839e-05}. Best is trial 45 with value: 0.1928478364226583.
regularization_factors, val_score: 0.192848: 100%|##########| 20/20 [00:00<00:00, 21.44it/s]
min_child_samples, val_score: 0.192848:   0%|          | 0/5 [00:00<?, ?it/s][I 2024-01-31 06:34:11,229] Trial 63 finished with value: 0.1996308027771085 and parameters: {'min_child_samples': 25}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  20%|##        | 1/5 [00:00<00:00, 65.61it/s] [I 2024-01-31 06:34:11,238] Trial 64 finished with value: 0.20704025068959248 and parameters: {'min_child_samples': 5}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  40%|####      | 2/5 [00:00<00:00, 105.93it/s][I 2024-01-31 06:34:11,241] Trial 65 finished with value: 0.44348930161433014 and parameters: {'min_child_samples': 50}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  60%|######    | 3/5 [00:00<00:00, 118.48it/s][I 2024-01-31 06:34:11,248] Trial 66 finished with value: 0.1945101875678247 and parameters: {'min_child_samples': 10}. Best is trial 66 with value: 0.1945101875678247.
min_child_samples, val_score: 0.192848:  80%|########  | 4/5 [00:00<00:00, 142.07it/s][I 2024-01-31 06:34:11,251] Trial 67 finished with value: 0.7666431214180529 and parameters: {'min_child_samples': 100}. Best is trial 66 with value: 0.1945101875678247.
min_child_samples, val_score: 0.192848: 100%|##########| 5/5 [00:00<00:00, 173.66it/s]
[I 2024-01-31 06:34:11,251] A new study created in memory with name: no-name-338e2f81-615f-4a3a-b7d5-05e4c3f2742e
feature_fraction, val_score: 0.192848:   0%|          | 0/7 [00:00<?, ?it/s][I 2024-01-31 06:34:11,257] Trial 0 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.6}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  14%|#4        | 1/7 [00:00<00:00, 110.02it/s][I 2024-01-31 06:34:11,261] Trial 1 finished with value: 0.20138738585980612 and parameters: {'feature_fraction': 1.0}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  29%|##8       | 2/7 [00:00<00:00, 131.20it/s][I 2024-01-31 06:34:11,267] Trial 2 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.4}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  43%|####2     | 3/7 [00:00<00:00, 156.34it/s][I 2024-01-31 06:34:11,271] Trial 3 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.7}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  57%|#####7    | 4/7 [00:00<00:00, 172.73it/s][I 2024-01-31 06:34:11,275] Trial 4 finished with value: 0.19868801362798932 and parameters: {'feature_fraction': 0.8}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  71%|#######1  | 5/7 [00:00<00:00, 179.32it/s][I 2024-01-31 06:34:11,280] Trial 5 finished with value: 0.20138738585980612 and parameters: {'feature_fraction': 0.8999999999999999}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848:  86%|########5 | 6/7 [00:00<00:00, 177.52it/s][I 2024-01-31 06:34:11,286] Trial 6 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.5}. Best is trial 0 with value: 0.1928478363651383.
feature_fraction, val_score: 0.192848: 100%|##########| 7/7 [00:00<00:00, 202.58it/s]
num_leaves, val_score: 0.192848:   0%|          | 0/20 [00:00<?, ?it/s][I 2024-01-31 06:34:11,292] Trial 7 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 198}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:   5%|5         | 1/20 [00:00<00:00, 105.72it/s][I 2024-01-31 06:34:11,296] Trial 8 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 7}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  10%|#         | 2/20 [00:00<00:00, 129.76it/s][I 2024-01-31 06:34:11,302] Trial 9 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 163}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  15%|#5        | 3/20 [00:00<00:00, 131.37it/s][I 2024-01-31 06:34:11,310] Trial 10 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 32}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  20%|##        | 4/20 [00:00<00:00, 123.74it/s][I 2024-01-31 06:34:11,319] Trial 11 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 251}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  25%|##5       | 5/20 [00:00<00:00, 124.55it/s][I 2024-01-31 06:34:11,327] Trial 12 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 100}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  30%|###       | 6/20 [00:00<00:00, 120.32it/s][I 2024-01-31 06:34:11,337] Trial 13 finished with value: 0.19284783636513833 and parameters: {'num_leaves': 89}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  35%|###5      | 7/20 [00:00<00:00, 120.01it/s][I 2024-01-31 06:34:11,345] Trial 14 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 233}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  40%|####      | 8/20 [00:00<00:00, 120.51it/s][I 2024-01-31 06:34:11,353] Trial 15 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 140}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  45%|####5     | 9/20 [00:00<00:00, 120.18it/s][I 2024-01-31 06:34:11,362] Trial 16 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 57}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  50%|#####     | 10/20 [00:00<00:00, 120.13it/s][I 2024-01-31 06:34:11,370] Trial 17 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 186}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 119.71it/s][I 2024-01-31 06:34:11,379] Trial 18 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 112}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  65%|######5   | 13/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,387] Trial 19 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 66}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  65%|######5   | 13/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,395] Trial 20 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 214}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  70%|#######   | 14/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,404] Trial 21 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 134}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  75%|#######5  | 15/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,412] Trial 22 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 162}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  80%|########  | 16/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,421] Trial 23 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 8}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  85%|########5 | 17/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,429] Trial 24 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 256}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  90%|######### | 18/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,438] Trial 25 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 57}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848:  95%|#########5| 19/20 [00:00<00:00, 129.85it/s][I 2024-01-31 06:34:11,447] Trial 26 finished with value: 0.1928478363651383 and parameters: {'num_leaves': 82}. Best is trial 7 with value: 0.1928478363651383.
num_leaves, val_score: 0.192848: 100%|##########| 20/20 [00:00<00:00, 124.59it/s]
bagging, val_score: 0.192848:   0%|          | 0/10 [00:00<?, ?it/s][I 2024-01-31 06:34:11,520] Trial 27 finished with value: 0.21086425862654531 and parameters: {'bagging_fraction': 0.666483705678306, 'bagging_freq': 3}. Best is trial 27 with value: 0.21086425862654531.
bagging, val_score: 0.192848:  20%|##        | 2/10 [00:00<00:00, 17.13it/s][I 2024-01-31 06:34:11,565] Trial 28 finished with value: 0.19921306576707654 and parameters: {'bagging_fraction': 0.9856618645841627, 'bagging_freq': 6}. Best is trial 28 with value: 0.19921306576707654.
bagging, val_score: 0.192848:  20%|##        | 2/10 [00:00<00:00, 17.13it/s][I 2024-01-31 06:34:11,670] Trial 29 finished with value: 0.30471902456921146 and parameters: {'bagging_fraction': 0.41584137907691615, 'bagging_freq': 1}. Best is trial 28 with value: 0.19921306576707654.
bagging, val_score: 0.192848:  40%|####      | 4/10 [00:00<00:00, 16.22it/s][I 2024-01-31 06:34:11,693] Trial 30 finished with value: 0.19284783636513833 and parameters: {'bagging_fraction': 0.996909890130344, 'bagging_freq': 6}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  40%|####      | 4/10 [00:00<00:00, 16.22it/s][I 2024-01-31 06:34:11,712] Trial 31 finished with value: 0.29527229561662555 and parameters: {'bagging_fraction': 0.43785600833623395, 'bagging_freq': 1}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  50%|#####     | 5/10 [00:00<00:00, 16.22it/s][I 2024-01-31 06:34:11,789] Trial 32 finished with value: 0.21360191373048099 and parameters: {'bagging_fraction': 0.7566230664702958, 'bagging_freq': 4}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  70%|#######   | 7/10 [00:00<00:00, 17.38it/s][I 2024-01-31 06:34:11,855] Trial 33 finished with value: 0.25806932265579285 and parameters: {'bagging_fraction': 0.6533369354119082, 'bagging_freq': 7}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  70%|#######   | 7/10 [00:00<00:00, 17.38it/s][I 2024-01-31 06:34:11,876] Trial 34 finished with value: 0.20359911028409614 and parameters: {'bagging_fraction': 0.8183561694298473, 'bagging_freq': 3}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  80%|########  | 8/10 [00:00<00:00, 17.38it/s][I 2024-01-31 06:34:11,898] Trial 35 finished with value: 0.2424119647336161 and parameters: {'bagging_fraction': 0.5424582609460753, 'bagging_freq': 5}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848:  90%|######### | 9/10 [00:00<00:00, 17.38it/s][I 2024-01-31 06:34:11,928] Trial 36 finished with value: 0.19350692822046833 and parameters: {'bagging_fraction': 0.8660871048039956, 'bagging_freq': 2}. Best is trial 30 with value: 0.19284783636513833.
bagging, val_score: 0.192848: 100%|##########| 10/10 [00:00<00:00, 20.81it/s]
feature_fraction_stage2, val_score: 0.192848:   0%|          | 0/6 [00:00<?, ?it/s][I 2024-01-31 06:34:11,936] Trial 37 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.616}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848:  17%|#6        | 1/6 [00:00<00:00, 79.67it/s] [I 2024-01-31 06:34:11,942] Trial 38 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.552}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848:  33%|###3      | 2/6 [00:00<00:00, 120.52it/s][I 2024-01-31 06:34:11,946] Trial 39 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.6799999999999999}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848:  50%|#####     | 3/6 [00:00<00:00, 129.60it/s][I 2024-01-31 06:34:11,952] Trial 40 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.52}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848:  67%|######6   | 4/6 [00:00<00:00, 140.17it/s][I 2024-01-31 06:34:11,958] Trial 41 finished with value: 0.1928478363651383 and parameters: {'feature_fraction': 0.584}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848:  83%|########3 | 5/6 [00:00<00:00, 152.58it/s][I 2024-01-31 06:34:11,962] Trial 42 finished with value: 0.19868801362798935 and parameters: {'feature_fraction': 0.6479999999999999}. Best is trial 37 with value: 0.1928478363651383.
feature_fraction_stage2, val_score: 0.192848: 100%|##########| 6/6 [00:00<00:00, 178.48it/s]
regularization_factors, val_score: 0.192848:   0%|          | 0/20 [00:00<?, ?it/s][I 2024-01-31 06:34:11,988] Trial 43 finished with value: 0.1928484729558045 and parameters: {'lambda_l1': 9.93700675779319e-05, 'lambda_l2': 7.811139080736277e-06}. Best is trial 43 with value: 0.1928484729558045.
regularization_factors, val_score: 0.192848:   5%|5         | 1/20 [00:00<00:00, 27.92it/s][I 2024-01-31 06:34:11,999] Trial 44 finished with value: 0.240036178907336 and parameters: {'lambda_l1': 6.094358089994864, 'lambda_l2': 0.3727733298737283}. Best is trial 43 with value: 0.1928484729558045.
regularization_factors, val_score: 0.192848:  15%|#5        | 3/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,082] Trial 45 finished with value: 0.19284783642265832 and parameters: {'lambda_l1': 1.7283061930274425e-08, 'lambda_l2': 1.230382552806005e-08}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  15%|#5        | 3/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,093] Trial 46 finished with value: 0.2780936731243131 and parameters: {'lambda_l1': 8.98769553497471, 'lambda_l2': 0.29667699871423225}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  20%|##        | 4/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,104] Trial 47 finished with value: 0.19284794722266166 and parameters: {'lambda_l1': 3.696920615808587e-08, 'lambda_l2': 0.00021535901169610452}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  25%|##5       | 5/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,119] Trial 48 finished with value: 0.19287740769559633 and parameters: {'lambda_l1': 0.002235352391236717, 'lambda_l2': 1.0997655205230201e-08}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  30%|###       | 6/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,130] Trial 49 finished with value: 0.19373223783354165 and parameters: {'lambda_l1': 6.310358476322484e-05, 'lambda_l2': 4.780969386107899}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  35%|###5      | 7/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,139] Trial 50 finished with value: 0.19474184615128642 and parameters: {'lambda_l1': 0.018851353676161593, 'lambda_l2': 1.042593430695046e-05}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  40%|####      | 8/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,150] Trial 51 finished with value: 0.19286715995944215 and parameters: {'lambda_l1': 1.3704838249985267e-06, 'lambda_l2': 0.00905088815474374}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  45%|####5     | 9/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,160] Trial 52 finished with value: 0.19485719436488125 and parameters: {'lambda_l1': 0.09801816595494481, 'lambda_l2': 8.299352208122134e-07}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 45.12it/s][I 2024-01-31 06:34:12,223] Trial 53 finished with value: 0.19286645959902898 and parameters: {'lambda_l1': 1.3573659312706606e-06, 'lambda_l2': 0.007349156185467266}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  55%|#####5    | 11/20 [00:00<00:00, 45.12it/s][I 2024-01-31 06:34:12,377] Trial 54 finished with value: 0.19284848060983306 and parameters: {'lambda_l1': 2.312997501277467e-06, 'lambda_l2': 0.0012260614432908898}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  60%|######    | 12/20 [00:00<00:00, 45.12it/s][I 2024-01-31 06:34:12,388] Trial 55 finished with value: 0.19801464617122438 and parameters: {'lambda_l1': 0.5382322606043406, 'lambda_l2': 2.13990284021344e-07}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  65%|######5   | 13/20 [00:00<00:00, 45.12it/s][I 2024-01-31 06:34:12,408] Trial 56 finished with value: 0.19287287255658958 and parameters: {'lambda_l1': 0.0015251362984950996, 'lambda_l2': 6.727200615275088e-05}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  70%|#######   | 14/20 [00:00<00:00, 45.12it/s][I 2024-01-31 06:34:12,423] Trial 57 finished with value: 0.19300962952595982 and parameters: {'lambda_l1': 1.064908681463901e-05, 'lambda_l2': 0.0854951213123306}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  80%|########  | 16/20 [00:00<00:00, 33.02it/s][I 2024-01-31 06:34:12,432] Trial 58 finished with value: 0.19584593785821336 and parameters: {'lambda_l1': 1.1856834273439001e-07, 'lambda_l2': 8.38916156482854}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  80%|########  | 16/20 [00:00<00:00, 33.02it/s][I 2024-01-31 06:34:12,542] Trial 59 finished with value: 0.1944099347070415 and parameters: {'lambda_l1': 0.012553983402332158, 'lambda_l2': 2.164485723165938e-07}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  85%|########5 | 17/20 [00:00<00:00, 33.02it/s][I 2024-01-31 06:34:12,557] Trial 60 finished with value: 0.19872604315486722 and parameters: {'lambda_l1': 1.3554891296507747, 'lambda_l2': 0.0035982736075678905}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848:  90%|######### | 18/20 [00:00<00:00, 33.02it/s][I 2024-01-31 06:34:12,583] Trial 61 finished with value: 0.19292468155920858 and parameters: {'lambda_l1': 0.0003667994109844849, 'lambda_l2': 0.038895327598350946}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848: 100%|##########| 20/20 [00:00<00:00, 25.45it/s][I 2024-01-31 06:34:12,665] Trial 62 finished with value: 0.19492601769082882 and parameters: {'lambda_l1': 0.10844471995802324, 'lambda_l2': 2.225356496667839e-05}. Best is trial 45 with value: 0.19284783642265832.
regularization_factors, val_score: 0.192848: 100%|##########| 20/20 [00:00<00:00, 28.49it/s]
min_child_samples, val_score: 0.192848:   0%|          | 0/5 [00:00<?, ?it/s][I 2024-01-31 06:34:12,673] Trial 63 finished with value: 0.1996308027771085 and parameters: {'min_child_samples': 25}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  20%|##        | 1/5 [00:00<00:00, 74.02it/s] [I 2024-01-31 06:34:12,680] Trial 64 finished with value: 0.20704025068959248 and parameters: {'min_child_samples': 5}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  40%|####      | 2/5 [00:00<00:00, 117.19it/s][I 2024-01-31 06:34:12,683] Trial 65 finished with value: 0.44348930161433014 and parameters: {'min_child_samples': 50}. Best is trial 63 with value: 0.1996308027771085.
min_child_samples, val_score: 0.192848:  60%|######    | 3/5 [00:00<00:00, 128.00it/s][I 2024-01-31 06:34:12,690] Trial 66 finished with value: 0.19451018756782473 and parameters: {'min_child_samples': 10}. Best is trial 66 with value: 0.19451018756782473.
min_child_samples, val_score: 0.192848:  80%|########  | 4/5 [00:00<00:00, 157.16it/s][I 2024-01-31 06:34:12,692] Trial 67 finished with value: 0.7666431214180529 and parameters: {'min_child_samples': 100}. Best is trial 66 with value: 0.19451018756782473.
min_child_samples, val_score: 0.192848: 100%|##########| 5/5 [00:00<00:00, 191.97it/s]
====================================================================================== short test summary info =======================================================================================
FAILED tests/integration_tests/lightgbm_tuner_tests/test_optimize.py::TestLightGBMTuner::test_tune_best_score_reproducibility - assert 0.1928478363651383 == 0.19284783636513833
========================================================================================= 1 failed in 3.49s ==========================================================================================
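Not part of the original report, but a small sketch of how tiny the mismatch in the failed assertion above is: the two scores differ only in the last significant digit of a float64, i.e. by roughly one unit in the last place (ULP). A tolerance-based comparison such as `math.isclose` treats them as equal even though `==` does not:

```python
import math

# The two scores copied verbatim from the pytest summary above.
# Their reprs differ, so they parse to two adjacent float64 values.
a = 0.1928478363651383
b = 0.19284783636513833

print(a == b)                              # False: bit-exact equality fails
print(abs(b - a))                          # tiny, on the order of math.ulp(a)
print(math.isclose(a, b, rel_tol=1e-12))   # True: equal within tolerance
```

In a test, the same idea is commonly written as `assert score1 == pytest.approx(score2)`. Whether the tuner should be deemed reproducible under such a tolerance is of course a separate question from the bit-exact reproducibility this issue asks for.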

Steps to reproduce

$ python -m pytest tests/integration_tests/lightgbm_tuner_tests/test_optimize.py::TestLightGBMTuner::test_tune_best_score_reproducibility

Additional context (optional)

No response

@nabenabe0928 nabenabe0928 added bug Issue/PR about behavior that is broken. Not for typos/examples/CI/test but for Optuna itself. CI Continuous integration. no-stale Exempt from stale bot labels Jan 31, 2024
@contramundum53 contramundum53 removed the no-stale Exempt from stale bot label Feb 20, 2024