Performing hyper-parameter optimization with `--search-parameter-keywords all` returned a non-integer value for the batch size. It appears that, due to a bug, the search space for `batch_size` is continuous rather than discrete.
Here is the input `hpopt.sh` file:
```bash
#!/bin/bash -l
conda activate chemprop
results_dir="."
data_path="/home/akshatz/bond_order_free/data_2_kcal_def/dataset/run1/data_run_1.csv"
split_path="/home/akshatz/bond_order_free/data_2_kcal_def/dataset/run1/splits.json"
chemprop hpopt \
    -t regression \
    --data-path $data_path \
    --splits-file $split_path \
    --hyperopt-n-initial-points 25 \
    --raytune-num-samples 50 \
    --epochs 100 \
    --raytune-grace-period 100 \
    --hyperopt-random-state-seed 12 \
    --aggregation sum \
    --search-parameter-keywords all \
    --num-workers 20 \
    --hpopt-save-dir $results_dir \
    --smiles-columns SMILES \
    --target-columns H298_kcal \
    --add-h \
    --keep-h \
    --raytune-use-gpu
```
Here are the contents of the generated `best_parameters.json` file:
```json
{
    "train_loop_config": {
        "activation": "TANH",
        "final_lr_ratio": 0.009012526078574787,
        "message_hidden_dim": 2200.0,
        "max_lr": 8.85525109510601e-06,
        "batch_size": 21.763120334996223,
        "aggregation_norm": 23.0,
        "depth": 3.0,
        "ffn_num_layers": 2.0,
        "dropout": 0.0,
        "ffn_hidden_dim": 1100.0,
        "init_lr_ratio": 0.00032278162414634416,
        "aggregation": "mean",
        "warmup_epochs": null
    }
}
```
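As a temporary workaround until the search space is made discrete, the float values can be rounded before reusing the config for training. A minimal sketch; the set of keys that should be integral is my assumption based on the output above, not taken from the Chemprop source:

```python
# Hypothetical workaround: round hyperparameters that should be integers
# but were emitted as floats by the continuous search space.
INT_KEYS = {"batch_size", "message_hidden_dim", "aggregation_norm",
            "depth", "ffn_num_layers", "ffn_hidden_dim"}

def coerce_int_params(config):
    """Return a copy of config with integer-valued hyperparameters rounded."""
    return {k: (int(round(v)) if k in INT_KEYS and isinstance(v, float) else v)
            for k, v in config.items()}

best = {"batch_size": 21.763120334996223, "depth": 3.0, "dropout": 0.0}
print(coerce_int_params(best))  # → {'batch_size': 22, 'depth': 3, 'dropout': 0.0}
```

This only patches the output file after the fact; the underlying fix would be for `hpopt` to sample `batch_size` from a discrete (integer) distribution.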