Informer bug GPU #463

Open
isaacmg opened this issue Dec 7, 2021 · 1 comment
Labels
bug Something isn't working

Comments

@isaacmg
Collaborator

isaacmg commented Dec 7, 2021

Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
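For reference, the warning refers to PyTorch's module backward-hook API: register_backward_hook is deprecated in favor of register_full_backward_hook. A minimal sketch of the suggested replacement, using a hypothetical nn.Linear layer standing in for the Informer modules:

import torch
import torch.nn as nn

# Hypothetical layer, only to illustrate the hook API; not the Informer model itself.
layer = nn.Linear(4, 2)

def backward_hook(module, grad_input, grad_output):
    # With a full backward hook, grad_input/grad_output are complete even when
    # the forward pass contains multiple autograd nodes.
    print([g.shape for g in grad_output if g is not None])

# Replaces the deprecated layer.register_backward_hook(backward_hook)
handle = layer.register_full_backward_hook(backward_hook)

out = layer(torch.randn(3, 4))
out.sum().backward()
handle.remove()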


RuntimeError Traceback (most recent call last)
/tmp/ipykernel_35/811484386.py in <module>
5 os.environ["WANDB_API_KEY"] = user_secrets.get_secret("WANDB_KEY")
6 # sweep_full = wandb.sweep(wandb_sweep_config_full, project="bitcoin_forecasts")
----> 7 train_function("PyTorch", make_config_file("asset_0.csv"))
8

/kaggle/working/flow-forecast/flood_forecast/trainer.py in train_function(model_type, params)
150 # TODO Move to other func
151 if params["dataset_params"]["class"] != "GeneralClassificationLoader":
--> 152 handle_model_evaluation1(trained_model, params, model_type)
153
154 else:

/kaggle/working/flow-forecast/flood_forecast/trainer.py in handle_model_evaluation1(trained_model, params, model_type)
30 params["metrics"],
31 params["inference_params"],
---> 32 {})
33 wandb.run.summary["test_accuracy"] = test_acc[0]
34 df_train_and_test = test_acc[1]

/kaggle/working/flow-forecast/flood_forecast/evaluator.py in evaluate_model(model, model_type, target_col, evaluation_metrics, inference_params, eval_log)
185 else:
186 deep_explain_model_summary_plot(
--> 187 model, test_data, inference_params["datetime_start"]
188 )
189 deep_explain_model_heatmap(model, test_data, inference_params["datetime_start"])

/kaggle/working/flow-forecast/flood_forecast/explain_model_output.py in deep_explain_model_summary_plot(model, csv_test_loader, datetime_start)
107 model.model = model.model.to("cpu")
108 deep_explainer = shap.DeepExplainer(model.model, history)
--> 109 shap_values = deep_explainer.shap_values(history)
110 s_values_list.append(shap_values)
111 else:

/opt/conda/lib/python3.7/site-packages/shap/explainers/_deep/__init__.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
122 were chosen as "top".
123 """
--> 124 return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)

/opt/conda/lib/python3.7/site-packages/shap/explainers/_deep/deep_pytorch.py in shap_values(self, X, ranked_outputs, output_rank_order, check_additivity)
183 # run attribution computation graph
184 feature_ind = model_output_ranks[j, i]
--> 185 sample_phis = self.gradient(feature_ind, joint_x)
186 # assign the attributions to the right part of the output arrays
187 if self.interim:

/opt/conda/lib/python3.7/site-packages/shap/explainers/_deep/deep_pytorch.py in gradient(self, idx, inputs)
121 grad = torch.autograd.grad(selected, x,
122 retain_graph=True if idx + 1 < len(X) else None,
--> 123 allow_unused=True)[0]
124 if grad is not None:
125 grad = grad.cpu().numpy()

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
219
220 grad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(outputs))
--> 221 grad_outputs_ = _make_grads(outputs, grad_outputs_)
222
223 if retain_graph is None:

/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py in _make_grads(outputs, grads)
48 if out.requires_grad:
49 if out.numel() != 1:
---> 50 raise RuntimeError("grad can be implicitly created only for scalar outputs")
51 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
52 else:

RuntimeError: grad can be implicitly created only for scalar outputs
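The RuntimeError is raised because torch.autograd.grad is asked to differentiate a non-scalar output without explicit grad_outputs; PyTorch can only create the implicit all-ones gradient for scalar outputs. A minimal standalone sketch, unrelated to the Informer model, that reproduces the failure and shows two common ways around it:

import torch

x = torch.randn(5, requires_grad=True)
y = x * 2  # non-scalar output (5 elements)

try:
    # Reproduces the failure mode in the traceback: non-scalar output, no grad_outputs.
    torch.autograd.grad(y, x)
except RuntimeError as e:
    print(e)  # "grad can be implicitly created only for scalar outputs"

# Works: either reduce the output to a scalar ...
(grad,) = torch.autograd.grad(y.sum(), x, retain_graph=True)
# ... or pass grad_outputs explicitly for the non-scalar output.
(grad2,) = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))

If that is the cause here, SHAP's DeepExplainer would be selecting a model output that is not reduced to a per-sample scalar before it calls torch.autograd.grad, possibly only on the GPU code path.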

@isaacmg isaacmg changed the title Informer bug → 1 Informer bug GPU Dec 7, 2021
@isaacmg isaacmg added the bug Something isn't working label Dec 7, 2021
@isaacmg
Collaborator Author

isaacmg commented Dec 9, 2021

This only seems to be an issue on the Bitcoin forecasting notebook. I'm going to look into it in greater detail.

@isaacmg isaacmg changed the title 1 Informer bug GPU → Informer bug GPU Nov 28, 2023