Double backward error when a trainable variable is included into the initial condition? #1727

Answered by mynanshan
mynanshan asked this question in Q&A
I have finally solved this issue.

The problem stems from the npfunc_range_autocache decorator, which wraps the function that evaluates the initial/boundary values. Under the PyTorch/Paddle backend, this decorator caches the initial/boundary values after their first computation; for the rest of training, every request simply returns the cached values. Therefore, from the 2nd iteration on, the initial/boundary values are stale cached tensors whose computation graphs have already been freed. That is why I encountered the double backward error on the 2nd iteration.
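To make the failure mode concrete, here is a minimal, hypothetical sketch of a cache-on-first-call decorator in the spirit of npfunc_range_autocache (the name and cache key below are illustrative, not DeepXDE's actual implementation). The second call returns the very same cached object, which under PyTorch would carry a computation graph already consumed by the first backward pass:

```python
import functools

def cache_on_first_call(func):
    """Hypothetical sketch: compute once, then always return the cached result,
    analogous in spirit to DeepXDE's npfunc_range_autocache."""
    cache = {}

    @functools.wraps(func)
    def wrapper(x):
        key = id(x)  # simplified cache key for illustration
        if key not in cache:
            cache[key] = func(x)  # computed only on the first call
        return cache[key]         # every later call returns the stale object

    return wrapper

@cache_on_first_call
def ic_values(points):
    # Stands in for evaluating the initial condition on a batch of points;
    # in a real PINN this would build a fresh autograd graph each call.
    return [2.0 * p for p in points]

pts = [0.0, 0.5, 1.0]
first = ic_values(pts)
second = ic_values(pts)
# The second "evaluation" is the identical cached object, not a fresh one.
assert first is second
```

If the cached object is a tensor that depends on a trainable parameter, this identity is exactly what makes the second backward pass fail: no new graph is ever built.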

To implement a PINN with initial values depending on trainable parameters, one ne…
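The answer above is truncated, but the diagnosis suggests a straightforward remedy: recompute any value that depends on trainable parameters inside each iteration instead of caching it. A minimal PyTorch sketch of both behaviors, under the assumption that the initial condition is a simple function of a trainable parameter p (the names here are illustrative, not DeepXDE API):

```python
import torch

# A trainable parameter that the initial condition depends on.
p = torch.tensor(1.0, requires_grad=True)

# --- Buggy pattern: compute the IC value once and reuse the cached tensor ---
cached_ic = 2.0 * p  # graph p -> cached_ic is built exactly once

failed = False
for step in range(2):
    loss = (cached_ic - 3.0) ** 2
    try:
        loss.backward()  # 2nd iteration: the p -> cached_ic graph is already freed
    except RuntimeError:
        failed = True    # "Trying to backward through the graph a second time ..."
assert failed

# --- Fixed pattern: rebuild the IC value (and its graph) every iteration ---
p.grad = None
for step in range(2):
    ic = 2.0 * p         # fresh graph each iteration
    loss = (ic - 3.0) ** 2
    loss.backward()      # succeeds on every iteration
```

The design point is simply that caching is only safe for values that are constant with respect to the trainable parameters; anything on the autograd path must be recomputed per step.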
