
Allow users to write specialized data-free loss functions and use NeuralPDE training strategies #703

Open
nicholaskl97 opened this issue Jul 17, 2023 · 2 comments



nicholaskl97 commented Jul 17, 2023

NeuralPDE develops its loss function in two steps:

  1. Develop a data-free loss function $\ell(x, \theta)$ from each equation/boundary condition in the PDESystem. This is the role of NeuralPDE's parser.
  2. Develop a full loss function $L(\theta)$ from the data-free loss functions, the domains, and the training strategy. This involves generating a training set from the strategy and domains, then merging it with the data-free loss functions.
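Schematically, the two steps can be sketched as follows (all names here are illustrative placeholders, not NeuralPDE's actual internals):

```julia
# Step 1 (parser's role): a data-free loss ℓ(x, θ) from one equation.
# `residual` is a hypothetical function evaluating the PDE residual at x.
datafree_loss(x, θ) = abs2(residual(x, θ))

# Step 2 (strategy's role): merge the data-free loss with a training set
# drawn from the domains, e.g. a mean over sampled/grid points:
# L(θ) = (1/N) Σᵢ ℓ(xᵢ, θ)
full_loss(θ, train_set) = sum(x -> datafree_loss(x, θ), train_set) / length(train_set)
```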

In some cases, a user may want to use step 2 with their own data-free loss functions, specialized to their PDE. For example, as noted in #702, directional derivatives aren't currently optimized, so a user may wish to write their own specialized data-free loss function that performs this optimization, without rewriting the training strategies provided by NeuralPDE.

Adding this functionality could look like exporting merge_strategy_with_loss_function; however, that function relies on a PINNRepresentation built up in step 1, which might not be the most user-friendly interface for someone doing step 1 themselves. Additionally, we'd want to make sure that function is well documented, and perhaps even add a demo of this functionality so people are aware of it as an option.

@xtalax


nicholaskl97 commented Feb 29, 2024

@xtalax, I just came up against this again and am wondering if there are any plans (possibly with your parser re-write) to add a public interface for merge_strategy_with_loss_function or similar.

For more context:

I'm working on SciML/NeuralLyapunov.jl, and my PDE is always something like $\vec{\nabla} V(\vec{x}) \cdot \vec{f}(\vec{x}) < 0$, where $\vec{f}$ is user-defined and we're searching for $V$. My data-free loss is then always $\ell(\vec{x}, \theta) = \max \left( 0, \vec{\nabla} V_\theta(\vec{x}) \cdot \vec{f}(\vec{x}) \right)^2$, just with a different $\vec{f}$ each time.
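In plain Julia, that data-free loss is just a few lines (a sketch only; `∇V` and `f` are stand-ins for the user-supplied dynamics and a gradient of the network, computed elsewhere, e.g. by automatic differentiation):

```julia
# ℓ(x, θ) = max(0, ∇V_θ(x) ⋅ f(x))²
# Penalizes points where the Lyapunov decrease condition ∇V ⋅ f < 0 is violated.
function datafree_lyapunov_loss(x, θ, ∇V, f)
    return max(zero(eltype(x)), sum(∇V(x, θ) .* f(x)))^2
end
```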

In some functionality that I'm currently adding to NeuralLyapunov, I'm hoping to enforce that PDE only in the region $\{ \vec{x} : V(\vec{x}) \le \rho \}$. It would be nice to use an if ... else ... statement to make the loss as above when $V(\vec{x}) \le \rho$ and $0$ when $V(\vec{x}) > \rho$, but the NeuralPDE parser doesn't accept if ... else ....

I believe IfElse.ifelse provides a workaround, which I will likely use, but I haven't gotten it to work yet; I think the parser applies the dot operator to it in a strange way. Even if I do get it working, it will be somewhat inconvenient, especially for users of my library who might want to define their own versions of the above conditions. For example, instead of $0$ in the "else" case, it would be reasonable to use a conditional there that depends on the sign of $\vec{\nabla} V_\theta(\vec{x}) \cdot \vec{f}(\vec{x})$, and currently anyone wanting to do that will also have to know about IfElse.ifelse.
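For concreteness, here is roughly the contrast I have in mind: the symbolic workaround the parser requires versus the plain-Julia data-free loss I would rather hand to the training strategy directly (a sketch under the assumptions above; `V`, `∇V`, `f`, and `ρ` are hypothetical stand-ins):

```julia
# Symbolic form, as the NeuralPDE parser would need it (no if/else allowed):
#   IfElse.ifelse(V(x) ≤ ρ, max(0, ∇V ⋅ f)^2, 0) ~ 0

# Plain-Julia data-free loss, bypassing the parser entirely:
function region_restricted_loss(x, θ, V, ∇V, f, ρ)
    if V(x, θ) ≤ ρ
        return max(0.0, sum(∇V(x, θ) .* f(x)))^2
    else
        return 0.0  # or any user-defined alternative condition here
    end
end
```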

As I described in the first post of this issue, I would very much like to leverage the training strategies in NeuralPDE rather than re-implement them myself, but the parser is not just unnecessary for my application (since I know my PDE ahead of time); it is actually unhelpful, since the user-defined $\vec{f}$ then has to be traceable by Symbolics, and I lose the freedom to optimize the data-free loss function for my specific PDE, such as with the directional derivative issue in #702.

I could also imagine someone wanting to use the parser and then apply their own custom training strategy, but I don't personally have a need for that yet.

@ChrisRackauckas

I think that would be interesting to have.
