Physics informed neural operator ode #806
base: master
Conversation
@ChrisRackauckas I need help with the package versions. Adding a dependency on NeuralOperator.jl to the project fails CI. I've tried to line up compatible versions, but without success.
@ChrisRackauckas @sathvikbhagavan Could you please review the PR? I think it's ready to merge.
docs/src/tutorials/pino_ode.md (outdated)

```julia
# σ = gelu)
opt = OptimizationOptimisers.Adam(0.01)
pino_phase = OperatorLearning(train_set, is_data_loss = true, is_physics_loss = true)
```
Why is a training dataset required here? That's just the normal neural operator. Can you show the pure PINO?
Usually, what I've seen PINO mean is training with both the data and the physics loss.
But it shouldn't be required.
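To make the "physics loss only" point concrete, here is a minimal Base-Julia sketch (the `physics_loss` helper and the time grid are illustrative, not part of the package API): the residual of u'(t) = cos(p·t) is estimated with finite differences on a grid, so no solution data is required at all.

```julia
# Toy physics loss for the parametric ODE u'(t) = cos(p*t), u(0) = 0:
# no solution data is needed, only the equation residual on a time grid.
function physics_loss(u::AbstractVector, p::Real, ts::AbstractRange)
    dt = step(ts)
    # forward-difference approximation of u'(t) at the interior grid points
    residuals = [(u[i + 1] - u[i]) / dt - cos(p * ts[i]) for i in 1:(length(u) - 1)]
    return sum(abs2, residuals) / length(residuals)
end

ts = range(0.0, 2.0; length = 201)
p = 1.5
u_exact = [sin(p * t) / p for t in ts]  # exact solution, used only to sanity-check the loss
```

The exact solution drives this loss close to zero, while any function that violates the ODE does not, which is exactly the signal a data-free training phase would minimize.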
docs/src/tutorials/pino_ode.md (outdated)

```julia
Lux.Dense(16, 32, Lux.σ),
Lux.Dense(32, 1))
alg = PINOODE(chain, opt, pino_phase)
fine_tune_solution = solve(prob, alg, verbose = false, maxiters = 2000)
```
I don't understand why this one is needed. Why not just use the `pino_solution`?
`pino_solution` is a prediction for the whole family of parametric ODEs; `fine_tune_solution` is a prediction for a single ODE instance from that family. So we do additional training, but only to predict that one exemplar, increasing accuracy by reusing the operator already trained in `pino_solution`. In short, `pino_solution` is trained over the whole family but is less accurate; `fine_tune_solution` covers one instance with better accuracy than `pino_solution`.
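To illustrate the family-vs-instance distinction with something runnable, here is a toy Base-Julia sketch in which polynomial least squares stands in for the neural operator (all names here are hypothetical): one shared coefficient vector fit across the whole family of p is less accurate on a single instance than coefficients refit for that instance alone.

```julia
# Toy "operator learning" vs. "fine-tuning" comparison (not the package API).
# Model: cubic polynomial u(t) ≈ c1*t + c2*t^2 + c3*t^3, fit by least squares.
ts = range(0.0, 2.0; length = 50)
basis(t) = [t, t^2, t^3]
A = reduce(vcat, [basis(t)' for t in ts])    # 50×3 design matrix

exact(p, t) = sin(p * t) / p                 # solution of u' = cos(p*t), u(0) = 0

# "Operator" phase: one shared coefficient vector for the whole family of p
ps = range(0.5, 2.0; length = 10)
family = reduce(hcat, [[exact(p, t) for t in ts] for p in ps])
c_shared = A \ (vec(sum(family; dims = 2)) ./ length(ps))

# "Fine-tune" phase: refit the coefficients for the single instance p = 1.5
p = 1.5
b_one = [exact(p, t) for t in ts]
c_fine = A \ b_one

err(c) = maximum(abs, A * c .- b_one)
err_shared = err(c_shared)   # family-trained model evaluated on this instance
err_fine = err(c_fine)       # instance-specific refit: more accurate here
```

The fine-tuned fit beats the family fit on the chosen instance, which mirrors the trade-off described above.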
src/pino_ode_solve.jl (outdated)

```julia
"""
    TRAINSET(input_data, output_data; isu0 = false)

## Positional Arguments
* `input_data`: set of variables 'a' of the equation (for example, initial conditions {u(t0, x)} or parameters p).
* `output_data`: set of solutions u(t){a} corresponding to the parameters 'a'.
"""
struct TRAINSET{}
    input_data::Vector{ODEProblem}
    output_data::Array
end
```
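For illustration, here is a self-contained sketch of how such a training set could be assembled for the tutorial's ODE u' = cos(p·t); the `ToyTrainset` struct and `build_trainset` helper are hypothetical stand-ins that use the analytic solution instead of a numerical solver.

```julia
# Illustrative stand-in for the TRAINSET layout discussed above: each sampled
# parameter value `p` is one input sample, and the matching column of
# `output_data` holds the solution u(t) = sin(p*t)/p on a uniform time grid.
struct ToyTrainset
    input_data::Vector{Float64}   # the sampled parameters 'a' (here: p values)
    output_data::Matrix{Float64}  # one column of solution values per parameter
end

function build_trainset(ps::AbstractVector, ts::AbstractRange)
    outputs = [sin(p * t) / p for t in ts, p in ps]  # length(ts) × length(ps)
    return ToyTrainset(collect(ps), outputs)
end

ts = range(0.0, 2.0; length = 101)
train_set = build_trainset(range(0.5, 2.0; length = 20), ts)
```

Since u(0) = 0 for every member of the family, the first row of `output_data` is identically zero, which is a quick consistency check on such a dataset.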
Too generic of a name to export
It doesn't look like this can do the pure PINO with just the physics loss? Also, it looks like this needs a big rebase.
It can with just the physics loss, but obviously not as well as with data. I will add a test. OK, what do you think needs to be rebased?
This should have had a bit more of an API discussion before starting. The API is really the key here. I think PINO just falls out of doing that API correctly.

So let's dive into https://arxiv.org/abs/2103.10974. The core element of PINO is the way that the network takes functional inputs. While that's the theory, in practice it usually gets simplified to some vector space of inputs, like in https://arxiv.org/abs/2103.10974. So the point of the PINO is that it should learn over basically the space of

Thus there's a few things to disconnect here. You could have a non-neural-operator also take in u0 and p, but it would treat them slightly differently. But the sample space and the neural network are not necessarily the same thing here.

So that leads us down the path to an API. First of all, the initial conditions can simply be set to be functions of parameters, so only parameters need to be supported, and a tutorial can handle the rest. So

For solution data,

What would be required though is a slightly different implementation of the NN. We should then require

So given this is the PINO... I don't understand how this PR implements an API like this at all.

```julia
pino_phase = EquationSolving(dt, pino_solution)
chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ),
                  Lux.Dense(16, 16, Lux.σ),
                  Lux.Dense(16, 32, Lux.σ),
                  Lux.Dense(32, 1))
alg = PINOODE(chain, opt, pino_phase)
fine_tune_solution = solve(prob, alg, verbose = false, maxiters = 2000)
```

This doesn't have the right arguments, the space over which things are trained, a neural-operator-compliant NN? I don't understand how any of this is the PINO.

So let's take a step back. Before doing it for the PDEs, let's get the ODE form right. @KirillZubov @sathvikbhagavan are we in agreement on the API here?
yes, makes sense. |
@ChrisRackauckas Thanks for your comment; I agree with it. The mapping between the functional space of parameters and the solution is not explicitly shown in the interface as arguments, something like `mapping = [u0, u(t)]`. Instead, the space of parameters is implicitly generated as datasets and put into `TRAINSET`, which is not the best API solution, agreed. Also, in a more general form, it need not be limited to only `u0` and `p`, giving access to parameterizing any argument or even a function that is part of an equation. I'll think about it and propose in this PR what an API for PINOODE might look like according to your comments. After agreeing with you on the API, I will begin upgrading the PINOODE code to the new requirements. To discuss the API for PINO PDE, I'll create a separate issue and provide my version of a prototype API there, but later.
@ChrisRackauckas Considering your comments above, I tried to design a new API for PINOODE. Could you please check whether this is what you described and were waiting for?

```julia
# API

# only physics
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(equation, u0, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)
bounds = (p = [0.1, pi / 2], u0 = [1, 2])
opt = OptimizationOptimisers.Adam(0.1)
alg = NeuralPDE.PINOODE(deeponet, opt, bounds)
sol = solve(prob, alg, dt = 0.1, verbose = true, maxiters = 2000)

# with data
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(linear, u0 ?, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)
opt = OptimizationOptimisers.Adam(0.01)
bounds = (p = [0, pi / 2])
function data_loss()
    # code
end
alg = NeuralPDE.PINOODE(chain, opt, bounds; add_loss = data_loss)
sol = solve(prob, alg, verbose = false, maxiters = 2000)
```
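The `data_loss` callback is left as a stub in the proposal above; here is a hedged guess at what it might compute (a plain mean squared error against supervised solution samples; the signature and helper names are illustrative, not the NeuralPDE API):

```julia
# Illustrative data-loss term for the `add_loss` hook sketched above:
# mean squared error between a predictor and known (t, u) solution samples.
function data_loss(predict, ts::AbstractVector, us::AbstractVector)
    return sum(abs2, predict.(ts) .- us) / length(ts)
end

# Example with the exact solution of u' = cos(p*t), u(0) = 0, at p = 1.5
p = 1.5
ts = collect(range(0.0, 2.0; length = 50))
us = sin.(p .* ts) ./ p
exact_predict(t) = sin(p * t) / p
```

A predictor matching the samples drives this term to zero, and in the proposed API this supervised term would simply be added to the physics loss during training.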
Implementation of the physics-informed neural operator method for solving parametric Ordinary Differential Equations (ODEs) using DeepONet.
@KirillZubov Can I suggest a PINO-based spatiotemporal MFG example and test? https://www.mdpi.com/2227-7390/12/6/803 @ChrisRackauckas Also the excellent Sophon.jl https://yichengdwu.github.io/Sophon.jl/dev/ @YichengDWu, for integration with MTK and other SciML packages.
#575
Checklist
https://arxiv.org/abs/2103.10974
https://arxiv.org/abs/2111.03794