
Physics informed neural operator ode #806

Open · wants to merge 50 commits into base: master
Conversation

@KirillZubov (Member) commented Feb 12, 2024

Implementation of the physics-informed neural operator (PINO) method for solving parametric ordinary differential equations (ODEs) using DeepONet.

#575

Checklist

  • PINO ODE
  • family of ODEs by parameter
  • physics-informed DeepONet
  • tests
  • additional loss test
  • docs

https://arxiv.org/abs/2103.10974
https://arxiv.org/abs/2111.03794

@KirillZubov (Member, Author)

@ChrisRackauckas I need help with package versions. Adding NeuralOperator.jl as a project dependency fails CI. I've tried a bit to line up compatible versions, but without success.

@KirillZubov KirillZubov requested review from ChrisRackauckas and removed request for sathvikbhagavan March 22, 2024 14:31
@KirillZubov (Member, Author)

@ChrisRackauckas @sathvikbhagavan Could you please review the PR? I think it's ready to merge.

# σ = gelu)

opt = OptimizationOptimisers.Adam(0.01)
pino_phase = OperatorLearning(train_set, is_data_loss = true, is_physics_loss = true)
Member:

Why is a training dataset required here? That's just the normal neural operator. Can you show the pure PINO?

Member Author:

Usually, what I've seen is that PINO means training with both data loss and physics loss.

Member:

But it shouldn't be required.

Lux.Dense(16, 32, Lux.σ),
Lux.Dense(32, 1))
alg = PINOODE(chain, opt, pino_phase)
fine_tune_solution = solve( prob, alg, verbose = false, maxiters = 2000)
Member:

I don't understand why this one is needed. Why not just use the pino_solution ?

Member Author:

'pino_solution' is the prediction for the whole family of parametric ODEs. 'fine_tune_solution' is the prediction for only one instance of the ODE from that family. So we do additional training, but to predict only one exemplar, increasing accuracy by reusing the operator already trained in 'pino_solution'.
'pino_solution' is trained for all instances but is less accurate; 'fine_tune_solution' is trained for one instance, but with better accuracy than 'pino_solution'.
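The two-phase flow being described can be pieced together from the snippets quoted elsewhere in this review; this is a sketch, not a verbatim excerpt from the PR, and `train_set`, `chain`, `opt`, `prob`, and `dt` are assumed to be defined as in the PR's examples.

```julia
# Sketch assembled from snippets quoted in this review.
# Phase 1: operator learning over the whole family of parametric ODEs.
pino_phase = OperatorLearning(train_set, is_data_loss = true, is_physics_loss = true)
alg = PINOODE(chain, opt, pino_phase)
pino_solution = solve(prob, alg, verbose = false, maxiters = 2000)

# Phase 2: fine-tune the trained operator on a single ODE instance.
pino_phase = EquationSolving(dt, pino_solution)
alg = PINOODE(chain, opt, pino_phase)
fine_tune_solution = solve(prob, alg, verbose = false, maxiters = 2000)
```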

Comment on lines 1 to 10
"""
TRAINSET(input_data, output_data; isu0 = false)

## Positional Arguments
* input_data: set of variables 'a' of the equation (for example, the initial condition {u(t0, x)} or the parameter p).
* output_data: set of solutions u(t){a} corresponding to each parameter 'a'.
"""
struct TRAINSET{}
    input_data::Vector{ODEProblem}
    output_data::Array
end
Member:

Too generic of a name to export

@ChrisRackauckas (Member)

It doesn't look like this can do the pure PINO with just the physics loss?

Also, it looks like this needs a big rebase.

@KirillZubov KirillZubov removed the request for review from sathvikbhagavan April 3, 2024 07:59
@KirillZubov (Member, Author)

It doesn't look like this can do the pure PINO with just the physics loss?

Also, it looks like this needs a big rebase.

It can with just the physics loss, but obviously not as well as with data. I will add a test.

OK, what do you think needs to be rebased?

@ChrisRackauckas (Member)

This should have had a bit more of an API discussion before starting. The API is really the key here. I think PINO just falls out of doing that API correctly. So let's dive into https://arxiv.org/abs/2103.10974 .

The core element of PINO is the way that the network takes functional inputs. While that's the theory, in practice it usually gets simplified to over some vector space of inputs like in https://arxiv.org/abs/2103.10974. So the point of the PINO is that it should learn over basically the space of u0 and p.

Thus there are a few things to disconnect here. You could have a non-neural-operator also take in u0 and p, but it would treat them slightly differently. But the sample space and the neural network are not necessarily the same thing here.

So that leads us down the path to an API. First of all, the PINO should be like PhysicsInformedNN except it should make use of information from the bounds metadata https://docs.sciml.ai/ModelingToolkit/stable/basics/Variable_metadata/#Bounds of the parameters. For anything with bounds, it should seek to train a neural network that satisfies all parameter values within the bounds. To keep things simple, it should have a keyword argument bounds which takes an array of tuples for the bounds of the parameters, and pre-populate it to match the bounds from the metadata. Anything without bounds would be treated as a constant.

Initial conditions can simply be set to be functions of parameters, so only parameters need to be supported and a tutorial can handle the rest.

So PINO would take the same information as PhysicsInformedNN except that it would also require parameter bounds. Then its physical loss would sample over the parameter space as well. That would need possibly its own strategy, but random and quasi-random would do for starters.
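The random-sampling strategy for the physics loss over both time and parameter space could be sketched roughly as follows. This is an illustration only, not NeuralPDE code: `net(t, p)` stands in for the operator network, `f(u, p, t)` is the ODE right-hand side, and the time derivative is approximated by a central difference for brevity.

```julia
# Illustrative sketch only: random sampling of (t, p) for the physics loss.
# `net(t, p)` is a stand-in for the operator NN; `f(u, p, t)` is the ODE RHS.
function physics_loss(net, f, tspan, p_bounds; n = 100, eps = 1f-3)
    loss = 0.0
    for _ in 1:n
        t = tspan[1] + rand() * (tspan[2] - tspan[1])           # sample t
        p = p_bounds[1] + rand() * (p_bounds[2] - p_bounds[1])  # sample p
        dudt = (net(t + eps, p) - net(t - eps, p)) / (2eps)     # central difference
        loss += abs2(dudt - f(net(t, p), p, t))                 # ODE residual
    end
    return loss / n
end
```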

For solution data, PhysicsInformedNN and PINO should just have a nice way of supporting that, that's just a completely separate feature.

What would be required though is a slightly different implementation of the NN. We should then require NN(indepvars,p) where the first part is the independent variables [t,x,y,...]. Thus this is a bit of a difference from the NN form from before. But this is required for example for things like the DeepONets which treat the two in separate neural networks and then merge. The output is a solution for those parameters.
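The NN(indepvars, p) form for the DeepONet case can be sketched with toy weights (this is not the actual NeuralOperators.jl implementation; the names and dimensions here are illustrative):

```julia
# Toy sketch of the DeepONet-style merge: the branch net embeds the
# parameters p, the trunk net embeds the independent variables [t, x, ...],
# and the two embeddings are combined by a dot product.
W_branch = randn(Float32, 8, 1)
W_trunk  = randn(Float32, 8, 1)
branch(p) = tanh.(W_branch * [p])
trunk(v)  = tanh.(W_trunk * v)
NN(indepvars, p) = sum(branch(p) .* trunk(indepvars))  # scalar output for those parameters
```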

So given this is the PINO... I don't understand how this PR implements an API like this at all.

    pino_phase = EquationSolving(dt, pino_solution)
    chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ),
        Lux.Dense(16, 16, Lux.σ),
        Lux.Dense(16, 32, Lux.σ),
        Lux.Dense(32, 1))
    alg = PINOODE(chain, opt, pino_phase)
    fine_tune_solution = solve(
        prob, alg, verbose = false, maxiters = 2000)

this doesn't have the right arguments, the space over which things are trained, a neural operator compliant NN? I don't understand how any of this is the PINO.

So let's take a step back. Before doing it for the PDEs, let's get the ODE form. PINOODE needs a chain of the form NN([t],p) and, because ODEProblems don't have it, some representation of the bounds over which to sample p. It needs to then learn by sampling both t and p. Now the weird thing is representing the solution here, since it's not quite an ODESolution. What I would recommend is giving it the ODE solution at the p specified by the prob, but then document that sol.original gives the neural network weights for the NN(t,p) object, and show how it can be used to sample at new points.

@KirillZubov @sathvikbhagavan are we in agreement on the API here?

@sathvikbhagavan (Member)

are we in agreement on the API here?

yes, makes sense.

@KirillZubov (Member, Author)

are we in agreement on the API here?

@ChrisRackauckas thanks for your comment.
While implementing 'pinoode' I followed this article: https://arxiv.org/abs/2111.03794. That is where 'fine_tune_solution' and the other features that remained unclear to you come from.

I agree with your comment. The mapping between the functional space of parameters and the solution is not explicitly shown in the interface as arguments, something like 'mapping = [u0, u(t)]'. Instead, the space of parameters is implicitly generated as datasets and put into 'TRAINSET', which is not the best API solution, agreed.

Also, in a more general form, it need not be limited to 'u0' and 'p'; it could give access to parameterizing any argument, or even a function, as part of an equation.

I'll think about it and propose in this PR what an API for PINOODE might look like, following your comments.

After we agree on the API, I will begin upgrading the PINOODE code to the new requirements.

To discuss the API for PINO PDE, I'll create a separate issue and provide my version of an API prototype there, but later.

@KirillZubov (Member, Author) commented Apr 18, 2024

@ChrisRackauckas Considering your comments above, I tried to make a new API for PINOODE. Could you please check: is this what you described and were waiting for?

# API
# only physics
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(equation, u0, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)

bounds = (p = [0.1, pi / 2], u0 = [1, 2])
opt = OptimizationOptimisers.Adam(0.1)
alg = NeuralPDE.PINOODE(deeponet, opt, bounds)
sol = solve(prob, alg, dt = 0.1, verbose = true, maxiters = 2000)


# with data
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 2.0f0)
u0 = 0.0f0
prob = ODEProblem(equation, u0 ?, tspan)
# prob = PINOODEProblem(equation, tspan)?

# init neural operator
deeponet = DeepONet(branch, trunk)
opt = OptimizationOptimisers.Adam(0.01)
bounds = (p = [0, pi / 2],)
function data_loss()
    # code
end
alg = NeuralPDE.PINOODE(deeponet, opt, bounds; add_loss = data_loss)
sol = solve(prob, alg, verbose = false, maxiters = 2000)

@KirillZubov KirillZubov changed the title Physics informed neural operator ode [WIP] Physics informed neural operator ode Apr 25, 2024
@KirillZubov KirillZubov changed the title [WIP] Physics informed neural operator ode Physics informed neural operator ode May 8, 2024
@KirillZubov (Member, Author)

Implementation of the physics-informed neural operator (PINO) method for solving parametric ordinary differential equations (ODEs) using DeepONet.

@KirillZubov KirillZubov removed the request for review from ChrisRackauckas May 8, 2024 18:34
@finmod commented May 24, 2024

@KirillZubov Can I suggest a PINO-based spatiotemporal MFG example and test https://www.mdpi.com/2227-7390/12/6/803

@ChrisRackauckas Also the excellent Sophon.jl https://yichengdwu.github.io/Sophon.jl/dev/ by @YichengDWu, for integration with MTK and other SciML packages.


4 participants