Helping to solve complex system of DAE by NeuralPDE. #721

Open
HuynhTran0301 opened this issue Aug 17, 2023 · 32 comments
@HuynhTran0301

I am trying to solve a DAE system. Although the loss function is very small and the optimization solver reports success, the results do not match those from the ODE/DAE numerical solvers.
My code is:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches
import ModelingToolkit: Interval
using CSV
using DataFrames
data = CSV.File("3gens.csv");
Y1 = CSV.read("Y_des.CSV", DataFrame, types=Complex{Float64}); #decreasing

#Input of the system.
E1 = 1.054;
E2 = 1.050;
E3 = 1.017;

#Define equation of the system
@variables delta1(..) delta2(..) delta3(..) omega1(..) omega2(..) omega3(..) TM1(..) TM2(..) TM3(..) Pe1(..) Pe2(..) Pe3(..) Psv1(..) Psv2(..) Psv3(..)
@parameters t 
D = Differential(t)
omega_s = 120*pi
function eq_norm(omegaN,deltaN,TMN,PeN,PsvN)
    eq1 = [
        D(delta1(t)) ~ (omega1(t) + omegaN - omega_s)/deltaN,
        #
        D(delta2(t)) ~ (omega2(t) + omegaN - omega_s)/deltaN,
        #
        D(delta3(t)) ~ (omega3(t) + omegaN - omega_s)/deltaN,
        #
        D(omega1(t)) ~ (TM1(t) + TMN[1] - Pe1(t) - PeN[1])/(2*data["H"][1])*omega_s,
        #
        D(omega2(t)) ~ (TM2(t) + TMN[2] - Pe2(t) - PeN[2])/(2*data["H"][2])*omega_s,
        #
        D(omega3(t)) ~ (TM3(t) + TMN[3] - Pe3(t) - PeN[3])/(2*data["H"][3])*omega_s,
        #
        0 ~ Pe1(t) + PeN[1] - (E1^2*Y1[1,1].re + E1*E2*sin(deltaN*(delta1(t)-delta2(t)))*Y1[1,2].im + E1*E2*cos(deltaN*(delta1(t)-delta2(t)))*Y1[1,2].re + E1*E3*sin(deltaN*(delta1(t)-delta3(t)))*Y1[1,3].im 
                + E1*E3*cos(deltaN*(delta1(t)-delta3(t)))*Y1[1,3].re),
        #
        0 ~ Pe2(t) + PeN[2] - (E2^2*Y1[2,2].re + E1*E2*sin(deltaN*(delta2(t)-delta1(t)))*Y1[2,1].im + E1*E2*cos(deltaN*(delta2(t)-delta1(t)))*Y1[2,1].re + E2*E3*sin(deltaN*(delta2(t)-delta3(t)))*Y1[2,3].im 
                + E2*E3*cos(deltaN*(delta2(t)-delta3(t)))*Y1[2,3].re),
        #
        0 ~ Pe3(t) + PeN[3] - (E3^2*Y1[3,3].re + E3*E1*sin(deltaN*(delta3(t)-delta1(t)))*Y1[3,1].im + E3*E1*cos(deltaN*(delta3(t)-delta1(t)))*Y1[3,1].re + E3*E2*sin(deltaN*(delta3(t)-delta2(t)))*Y1[3,2].im 
                + E3*E2*cos(deltaN*(delta3(t)-delta2(t)))*Y1[3,2].re)];
    
    eq2 = [D(TM1(t)) ~ (-TM1(t) - TMN[1] + Psv1(t) + PsvN[1]) / data["TCH"][1],
        #
        D(TM2(t)) ~ (-TM2(t) - TMN[2] + Psv2(t) + PsvN[2]) / data["TCH"][2],
        #
        D(TM3(t)) ~ (-TM3(t) - TMN[3] + Psv3(t) + PsvN[3]) / data["TCH"][3]]

    eq3 = [D(Psv1(t)) ~ (-Psv1(t) - PsvN[1] + 0.70945 + 0.33*(-0.0024) - ((omega1(t) + omegaN)/omega_s - 1)/data["RD"][1])/data["TSV"][1],
        #
        D(Psv2(t)) ~ (-Psv2(t) - PsvN[2] + 1.62342 + 0.334*(-0.0024) - ((omega2(t) + omegaN)/omega_s - 1)/data["RD"][2])/data["TSV"][2],
        #
        D(Psv3(t)) ~ (-Psv3(t) - PsvN[3] + 0.84843 + 0.33*(-0.0024) - ((omega3(t) + omegaN)/omega_s - 1)/data["RD"][3])/data["TSV"][3]];
    
    eqs = [eq1; eq2; eq3]
    return eqs
end;

omegaN = 120*pi;
deltaN = 1;
TMN = [0, 1.62342, 0];
PeN = [0, 1.62342, 0];
PsvN = [0, 1.62342, 0];
eqs = eq_norm(omegaN,deltaN,TMN,PeN,PsvN)
bcs = [delta1(0.0) ~ 0.03957/deltaN, delta2(0.0) ~ 0.3447/deltaN, delta3(0.0) ~ 0.23038/deltaN,
    omega1(0.0) ~ omega_s - omegaN, omega2(0.0) ~ omega_s - omegaN, omega3(0.0) ~ omega_s - omegaN,
    TM1(0.0) ~ 0.70945 - TMN[1], TM2(0.0) ~ 1.62342 - TMN[2], TM3(0.0) ~ 0.848433 - TMN[3],
    Pe1(0.0) ~ 0.70945 - PeN[1], Pe2(0.0) ~ 1.62342 - PeN[2], Pe3(0.0) ~ 0.848433 - PeN[3],
    Psv1(0.0) ~ 0.70945 - PsvN[1], Psv2(0.0) ~ 1.62342 - PsvN[2], Psv3(0.0)~ 0.848433 - PsvN[3]]

domains = [t ∈ Interval(0.0,25.0)]

n = 10
chain =[Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,1)) for _ in 1:15]
dvs = [delta1(t),delta2(t),delta3(t),omega1(t),omega2(t),omega3(t),TM1(t),TM2(t),TM3(t),Pe1(t),Pe2(t),Pe3(t),Psv1(t),Psv2(t),Psv3(t)]

@named pde_system = PDESystem(eqs,bcs,domains,[t],dvs)

strategy = NeuralPDE.QuadratureTraining()
discretization = PhysicsInformedNN(chain, strategy, adaptive_loss = NeuralPDE.GradientScaleAdaptiveLoss(10))

sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)

pde_loss_functions = sym_prob.loss_functions.pde_loss_functions
bc_loss_functions = sym_prob.loss_functions.bc_loss_functions

iter = 0
maxiters = 10000
callback = function (p, l)
    global iter
    #Show the loss every 100 iterations.
    if iter % 100 == 0
        println("loss: ", l,"\t\t","Iterations: $iter")
    end
    iter += 1
    return false
end

loss_functions = [pde_loss_functions; bc_loss_functions]

function loss_function(θ,p)
    sum(map(l->l(θ) ,loss_functions))
end

f_ = OptimizationFunction(loss_function, Optimization.AutoZygote())
prob = Optimization.OptimizationProblem(f_, sym_prob.flat_init_params);
phi = sym_prob.phi;
res = Optimization.solve(prob,OptimizationOptimJL.BFGS(linesearch=LineSearches.BackTracking()); callback = callback, maxiters = maxiters)

The information after solving is:

* Status: success

 * Candidate solution
    Final objective value:     6.250256e-06

 * Found with
    Algorithm:     BFGS

 * Convergence measures
    |x - x'|               = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'|          = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 3.20e-01 ≰ 1.0e-08

 * Work counters
    Seconds run:   140  (vs limit Inf)
    Iterations:    323
    f(x) calls:    1541
    ∇f(x) calls:   323

The following results from NeuralPDE:
[plot: NeuralPDE predictions]

And the results from ODE solvers (Tsit5):
[plot: Tsit5 reference solution]

The CSV files that I used on the code are attached below.
3gens.csv
Y_des.csv

@ChrisRackauckas (Member)

@sdesai1287 this would be a good one to try the alternative strategies on.

@sdesai1287 (Contributor)

Ok I can look into that for sure

@ChrisRackauckas (Member)

Using #790 might be simpler. This would be a good problem to dive into.

@hippyhippohops

Hi, I am working on this issue right now. I have tried the QuasiRandomTraining strategy with both 50 and 200 points. I plan on trying the stochastic training method next, and then going back to the quadrature training strategy with different adaptive loss functions. I am also working on the plots right now (I am new to Julia, so I am learning the language as I go, and progress is slow; I hope that's fine). I have a question regarding the plot: which variable corresponds to the phase angle? Thank you!

@sathvikbhagavan (Member)

Which variable corresponds to the phase angle?

@HuynhTran0301, can you answer this?

@HuynhTran0301 (Author)

@sathvikbhagavan In my model, the delta variables represent the rotor phase angle in the power system dynamics.

@hippyhippohops

I am getting this error when I try to plot the graphs of delta1, delta2 and delta3 with respect to the domain (the command used is plot(domains, delta1)): Cannot convert Symbolics.VarDomainPairing to series data for plotting.

@hippyhippohops

From what I understand, delta1, delta2 and delta3 are all functions of t, and domains is defined as domains = [t ∈ Interval(0.0,25.0)]. I tried extracting the real/complex part of delta in case it might be a complex number, but that doesn't work either.

@sathvikbhagavan (Member)

Do something like this (you have to do a forward pass):

ts = 0.0:0.1:25.0 |> collect
x = phi[1](ts', res.u.depvar.delta1)
plot(ts, x')

where phi is the vector of NeuralPDE.phi; index into the one for the corresponding variable and pass in the parameters from res.u.
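
Extending that to several variables, a minimal sketch under the same assumptions (one phi and one depvar entry per dependent variable, in PDESystem order):

ts = 0.0:0.1:25.0 |> collect
d1 = phi[1](ts', res.u.depvar.delta1)
d2 = phi[2](ts', res.u.depvar.delta2)
d3 = phi[3](ts', res.u.depvar.delta3)
plot(ts, [d1' d2' d3'], label = ["delta1" "delta2" "delta3"])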

@hippyhippohops

I got somewhere close when I used NeuralPDE.QuasiRandomTraining(150) (image attached). QuasiRandomTraining(50) was wildly off, so I will try QuasiRandomTraining(250) to see if I can get something closer to the actual graph.
[plot: QuasiRandomTraining(150)]

@ChrisRackauckas (Member)

50 is definitely too small 😅, that's looking more reasonable.

@hippyhippohops

Right, I am playing with different numbers of points, ranging from small to large, to see how that affects the output. With 500 points, the output is extremely similar to the one above:
[plot: QuasiRandomTraining(500)]

@hippyhippohops

But I just realised my x-axis needs to be extended, so I am going to run everything again.

@hippyhippohops

After rescaling the axis, it seems to match the OP's original result from NeuralPDE. I have trained with the number of points set to 250, 500 and 1000.
[plots: QuasiRandomTraining(250), QuasiRandomTraining(500) and QuasiRandomTraining(1000), with corrected axes]
I am going to try stochastic training next (with 250, 500 and 1000 points). Also, I had a question regarding WeightedIntervalTraining(weights, samples). I did try reading the documentation (https://docs.sciml.ai/NeuralPDE/stable/manual/training_strategies/) but I do not completely understand it. The weights are a vector whose elements sum to 1, so the timespan is divided into as many sub-intervals as the vector has elements? Then the i-th element tells us the proportion of sample points (w_i × number of sampling points) to take from the i-th interval? If so, is the timespan given by the domain (the t values)?
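
To make my reading concrete, here is a guess based on the docstring (made-up weights and point count):

# my reading: 4 weights would split tspan (0.0, 25.0) into 4 equal sub-intervals;
# weight i is the fraction of the 400 total points sampled from sub-interval i
strategy = NeuralPDE.WeightedIntervalTraining([0.4, 0.3, 0.2, 0.1], 400)
# i.e. ~160 points in t ∈ [0, 6.25], ~120 in [6.25, 12.5], ~80 and ~40 in the rest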

@hippyhippohops

Also, in the original code the domain is given as domains = [t ∈ Interval(0.0,25.0)], but the graph is plotted from 0 to 2500. The x-axis is given by the domain, right? We are plotting the rotor phase angle (the delta variables) against t, so shouldn't it run from 0 to 25?

@HuynhTran0301 (Author)

When I plot the figure I just pass in the delta data, plot(delta1). Thus it plots delta against the number of time steps (h = 0.01, so the total number of time steps is 2501). You can instead plot delta over the tspan [0, 25].
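
For example, a minimal sketch (delta1_vals is a stand-in name for the saved delta1 trajectory):

ts = 0.0:0.01:25.0      # h = 0.01 over [0, 25] gives 2501 points
plot(ts, delta1_vals)   # delta1 against physical time rather than step index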

@hippyhippohops

I have tried the following approaches: QuasiRandomTraining and StochasticTraining with 50, 500 and 1000 points. I have also tried the following adaptive loss functions (paired with NeuralPDE.QuadratureTraining()): MiniMaxAdaptiveLoss(5), MiniMaxAdaptiveLoss(10), MiniMaxAdaptiveLoss(20), NeuralPDE.GradientScaleAdaptiveLoss(5) and NeuralPDE.GradientScaleAdaptiveLoss(20). The closest I got was QuasiRandomTraining(250) (the graph I posted a few comments above). I am unable to capture the second hump that the ODE solver (Tsit5) shows. Is there an alternative route I can try?

@sathvikbhagavan (Member)

Wait, are you using NNDAE? I think for NNDAE, only GridTraining is implemented - https://github.com/SciML/NeuralPDE.jl/blob/master/src/dae_solve.jl#L75 (NNDAE has a diffeq interface, not symbolic).

I think other strategies need to be implemented first, @ChrisRackauckas?

@ChrisRackauckas (Member)

Yeah, well that surely would make it bad. This is a good first issue. It should just be made generic and reuse the sampling code of NNODE; I don't see why it couldn't.

@hippyhippohops

I am just playing with the different strategies in strategy = NeuralPDE.QuadratureTraining(); discretization = PhysicsInformedNN(chain, strategy, adaptive_loss = NeuralPDE.GradientScaleAdaptiveLoss(10)). So it is PhysicsInformedNN(). I was looking at the training strategies in the documentation and going along with that. The only caveat I saw was that NeuralPDE.WeightedIntervalTraining can only be used for ODEs (NNODE). When I tried GridTraining with different dx sizes, the results weren't better either. The best approximation I got was with dx = 0.05:
[plot: GridTraining(0.05)]

@hippyhippohops

What do you mean by other strategies need to be implemented first? Do you mean in the source code at https://github.com/SciML/NeuralPDE.jl/blob/master/src/dae_solve.jl#L75? I have enjoyed learning and working on this, and since Chris mentioned it is a good first issue, I would like to continue working on it as my first issue.

@sathvikbhagavan (Member)

Yes, the strategies need to be refactored so that they can be used for both NNODE and NNDAE. See the strategies present for NNODE. Using PhysicsInformedNN may not be the best approach.

@hippyhippohops

I am trying to use NNODE now. Following the tutorial (https://docs.sciml.ai/NeuralPDE/stable/tutorials/ode/#Solving-an-ODE-with-NNODE), I made the following changes:

eqs(t) = eq_norm(omegaN,deltaN,TMN,PeN,PsvN)
prob = ODEProblem(eqs, bcs, domains)

n = 10
chain =[Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,1)) for _ in 1:15]
dvs = [delta1(t),delta2(t),delta3(t),omega1(t),omega2(t),omega3(t),TM1(t),TM2(t),TM3(t),Pe1(t),Pe2(t),Pe3(t),Psv1(t),Psv2(t),Psv3(t)]

alg = NNODE(chain,OptimizationOptimisers.Adam(0.1))
sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)
plot(sol.t, sol.u, label = "pred")

Everything else is the same. In this case, I am getting the following error:

ERROR: cannot define function eqs; it already has a value Stacktrace: [1] top-level scope @ none:0

Does it have to do with this note: "Note that NNODE only supports ODEs which are written in the out-of-place form, i.e. du = f(u,p,t), and not f(du,u,p,t). If not declared out-of-place, then the NNODE will exit with an error" (https://docs.sciml.ai/NeuralPDE/stable/manual/dae/)?

But from what I see in the function eq_norm(omegaN,deltaN,TMN,PeN,PsvN), the derivatives are the subjects of the equations.

@sathvikbhagavan (Member)

sathvikbhagavan commented Mar 8, 2024

NNODE & NNDAE only work with the diffeq interface, not the symbolic one. You would have to write out the equations in a function, similar to how you would write them for solving with DifferentialEquations.jl.
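
For illustration, a minimal sketch of that interface on a toy 2-state system (a hypothetical f, not the 3-generator model); note the out-of-place form du = f(u, p, t) that NNODE expects:

using OrdinaryDiffEq
f(u, p, t) = [u[2], -u[1]]                    # out-of-place: returns du as a new vector
prob = ODEProblem(f, [1.0, 0.0], (0.0, 25.0))
sol = solve(prob, Tsit5())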

@hippyhippohops

I am getting the following error when implementing the diffeq interface: MethodError: no method matching transform(::Vector{Chain{@NamedTuple{…}, Nothing}})
Stacktrace:
[1] NNODE(chain::Vector{…}, opt::Optimisers.Adam, init_params::Nothing; strategy::Nothing, autodiff::Bool, batch::Nothing, param_estim::Bool, additional_loss::Nothing, kwargs::@kwargs{})
@ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:92
[2] NNODE(chain::Vector{Chain{@NamedTuple{…}, Nothing}}, opt::Optimisers.Adam, init_params::Nothing)
@ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:89
[3] top-level scope
@ ~/Issue721:94
Some type information was truncated. Use show(err) to see complete types.

This is my code:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches,  OptimizationOptimisers
import ModelingToolkit: Interval
using CSV
using DataFrames
using Plots
using DifferentialEquations
data = CSV.File("/Users/aravinthkrishnan/Desktop/3gens.csv");
Y1 = CSV.read("/Users/aravinthkrishnan/Desktop/Y_des.csv", DataFrame, types=Complex{Float64}); #decreasing

# Input of the system.
E1 = 1.054;
E2 = 1.050;
E3 = 1.017;

omegaN = 120*pi;
omega_s = 120*pi
deltaN = 1;
TMN = [0, 1.62342, 0];
PeN = [0, 1.62342, 0];
PsvN = [0, 1.62342, 0];

function eqs(du,u,t)
    du[1] = (u[4]+ omegaN - omega_s)/deltaN
    du[2] = (u[5] + omegaN - omega_s)/deltaN
    du[3] =  (u[6] + omegaN - omega_s)/deltaN
    du[4] =  (u[7] + TMN[1] - u[10] - PeN[1])/(2*data["H"][1])*omega_s
    du[5] =  (u[8] + TMN[2] - u[11] - PeN[2])/(2*data["H"][2])*omega_s
    du[6] =  (u[9] + TMN[3] - u[12] - PeN[3])/(2*data["H"][3])*omega_s
    u[10] = -( PeN[1] - (E1^2*Y1[1,1].re + E1*E2*sin(deltaN*(u[1]-u[2]))*Y1[1,2].im + E1*E2*cos(deltaN*(u[1]-u[2]))*Y1[1,2].re + E1*E3*sin(deltaN*(u[1]-u[3]))*Y1[1,3].im + E1*E3*cos(deltaN*(delta1(t)-delta3(t)))*Y1[1,3].re))
    u[11] =  -( PeN[2] - (E2^2*Y1[2,2].re + E1*E2*sin(deltaN*(u[2]-u[1]))*Y1[2,1].im + E1*E2*cos(deltaN*(u[2]-u[1]))*Y1[2,1].re + E2*E3*sin(deltaN*(u[2]-u[3]))*Y1[2,3].im + E2*E3*cos(deltaN*(delta2(t)-delta3(t)))*Y1[2,3].re))
    u[12] =  -( PeN[3] - (E3^2*Y1[3,3].re + E3*E1*sin(deltaN*(u[3]-u[1]))*Y1[3,1].im + E3*E1*cos(deltaN*(u[3]-u[1]))*Y1[3,1].re + E3*E2*sin(deltaN*(u[3]-u[2]))*Y1[3,2].im + E3*E2*cos(deltaN*(delta3(t)-delta2(t)))*Y1[3,2].re))
    du[7] = (-u[7] - TMN[1] + u[13] + PsvN[1]) / data["TCH"][1]
    du[8] = (-u[8] - TMN[2] + u[14] + PsvN[2]) / data["TCH"][2]
    du[9] = (-u[9] - TMN[3] + u[15] + PsvN[3]) / data["TCH"][3]
    du[13] = (-u[13] - PsvN[1] + 0.70945 + 0.33*(-0.0024) - ((u[4] + omegaN)/omega_s - 1)/data["RD"][1])/data["TSV"][1]
    du[14] = (-u[14] - PsvN[2] + 1.62342 + 0.334*(-0.0024) - ((u[5] + omegaN)/omega_s - 1)/data["RD"][2])/data["TSV"][2]
    du[15] = (-u[15] - PsvN[3] + 0.84843 + 0.33*(-0.0024) - ((u[6] + omegaN)/omega_s - 1)/data["RD"][3])/data["TSV"][3]
end;


bcs = [0.03957/deltaN,  0.3447/deltaN, 0.23038/deltaN, omega_s - omegaN, omega_s - omegaN, omega_s - omegaN, 0.70945 - TMN[1], 1.62342 - TMN[2], 0.848433 - TMN[3], 0.70945 - PeN[1], 1.62342 - PeN[2], 0.848433 - PeN[3], 0.70945 - PsvN[1], 1.62342 - PsvN[2], 0.848433 - PsvN[3]]
domains = (0.0,25.0)
prob = ODEProblem(eqs, bcs, domains)

n = 10
chain =[Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,1)) for _ in 1:15]

alg = NNODE(chain,OptimizationOptimisers.Adam(0.1))
sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)
plot(sol.t, sol.u, label = "pred")

@sathvikbhagavan (Member)

Instead of

chain =[Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,1)) for _ in 1:15]

try

chain = Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,15))

NNODE doesn't support a separate neural network for each variable. I think we can add it as a feature.

@hippyhippohops

hippyhippohops commented Mar 11, 2024

Hmm, what do you mean by add it as a feature? Should I create a pull request, add the code, and submit it? Is this because for NeuralPDE.PhysicsInformedNN we have the following positional argument:
chain: a vector of Lux/Flux chains with a d-dimensional input and a 1-dimensional output corresponding to each of the dependent variables. Note that this specification respects the order of the dependent variables as specified in the PDESystem. Flux chains will be converted to Lux internally using Lux.transform.
But for NNODE and NNDAE we have the following, respectively:

chain: A neural network architecture, defined as a Lux.AbstractExplicitLayer or Flux.Chain. Flux.Chain will be converted to Lux using Lux.transform.

A neural network architecture, defined as either a Flux.Chain or a Lux.AbstractExplicitLayer

So, essentially, for PhysicsInformedNN the chain has to be a vector, but in NNODE and NNDAE it can't be?

Also, after making the changes you suggested above, I am getting the error:

ERROR: BoundsError: attempt to access Num at index [4]
Stacktrace:

  [1] getindex
    @ ./number.jl:98 [inlined]
  [2] eqs(du::Vector{ForwardDiff.Dual{ForwardDiff.Tag{…}, Float64, 12}}, u::Num, t::Float64)
    @ Main ~/Issue721:69
  [3] (::ODEFunction{…})(::Vector{…}, ::Vararg{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/scimlfunctions.jl:2203
  [4] inner_loss(phi::NeuralPDE.ODEPhi{…}, f::ODEFunction{…}, autodiff::Bool, t::Float64, θ::ComponentArrays.ComponentVector{…}, p::Num, param_estim::Bool)
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:209
  [5] (::NeuralPDE.var"#integrand#186"{…})(t::Float64, θ::ComponentArrays.ComponentVector{…})
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:229
  [6] evalrule(f::Integrals.var"#53#59"{…}, a::Float64, b::Float64, x::Vector{…}, w::Vector{…}, gw::Vector{…}, nrm::typeof(LinearAlgebra.norm))
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/evalrule.jl:0
  [7] #6
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:15 [inlined]
  [8] ntuple
    @ ./ntuple.jl:48 [inlined]
  [9] do_quadgk(f::Integrals.var"#53#59"{…}, s::Tuple{…}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(LinearAlgebra.norm), segbuf::Nothing)
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:13
 [10] #50
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:253 [inlined]
 [11] handle_infinities(workfunc::QuadGK.var"#50#51"{…}, f::Integrals.var"#53#59"{…}, s::Tuple{…})
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:145
 [12] #quadgk#49
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:252 [inlined]
 [13] quadgk
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:250 [inlined]
 [14] __solvebp_call(cache::Integrals.IntegralCache{…}, alg::Integrals.QuadGKJL{…}, sensealg::Integrals.ReCallVJP{…}, domain::Tuple{…}, p::ComponentArrays.ComponentVector{…}; reltol::Float64, abstol::Float64, maxiters::Int64)
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/Integrals.jl:142
 [15] __solvebp_call
    @ ~/.julia/packages/Integrals/uahDt/src/Integrals.jl:88 [inlined]
 [16] #__solvebp#1
    @ ~/.julia/packages/Integrals/uahDt/ext/IntegralsForwardDiffExt.jl:27 [inlined]
 [17] __solvebp
    @ ~/.julia/packages/Integrals/uahDt/ext/IntegralsForwardDiffExt.jl:7 [inlined]
 [18] solve!(cache::Integrals.IntegralCache{…})
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/common.jl:105
 [19] solve(prob::IntegralProblem{…}, alg::Integrals.QuadGKJL{…}; kwargs::@Kwargs{…})
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/common.jl:101
 [20] (::NeuralPDE.var"#loss#188"{…})(θ::ComponentArrays.ComponentVector{…}, ::NeuralPDE.ODEPhi{…})
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:236
 [21] total_loss
    @ ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:414 [inlined]
 [22] (::OptimizationForwardDiffExt.var"#37#55"{…})(::ComponentArrays.ComponentVector{…})
    @ OptimizationForwardDiffExt ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:98
 [23] #39
    @ ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:102 [inlined]
 [24] chunk_mode_gradient!(result::ComponentArrays.ComponentVector{…}, f::OptimizationForwardDiffExt.var"#39#57"{…}, x::ComponentArrays.ComponentVector{…}, cfg::ForwardDiff.GradientConfig{…})
    @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:123
 [25] gradient!
    @ ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:39 [inlined]
 [26] (::OptimizationForwardDiffExt.var"#38#56"{…})(::ComponentArrays.ComponentVector{…}, ::ComponentArrays.ComponentVector{…})
    @ OptimizationForwardDiffExt ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:102
 [27] macro expansion
    @ ~/.julia/packages/OptimizationOptimisers/AOkbT/src/OptimizationOptimisers.jl:68 [inlined]
 [28] macro expansion
    @ ~/.julia/packages/Optimization/Zc00b/src/utils.jl:41 [inlined]
 [29] __solve(cache::OptimizationCache{…})
    @ OptimizationOptimisers ~/.julia/packages/OptimizationOptimisers/AOkbT/src/OptimizationOptimisers.jl:66
 [30] solve!(cache::OptimizationCache{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/solve.jl:180
 [31] solve(::OptimizationProblem{…}, ::Optimisers.Adam; kwargs::@Kwargs{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/solve.jl:96
 [32] __solve(::ODEProblem{…}, ::NNODE{…}; dt::Nothing, timeseries_errors::Bool, save_everystep::Bool, adaptive::Bool, abstol::Float32, reltol::Float32, verbose::Bool, saveat::Float64, maxiters::Int64, tstops::Nothing)
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:454
 [33] __solve
    @ ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:336 [inlined]
 [34] solve_call(_prob::ODEProblem{…}, args::NNODE{…}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:612
 [35] solve_call
    @ ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:569 [inlined]
 [36] #solve_up#42
    @ ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:1064 [inlined]
 [37] solve_up
    @ ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:1050 [inlined]
 [38] #solve#40
    @ ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:987 [inlined]
 [39] top-level scope
    @ ~/Issue721:96
Some type information was truncated. Use `show(err)` to see complete types.

It seems like the error is in line 69, where u[4] is said to be out of bounds. I am not sure why this is happening; what I tried to mimic was the tutorial here: https://docs.sciml.ai/DiffEqDocs/stable/getting_started/#Example-2:-Solving-Systems-of-Equations.

@sathvikbhagavan (Member)

Hmm, what do you mean add it as a feature? Should I create a pull request and try to add the code and then submit it?

Yeah, don't worry about it. I will add it soon.

ERROR: BoundsError: attempt to access Num at index [4]

It looks like there is some symbolic stuff somewhere. Looking at your code, this caught my attention:

u[10] = -( PeN[1] - (E1^2*Y1[1,1].re + E1*E2*sin(deltaN*(u[1]-u[2]))*Y1[1,2].im + E1*E2*cos(deltaN*(u[1]-u[2]))*Y1[1,2].re + E1*E3*sin(deltaN*(u[1]-u[3]))*Y1[1,3].im + E1*E3*cos(deltaN*(delta1(t)-delta3(t)))*Y1[1,3].re))

I think delta1(t) should be u[1] and delta3(t) should be u[3]? Ensure you have replaced all symbolic variables correctly.
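
That is, roughly (a sketch of the first algebraic equation only, with the leftover symbolic calls replaced by the corresponding states):

u[10] = -(PeN[1] - (E1^2*Y1[1,1].re + E1*E2*sin(deltaN*(u[1]-u[2]))*Y1[1,2].im +
    E1*E2*cos(deltaN*(u[1]-u[2]))*Y1[1,2].re + E1*E3*sin(deltaN*(u[1]-u[3]))*Y1[1,3].im +
    E1*E3*cos(deltaN*(u[1]-u[3]))*Y1[1,3].re))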

@hippyhippohops

Hi, I am sorry, but I spent the last few days making sure that nothing was left symbolic:

function eqs(du,u,t)
    du[1] = (u[4]+ 120*pi - 120*pi)/1
    du[2] = (u[5] + 120*pi - 120*pi)/1
    du[3] =  (u[6] + 120*pi - 120*pi)/1
    du[4] =  (u[7] + 0 - u[10] - 0)/(2*data["H"][1])*120*pi
    du[5] =  (u[8] + 1.62342 - u[11] - 1.62342)/(2*data["H"][2])*120*pi
    du[6] =  (u[9] + 0 - u[12] - 0)/(2*data["H"][3])*120*pi
    u[10] = -( 0 - ((1.054)^2*Y1[1,1].re + 1.054*1.050*sin(1*(u[1]-u[2]))*Y1[1,2].im + 1.054*1.050*cos(1*(u[1]-u[2]))*Y1[1,2].re + 1.054*1.017*sin(1*(u[1]-u[3]))*Y1[1,3].im + 1.054*1.017*cos(1*(u[1]-u[3](t)))*Y1[1,3].re))
    u[11] =  -( 1.62342 - ((1.050)^2*Y1[2,2].re + 1.054*1.050*sin(1*(u[2]-u[1]))*Y1[2,1].im + 1.054*1.050*cos(1*(u[2]-u[1]))*Y1[2,1].re + 1.050*1.017*sin(1*(u[2]-u[3]))*Y1[2,3].im + 1.050*1.017*cos(1*(u[2]-u[3](t)))*Y1[2,3].re))
    u[12] =  -( 0 - ((1.017)^2*Y1[3,3].re + 1.017*1.054*sin(1*(u[3]-u[1]))*Y1[3,1].im + 1.017*1.054*cos(1*(u[3]-u[1]))*Y1[3,1].re + 1.017*1.050*sin(1*(u[3]-u[2]))*Y1[3,2].im + 1.017*1.050*cos(1*(u[3]-u[2](t)))*Y1[3,2].re))
    du[7] = (-u[7] - 0 + u[13] + 0) / data["TCH"][1]
    du[8] = (-u[8] - 1.62342 + u[14] + 1.62342) / data["TCH"][2]
    du[9] = (-u[9] - 0 + u[15] + 0) / data["TCH"][3]
    du[13] = (-u[13] - 0 + 0.70945 + 0.33*(-0.0024) - ((u[4] + 120*pi)/120*pi - 1)/data["RD"][1])/data["TSV"][1]
    du[14] = (-u[14] - 1.62342 + 1.62342 + 0.334*(-0.0024) - ((u[5] + 120*pi)/120*pi - 1)/data["RD"][2])/data["TSV"][2]
    du[15] = (-u[15] - 0 + 0.84843 + 0.33*(-0.0024) - ((u[6] + 120*pi)/120*pi - 1)/data["RD"][3])/data["TSV"][3]
end;

But I am getting the same error:

ERROR: BoundsError: attempt to access Num at index [4]
Stacktrace:
  [1] getindex
    @ ./number.jl:98 [inlined]
  [2] eqs(du::Vector{ForwardDiff.Dual{ForwardDiff.Tag{…}, Float64, 12}}, u::Num, t::Float64)
    @ Main ~/Issue721:69
  [3] (::ODEFunction{…})(::Vector{…}, ::Vararg{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/scimlfunctions.jl:2203
  [4] inner_loss(phi::NeuralPDE.ODEPhi{…}, f::ODEFunction{…}, autodiff::Bool, t::Float64, θ::ComponentArrays.ComponentVector{…}, p::Num, param_estim::Bool)
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:209
  [5] (::NeuralPDE.var"#integrand#186"{…})(t::Float64, θ::ComponentArrays.ComponentVector{…})
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:229
  [6] evalrule(f::Integrals.var"#53#59"{…}, a::Float64, b::Float64, x::Vector{…}, w::Vector{…}, gw::Vector{…}, nrm::typeof(LinearAlgebra.norm))
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/evalrule.jl:0
  [7] #6
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:15 [inlined]
  [8] ntuple
    @ ./ntuple.jl:48 [inlined]
  [9] do_quadgk(f::Integrals.var"#53#59"{…}, s::Tuple{…}, n::Int64, atol::Float64, rtol::Float64, maxevals::Int64, nrm::typeof(LinearAlgebra.norm), segbuf::Nothing)
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:13
 [10] #50
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:253 [inlined]
 [11] handle_infinities(workfunc::QuadGK.var"#50#51"{…}, f::Integrals.var"#53#59"{…}, s::Tuple{…})
    @ QuadGK ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:145
 [12] #quadgk#49
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:252 [inlined]
 [13] quadgk
    @ ~/.julia/packages/QuadGK/OtnWt/src/adapt.jl:250 [inlined]
 [14] __solvebp_call(cache::Integrals.IntegralCache{…}, alg::Integrals.QuadGKJL{…}, sensealg::Integrals.ReCallVJP{…}, domain::Tuple{…}, p::ComponentArrays.ComponentVector{…}; reltol::Float64, abstol::Float64, maxiters::Int64)
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/Integrals.jl:142
 [15] __solvebp_call
    @ ~/.julia/packages/Integrals/uahDt/src/Integrals.jl:88 [inlined]
 [16] #__solvebp#1
    @ ~/.julia/packages/Integrals/uahDt/ext/IntegralsForwardDiffExt.jl:27 [inlined]
 [17] __solvebp
    @ ~/.julia/packages/Integrals/uahDt/ext/IntegralsForwardDiffExt.jl:7 [inlined]
 [18] solve!(cache::Integrals.IntegralCache{…})
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/common.jl:105
 [19] solve(prob::IntegralProblem{…}, alg::Integrals.QuadGKJL{…}; kwargs::@Kwargs{…})
    @ Integrals ~/.julia/packages/Integrals/uahDt/src/common.jl:101
 [20] (::NeuralPDE.var"#loss#188"{…})(θ::ComponentArrays.ComponentVector{…}, ::NeuralPDE.ODEPhi{…})
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:236
 [21] total_loss
    @ ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:414 [inlined]
 [22] (::OptimizationForwardDiffExt.var"#37#55"{…})(::ComponentArrays.ComponentVector{…})
    @ OptimizationForwardDiffExt ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:98
 [23] #39
    @ ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:102 [inlined]
 [24] chunk_mode_gradient!(result::ComponentArrays.ComponentVector{…}, f::OptimizationForwardDiffExt.var"#39#57"{…}, x::ComponentArrays.ComponentVector{…}, cfg::ForwardDiff.GradientConfig{…})
    @ ForwardDiff ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:123
 [25] gradient!
    @ ~/.julia/packages/ForwardDiff/PcZ48/src/gradient.jl:39 [inlined]
 [26] (::OptimizationForwardDiffExt.var"#38#56"{…})(::ComponentArrays.ComponentVector{…}, ::ComponentArrays.ComponentVector{…})
    @ OptimizationForwardDiffExt ~/.julia/packages/Optimization/Zc00b/ext/OptimizationForwardDiffExt.jl:102
 [27] macro expansion
    @ ~/.julia/packages/OptimizationOptimisers/AOkbT/src/OptimizationOptimisers.jl:68 [inlined]
 [28] macro expansion
    @ ~/.julia/packages/Optimization/Zc00b/src/utils.jl:41 [inlined]
 [29] __solve(cache::OptimizationCache{…})
    @ OptimizationOptimisers ~/.julia/packages/OptimizationOptimisers/AOkbT/src/OptimizationOptimisers.jl:66
 [30] solve!(cache::OptimizationCache{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/solve.jl:180
 [31] solve(::OptimizationProblem{…}, ::Optimisers.Adam; kwargs::@Kwargs{…})
    @ SciMLBase ~/.julia/packages/SciMLBase/m3AcC/src/solve.jl:96
 [32] __solve(::ODEProblem{…}, ::NNODE{…}; dt::Nothing, timeseries_errors::Bool, save_everystep::Bool, adaptive::Bool, abstol::Float32, reltol::Float32, verbose::Bool, saveat::Float64, maxiters::Int64, tstops::Nothing)
    @ NeuralPDE ~/.julia/packages/NeuralPDE/z18Qg/src/ode_solve.jl:454
 [33] solve_call(_prob::ODEProblem{…}, args::NNODE{…}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:612
 [34] solve_up(prob::ODEProblem{…}, sensealg::Nothing, u0::Vector{…}, p::Num, args::NNODE{…}; kwargs::@Kwargs{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:1064
 [35] solve(prob::ODEProblem{…}, args::NNODE{…}; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{…}, kwargs::@Kwargs{…})
    @ DiffEqBase ~/.julia/packages/DiffEqBase/4084R/src/solve.jl:987
 [36] top-level scope
    @ ~/Issue721:96
Some type information was truncated. Use `show(err)` to see complete types.

I am not sure why; I even replaced the variables with their literal values: E1 = 1.054, E2 = 1.050, E3 = 1.017, omegaN = 120pi, omega_s = 120pi, deltaN = 1, TMN = [0, 1.62342, 0], PeN = [0, 1.62342, 0], PsvN = [0, 1.62342, 0].

@sathvikbhagavan (Member)

There was a bug in cos(1*(u[1]-u[3](t))): the (t) should not be there, and likewise in two more similar places.

So with NNDAE, the DAEProblem needs to be out of place, and looking at your code I am assuming 10, 11 and 12 are the algebraic variables. I have written a snippet which works, but please verify that all the equations are correct.

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimisers
import ModelingToolkit: Interval
using CSV
using DataFrames
using Plots
using OrdinaryDiffEq
data = CSV.File("/home/sathvikbhagavan/NeuralPDE.jl/test/3gens.csv");
Y1 = CSV.read("/home/sathvikbhagavan/NeuralPDE.jl/test/Y_des.csv", DataFrame, types=Complex{Float64}); #decreasing

# Input of the system.
E1 = 1.054;
E2 = 1.050;
E3 = 1.017;

omegaN = 120*pi;
omega_s = 120*pi
deltaN = 1;
TMN = [0, 1.62342, 0];
PeN = [0, 1.62342, 0];
PsvN = [0, 1.62342, 0];
H = data["H"]
TCH = data["TCH"]
RD = data["RD"]
TSV = data["TSV"]

function eqs(du, u, p, t)
    [(u[4]+ 120*pi - 120*pi)/1 - du[1],
    (u[5] + 120*pi - 120*pi)/1 - du[2],
    (u[6] + 120*pi - 120*pi)/1 - du[3],
    (u[7] + 0 - u[10] - 0)/(2*H[1])*120*pi - du[4],
    (u[8] + 1.62342 - u[11] - 1.62342)/(2*H[2])*120*pi - du[5],
    (u[9] + 0 - u[12] - 0)/(2*H[3])*120*pi - du[6],
    -( 0 - ((1.054)^2*Y1[1,1].re + 1.054*1.050*sin(1*(u[1]-u[2]))*Y1[1,2].im + 1.054*1.050*cos(1*(u[1]-u[2]))*Y1[1,2].re + 1.054*1.017*sin(1*(u[1]-u[3]))*Y1[1,3].im + 1.054*1.017*cos(1*(u[1]-u[3]))*Y1[1,3].re)) - du[10],
    -( 1.62342 - ((1.050)^2*Y1[2,2].re + 1.054*1.050*sin(1*(u[2]-u[1]))*Y1[2,1].im + 1.054*1.050*cos(1*(u[2]-u[1]))*Y1[2,1].re + 1.050*1.017*sin(1*(u[2]-u[3]))*Y1[2,3].im + 1.050*1.017*cos(1*(u[2]-u[3]))*Y1[2,3].re)) - du[11],
    -( 0 - ((1.017)^2*Y1[3,3].re + 1.017*1.054*sin(1*(u[3]-u[1]))*Y1[3,1].im + 1.017*1.054*cos(1*(u[3]-u[1]))*Y1[3,1].re + 1.017*1.050*sin(1*(u[3]-u[2]))*Y1[3,2].im + 1.017*1.050*cos(1*(u[3]-u[2]))*Y1[3,2].re)) - du[12],
    (-u[7] - 0 + u[13] + 0) / TCH[1] - du[7],
    (-u[8] - 1.62342 + u[14] + 1.62342) / TCH[2] - du[8],
    (-u[9] - 0 + u[15] + 0) / TCH[3] - du[9],
    (-u[13] - 0 + 0.70945 + 0.33*(-0.0024) - ((u[4] + 120*pi)/120*pi - 1)/RD[1])/TSV[1] - du[13],
    (-u[14] - 1.62342 + 1.62342 + 0.334*(-0.0024) - ((u[5] + 120*pi)/120*pi - 1)/RD[2])/TSV[2] - du[14],
    (-u[15] - 0 + 0.84843 + 0.33*(-0.0024) - ((u[6] + 120*pi)/120*pi - 1)/RD[3])/TSV[3] - du[15]]
end


bcs = [0.03957/deltaN,  0.3447/deltaN, 0.23038/deltaN, omega_s - omegaN, omega_s - omegaN, omega_s - omegaN, 0.70945 - TMN[1], 1.62342 - TMN[2], 0.848433 - TMN[3], 0.70945 - PeN[1], 1.62342 - PeN[2], 0.848433 - PeN[3], 0.70945 - PsvN[1], 1.62342 - PsvN[2], 0.848433 - PsvN[3]]
dus = zeros(15)
domains = (0.0,25.0)
prob = DAEProblem(eqs, dus, bcs, domains; differential_vars = [[true for i = 1:9]..., false, false, false, [true for i = 1:3]...])

n = 10
chain = Lux.Chain(Dense(1,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,n,Lux.tanh),Dense(n,15))

alg = NNDAE(chain, OptimizationOptimisers.Adam(0.1))
sol = solve(prob, alg, verbose = true, maxiters = 2000, dt = 1 / 10.0, saveat = 0.01)

@hippyhippohops

Yup, that works. Thanks!

I am trying to use the following command to plot the function:

using Plots

plot(sol.t, sol.u, label = "pred")

However, I am getting the error: BoundsError: attempt to access 15-element Vector{Float64} at index [1:2501]. Checking the sizes of sol.t and sol.u shows that they are both the same:

julia> size(sol.t)
(2501,)

julia> size(sol.u)
(2501,)

I am not sure where the 15-element Vector{Float64} comes from.

@sathvikbhagavan (Member)

I think each element of sol.u is a 15-element vector. Plot them separately.
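
Something like this (a sketch, assuming the first three states are the deltas):

U = reduce(hcat, sol.u)'    # stack into a 2501 × 15 matrix, one column per state
plot(sol.t, U[:, 1:3], label = ["delta1" "delta2" "delta3"])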
