MIOSR reproduces the Lorenz equations with target_sparsity, but not group_sparsity. #415

Open
scatr opened this issue Sep 22, 2023 · 2 comments

scatr commented Sep 22, 2023

Hi again,

I've been using the MIOSR formulation recently, and I've noticed that identifying the Lorenz equations with target_sparsity works fine, but when I try the group_sparsity option the results do not agree. Furthermore, if I perform the regression with target_sparsity on one variable at a time, including the remaining variables as control inputs, each of the Lorenz equations is reproduced individually. Have I used the group_sparsity option incorrectly? I would have expected fitting each variable individually to be equivalent to the group_sparsity problem.

I've included a short example to reproduce this:

import pysindy as ps
from pysindy.utils import lorenz
from pysindy.differentiation import FiniteDifference
import numpy as np
from scipy.integrate import solve_ivp
from pysindy.optimizers import MIOSR

fd = FiniteDifference()

### create the data
integrator_keywords = {}
integrator_keywords["atol"] = 1e-12
integrator_keywords["rtol"] = 1e-12
dt = 0.0001
t_train = np.arange(0, 100, dt)
t_train_span = (t_train[0], t_train[-1])
x0_train = [-8, 8, 27]
x_train = solve_ivp(lorenz, t_train_span,
                    x0_train, t_eval=t_train, **integrator_keywords).y.T
poly_order = 2

#fit using target sparsity
print("Model with target sparsity")
model = ps.SINDy(
    optimizer=MIOSR(target_sparsity=7, alpha=0.01, normalize_columns=False),
    feature_library=ps.PolynomialLibrary(degree=poly_order),
    feature_names=["x", "y", "z"],
)
# precompute the derivatives once and pass them as x_dot to every fit below
xdot = fd._differentiate(x_train, t_train)

model.fit(
    x_train,
    x_dot=xdot,
    t=t_train,
)

model.print(precision=10)

#fit using group sparsity
print("Model with group sparsity")
model = ps.SINDy(
    optimizer=MIOSR(group_sparsity=(4, 4, 4), alpha=0.01, normalize_columns=False),
    feature_library=ps.PolynomialLibrary(degree=poly_order),
    feature_names=["x", "y", "z"],
)

model.fit(
    x_train,
    x_dot=xdot,
    t=t_train,
)

model.print(precision=20)


## fit each variable x, y, z one at a time and include the remaining variables as control inputs
square_lib = ps.PolynomialLibrary(2)
generalized_lib = ps.GeneralizedLibrary(
    [square_lib],
    inputs_per_library=np.array([[0, 1, 2]]),
)

labels = ["x", "y", "z"]
group_sparse = (2, 3, 2)
print("Model fitting each variable separately")
for i in range(3):
    control_indices = [jj for jj in range(3) if jj != i]
    total_indices = [i] + control_indices
    lib_labels = [labels[jj] for jj in total_indices]

    opt = MIOSR(target_sparsity=group_sparse[i], alpha=0.01)
    model = ps.SINDy(optimizer=opt, feature_names=lib_labels, feature_library=generalized_lib)
    model.fit(x_train[:, i], x_dot=xdot[:, i], u=x_train[:, control_indices])
    model.print(precision=20)

This gives the following output for me:

Model with target sparsity
(x)' = -9.9999977902 x + 9.9999977888 y
(y)' = 27.9999800480 x + -0.9999961070 y + -0.9999994417 x z
(z)' = -2.6666692197 z + 0.9999997068 x y
Model with group sparsity
(x)' = 2.27950419579689000926 y
(y)' = 25.54290423393682374353 x + -0.95131667283365073384 x z
(z)' = -2.66666921967712333696 z + 0.99999970675281002475 x y
Model fitting each variable separately
(x)' = -9.99999779017980650053 x + 9.99999778884162004999 y
(y)' = -0.99999610704016128615 y + 27.99998004798279893635 x + -0.99999944166784260347 x z
(z)' = -2.66666921967712333696 z + 0.99999970675281002475 x y

scatr commented Sep 25, 2023

The issue is that target_sparsity defaults to 5, so group_sparsity only takes effect if target_sparsity=None is passed explicitly as well.
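
For anyone hitting the same thing, here is a minimal sketch of the workaround, reusing x_train, xdot, t_train and poly_order from the script above and using the (2, 3, 2) per-equation term counts from the individual fits:

# Workaround sketch: explicitly disable the default target_sparsity=5 so the
# optimizer actually uses the per-equation group_sparsity budget.
model = ps.SINDy(
    optimizer=MIOSR(target_sparsity=None, group_sparsity=(2, 3, 2), alpha=0.01),
    feature_library=ps.PolynomialLibrary(degree=poly_order),
    feature_names=["x", "y", "z"],
)
model.fit(x_train, x_dot=xdot, t=t_train)
model.print(precision=10)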

Jacob-Stevens-Haas (Collaborator) commented

I agree that the code should raise a ValueError here.
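
Something along these lines in the constructor would do it; this is only a hypothetical sketch of the check (the helper name _check_sparsity_args is made up), not the actual pysindy code:

# Hypothetical validation sketch (not the real MIOSR implementation): raise
# instead of silently ignoring group_sparsity when target_sparsity is also set.
def _check_sparsity_args(target_sparsity, group_sparsity):
    if target_sparsity is not None and group_sparsity is not None:
        raise ValueError(
            "target_sparsity and group_sparsity cannot both be set; "
            "pass target_sparsity=None to use group_sparsity."
        )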

Jacob-Stevens-Haas added the good first issue label on Nov 3, 2023