Jacobian computed using autograd of MPC solver function inaccurate when returning 'u' #340

Open
wlertkitPSU opened this issue Feb 28, 2024 · 0 comments

wlertkitPSU commented Feb 28, 2024

🐛 Describe the bug

I have a function (mpc_solver) that takes a value of mass, solves a system using pp.module.MPC, and returns (u.norm())**2. However, the Jacobian of this function computed using autograd is quite inaccurate. When I instead make the function return (x.norm())**2, the resulting Jacobian is extremely accurate. Why is this the case?

The code provided below computes and plots the Jacobian at each value of mass using both the central finite-difference (FD) method and the autograd method. The Jacobian computed by the finite-difference method has been verified to be accurate.
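For reference, the FD baseline used in the script is the standard central-difference approximation (with epsilon = 1e-3, as in FD_jacobian below):

dJ/dm ≈ (J(m + epsilon) - J(m - epsilon)) / (2 * epsilon)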

import pypose as pp
import torch
import numpy as np
import matplotlib.pyplot as plt


class CartPole(pp.module.NLS):
    def __init__(self, dt, pole_length, cart_mass, pole_mass, gravity):
        super().__init__()
        self.dt = dt
        self.pole_length = pole_length
        self.cart_mass = cart_mass
        self.pole_mass = pole_mass
        self.gravity = gravity

    def state_transition(self, state, input, t=None):
        x, x_dot, theta, theta_dot = state.squeeze().clone()
        force = input.squeeze().clone()

        theta_acc = (force + self.pole_mass * torch.sin(theta) * \
                     (self.pole_length * theta_dot ** 2 + self.gravity * torch.cos(theta))) \
                    / (self.cart_mass + self.pole_mass * torch.sin(theta) ** 2).squeeze(0)

        x_acc = (-force * torch.cos(theta) - self.pole_mass * self.pole_length * theta_dot ** 2 * torch.sin(theta) * \
                 torch.cos(theta) - (self.pole_mass + self.cart_mass) * self.gravity * torch.sin(theta)) \
                / (self.pole_length * (self.cart_mass + self.pole_mass * torch.sin(theta) ** 2)).squeeze(0)

        dstate = torch.stack((x_dot, x_acc, theta_dot, theta_acc))

        new_state = state.squeeze() + torch.mul(dstate, self.dt)
        return new_state.unsqueeze(0)

    def observation(self, state, input, t=None):
        return state


def mpc_solver(cart_mass):
    system = CartPole(dt, pole_length, cart_mass, pole_mass, g)
    stepper = pp.utils.ReduceToBason(steps=15, verbose=False)
    mpc = pp.module.MPC(system, Q, p, T, stepper=stepper)
    x, u, cost = mpc(dt, x_init, u_init)
    u_total = (u.norm())**2  # USING U_TOTAL GIVES HIGH ERROR FOR SOME REASON
    x_total = (x.norm())**2

    return x_total, u_total


def autograd_jacobian(x):
    # Jacobian of (x_total, u_total) with respect to the mass, via autograd.
    input_parameter = torch.tensor(x, requires_grad=True)
    input_float = input_parameter[0].float().clone()
    calculated_jacobian = torch.autograd.functional.jacobian(mpc_solver, input_float)
    return calculated_jacobian


def FD_jacobian(x, epsilon=1e-3):
    # Central finite-difference approximation of the Jacobian w.r.t. the mass.
    x_plus = x + epsilon
    x_minus = x - epsilon

    f_plus = mpc_solver(x_plus)
    f_minus = mpc_solver(x_minus)

    FD = []
    FD.append([(f_plus[0] - f_minus[0]) / (2 * epsilon), (f_plus[1] - f_minus[1]) / (2 * epsilon)])
    return FD


# Use CUDA if possible
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
torch.manual_seed(0)

# Universal Constants
n_batch, n_state, n_ctrl, T = 1, 4, 1, 5
dt = 0.01
g = 9.81
pole_length = torch.tensor(1, device=device)
pole_mass = torch.tensor(10, device=device)
x_init = torch.tensor([[0, 0, -0.3, 0]], device=device)
u_init = torch.tile(torch.zeros(1, device=device), (n_batch, T, 1))

# MPC Constants
Q = torch.tile(torch.eye(n_state + n_ctrl, device=device), (n_batch, T, 1, 1))
p = torch.tile(torch.zeros(n_state + n_ctrl, device=device), (n_batch, T, 1))

# Main Code - Find Jacobian using autograd and FD
x = np.linspace(0.1, 10, 20)  # varying mass
autograd_xnorm = []
FD_xnorm = []
autograd_unorm = []
FD_unorm = []

for i in range(len(x)):
    jac_autograd = autograd_jacobian([x[i]])  # (d(x_total)/dm, d(u_total)/dm)
    jac_FD = FD_jacobian(x[i])
    autograd_xnorm.append(jac_autograd[0].item())
    FD_xnorm.append(jac_FD[0][0].item())
    autograd_unorm.append(jac_autograd[1].item())
    FD_unorm.append(jac_FD[0][1].item())

# Plotting Jacobian
fig, axs = plt.subplots(2)
axs[0].plot(x, autograd_xnorm, '.-b', label='Autograd')
axs[0].plot(x, FD_xnorm, '.-r', label='FD')
axs[0].axhline(y=0, color='black', linestyle='--')
axs[0].set_xlabel('mass')
axs[0].set_ylabel('jacobian value returning x_norm')
axs[0].legend()

axs[1].plot(x, autograd_unorm, '.-b', label='Autograd')
axs[1].plot(x, FD_unorm, '.-r', label='FD')
axs[1].axhline(y=0, color='black', linestyle='--')
axs[1].set_xlabel('mass')
axs[1].set_ylabel('jacobian value returning u_norm')
axs[1].legend()

plt.show()
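As an additional sanity check (a minimal sketch, not part of the measurements above), the same derivative can be taken with a plain backward pass via torch.autograd.grad instead of torch.autograd.functional.jacobian, assuming the MPC solve keeps the computation graph connected to cart_mass (the jacobian call above suggests it does):

# Sketch: cross-check d(u_total)/d(mass) at mass = 1.0 with a direct backward
# pass; the value should match the corresponding autograd entry plotted above.
mass = torch.tensor(1.0, requires_grad=True)
_, u_total = mpc_solver(mass)
grad_u, = torch.autograd.grad(u_total, mass)
print('d(u_total)/d(mass) via autograd.grad:', grad_u.item())
print('d(u_total)/d(mass) via central FD:   ', FD_jacobian(1.0)[0][1].item())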

Versions

PyTorch version: 2.2.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Home Single Language
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Ti
Nvidia driver version: 546.33
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[conda] Could not collect
