Carlos Mastalli edited this page Sep 2, 2019 · 15 revisions

Crocoddyl: Contact RObot COntrol by Differential DYnamic programming Library

I. Welcome to crocoddyl

Crocoddyl is an optimal control library for robot control under contact sequences. Its solver is based on an efficient Differential Dynamic Programming (DDP) algorithm. Crocoddyl computes optimal trajectories along with optimal feedback gains. It uses Pinocchio for fast computation of robot dynamics and their analytical derivatives.

Crocoddyl focuses on the multi-contact optimal control problem (MCOP), which has the form:

$\mathbf{X}^*,\mathbf{U}^*= \begin{Bmatrix} \mathbf{x}^*_0,\cdots,\mathbf{x}^*_N \\ \mathbf{u}^*_0,\cdots,\mathbf{u}^*_N \end{Bmatrix} = \arg\min_{\mathbf{X},\mathbf{U}} \sum_{k=1}^N \int_{t_k}^{t_k+\Delta t} l(\mathbf{x},\mathbf{u})dt$

subject to

$\mathbf{\dot{x}} = \mathbf{f}(\mathbf{x},\mathbf{u}),$

$\mathbf{x}\in\mathcal{X}, \mathbf{u}\in\mathcal{U}, \boldsymbol{\lambda}\in\mathcal{K}.$

where

  • the state $\mathbf{x}=(\mathbf{q},\mathbf{v})$ lies in a manifold, e.g. the Lie group $SE(3)\times \mathbb{R}^{n_j}$,
  • the system has underactuated dynamics, i.e. $\mathbf{u}=(\mathbf{0},\boldsymbol{\tau})$,
  • $\mathcal{X}$, $\mathcal{U}$ are the state and control admissible sets, and
  • $\mathcal{K}$ represents the contact constraints.

Note that $\boldsymbol{\lambda}=\mathbf{g}(\mathbf{x},\mathbf{u})$ denotes the contact force, which depends on the state and control.
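To make the cost term concrete, here is a small illustrative sketch (plain NumPy, not the crocoddyl API) that evaluates the MCOP cost by approximating each interval integral $\int_{t_k}^{t_k+\Delta t} l(\mathbf{x},\mathbf{u})dt$ with a rectangle rule; the quadratic cost function below is an assumption chosen only for the example.

```python
import numpy as np

def running_cost(x, u):
    # An assumed quadratic cost l(x, u) = ||x||^2 + ||u||^2, purely for
    # illustration; crocoddyl lets you define arbitrary cost models.
    return float(x @ x + u @ u)

def total_cost(xs, us, dt):
    # Rectangle-rule approximation of sum_k integral l(x, u) dt:
    # each interval contributes l(x_k, u_k) * dt.
    return sum(running_cost(x, u) * dt for x, u in zip(xs, us))

xs = [np.array([1.0, 0.0]), np.array([0.5, 0.0])]
us = [np.array([0.2]), np.array([0.1])]
cost = total_cost(xs, us, dt=0.1)  # 0.1 * (1.04 + 0.26)
```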

Let's start by understanding the concepts behind crocoddyl's design.

II. Action models

In crocoddyl, an action model combines dynamics and cost models. Each node in our optimal control problem is described through an action model. Every time we want to describe a problem, we need to provide ways of computing the dynamics, the cost functions, and their derivatives. All of this is described inside the action model.

To understand the mathematical aspects behind an action model, let's first write a locally linearized version of our optimal control problem:

$\mathbf{X}^*(\mathbf{x}_0),\mathbf{U}^*(\mathbf{x}_0) = \arg\min_{\mathbf{X},\mathbf{U}} cost_T(\delta\mathbf{x}_N) + \sum_{k=1}^N cost_t(\delta\mathbf{x}_k, \delta\mathbf{u}_k)$

subject to

$dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k)=\mathbf{0},$

where

  • $cost_T(\delta\mathbf{x}_N) = \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x}_N \end{bmatrix}^\top \begin{bmatrix} 0 & \mathbf{l_x}^\top_N \\ \mathbf{l_x}_N & \mathbf{l_{xx}}_N \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x}_N \end{bmatrix}$,      $cost_t(\delta\mathbf{x}_k,\delta\mathbf{u}_k) = \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x}_k \\ \delta\mathbf{u}_k \end{bmatrix}^\top \begin{bmatrix} 0 & \mathbf{l_x}^\top_k & \mathbf{l_u}^\top_k\\ \mathbf{l_x}_k & \mathbf{l_{xx}}_k & \mathbf{l_{ux}}^\top_k\\ \mathbf{l_u}_k & \mathbf{l_{ux}}_k & \mathbf{l_{uu}}_k \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x}_k \\ \delta\mathbf{u}_k \end{bmatrix}$

  • $dynamics(\delta\mathbf{x}_{k+1},\delta\mathbf{x}_k,\delta\mathbf{u}_k) = \delta\mathbf{x}_{k+1} - (\mathbf{f_x}_k\delta\mathbf{x}_k + \mathbf{f_u}_k\delta\mathbf{u}_k)$
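As a sketch of what one node of this linearized problem looks like in code (plain NumPy with assumed shapes, not crocoddyl's API), the quadratic form above expands to $\mathbf{l_x}^\top\delta\mathbf{x} + \mathbf{l_u}^\top\delta\mathbf{u} + \frac{1}{2}\delta\mathbf{x}^\top\mathbf{l_{xx}}\delta\mathbf{x} + \delta\mathbf{u}^\top\mathbf{l_{ux}}\delta\mathbf{x} + \frac{1}{2}\delta\mathbf{u}^\top\mathbf{l_{uu}}\delta\mathbf{u}$:

```python
import numpy as np

nx, nu = 2, 1
# Assumed cost and dynamics derivatives for a single node (illustrative values).
lx  = np.array([1.0, 0.0]); lxx = np.eye(nx)
lu  = np.array([0.5]);      luu = np.eye(nu)
lux = np.zeros((nu, nx))
fx  = np.eye(nx);           fu  = np.array([[0.0], [1.0]])

def cost_t(dx, du):
    # Expanded quadratic form of cost_t above.
    return float(lx @ dx + lu @ du + 0.5 * dx @ lxx @ dx
                 + du @ lux @ dx + 0.5 * du @ luu @ du)

def dynamics_residual(dx_next, dx, du):
    # dynamics(dx_{k+1}, dx_k, du_k) = dx_{k+1} - (f_x dx_k + f_u du_k)
    return dx_next - (fx @ dx + fu @ du)

dx = np.array([1.0, 0.0]); du = np.array([0.5])
dx_next = fx @ dx + fu @ du   # rolling out the dynamics zeroes the residual
```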

Important notes:

  • An action model describes the dynamics and cost functions for a node in our optimal control problem.
  • Action models lie in the discrete time space.
  • For debugging and prototyping, we have also implemented NumDiff abstractions. These computations depend only on the definition of the dynamics equation and cost functions. However, for efficiency, crocoddyl uses analytical derivatives computed from Pinocchio.
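The NumDiff idea can be sketched as follows (an illustrative finite-difference routine, not crocoddyl's actual class): given only a function such as the dynamics $\mathbf{f}(\mathbf{x},\mathbf{u})$, its Jacobians $\mathbf{f_x}$ and $\mathbf{f_u}$ are approximated numerically. The linear system below is a hypothetical example.

```python
import numpy as np

def num_diff(f, x, eps=1e-6):
    # Forward-difference Jacobian of f at x.
    fx0 = f(x)
    J = np.zeros((fx0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx0) / eps
    return J

# Hypothetical linear dynamics xdot = A x + B u, so f_x = A and f_u = B.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([0.3, -0.1]); u0 = np.array([0.7])

fx = num_diff(lambda x: A @ x + B @ u0, x0)   # should recover A
fu = num_diff(lambda u: A @ x0 + B @ u, u0)   # should recover B
```

Analytical derivatives from Pinocchio avoid both the extra function evaluations and the truncation error of this scheme.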

Differential and Integrated Action Models

It is often convenient to implement action models in continuous time. In crocoddyl, these continuous-time action models are called Differential Action Models (DAMs). Together with the predefined Integrated Action Models (IAMs), it is possible to retrieve the time-discrete action model needed by the solver.

At the moment, we have the following integration rules:

  • symplectic Euler and
  • Runge-Kutta 4.
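Both rules can be sketched on a toy system (illustrative code, not the IAM classes themselves). Symplectic Euler updates the velocity first and then uses the new velocity for the position; RK4 is the classical fourth-order rule. The unit-mass point mass below, with acceleration equal to the control, is an assumed example.

```python
import numpy as np

def symplectic_euler(q, v, a, u, dt):
    # Update velocity first, then integrate position with the *new* velocity.
    v_next = v + a(q, v, u) * dt
    q_next = q + v_next * dt
    return q_next, v_next

def rk4(x, f, u, dt):
    # Classical Runge-Kutta 4 step on xdot = f(x, u).
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Assumed toy model: unit-mass point, acceleration a = u; state x = (q, v).
a = lambda q, v, u: u
f = lambda x, u: np.array([x[1], u])

q1, v1 = symplectic_euler(0.0, 0.0, a, 1.0, 0.1)
x1 = rk4(np.array([0.0, 0.0]), f, 1.0, 0.1)
```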

III. State and its integrate and difference rules

Generally speaking, the system's state can lie in a manifold $M$, where the state's rate of change lies in its tangent space $T_\mathbf{x}M$. There are a few operators that need to be defined for different routines inside our solvers:

  • $\mathbf{x}_{k+1} = integrate(\mathbf{x}_k,\delta\mathbf{x}_k) = \mathbf{x}_k \oplus \delta\mathbf{x}_k$
  • $\delta\mathbf{x}_k = difference(\mathbf{x}_{k+1},\mathbf{x}_k) = \mathbf{x}_{k+1} \ominus \mathbf{x}_k$

where $\mathbf{x}\in M$ and $\delta\mathbf{x}\in T_\mathbf{x} M$.

We also need to define the Jacobians of these operators with respect to their first and second arguments:

  • $\frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial \mathbf{x}}, \frac{\partial \mathbf{x}\oplus\delta\mathbf{x}}{\partial\delta\mathbf{x}} =Jintegrate(\mathbf{x},\delta\mathbf{x})$
  • $\frac{\partial\mathbf{x}_2\ominus\mathbf{x}_1}{\partial \mathbf{x}_2}, \frac{\partial \mathbf{x}_2\ominus\mathbf{x}_1}{\partial\mathbf{x}_1} =Jdifference(\mathbf{x}_2,\mathbf{x}_1)$

For instance, a state that lies in the Euclidean space has the typical operators:

  • $integrate(\mathbf{x},\delta\mathbf{x}) = \mathbf{x} + \delta\mathbf{x}$

  • $difference(\mathbf{x}_2,\mathbf{x}_1) = \mathbf{x}_2 - \mathbf{x}_1$

  • $Jintegrate(\cdot,\cdot) = Jdifference(\cdot,\cdot) = \mathbf{I}$ (up to a sign: $\frac{\partial(\mathbf{x}_2-\mathbf{x}_1)}{\partial\mathbf{x}_1} = -\mathbf{I}$)
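The Euclidean case can be written out as a minimal state class (a sketch in the spirit of crocoddyl's State interface; the class and method names below are illustrative, not the exact API):

```python
import numpy as np

class EuclideanState:
    # Illustrative Euclidean state: the manifold and its tangent space
    # are both R^nx, so (+) and (-) are plain vector addition/subtraction.
    def __init__(self, nx):
        self.nx = nx

    def integrate(self, x, dx):
        # x (+) dx
        return x + dx

    def difference(self, x2, x1):
        # x2 (-) x1
        return x2 - x1

    def Jintegrate(self, x, dx):
        # Jacobians w.r.t. both arguments are the identity.
        return np.eye(self.nx), np.eye(self.nx)

    def Jdifference(self, x2, x1):
        # Jacobian w.r.t. x2 is I; w.r.t. x1 it carries a minus sign.
        return np.eye(self.nx), -np.eye(self.nx)

state = EuclideanState(3)
x1 = np.array([1.0, 2.0, 3.0]); dx = np.array([0.1, 0.0, -0.1])
x2 = state.integrate(x1, dx)
```

For a state on $SE(3)\times\mathbb{R}^{n_j}$, the same interface holds but the operators use the Lie-group exponential and logarithm instead of plain addition.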

All these functions are encapsulated inside the State class. For Pinocchio models, we have implemented the StateMultibody class, which can be used with any robot model.