
dattalab/moseq2-app


Welcome to the moseq2-app repository. Motion Sequencing (MoSeq) is an unsupervised machine learning method for describing mouse behavior, and moseq2-app is the starting point of the MoSeq2 package suite.

To get started, head over to the wiki to find the installation option that works best for your environment, along with detailed documentation for the MoSeq2 package suite.

Quick Links: MoSeq Slack Channel · Open In Colab · Documentation · DOI

Last Updated: 12/23/2021

Overview

MoSeq takes 3D depth videos as input (obtained using commercially-available sensors) and then uses statistical learning techniques to identify the components of mouse body language. This is achieved by fitting an autoregressive hidden Markov model that parses behavior into a set of sub-second motifs called syllables. This segmentation naturally yields boundaries between syllables, and therefore also reveals the structure that governs the interconnections between syllables over time, which we refer to as behavioral grammar (see Wiltschko et al., 2015 for the first description of MoSeq).
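To make the segmentation idea concrete, the toy Python sketch below (not the MoSeq implementation) decodes a one-dimensional "pose" signal with a simple three-state hidden Markov model and reports where the decoded state boundaries fall, which is the sense in which the model parses a continuous trajectory into discrete, sub-second motifs. It uses plain Gaussian emissions rather than the autoregressive emissions MoSeq fits, and all parameters and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pose" trajectory: three hidden states with different means, each held
# for 40 frames, mimicking sub-second motifs at roughly 30 frames per second.
true_states = np.repeat([0, 1, 2, 1, 0], 40)
means = np.array([-2.0, 0.0, 2.0])
signal = rng.normal(means[true_states], 0.5)

n_states, T = 3, len(signal)
log_trans = np.log(np.full((n_states, n_states), 0.05) + 0.85 * np.eye(n_states))
# Gaussian log-likelihood of each frame under each state, up to a constant.
log_emis = -0.5 * ((signal[None, :] - means[:, None]) / 0.5) ** 2

# Viterbi dynamic programming: most likely hidden state at every frame.
delta = np.empty((n_states, T))
psi = np.zeros((n_states, T), dtype=int)
delta[:, 0] = np.log(1.0 / n_states) + log_emis[:, 0]
for t in range(1, T):
    scores = delta[:, t - 1][:, None] + log_trans   # rows: previous state, cols: current state
    psi[:, t] = np.argmax(scores, axis=0)
    delta[:, t] = scores[psi[:, t], np.arange(n_states)] + log_emis[:, t]

path = np.empty(T, dtype=int)
path[-1] = np.argmax(delta[:, -1])
for t in range(T - 2, -1, -1):
    path[t] = psi[path[t + 1], t + 1]

# Changes in the decoded state sequence are the "syllable" boundaries.
print("decoded boundaries (frames):", np.flatnonzero(np.diff(path)) + 1)
print("true boundaries (frames):   ", np.flatnonzero(np.diff(true_states)) + 1)
```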

Because MoSeq relies on unsupervised machine learning, it discovers the set of syllables and the grammar expressed in any given experiment. By combining MoSeq with electrophysiology, multi-color photometry, and miniscope methods, neural correlates for 3D behavioral syllables have recently been identified in the dorsolateral striatum (DLS) (Markowitz et al., 2018). Furthermore, MoSeq has been combined with optogenetic stimulation to reveal the differential consequences of activating the motor cortex, the dorsal striatum, and the ventral striatum (Pisanello et al., 2017; Wiltschko et al., 2015). These results are consistent with similar results recently obtained using marker-based approaches to explore the relationship between 3D posture and activity in the posterior parietal cortex (Mimica et al., 2018).

There are two basic steps to MoSeq, and this GitHub repository and the wiki support both. First, 3D data need to be acquired. Here you will find instructions for assembling a standard MoSeq data acquisition platform, and code to acquire 3D videos of mice as they freely behave. Second, these 3D data need to be modeled. We provide several different methods for modeling the data using MoSeq, which are compared below. We continue development of MoSeq and plan to incorporate additional features in the near future.

MoSeq2 package suite

The MoSeq2 toolkit enables users to model mouse behavior across different experimental groups and to measure differences in their behavior, such as syllable usages, durations, and transition patterns.
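As a rough illustration of the kinds of statistics described above, the sketch below computes per-session syllable usage frequencies, mean syllable durations, and a row-normalized transition matrix from per-frame syllable labels. The function name, variable names, and toy label sequences are hypothetical and are not the moseq2-app API; they only show what the underlying quantities are.

```python
import numpy as np

def usage_durations_transitions(labels, n_syllables):
    """Per-session usage frequencies, mean durations (frames), and a
    row-normalized transition matrix, computed from per-frame labels."""
    labels = np.asarray(labels)
    # First frame of each syllable instance (collapse runs of repeated labels).
    starts = np.flatnonzero(np.insert(np.diff(labels) != 0, 0, True))
    instances = labels[starts]
    run_lengths = np.diff(np.append(starts, len(labels)))

    usage = np.bincount(instances, minlength=n_syllables) / len(instances)
    durations = np.array([
        run_lengths[instances == s].mean() if (instances == s).any() else 0.0
        for s in range(n_syllables)
    ])

    trans = np.zeros((n_syllables, n_syllables))
    for a, b in zip(instances[:-1], instances[1:]):
        trans[a, b] += 1
    row_sums = trans.sum(axis=1, keepdims=True)
    trans = np.divide(trans, row_sums, out=np.zeros_like(trans), where=row_sums > 0)
    return usage, durations, trans

# Toy sessions from two hypothetical experimental groups.
sessions = {
    "control": np.array([0, 0, 1, 1, 1, 2, 2, 0, 0, 1]),
    "treated": np.array([2, 2, 2, 1, 1, 0, 0, 2, 2, 2]),
}
for group, labels in sessions.items():
    usage, durations, trans = usage_durations_transitions(labels, n_syllables=3)
    print(group, "usage:", np.round(usage, 2), "mean durations:", np.round(durations, 1))
```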

This package contains functionality that can be used interactively in Jupyter notebooks. We provide a series of Jupyter notebooks that cover the entire MoSeq pipeline, from processing depth videos of mice to segmenting their behavior into what we call "syllables". In addition to the Jupyter notebooks, MoSeq offers Google Colab notebooks and a Command Line Interface. Consult the wiki for more detailed documentation of the MoSeq pipeline. You can try MoSeq on Google Colab using our test data or your own data on Google Drive.

Getting Started

If you like MoSeq and you are interested in installing it in your environment, you can install the MoSeq2 package suite with either Conda or Docker.

  • If you are familiar with Conda and the terminal, and you want more control over the packages and virtual environment, we recommend installing the MoSeq2 package suite with Conda.

  • If you are interested in using a standardized/containerized version of the MoSeq2 package suite with simple installation steps and minimum local environment setup, we recommend installing the MoSeq2 package suite with Docker.

We provide step-by-step guides for installing the MoSeq2 package suite in the installation documentation.

You can find more information about the MoSeq2 package suite, the acquisition and analysis pipeline, documentation for Command Line Interface (CLI), tutorials, etc. in the wiki. If you want to explore MoSeq functionalities, check out the guide for downloading test data.

Community Support and Contributing

  • Please join the MoSeq Slack Channel to post questions and interact with MoSeq developers and users.
  • If you encounter bugs, errors or issues, please submit a Bug report here. We encourage you to check out the troubleshooting and tips section and search the existing issues first.
  • If you would like to see certain features in MoSeq or have new ideas, please submit a Feature request here.
  • If you want to contribute to our codebases, please check out our Developer Guidelines.
  • Please tell us what you think by filling out this user survey.

Versions

Events & News

We are hosting a tutorial workshop on Thursday, March 3rd, 2022 at 1:30-4PM EST. Fill out this form by March 2nd to register for the workshop and receive the Zoom link and password.

Publications

License

MoSeq is freely available for academic use under a license provided by Harvard University. Please refer to the license file for details. If you are interested in using MoSeq for commercial purposes please contact Bob Datta directly at srdatta@hms.harvard.edu, who will put you in touch with the appropriate people in the Harvard Technology Transfer office.