POMDPs

This package provides a core interface for working with Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). For examples, please see the Gallery.

Our goal is to provide a common programming vocabulary for:

  1. Expressing problems as MDPs and POMDPs.
  2. Writing solver software.
  3. Running simulations efficiently.

There are two main interfaces for expressing and interacting with (PO)MDPs: with the explicit interface, the transition and observation probabilities are defined directly using API functions or tables; with the generative interface, only a single-step simulator (e.g. (s', o, r) = G(s, a)) needs to be defined. A sketch of both styles is shown below.
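
As an illustrative sketch only (the toy problem and its numbers are made up; transition, observation, reward, and discount are explicit-interface functions, generate_sor is the generative-interface single-step simulator, and SparseCat is assumed to come from POMDPToolbox), a two-state problem could be written either way:

using POMDPs
using POMDPToolbox # assumed source of the SparseCat sparse categorical distribution

# Hypothetical two-state toy problem; states, actions, and observations are 0 or 1.
struct ToyPOMDP <: POMDP{Int, Int, Int} end

POMDPs.discount(m::ToyPOMDP) = 0.95

# Explicit interface: transition and observation distributions are defined directly.
POMDPs.transition(m::ToyPOMDP, s, a) = SparseCat([s, 1 - s], [0.9, 0.1])
POMDPs.observation(m::ToyPOMDP, a, sp) = SparseCat([sp, 1 - sp], [0.8, 0.2])
POMDPs.reward(m::ToyPOMDP, s, a) = s == a ? 1.0 : 0.0

# Generative interface: only a single-step simulator (s', o, r) = G(s, a).
function POMDPs.generate_sor(m::ToyPOMDP, s, a, rng)
    sp = rand(rng) < 0.9 ? s : 1 - s   # state flips with probability 0.1
    o  = rand(rng) < 0.8 ? sp : 1 - sp # observation matches s' with probability 0.8
    r  = s == a ? 1.0 : 0.0
    return sp, o, r
end

A real problem normally defines only one style: explicit definitions give solvers access to exact probabilities, while the generative style is often easier for problems that already have a simulator.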

For help, please post to the Google group or on Gitter. See NEWS.md for information on changes.

POMDPs.jl and all packages in the JuliaPOMDP project are fully supported on Linux and OS X. Windows support is available for all native Julia packages (those not marked with * in the tables below).

Installation

To install POMDPs.jl, run the following from the Julia REPL:

Pkg.add("POMDPs")

To install a specific supported JuliaPOMDP package run:

using POMDPs
# the following command installs the SARSOP solver, you can add any supported solver this way
POMDPs.add("SARSOP") 

To install all solvers, support tools, and dependencies that are part of JuliaPOMDP, run:

using POMDPs
POMDPs.add_all() # this may take a few minutes

To install only native solvers, without any non-Julia dependencies, run:

using POMDPs
POMDPs.add_all(native_only=true)

Quick Start

The following code runs a simple simulation of the classic Tiger POMDP using a policy created by the QMDP solver.

using POMDPs, POMDPModels, POMDPToolbox, QMDP
pomdp = TigerPOMDP()

# initialize a solver and compute a policy
solver = QMDPSolver() # from QMDP
policy = solve(solver, pomdp)
belief_updater = updater(policy) # the default QMDP belief updater (discrete Bayesian filter)

# run a short simulation with the QMDP policy
history = simulate(HistoryRecorder(max_steps=10), pomdp, policy, belief_updater)

# look at what happened
for (s, b, a, o) in eachstep(history, "sbao")
    println("State was $s,")
    println("belief was $b,")
    println("action $a was taken,")
    println("and observation $o was received.\n")
end
println("Discounted reward was $(discounted_reward(history)).")

For more examples with visualization see POMDPGallery.jl.

Tutorials

The following tutorials aim to get you up to speed with POMDPs.jl:

  • MDP Tutorial for beginners gives an overview of using Value Iteration and Monte Carlo Tree Search with the classic grid world problem
  • POMDP Tutorial gives an overview of using SARSOP and QMDP to solve the tiger problem

Documentation

Detailed documentation can be found here.

Supported Packages

Many packages use the POMDPs.jl interface, including MDP and POMDP solvers, support tools, and interface extensions.
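
As an illustrative sketch (Solver, Policy, solve, and action are the core abstractions of the interface; the uniform-random solver here is hypothetical), this is roughly what a solver package implements:

using POMDPs

# Hypothetical solver: each solver package defines a Solver subtype...
struct UniformRandomSolver <: Solver end

# ...and a Policy subtype that solve returns.
struct UniformRandomPolicy{M} <: Policy
    m::M
end

# solve is the common entry point implemented by every solver package.
POMDPs.solve(::UniformRandomSolver, m::Union{MDP, POMDP}) = UniformRandomPolicy(m)

# action is what simulators query; x is a state (for MDPs) or a belief (for POMDPs).
# This assumes actions(m) returns a collection that rand can sample from.
POMDPs.action(p::UniformRandomPolicy, x) = rand(actions(p.m))

Because every solver exposes the same solve/action contract, the Quick Start simulation above works unchanged with any of the solvers listed below.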

MDP solvers:

  • Value Iteration
  • Monte Carlo Tree Search

POMDP solvers:

  • QMDP
  • SARSOP*
  • BasicPOMCP
  • DESPOT
  • MCVI
  • POMDPSolve*
  • POMCPOW

Reinforcement Learning:

  • TabularTDLearning

Tools:

  • POMDPToolbox
  • POMDPModels
  • ParticleFilters

Performance Benchmarks:

  • DESPOT

*These packages require non-Julia dependencies.

Citing POMDPs

If POMDPs is useful in your research and you would like to acknowledge it, please cite this paper:

@article{egorov2017pomdps,
  author  = {Maxim Egorov and Zachary N. Sunberg and Edward Balaban and Tim A. Wheeler and Jayesh K. Gupta and Mykel J. Kochenderfer},
  title   = {{POMDP}s.jl: A Framework for Sequential Decision Making under Uncertainty},
  journal = {Journal of Machine Learning Research},
  year    = {2017},
  volume  = {18},
  number  = {26},
  pages   = {1-5},
  url     = {http://jmlr.org/papers/v18/16-300.html}
}
