Bayesian Q-learning with Explicit Uncertainty Measures: Assumed Density Filtering Q-learning (ADFQ)

This repository contains the ADFQ algorithms from the following paper. See the paper for more technical details.

  • Assumed Density Filtering Q-learning (https://arxiv.org/abs/1712.03333): H. Jeong, C. Zhang, D. D. Lee, and G. J. Pappas, “Assumed Density Filtering Q-learning,” in the 28th International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 2019.

Requirements

The ADFQ code for finite state and action spaces (directly under the ADFQ directory) works with both Python 2.7.x and Python 3. For deep ADFQ, Python 3 (>=3.5) and tensorflow-gpu are recommended.
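
For the deep ADFQ dependencies, a minimal setup might look like the following (the version pin is an assumption; the code is from the TensorFlow 1.x era, so a 1.x release is likely required):

pip install tensorflow-gpu==1.14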

Installation

Clone the repository and run the setup script:

git clone https://github.com/coco66/ADFQ.git
cd ADFQ && source setup

Use the Dockerfile or install the dependencies individually. This repository bundles a snapshot of part of the OpenAI Baselines code (rather than depending on the current OpenAI Baselines git repository, due to its instability). You may also need some of the packages mentioned in the installation guidelines at https://github.com/openai/baselines.
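
For the Docker route, a minimal sketch (the image tag adfq is illustrative, and the Dockerfile is assumed to sit at the repository root):

docker build -t adfq .
docker run -it adfq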

Examples of running the ADFQ algorithms

Classic environments:

python run_adfq.py --env loop

Running ADFQ in CartPole-v0:

python run_mlp.py

Set callback=None in line 78 of run_mlp.py if you do not want training to end after reaching the maximum time step of a task (e.g., 199 for CartPole). A sketch of such a callback is shown below.
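
The callback presumably follows the bundled OpenAI Baselines deepq convention, where callback(lcl, _glb) receives the training loop's local variables and returns True to stop training. A minimal sketch, assuming that interface and that lcl['episode_rewards'] holds per-episode returns (as in the Baselines CartPole example); the actual callback on line 78 of run_mlp.py may differ:

def callback(lcl, _glb):
    # Stop once the mean return over the last 100 episodes reaches the
    # task's maximum time step (199 for CartPole-v0).
    if len(lcl['episode_rewards']) < 101:
        return False
    mean_100ep_return = sum(lcl['episode_rewards'][-101:-1]) / 100
    return mean_100ep_return >= 199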

Running ADFQ in an Atari game, for example, AsterixNoFrameskip-v4:

python run_atari.py --env AsterixNoFrameskip-v4 --act_policy bayesian

Usage with Target Tracking Environment

This repository also contains example code for running the presented RL algorithms in the target tracking environments (https://github.com/coco66/ttenv). Please install the ttenv repository separately in order to use deep_adfq/run_tracking.py or deep_adfq/baselines0/deepq/run_tracking.py; example commands follow the reference below. The related work is presented in the following paper:

  • Learning Q-network for Active Information Acquisition (https://arxiv.org/abs/1910.10754): H. Jeong, B. Schlotfeldt, H. Hassani, M. Morari, D. D. Lee, and G. J. Pappas, “Learning Q-network for Active Information Acquisition,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macao, China, 2019.
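
A minimal sketch of the ttenv setup and a tracking run (the pip install -e step assumes ttenv ships a setup script; follow the ttenv repository's own instructions if they differ):

git clone https://github.com/coco66/ttenv.git
pip install -e ttenv
python deep_adfq/run_tracking.py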

Citing

If you use this repository in your research, please cite it as follows:

@misc{ADFQrepo,
    author = {Heejin Jeong and Clark Zhang and Daniel D. Lee and George J. Pappas},
    title = {ADFQ_open_source},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/coco66/ADFQ.git}},
}
