We attempt to enhance multi-agent coordination in a tightly-coupled domain using auto-encoders.

being-aerys/Multiagent_Reinforcement_Learning_for_Coordination_in_a_Tightly-Coupled_Domain

AutoEncoder for Latent Representation Learning and Multi-Agent Coordination using DDPG

  • First, we use an autoencoder to compress the state representation to one-fourth of its original size.
  • Then, we train DDPG agents on the compressed state representation to learn jointly optimal policies for observing points of interest in a rover domain.
    • The learned policies perform as well as policies learned with the original state representation.
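The 4:1 compression step can be sketched with a minimal tied-weight linear autoencoder. The repo's actual network architecture, state size, and training setup are not shown in this README, so every dimension, hyperparameter, and the synthetic data below are illustrative assumptions, not the project's real values:

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim = 32                 # assumed original rover-state size (illustrative)
latent_dim = state_dim // 4    # compressed to one-fourth, as described above

# Synthetic "states" lying on a low-dimensional subspace, so a 4:1
# compression can reconstruct them well (a stand-in for rover observations).
basis = rng.normal(size=(latent_dim, state_dim)) / np.sqrt(latent_dim)
states = rng.normal(size=(512, latent_dim)) @ basis

# Tied-weight linear autoencoder: encode with W, decode with W.T.
W = rng.normal(0.0, 0.1, size=(state_dim, latent_dim))

lr = 0.01
losses = []
for epoch in range(300):
    z = states @ W             # encode: (N, latent_dim)
    recon = z @ W.T            # decode: (N, state_dim)
    err = recon - states
    losses.append(float(np.mean(err ** 2)))
    # Gradient of the squared reconstruction error w.r.t. the tied weights
    grad = (states.T @ err @ W + err.T @ states @ W) / len(states)
    W -= lr * grad

# These latent codes are what the DDPG agents would consume as observations.
compressed = states @ W
```

In the pipeline the bullets describe, the trained encoder replaces the raw state: each DDPG agent's actor and critic receive `compressed` (here 8-dimensional) inputs instead of the full 32-dimensional state, shrinking the networks' input layers by the same 4:1 factor.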

P.S. This work uses the rover-environment codebase from the Autonomous Agents and Distributed Intelligence (AADI) Lab at Oregon State University. Hence, this repo includes some code that is not used in this project.

Demo

Game Process
