Soft Actor-Critic TF2

This work aims to implement, train, and test the Soft Actor-Critic (SAC) algorithm using Python and TensorFlow 2.1.0. SAC is an off-policy reinforcement learning algorithm: it can learn from experience generated by a policy different from the current one, so the agent may even behave randomly while collecting data. In the first part of the learning process the agent gathers information through random moves, and this randomness is reduced over time. In particular, SAC is a form of entropy-regularized reinforcement learning, in which the agent is rewarded for keeping its policy stochastic in addition to maximizing return.
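The core idea can be summarized in a few lines of TensorFlow 2 code. The sketch below is illustrative only and is not this repository's implementation: it uses a single Q-network, a plain diagonal Gaussian policy (no tanh squashing), a fixed entropy coefficient alpha, and omits the target-network smoothing used in full SAC, just to show where the entropy term enters the critic target and the actor loss.

```python
import numpy as np
import tensorflow as tf

obs_dim, act_dim, alpha, gamma = 3, 1, 0.2, 0.99  # toy sizes and coefficients

# Simple MLPs: a Gaussian policy (outputs [mean, log_std]) and a Q-function.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim,)),
    tf.keras.layers.Dense(2 * act_dim)])
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim + act_dim,)),
    tf.keras.layers.Dense(1)])
q_target = tf.keras.models.clone_model(q_net)        # frozen copy for the Bellman target
q_target.set_weights(q_net.get_weights())
pi_opt, q_opt = tf.keras.optimizers.Adam(3e-4), tf.keras.optimizers.Adam(3e-4)

def sample_action(obs):
    """Reparameterized sample from the diagonal Gaussian policy, with its log-prob."""
    mean, log_std = tf.split(policy(obs), 2, axis=-1)
    std = tf.exp(tf.clip_by_value(log_std, -20.0, 2.0))
    action = mean + std * tf.random.normal(tf.shape(mean))
    log_prob = tf.reduce_sum(
        -0.5 * ((action - mean) / std) ** 2 - tf.math.log(std)
        - 0.5 * np.log(2.0 * np.pi), axis=-1, keepdims=True)
    return action, log_prob

def update(obs, act, rew, next_obs, done):
    """One simplified SAC update; all tensors are batched with shape (B, ...)."""
    # Critic: the soft Bellman target includes the entropy bonus -alpha * log_pi.
    next_act, next_logp = sample_action(next_obs)
    target_q = q_target(tf.concat([next_obs, next_act], axis=-1))
    y = rew + gamma * (1.0 - done) * (target_q - alpha * next_logp)
    with tf.GradientTape() as tape:
        q = q_net(tf.concat([obs, act], axis=-1))
        q_loss = tf.reduce_mean((q - y) ** 2)
    q_opt.apply_gradients(zip(tape.gradient(q_loss, q_net.trainable_variables),
                              q_net.trainable_variables))
    # Actor: maximize Q plus entropy, i.e. minimize alpha * log_pi - Q.
    with tf.GradientTape() as tape:
        new_act, logp = sample_action(obs)
        pi_loss = tf.reduce_mean(alpha * logp - q_net(tf.concat([obs, new_act], axis=-1)))
    pi_opt.apply_gradients(zip(tape.gradient(pi_loss, policy.trainable_variables),
                               policy.trainable_variables))

# Dummy batch just to exercise the update step.
B = 32
update(tf.random.normal((B, obs_dim)), tf.random.normal((B, act_dim)),
       tf.random.normal((B, 1)), tf.random.normal((B, obs_dim)), tf.zeros((B, 1)))
```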

This implementation has been tested on the MountainCarContinuous-v0 and Walker2d-v2 Gym environments.
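As a generic reference for how these environments are driven (not this repository's own training or evaluation scripts), a minimal Gym loop is sketched below. It assumes the pre-0.26 Gym API (reset returning only the observation and step returning a 4-tuple), consistent with the 2019/2020 library versions, and samples random actions as a stand-in for the trained SAC policy; Walker2d-v2 additionally requires a MuJoCo installation.

```python
import gym

# Either of the environments used for testing can be created by name.
env = gym.make("MountainCarContinuous-v0")

obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()          # placeholder for the trained SAC policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```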

See Report.

"La Sapienza" University of Rome - MSc in Artificial Intelligence and Robotics, Reinforcement Learning 2019/2020

Demos: MountainCarContinuous-v0 and Walker2d-v2.
