
Releases: hill-a/stable-baselines

Bug fixes release

05 Aug 19:45
e54554d

Breaking Changes:

  • The render() method of VecEnvs now only accepts one argument: mode

New Features:

  • Added momentum parameter to A2C for the embedded RMSPropOptimizer (@kantneel)
  • ActionNoise is now an abstract base class and implements __call__, NormalActionNoise and OrnsteinUhlenbeckActionNoise have return types (@partiallytyped)
  • HER now passes the info dictionary to compute_reward, allowing for the computation of rewards that are independent of the goal (@tirafesi) (see the sketch below)
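
A hedged sketch of how the forwarded info dictionary might be used inside compute_reward of a custom goal-based environment (the environment, its collision flag and the reward shaping are hypothetical; only the compute_reward signature follows the gym.GoalEnv API):

import gym
import numpy as np


class ReachEnv(gym.GoalEnv):
    # Hypothetical goal-based environment: only compute_reward is sketched here.
    def compute_reward(self, achieved_goal, desired_goal, info):
        # HER now forwards the info dict, so the reward can also depend on
        # quantities that are independent of the goal (e.g. a collision flag).
        collision_penalty = -10.0 if info.get("collision", False) else 0.0
        distance = np.linalg.norm(np.array(achieved_goal) - np.array(desired_goal), axis=-1)
        return -distance + collision_penalty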

Bug Fixes:

  • Fixed DDPG sampling empty replay buffer when combined with HER (@tirafesi)
  • Fixed a bug in HindsightExperienceReplayWrapper, where the openai-gym signature for compute_reward was not matched correctly (@johannes-dornheim)
  • Fixed SAC/TD3 checking time to update on learn steps instead of total steps (@partiallytyped)
  • Added **kwargs pass-through for the reset method in atari_wrappers.FrameStack (@partiallytyped)
  • Fixed consistency in setup_model() for SAC: target_entropy now uses self.action_space instead of self.env.action_space (@partiallytyped)
  • Fix reward threshold in test_identity.py
  • Partially fix tensorboard indexing for PPO2 (@Enderdead)
  • Fixed potential bug in DummyVecEnv where copy() was used instead of deepcopy()
  • Fixed a bug in GAIL where the dataloader was not available after saving, causing an error when using CheckpointCallback
  • Fixed a bug in SAC where any convolutional layers were not included in the target network parameters.
  • Fixed render() method for VecEnvs
  • Fixed the seed() method for SubprocVecEnv
  • Fixed a bug where callback.locals did not have the correct values (@partiallytyped)
  • Fixed a bug in the close() method of SubprocVecEnv, causing wrappers further down in the wrapper stack to not be closed. (@NeoExtended)
  • Fixed a bug in the generate_expert_traj() method in record_expert.py when using a non-image vectorized environment (@jbarsce)
  • Fixed a bug in CloudPickleWrapper's (used by VecEnvs) __setstate__ where loading was incorrectly using pickle.loads (@shwang).
  • Fixed a bug in SAC and TD3 where the logged timesteps were not correct (@YangRui2015)
  • Fixed a bug where the environment was reset twice when using evaluate_policy

Others:

  • Added version.txt to manage version number in an easier way
  • Added .readthedocs.yml to install requirements with read the docs
  • Added a test for seeding SubprocVecEnv and rendering

Documentation:

  • Fix typos (@caburu)
  • Fix typos in PPO2 (@kvenkman)
  • Removed stable_baselines/deepq/experiments/custom_cartpole.py (@aakash94)
  • Added Google's motion imitation project
  • Added documentation page for monitor
  • Fixed typos and update VecNormalize example to show normalization at test-time
  • Fixed train_mountaincar description
  • Added imitation baselines project
  • Updated install instructions
  • Added Slime Volleyball project (@hardmaru)
  • Added a table of the variables accessible from the on_step function of the callbacks for each algorithm (@partiallytyped)
  • Fix typo in README.md (@ColinLeongUDRI)

Callback collection, cleanup and bug fixes

12 Mar 22:26
cfcdb2f

Breaking Changes

  • evaluate_policy now returns the standard deviation of the reward per episode
    as its second return value (instead of n_steps); see the sketch after this list

  • evaluate_policy now returns a list of the episode lengths as its second return value
    when return_episode_rewards is set to True (instead of n_steps)

  • Callbacks are now called after each env.step() for consistency (they were previously
    called every n_steps in algorithms like A2C or PPO2)

  • Removed unused code in common/a2c/utils.py (calc_entropy_softmax, make_path)

  • Refactoring, including removing files and moving functions.

    • Algorithms no longer import from each other, and common does not import from algorithms.

    • a2c/utils.py removed and split into other files:

      • common/tf_util.py: sample, calc_entropy, mse, avg_norm, total_episode_reward_logger,
        q_explained_variance, gradient_add, check_shape,
        seq_to_batch, batch_to_seq.
      • common/tf_layers.py: conv, linear, lstm, _ln, lnlstm, conv_to_fc, ortho_init.
      • a2c/a2c.py: discount_with_dones.
      • acer/acer_simple.py: get_by_index, EpisodeStats.
      • common/schedules.py: constant, linear_schedule, middle_drop, double_linear_con, double_middle_drop,
        SCHEDULES, Scheduler.
    • trpo_mpi/utils.py functions moved (traj_segment_generator moved to common/runners.py, flatten_lists to common/misc_util.py).

    • ppo2/ppo2.py functions moved (safe_mean to common/math_util.py, constfn and get_schedule_fn to common/schedules.py).

    • sac/policies.py function mlp moved to common/tf_layers.py.

    • sac/sac.py function get_vars removed (replaced with tf.util.get_trainable_vars).

    • deepq/replay_buffer.py renamed to common/buffers.py.
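
A minimal sketch of the new evaluate_policy return values described in the breaking changes above (the model, environment and episode counts are placeholders):

from stable_baselines import PPO2
from stable_baselines.common.evaluation import evaluate_policy

model = PPO2('MlpPolicy', 'CartPole-v1')

# The second return value is now the std of the per-episode reward...
mean_reward, std_reward = evaluate_policy(model, model.get_env(), n_eval_episodes=10)

# ...or the list of episode lengths when return_episode_rewards=True
episode_rewards, episode_lengths = evaluate_policy(
    model, model.get_env(), n_eval_episodes=10, return_episode_rewards=True)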

New Features:

  • Parallelized updating and sampling from the replay buffer in DQN. (@flodorner)
  • Docker build script, scripts/build_docker.sh, can push images automatically.
  • Added callback collection
  • Added unwrap_vec_normalize and sync_envs_normalization in the vec_env module
    to synchronize two VecNormalize environments; see the sketch after this list
  • Added a seeding method for vectorized environments. (@NeoExtended)
  • Added extend method to store batches of experience in ReplayBuffer. (@solliet)
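
A hedged sketch combining the new callback collection with the normalization-sync helper (the exact keyword arguments, e.g. save_freq and training, are assumptions based on the entries above):

from stable_baselines import PPO2
from stable_baselines.common import make_vec_env
from stable_baselines.common.callbacks import CheckpointCallback
from stable_baselines.common.vec_env import VecNormalize, sync_envs_normalization

# Training and evaluation envs, both wrapped with VecNormalize
train_env = VecNormalize(make_vec_env('CartPole-v1', n_envs=4))
eval_env = VecNormalize(make_vec_env('CartPole-v1', n_envs=1), training=False)

# Callback from the new collection: save the model every 1000 steps
checkpoint_callback = CheckpointCallback(save_freq=1000, save_path='./logs/')
model = PPO2('MlpPolicy', train_env)
model.learn(total_timesteps=5000, callback=checkpoint_callback)

# Copy the running statistics from the training env to the evaluation env
sync_envs_normalization(train_env, eval_env)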

Bug Fixes:

  • Fixed Docker images via scripts/build_docker.sh and Dockerfile: GPU image now contains tensorflow-gpu,
    and both images have stable_baselines installed in developer mode at correct directory for mounting.
  • Fixed Docker GPU run script, scripts/run_docker_gpu.sh, to work with new NVidia Container Toolkit.
  • Repeated calls to RLModel.learn() now preserve internal counters for some episode
    logging statistics that used to be zeroed at the start of every call.
  • Fix DummyVecEnv.render for num_envs > 1. This used to print a warning and then not render at all. (@shwang)
  • Fixed a bug in PPO2, ACER, A2C, and ACKTR where repeated calls to learn(total_timesteps) reset
    the environment on every call, potentially biasing samples toward early episode timesteps.
    (@shwang)
  • Fixed by adding a lazy property, ActorCriticRLModel.runner. Subclasses now use the
    lazily-generated self.runner instead of reinitializing a new Runner every time learn() is called.
  • Fixed a bug in check_env where it would fail on high dimensional action spaces
  • Fixed Monitor.close() that was not calling the parent method
  • Fixed a bug in BaseRLModel when seeding vectorized environments. (@NeoExtended)
  • Fixed num_timesteps computation to be consistent between algorithms (updated after env.step()).
    Only TRPO and PPO1 update it differently (after synchronization) because they rely on MPI
  • Fixed bug in TRPO with NaN standardized advantages (@richardwu)
  • Fixed partial minibatch computation in ExpertDataset (@richardwu)
  • Fixed normalization (with VecNormalize) for off-policy algorithms
  • Fixed sync_envs_normalization to sync the reward normalization too
  • Bump minimum Gym version (>=0.11)

Others:

  • Removed redundant return value from a2c.utils::total_episode_reward_logger. (@shwang)
  • Cleanup and refactoring in common/identity_env.py (@shwang)
  • Added a Makefile to simplify common development tasks (build the doc, type check, run the tests)

Documentation:

  • Add dedicated page for callbacks
  • Fixed example for creating a GIF (@KuKuXia)
  • Change Colab links in the README to point to the notebooks repo
  • Fix typo in Reinforcement Learning Tips and Tricks page. (@mmcenta)

Reproducible results, automatic `VecEnv` wrapping, env checker and more usability improvements

19 Dec 23:18
98e9ee9

Breaking Changes:

  • The seed argument has been moved from learn() method to model constructor
    in order to have reproducible results
  • allow_early_resets of the Monitor wrapper now defaults to True
  • make_atari_env now returns a DummyVecEnv by default (instead of a SubprocVecEnv);
    this usually improves performance.
  • Fix inconsistency of sample type, so that the mode/sample function returns a tensor of tf.int64 in CategoricalProbabilityDistribution/MultiCategoricalProbabilityDistribution (@seheevic)

New Features:

  • Add n_cpu_tf_sess to model constructor to choose the number of threads used by Tensorflow

  • Environments are automatically wrapped in a DummyVecEnv if needed when passing them to the model constructor

  • Added stable_baselines.common.make_vec_env helper to simplify VecEnv creation (see the sketch after this list)

  • Added stable_baselines.common.evaluation.evaluate_policy helper to simplify model evaluation

  • VecNormalize changes:

    • Now supports being pickled and unpickled (@AdamGleave).
    • New methods .normalize_obs(obs) and .normalize_reward(rews) apply normalization
      to arbitrary observations or rewards without updating statistics (@shwang)
    • .get_original_reward() returns the unnormalized rewards from the most recent timestep
    • .reset() now collects observation statistics (it used to only apply normalization)
  • Add parameter exploration_initial_eps to DQN. (@jdossgollin)

  • Add type checking and PEP 561 compliance.
    Note: most functions are still not annotated, this will be a gradual process.

  • DDPG, TD3 and SAC accept non-symmetric action spaces. (@Antymon)

  • Add check_env util to check if a custom environment follows the gym interface (@araffin and @justinkterry)
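
A hedged sketch combining several of the additions above; the environment id and hyperparameters are placeholders, and the exact keyword arguments are assumptions:

import gym
import numpy as np

from stable_baselines import A2C
from stable_baselines.common import make_vec_env
from stable_baselines.common.env_checker import check_env
from stable_baselines.common.vec_env import VecNormalize

# Check that a (custom) environment follows the gym interface
check_env(gym.make('CartPole-v1'))

# Create a VecEnv in one call, then normalize it
env = VecNormalize(make_vec_env('CartPole-v1', n_envs=4, seed=0))

# seed and n_cpu_tf_sess are now passed to the model constructor
model = A2C('MlpPolicy', env, seed=0, n_cpu_tf_sess=1)
model.learn(total_timesteps=5000)

# Normalize an arbitrary observation without updating the running statistics
normalized_obs = env.normalize_obs(np.zeros(4, dtype=np.float32))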

Bug Fixes:

  • Fix seeding, so it is now possible to have deterministic results on cpu
  • Fix a bug in DDPG where predict method with deterministic=False would fail
  • Fix a bug in TRPO: mean_losses was not initialized, causing the logger to crash when there were no gradients (@MarvineGothic)
  • Fix a bug in cmd_util from API change in recent Gym versions
  • Fix a bug in DDPG, TD3 and SAC where warmup and random exploration actions would end up scaled in the replay buffer (@Antymon)

Deprecations:

  • nprocs (ACKTR) and num_procs (ACER) are deprecated in favor of n_cpu_tf_sess which is now common
    to all algorithms
  • VecNormalize: load_running_average and save_running_average are deprecated in favour of using pickle.

Others:

  • Add upper bound for Tensorflow version (<2.0.0).
  • Refactored test to remove duplicated code
  • Add pull request template
  • Replaced redundant code in load_results (@jbulow)
  • Minor PEP8 fixes in dqn.py (@justinkterry)
  • Add a message to the assert in PPO2
  • Update replay buffer docstring
  • Fix VecEnv docstrings

Documentation:

  • Add plotting to the Monitor example (@rusu24edward)
  • Add Snake Game AI project (@pedrohbtp)
  • Add note on the supported Tensorflow versions.
  • Remove unnecessary steps required for Windows installation.
  • Remove DummyVecEnv creation when not needed
  • Added make_vec_env to the examples to simplify VecEnv creation
  • Add QuaRL project (@srivatsankrishnan)
  • Add Pwnagotchi project (@evilsocket)
  • Fix multiprocessing example (@rusu24edward)
  • Fix result_plotter example
  • Add JNRR19 tutorial (by @edbeeching, @hill-a and @araffin)
  • Updated notebooks link
  • Fix typo in algos.rst, "containes" to "contains" (@SyllogismRXS)
  • Fix outdated source documentation for load_results
  • Add PPO_CPP project (@Antymon)
  • Add section on C++ portability of Tensorflow models (@Antymon)
  • Update custom env documentation to reflect new gym API for the close() method (@justinkterry)
  • Update custom env documentation to clarify what step and reset return (@justinkterry)
  • Add RL tips and tricks for doing RL experiments
  • Corrected lots of typos
  • Add spell check to documentation if available

MPI dependency optional, new save format, ACKTR with continuous actions

29 Sep 16:54
3105f30

Breaking Changes:

  • OpenMPI-dependent algorithms (PPO1, TRPO, GAIL, DDPG) are disabled
    in the default installation of stable_baselines. mpi4py is now
    installed as an extra. When mpi4py is not available,
    stable-baselines skips imports of OpenMPI-dependent algorithms. See
    the installation notes and Issue #430.
  • SubprocVecEnv now defaults to a thread-safe start method, forkserver
    when available and otherwise spawn. This may require application
    code to be wrapped in if __name__ == '__main__'. You can
    restore the previous behavior by explicitly setting start_method =
    'fork'. See PR #428 and the sketch after this list.
  • Updated dependencies: tensorflow v1.8.0 is now required
  • Removed checkpoint_path and checkpoint_freq argument from DQN that
    were not used
  • Removed bench/benchmark.py that was not used
  • Removed several functions from common/tf_util.py that were not used
  • Removed ppo1/run_humanoid.py
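
As noted above, the OpenMPI-dependent algorithms now require the mpi4py extra (for example pip install stable-baselines[mpi], following the installation notes). Below is a hedged sketch of the new SubprocVecEnv behavior; the __main__ guard is required by the forkserver/spawn start methods, and start_method='fork' restores the previous behavior:

import gym

from stable_baselines.common.vec_env import SubprocVecEnv


def make_env():
    return gym.make('CartPole-v1')


if __name__ == '__main__':
    # The default start method is now forkserver (or spawn when forkserver
    # is unavailable), which requires the __main__ guard above.
    env = SubprocVecEnv([make_env for _ in range(4)])
    # To explicitly restore the previous behavior:
    # env = SubprocVecEnv([make_env for _ in range(4)], start_method='fork')
    env.close()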

New Features:

  • Important change: switched to using zip-archived JSON and NumPy
    savez for storing models, for better support across library/Python
    versions. (@Miffyli)
  • ACKTR now supports continuous actions
  • Add double_q argument to DQN constructor

Bug Fixes:

  • Skip automatic imports of OpenMPI-dependent algorithms to avoid an
    issue where OpenMPI would cause stable-baselines to hang on Ubuntu
    installs. See the installation notes and Issue #430.
  • Fix a bug when calling logger.configure() with MPI enabled
    (@keshaviyengar)
  • Set allow_pickle=True for numpy>=1.17.0 when loading the expert
    dataset
  • Fix a bug when using VecCheckNan with numpy ndarray as state. Issue #489. (@ruifeng96150)

Deprecations:

  • Models saved with cloudpickle format (stable-baselines<=2.7.0) are
    now deprecated in favor of zip-archive format for better support
    across Python/Tensorflow versions. (@Miffyli)

Others:

  • Implementations of noise classes (AdaptiveParamNoiseSpec,
    NormalActionNoise, OrnsteinUhlenbeckActionNoise) were moved from
    stable_baselines.ddpg.noise to stable_baselines.common.noise. The
    API remains backward-compatible; for example from
    stable_baselines.ddpg.noise import NormalActionNoise is still okay.
    (@shwang)
  • Docker images were updated
  • Cleaned up files in the common/ and acktr/ folders that were
    only used by the old ACKTR version (e.g. filter.py)
  • Renamed acktr_disc.py to acktr.py

Documentation:

  • Add WaveRL project (@jaberkow)
  • Add Fenics-DRL project (@DonsetPG)
  • Fix and rename custom policy names (@eavelardev)
  • Add documentation on exporting models.
  • Update maintainers list (Welcome to @Miffyli)

Twin Delayed DDPG (TD3) and GAE bug fix (TRPO, PPO1, GAIL)

31 Jul 11:25
8ceda3b

New Features

  • added Twin Delayed DDPG (TD3) algorithm, with HER support
  • added support for continuous action spaces to action_probability, computing the
    PDF of a Gaussian policy in addition to the existing support for categorical stochastic policies.
  • added a flag to action_probability to return log-probabilities (see the sketch after this list).
  • added support for python lists and numpy arrays in logger.writekvs. (@dwiel)
  • the info dict returned by VecEnvs now includes a terminal_observation key providing access to the last observation in a trajectory. (@qxcv)
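
A hedged sketch of the continuous-action support in action_probability and of the new terminal_observation key (the logp keyword name is taken from the entry above; the environment and action values are placeholders):

import gym
import numpy as np

from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make('Pendulum-v0')])
model = PPO2('MlpPolicy', env)

obs = env.reset()
action = np.array([[0.0]])

# PDF of the Gaussian policy for a given continuous action...
prob = model.action_probability(obs, actions=action)
# ...or its log-probability with the new flag
log_prob = model.action_probability(obs, actions=action, logp=True)

# VecEnv info dicts now expose the last observation of a trajectory
obs, rewards, dones, infos = env.step(action)
if dones[0]:
    terminal_obs = infos[0]['terminal_observation']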

Bug Fixes

  • fixed a bug in traj_segment_generator where episode_starts was wrongly recorded, resulting in an incorrect computation of Generalized Advantage Estimation (GAE); this affects TRPO, PPO1 and GAIL (thanks to @miguelrass for spotting the bug)
  • added missing property n_batch in BasePolicy.

Others

  • renamed some keys in traj_segment_generator to be more meaningful
  • retrieve the unnormalized reward when using the Monitor wrapper with TRPO, PPO1 and GAIL to display it in the logs (mean episode reward)
  • clean up DDPG code (renamed variables)

Documentation

  • doc fix for the hyperparameter tuning command in the rl zoo
  • added an example on how to log additional variables with tensorboard and a callback

Hindsight Experience Replay (HER) - Reloaded | get/load parameters

13 Jun 10:50

Breaking Changes:

  • breaking change removed stable_baselines.ddpg.memory in favor of stable_baselines.deepq.replay_buffer (see fix below)

Breaking Change: the DDPG replay buffer was unified with the DQN/SAC replay buffer. As a result, loading a DDPG model trained with stable_baselines<2.6.0 raises an import error. You can fix that using:

import sys
import pkg_resources

import stable_baselines

# Fix for breaking change for DDPG buffer in v2.6.0
if pkg_resources.get_distribution("stable_baselines").version >= "2.6.0":
    sys.modules['stable_baselines.ddpg.memory'] = stable_baselines.deepq.replay_buffer
    stable_baselines.deepq.replay_buffer.Memory = stable_baselines.deepq.replay_buffer.ReplayBuffer

We recommend saving the model again afterward, so the fix won't be needed the next time the trained agent is loaded.

New Features:

  • revamped HER implementation: clean re-implementation from scratch, now supports DQN, SAC and DDPG
  • add action_noise param for SAC; it helps exploration for problems with deceptive rewards
  • The parameter filter_size of the function conv in A2C utils now supports passing a list/tuple of two integers (height and width), in order to have a non-square kernel matrix. (@yutingsz)
  • add random_exploration parameter for DDPG and SAC; it may be useful when using HER + DDPG/SAC. This hack was present in the original OpenAI Baselines DDPG + HER implementation.
  • added load_parameters and get_parameters to the base RL class. With these methods, users are able to load and get parameters to/from an existing model without touching tensorflow. (@Miffyli) See the sketch after this list.
  • added specific hyperparameter for PPO2 to clip the value function (cliprange_vf)
  • added VecCheckNan wrapper
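
A hedged sketch of the new get_parameters / load_parameters methods (the parameter scaling is purely illustrative):

import gym

from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv

model = PPO2('MlpPolicy', DummyVecEnv([lambda: gym.make('CartPole-v1')]))

# Dictionary mapping tensorflow variable names to their current values
params = model.get_parameters()

# Modify the parameters without touching tensorflow, then load them back
scaled_params = {name: 0.9 * value for name, value in params.items()}
model.load_parameters(scaled_params)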

Bug Fixes:

  • bugfix for VecEnvWrapper.__getattr__ which enables access to class attributes inherited from parent classes.
  • fixed path splitting in TensorboardWriter._get_latest_run_id() on Windows machines (@PatrickWalter214)
  • fixed a bug where initial learning rate is logged instead of its placeholder in A2C.setup_model (@sc420)
  • fixed a bug where number of timesteps is incorrectly updated and logged in A2C.learn and A2C._train_step (@sc420)
  • fixed num_timesteps (total_timesteps) variable in PPO2 that was wrongly computed.
  • fixed a bug in DDPG/DQN/SAC when the number of samples in the replay buffer was smaller than the batch size (thanks to @dwiel for spotting the bug)
  • removed a2c.utils.find_trainable_params; please use common.tf_util.get_trainable_vars instead. find_trainable_params was returning all trainable variables, discarding the scope argument. This bug caused the model to save duplicated parameters (for DDPG and SAC) but did not affect performance.

Deprecations:

  • deprecated memory_limit and memory_policy in DDPG, please use buffer_size instead. (will be removed in v3.x.x)

Others:

  • important change: switched to using dictionaries rather than lists when storing parameters, with tensorflow Variable names being the keys. (@Miffyli)
  • removed unused dependencies (tqdm, dill, progressbar2, seaborn, glob2, click)
  • removed get_available_gpus function which hadn't been used anywhere (@Pastafarianist)

Documentation:

  • added guide for managing NaN and inf
  • updated vec_env doc
  • misc doc updates

Bug Fixes and Improvements (VecEnv)

04 May 09:05
bddd1ab

Warning: breaking change when using custom policies

  • doc update (fix example of result plotter + improve doc)
  • fixed logger issues when stdout lacks a read function
  • fixed a bug in common.dataset.Dataset where shuffling was not disabled properly (it affects only PPO1 with recurrent policies)
  • fixed output layer name for DDPG q function, used in pop-art normalization and l2 regularization of the critic
  • added support for multi env recording to generate_expert_traj (@XMaster96)
  • added support for LSTM model recording to generate_expert_traj (@XMaster96)
  • GAIL: remove mandatory matplotlib dependency and refactor as subclass of TRPO (@kantneel and @AdamGleave)
  • added get_attr(), env_method() and set_attr() methods for all VecEnvs (see the sketch after this list).
    Those methods now all accept an indices keyword to select a subset of envs;
    set_attr now returns None rather than a list of None. (@kantneel)
  • GAIL: gail.dataset.ExpertDataset supports loading from memory rather than file, and
    gail.dataset.record_expert supports returning in-memory rather than saving to file.
  • added support in VecEnvWrapper for accessing attributes of arbitrarily deeply nested
    instances of VecEnvWrapper and VecEnv. This is allowed as long as the attribute belongs
    to exactly one of the nested instances i.e. it must be unambiguous. (@kantneel)
  • fixed bug where result plotter would crash on very short runs (@Pastafarianist)
  • added option to not trim output of result plotter by number of timesteps (@Pastafarianist)
  • clarified the public interface of BasePolicy and ActorCriticPolicy. Breaking change when using custom policies: masks_ph is now called dones_ph.
  • support for custom stateful policies.
  • fixed episode length recording in trpo_mpi.utils.traj_segment_generator (@GerardMaggiolino)
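
A hedged sketch of the new VecEnv attribute/method access (my_flag is a hypothetical attribute; the indices keyword follows the entry above):

import gym

from stable_baselines.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make('CartPole-v1') for _ in range(4)])

# Read an attribute from a subset of the wrapped envs
specs = env.get_attr('spec', indices=[0, 1])

# Set an attribute on every env; set_attr now returns None
env.set_attr('my_flag', True)

# Call a method on selected envs (here, re-seed the first two)
env.env_method('seed', 0, indices=[0, 1])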

Working GAIL, pretrain RL models and hotfix for A2C with continuous actions

28 Mar 13:27
  • fixed various bugs in GAIL
  • added scripts to generate dataset for gail
  • added tests for GAIL + data for Pendulum-v0
  • removed unused utils file in DQN folder
  • fixed a bug in A2C where actions were cast to int32 even in the continuous case
  • added additional logging to A2C when Monitor wrapper is used
  • changed logging for PPO2: do not display NaN when reward info is not present
  • changed the default value of the A2C lr schedule
  • removed behavior cloning script
  • added pretrain method to the base class, in order to use behavior cloning on all models (see the sketch after this list)
  • fixed close() method for DummyVecEnv.
  • added support for Dict spaces in DummyVecEnv and SubprocVecEnv. (@AdamGleave)
  • added support for arbitrary multiprocessing start methods and added a warning about SubprocVecEnv that are not thread-safe by default. (@AdamGleave)
  • added support for Discrete actions for GAIL
  • fixed deprecation warning for tf: replaced tf.to_float() with tf.cast()
  • fixed a bug in saving and loading the DDPG model when using normalization of obs or returns (@tperol)
  • changed DDPG default buffer size from 100 to 50000.
  • fixed a bug in ddpg.py in combined_stats for eval: the mean is now computed on eval_episode_rewards and eval_qs (@keshaviyengar)
  • fixed a bug in setup.py that would error on non-GPU systems without TensorFlow installed
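
A hedged sketch of the new pretrain() method together with the dataset helpers added for GAIL (module paths and keyword names follow the later pre-training documentation and are assumptions here):

import gym

from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.gail import ExpertDataset, generate_expert_traj

env = DummyVecEnv([lambda: gym.make('CartPole-v1')])

# Record a (here untrained) model to build an expert dataset on disk
expert_model = PPO2('MlpPolicy', env)
generate_expert_traj(expert_model, 'expert_cartpole', n_episodes=10)

# Behavior cloning on any model via the new pretrain() method
dataset = ExpertDataset(expert_path='expert_cartpole.npz', batch_size=32)
model = PPO2('MlpPolicy', env)
model.pretrain(dataset, n_epochs=100)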

Welcome to @AdamGleave who joins the maintainer team.

Bug fixes and improvements

11 Feb 19:13
  • fixed computation of training metrics in TRPO and PPO1
  • added reset_num_timesteps keyword when calling train() to continue
    tensorboard learning curves
  • reduced the size taken by tensorboard logs (added a
    full_tensorboard_log to enable full logging, which was the
    previous behavior)
  • fixed image detection for tensorboard logging
  • fixed ACKTR for recurrent policies
  • fixed gym breaking changes
  • fixed custom policy examples in the doc for DQN and DDPG
  • remove gym spaces patch for equality functions
  • fixed the tensorflow dependency: the CPU version was installed, overwriting
    tensorflow-gpu when present.
  • fixed a bug in traj_segment_generator (used in ppo1 and trpo)
    where new was not updated. (spotted by @junhyeokahn)

Soft Actor-Critic (SAC) and policy kwargs

17 Jan 17:16
  • added Soft Actor-Critic (SAC) model
  • fixed a bug in DQN where prioritized_replay_beta_iters param was not used
  • fixed DDPG that did not save target network parameters
  • fixed bug related to shape of true_reward (@abhiskk)
  • fixed example code in documentation of tf_util:Function (@JohannesAck)
  • added learning rate schedule for SAC
  • fixed action probability for continuous actions with actor-critic models
  • added an optional parameter to action_probability for likelihood calculation of a given action being taken.
  • added more flexible custom LSTM policies
  • added auto entropy coefficient optimization for SAC
  • clip continuous actions at test time too for all algorithms (except SAC/DDPG where it is not needed)
  • added a means to pass kwargs to the policy when creating a model (+ those kwargs are saved); see the sketch below
  • fixed DQN examples in DQN folder
  • added the possibility to pass an activation function for DDPG, DQN and SAC
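
A hedged sketch of passing policy kwargs at model creation (the layers and act_fun keyword names are the feed-forward policy arguments and are assumptions here):

import gym
import tensorflow as tf

from stable_baselines import SAC

# Custom network architecture and activation function via policy_kwargs;
# these kwargs are also saved with the model.
model = SAC('MlpPolicy', gym.make('Pendulum-v0'),
            policy_kwargs=dict(layers=[64, 64], act_fun=tf.nn.elu))
model.learn(total_timesteps=1000)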

We would like to thank our contributors (in random order): @abhiskk @JohannesAck
@EliasHasle @mrakgr @Bleyddyn
and welcoming a new maintainer: @erniejunior