Migrating from Stable-Baselines¶
This is a guide to migrate from Stable-Baselines (SB2) to Stable-Baselines3 (SB3).
It also references the main changes.
Overall Stable-Baselines3 (SB3) keeps the high-level API of Stable-Baselines (SB2). Most of the changes are internal ones, made to ensure more consistency. Because of the backend change from TensorFlow to PyTorch, the internal code is much more readable and easier to debug, at the cost of some speed (dynamic graph vs static graph, see Issue #90). However, the algorithms were extensively benchmarked on Atari games and continuous control PyBullet envs (see Issue #48 and Issue #49), so you should not expect a performance drop when switching from SB2 to SB3.
How to migrate?¶
In most cases, replacing `from stable_baselines` with `from stable_baselines3` will be sufficient.
Some files were moved to the common folder (cf below), which could result in import errors.
Some algorithms were removed because of their complexity, to improve the maintainability of the project.
We recommend reading this guide carefully to understand all the changes that were made.
You can also take a look at the rl-zoo3 and compare the imports
to the rl-zoo of SB2 to have a concrete example of successful migration.
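For instance, a typical SB2 training script usually only needs its import updated (plus the class name for algorithms that were renamed, such as PPO2 becoming PPO). A minimal sketch using the classic CartPole example:

```python
# SB2:
# from stable_baselines import PPO2
# model = PPO2("MlpPolicy", "CartPole-v1").learn(10_000)

# SB3: PPO2 is now simply PPO, the rest of the call is unchanged
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1").learn(10_000)
```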
SB3 requires python 3.6+ (instead of python 3.5+ for SB2)
Dropped MPI support
Dropped layer normalized policies (`MlpLnLstmPolicy`, `CnnLnLstmPolicy`)
LSTM policies (`MlpLstmPolicy`, `CnnLstmPolicy`) are not supported for the time being
Dropped parameter noise for DDPG and DQN
PPO is now closer to the original implementation (no clipping of the value function by default), cf PPO section below
Orthogonal initialization is only used by A2C/PPO
The features extractor (CNN extractor) is shared between the policy and the Q-networks for DDPG/SAC/TD3, and only the policy loss is used to update it (much faster)
Tensorboard legacy logging was dropped in favor of having one logger for the terminal and Tensorboard (cf Tensorboard integration)
We dropped ACKTR/ACER support because of their complexity compared to simpler alternatives (PPO, SAC, TD3) that perform as well.
We dropped GAIL support as we are focusing on model-free RL only; you can however take a look at the imitation project which implements GAIL and other imitation learning algorithms on top of SB3.
`action_probability` is currently not implemented in the base class
`pretrain()` method for behavior cloning was removed (see issue #27)
You can take a look at the issue about SB3 implementation design for more details.
Utility functions are no longer exported from the `common` module, you should import them with their absolute path, e.g.:

from stable_baselines3.common.env_util import make_atari_env, make_vec_env
from stable_baselines3.common.utils import set_random_seed

instead of:

from stable_baselines3.common import make_atari_env
Changes and renaming in parameters¶
Base-class (all algorithms)¶
`get_parameters`/`set_parameters` return a dictionary mapping object names to their respective PyTorch tensors and other objects representing their parameters, instead of a simpler mapping of parameter name to a NumPy array. These functions also return PyTorch tensors rather than NumPy arrays.
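As an illustration, a minimal sketch of the new API (the exact keys, e.g. "policy" and "policy.optimizer", depend on the algorithm):

```python
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1")

# Dictionary mapping object names to their state dicts (PyTorch tensors)
params = model.get_parameters()
print(params.keys())

# Load the (possibly modified) parameters back into the model
model.set_parameters(params)
```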
`features_extractor` is now used with `MlpPolicy` too (it replaces the SB2 `cnn_extractor` policy argument)
For A2C, `lr_schedule` is now part of `learning_rate` (it can be a callable).
`alpha` and `momentum` are modifiable through the `policy_kwargs` key `optimizer_kwargs`.
The PyTorch implementation of RMSprop differs from TensorFlow's, which leads to different and potentially more unstable results. You can use the `stable_baselines3.common.sb2_compat.rmsprop_tf_like.RMSpropTFLike` optimizer to match the results with TensorFlow's implementation. This can be done through `policy_kwargs=dict(optimizer_class=RMSpropTFLike)`.
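A minimal sketch of this setting for A2C (the environment and the epsilon value are illustrative):

```python
from stable_baselines3 import A2C
from stable_baselines3.common.sb2_compat.rmsprop_tf_like import RMSpropTFLike

# Use the TF-like RMSprop to stay closer to SB2/TensorFlow training behavior
model = A2C(
    "MlpPolicy",
    "CartPole-v1",
    policy_kwargs=dict(optimizer_class=RMSpropTFLike, optimizer_kwargs=dict(eps=1e-5)),
)
model.learn(total_timesteps=10_000)
```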
For PPO, `nminibatches` gave a different batch size depending on the number of environments: `batch_size = (n_steps * n_envs) // nminibatches`. In SB3, you now specify the desired `batch_size` directly.
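For example, an SB2 setup with `n_steps=128`, 8 environments, and `nminibatches=4` corresponds to `batch_size = (128 * 8) // 4 = 256` in SB3.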
`clip_range_vf` behavior for PPO is slightly different: set it to `None` (default) to deactivate clipping (in SB2, you had to pass `-1`; `None` meant to use `clip_range` for the clipping)
PPO default hyperparameters are the ones tuned for continuous control environments. We recommend taking a look at the RL Zoo for hyperparameters tuned for Atari games.
Only the vanilla DQN is implemented right now, but extensions will follow. Default hyperparameters are taken from the Nature paper, except for the optimizer and learning rate, which were taken from Stable Baselines defaults.
DDPG now follows the same interface as SAC/TD3.
For state/reward normalization, you should use `VecNormalize` as for all other algorithms.
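A minimal sketch (the Pendulum environment id is illustrative and depends on your Gym version):

```python
from stable_baselines3 import DDPG
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import VecNormalize

# Wrap the vectorized env to normalize observations and rewards online
env = VecNormalize(make_vec_env("Pendulum-v1", n_envs=1), norm_obs=True, norm_reward=True)
model = DDPG("MlpPolicy", env)
model.learn(total_timesteps=10_000)
```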
SAC/TD3 now accept any number of critics, e.g. `policy_kwargs=dict(n_critics=3)`, instead of only two before.
SAC/TD3 default hyperparameters (including network architecture) now match the ones from the original papers. DDPG is using TD3 defaults.
SAC implementation matches the latest version of the original implementation: it uses two Q function networks and two target Q function networks instead of two Q function networks and one Value function network (SB2 implementation, first version of the original implementation). Despite this change, no change in performance should be expected.
The SAC `predict()` method now has `deterministic=False` by default for consistency. To match SB2 behavior, you need to explicitly pass `deterministic=True`.
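For instance (assuming a trained SAC model and an observation `obs`):

```python
# deterministic=True matches the SB2 SAC behavior (no action sampling at test time)
action, _states = model.predict(obs, deterministic=True)
```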
HER implementation now also supports online sampling of the new goals. This is done in a vectorized version.
The goal selection strategy `RANDOM` is no longer supported.
HER now supports the `VecNormalize` wrapper, but only when `online_sampling=True`.
For performance reasons, the maximum number of steps per episode must be specified (see HER documentation).
New logger API¶
Methods were renamed in the logger:
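For example, code that logs custom values roughly maps as follows (a sketch; how you obtain and configure the logger depends on your SB3 version, cf. the Logger documentation):

```python
# SB2:
# from stable_baselines import logger
# logger.logkv("custom/value", 1.0)
# logger.dumpkvs()

# SB3: logkv() is now record() and dumpkvs() is now dump()
from stable_baselines3.common import logger

logger.record("custom/value", 1.0)
logger.dump(step=0)
```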
New Features (SB3 vs SB2)¶
Much cleaner and consistent base code (and no more warnings =D!) and static type checks
Independent saving/loading/predict for policies
A2C now supports Generalized Advantage Estimation (GAE) and advantage normalization (both are deactivated by default)
Generalized State-Dependent Exploration (gSDE) is available for A2C/PPO/SAC. It allows using RL directly on real robots (cf https://arxiv.org/abs/2005.05719)
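Enabling it only requires one constructor argument, e.g. (a sketch, with an illustrative continuous-control environment):

```python
from stable_baselines3 import SAC

# use_sde=True switches from unstructured Gaussian noise to gSDE exploration
model = SAC("MlpPolicy", "Pendulum-v1", use_sde=True)
model.learn(total_timesteps=10_000)
```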
Proper evaluation (using a separate env) is included in the base class (using `EvalCallback`); if you pass the environment as a string, you can pass `create_eval_env=True` to the algorithm constructor.
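You can also set up the evaluation explicitly; a minimal sketch with `EvalCallback` (the frequency and number of episodes are illustrative):

```python
import gym

from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

# Separate environment used only for evaluation
eval_env = gym.make("CartPole-v1")
eval_callback = EvalCallback(eval_env, n_eval_episodes=5, eval_freq=5000)

model = PPO("MlpPolicy", "CartPole-v1")
model.learn(total_timesteps=20_000, callback=eval_callback)
```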
Better saving/loading: optimizers are now included in the saved parameters, and there are two new methods, `save_replay_buffer` and `load_replay_buffer`, for the replay buffer when using off-policy algorithms (DQN/DDPG/SAC/TD3)
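A minimal sketch (the environment id and file names are illustrative):

```python
from stable_baselines3 import SAC

model = SAC("MlpPolicy", "Pendulum-v1")
model.learn(total_timesteps=5_000)

# Save the model (including optimizer state) and the replay buffer separately
model.save("sac_pendulum")
model.save_replay_buffer("sac_pendulum_buffer")

# Later: restore both to continue training without losing past transitions
model = SAC.load("sac_pendulum")
model.load_replay_buffer("sac_pendulum_buffer")
```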
You can pass `optimizer_class` and `optimizer_kwargs` to the `policy_kwargs` in order to easily customize optimizers
Seeding now works properly to have deterministic results
The replay buffer does not grow: everything is allocated at build time (faster)
We added a memory efficient replay buffer variant (pass `optimize_memory_usage=True` to the constructor), which drastically reduces the memory used, especially when using images
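A minimal sketch (the Atari environment id and buffer size are illustrative):

```python
from stable_baselines3 import DQN

# The memory-efficient variant avoids storing each observation twice
# (as both obs and next_obs), roughly halving the buffer memory footprint
model = DQN(
    "CnnPolicy",
    "BreakoutNoFrameskip-v4",
    buffer_size=100_000,
    optimize_memory_usage=True,
)
```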
You can specify an arbitrary number of critics for SAC/TD3, e.g. `policy_kwargs=dict(n_critics=3)`