RL Algorithms

This table lists the RL algorithms implemented in the Stable Baselines3 project, along with some useful characteristics: support for discrete/continuous actions and multiprocessing.

| Name | Box | Discrete | MultiDiscrete | MultiBinary | Multi Processing |
| ---- | --- | -------- | ------------- | ----------- | ---------------- |
| A2C  | ✔️  | ✔️       | ✔️            | ✔️          | ✔️               |
| DDPG | ✔️  | ❌       | ❌            | ❌          | ❌               |
| DQN  | ❌  | ✔️       | ❌            | ❌          | ❌               |
| HER  | ✔️  | ✔️       | ❌            | ❌          | ❌               |
| PPO  | ✔️  | ✔️       | ✔️            | ✔️          | ✔️               |
| SAC  | ✔️  | ❌       | ❌            | ❌          | ❌               |
| TD3  | ✔️  | ❌       | ❌            | ❌          | ❌               |

Note

Tuple observation spaces are not supported by any environment; however, single-level Dict spaces are (cf. Examples).

Actions gym.spaces:

  • Box: An N-dimensional box that contains every point in the action space.

  • Discrete: A list of possible actions, where only one action can be used at each timestep.

  • MultiDiscrete: A list of possible actions, where one action from each discrete set can be used at each timestep.

  • MultiBinary: A list of possible actions, where any combination of the actions can be used at each timestep.

Note

More algorithms (like QR-DQN or TQC) are implemented in our contrib repo.

Note

Some logging values (like ep_rew_mean, ep_len_mean) are only available when using a Monitor wrapper. See Issue #339 for more info.

Note

When using off-policy algorithms, Time Limits (aka timeouts) are handled properly (cf. issue #284). You can revert to SB3 < 2.1.0 behavior by passing handle_timeout_termination=False via the replay_buffer_kwargs argument.

Reproducibility

Completely reproducible results are not guaranteed across PyTorch releases or different platforms. Furthermore, results need not be reproducible between CPU and GPU executions, even when using identical seeds.

In order to make computations deterministic, on your specific problem, on one specific platform, you need to pass a seed argument at the creation of a model. If you pass an environment to the model using set_env(), then you also need to seed the environment first.

Credit: part of the Reproducibility section comes from the PyTorch documentation.