TD3

Twin Delayed DDPG (TD3), from the paper Addressing Function Approximation Error in Actor-Critic Methods.

TD3 is a direct successor of DDPG and improves on it using three major tricks: clipped double Q-learning, delayed policy updates and target policy smoothing. We recommend reading the OpenAI Spinning Up guide on TD3 to learn more about these.
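The first two tricks show up directly in how the critic targets are built. Below is a minimal sketch of that target computation, not the Stable-Baselines3 internals: actor_target, critic_target_1 and critic_target_2 are hypothetical callables standing in for the target networks, and actions are assumed to lie in [-1, 1].

import torch as th

def td3_targets(actor_target, critic_target_1, critic_target_2,
                next_obs, rewards, dones,
                gamma=0.99, target_policy_noise=0.2, target_noise_clip=0.5):
    # All target computations are done without tracking gradients
    with th.no_grad():
        # Target policy smoothing: perturb the target action with clipped Gaussian noise
        next_actions = actor_target(next_obs)
        noise = (th.randn_like(next_actions) * target_policy_noise).clamp(
            -target_noise_clip, target_noise_clip)
        next_actions = (next_actions + noise).clamp(-1.0, 1.0)

        # Clipped double Q-learning: take the minimum of the two target critics
        q_next = th.min(critic_target_1(next_obs, next_actions),
                        critic_target_2(next_obs, next_actions))

        # Standard TD target; the delayed policy update is handled in the training loop
        # by updating the actor and the target networks only every policy_delay steps
        return rewards + (1.0 - dones) * gamma * q_next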

Available Policies

MlpPolicy

alias of stable_baselines3.td3.policies.TD3Policy

Notes

Note

The default policy for TD3 differs a bit from the other MlpPolicy classes: it uses ReLU instead of tanh activation, to match the original paper.
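If you prefer the tanh activation used by the other algorithms, you can override the default through policy_kwargs. A minimal sketch, assuming the usual activation_fn and net_arch keys are supported by your version (check the policies documentation):

import torch as th

from stable_baselines3 import TD3

# Hypothetical override: tanh activation and a 400-300 network instead of the ReLU default
model = TD3(
    "MlpPolicy",
    "Pendulum-v0",
    policy_kwargs=dict(activation_fn=th.nn.Tanh, net_arch=[400, 300]),
    verbose=1,
)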

Can I use?

  • Recurrent policies: ❌

  • Multi processing: ❌

  • Gym spaces:

Space           Action    Observation
Discrete        ❌         ✔️
Box             ✔️         ✔️
MultiDiscrete   ❌         ✔️
MultiBinary     ❌         ✔️

Example

import gym
import numpy as np

from stable_baselines3 import TD3
from stable_baselines3.td3.policies import MlpPolicy
from stable_baselines3.common.noise import NormalActionNoise, OrnsteinUhlenbeckActionNoise

env = gym.make('Pendulum-v0')

# The noise objects for TD3
n_actions = env.action_space.shape[-1]
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = TD3(MlpPolicy, env, action_noise=action_noise, verbose=1)
model.learn(total_timesteps=10000, log_interval=10)
model.save("td3_pendulum")
env = model.get_env()

del model # remove to demonstrate saving and loading

model = TD3.load("td3_pendulum")

obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()

Parameters

class stable_baselines3.td3.TD3(policy: Union[str, Type[stable_baselines3.td3.policies.TD3Policy]], env: Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv, str], learning_rate: Union[float, Callable] = 0.001, buffer_size: int = 1000000, learning_starts: int = 100, batch_size: int = 100, tau: float = 0.005, gamma: float = 0.99, train_freq: int = - 1, gradient_steps: int = - 1, n_episodes_rollout: int = 1, action_noise: Optional[stable_baselines3.common.noise.ActionNoise] = None, optimize_memory_usage: bool = False, policy_delay: int = 2, target_policy_noise: float = 0.2, target_noise_clip: float = 0.5, use_sde: bool = False, sde_sample_freq: int = - 1, sde_max_grad_norm: float = 1, sde_ent_coef: float = 0.0, sde_log_std_scheduler: Optional[Callable] = None, use_sde_at_warmup: bool = False, tensorboard_log: Optional[str] = None, create_eval_env: bool = False, policy_kwargs: Dict[str, Any] = None, verbose: int = 0, seed: Optional[int] = None, device: Union[torch.device, str] = 'auto', _init_setup_model: bool = True)[source]

Twin Delayed DDPG (TD3), from the paper Addressing Function Approximation Error in Actor-Critic Methods.

Original implementation: https://github.com/sfujim/TD3
Paper: https://arxiv.org/abs/1802.09477
Introduction to TD3: https://spinningup.openai.com/en/latest/algorithms/td3.html

Parameters
  • policy – (TD3Policy or str) The policy model to use (MlpPolicy, CnnPolicy, …)

  • env – (GymEnv or str) The environment to learn from (if registered in Gym, can be str)

  • learning_rate – (float or callable) learning rate for the Adam optimizer; the same learning rate will be used for all networks (Q-Values, Actor and Value function). It can be a function of the current progress remaining (from 1 to 0).

  • buffer_size – (int) size of the replay buffer

  • learning_starts – (int) how many steps of the model to collect transitions for before learning starts

  • batch_size – (int) Minibatch size for each gradient update

  • tau – (float) the soft update coefficient (“Polyak update”, between 0 and 1)

  • gamma – (float) the discount factor

  • train_freq – (int) Update the model every train_freq steps. Set to -1 to disable.

  • gradient_steps – (int) How many gradient steps to do after each rollout (see train_freq and n_episodes_rollout). Set to -1 to do as many gradient steps as steps done in the environment during the rollout.

  • n_episodes_rollout – (int) Update the model every n_episodes_rollout episodes. Note that this cannot be used at the same time as train_freq. Set to -1 to disable.

  • action_noise – (ActionNoise) the action noise type (None by default); this can help for hard exploration problems. See common.noise for the different action noise types.

  • optimize_memory_usage – (bool) Enable a memory efficient variant of the replay buffer at a cost of more complexity. See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195

  • policy_delay – (int) The policy and target networks will only be updated once every policy_delay training steps, while the Q-values are updated policy_delay times more often (i.e. every training step). See the sketch after this parameter list.

  • target_policy_noise – (float) Standard deviation of Gaussian noise added to target policy (smoothing noise)

  • target_noise_clip – (float) Limit for absolute value of target policy smoothing noise.

  • use_sde – (bool) Whether to use State Dependent Exploration (SDE) instead of action noise exploration (default: False)

  • sde_sample_freq – (int) Sample a new noise matrix every n steps when using SDE. Default: -1 (only sample at the beginning of the rollout)

  • sde_max_grad_norm – (float)

  • sde_ent_coef – (float)

  • sde_log_std_scheduler – (callable)

  • use_sde_at_warmup – (bool) Whether to use SDE instead of uniform sampling during the warm up phase (before learning starts)

  • create_eval_env – (bool) Whether to create a second environment that will be used for evaluating the agent periodically. (Only available when passing string for the environment)

  • policy_kwargs – (dict) additional arguments to be passed to the policy on creation

  • verbose – (int) the verbosity level: 0 no output, 1 info, 2 debug

  • seed – (int) Seed for the pseudo random generators

  • device – (str or th.device) Device (cpu, cuda, …) on which the code should be run. When set to auto, the code will run on the GPU if possible.

  • _init_setup_model – (bool) Whether or not to build the network at the creation of the instance
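Putting several of these parameters together, here is a hedged sketch of a TD3 instantiation that customises the exploration noise, the delayed policy update and the target smoothing noise, and passes a callable learning rate. linear_schedule is an illustrative helper, not part of the library:

import numpy as np

from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

def linear_schedule(initial_value: float):
    # progress_remaining goes from 1 (beginning of training) to 0 (end)
    def func(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return func

n_actions = 1  # Pendulum-v0 has a single continuous action
action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions))

model = TD3(
    "MlpPolicy",
    "Pendulum-v0",
    learning_rate=linear_schedule(1e-3),  # callable of the remaining progress
    action_noise=action_noise,            # exploration noise for the deterministic policy
    policy_delay=2,                       # actor/targets updated every 2 critic updates
    target_policy_noise=0.2,              # std of the target smoothing noise
    target_noise_clip=0.5,                # clip range of the smoothing noise
    verbose=1,
)
model.learn(total_timesteps=5000)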

collect_rollouts(env: stable_baselines3.common.vec_env.base_vec_env.VecEnv, callback: stable_baselines3.common.callbacks.BaseCallback, n_episodes: int = 1, n_steps: int = - 1, action_noise: Optional[stable_baselines3.common.noise.ActionNoise] = None, learning_starts: int = 0, replay_buffer: Optional[stable_baselines3.common.buffers.ReplayBuffer] = None, log_interval: Optional[int] = None) → stable_baselines3.common.type_aliases.RolloutReturn[source]

Collect rollout using the current policy (and possibly fill the replay buffer)

Parameters
  • env – (VecEnv) The training environment

  • n_episodes – (int) Number of episodes to use to collect rollout data. You can also specify n_steps instead.

  • n_steps – (int) Number of steps to use to collect rollout data. You can also specify n_episodes instead.

  • action_noise – (Optional[ActionNoise]) Action noise that will be used for exploration. Required for deterministic policies (e.g. TD3). This can also be used in addition to the stochastic policy for SAC.

  • callback – (BaseCallback) Callback that will be called at each step (and at the beginning and end of the rollout)

  • learning_starts – (int) Number of steps before learning for the warm-up phase.

  • replay_buffer – (ReplayBuffer)

  • log_interval – (int) Log data every log_interval episodes

Returns

(RolloutReturn)

excluded_save_params() → List[str][source]

Returns the names of the parameters that should be excluded by default when saving the model.

Returns

(List[str]) List of parameters that should be excluded from save

get_env() → Optional[stable_baselines3.common.vec_env.base_vec_env.VecEnv]

Returns the current environment (can be None if not defined).

Returns

(Optional[VecEnv]) The current environment

get_torch_variables() → Tuple[List[str], List[str]][source]

See the base class documentation.

get_vec_normalize_env() → Optional[stable_baselines3.common.vec_env.vec_normalize.VecNormalize]

Return the VecNormalize wrapper of the training env if it exists.

Returns

(Optional[VecNormalize]) The VecNormalize env

learn(total_timesteps: int, callback: Union[None, Callable, List[stable_baselines3.common.callbacks.BaseCallback], stable_baselines3.common.callbacks.BaseCallback] = None, log_interval: int = 4, eval_env: Optional[Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]] = None, eval_freq: int = -1, n_eval_episodes: int = 5, tb_log_name: str = 'TD3', eval_log_path: Optional[str] = None, reset_num_timesteps: bool = True) → stable_baselines3.common.off_policy_algorithm.OffPolicyAlgorithm[source]

Return a trained model.

Parameters
  • total_timesteps – (int) The total number of samples (env steps) to train on

  • callback – (MaybeCallback) callback(s) called at every step with state of the algorithm.

  • log_interval – (int) The number of timesteps before logging.

  • tb_log_name – (str) the name of the run for TensorBoard logging

  • eval_env – (gym.Env) Environment that will be used to evaluate the agent

  • eval_freq – (int) Evaluate the agent every eval_freq timesteps (this may vary a little)

  • n_eval_episodes – (int) Number of episodes to evaluate the agent

  • eval_log_path – (Optional[str]) Path to a folder where the evaluations will be saved

  • reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)

Returns

(BaseAlgorithm) the trained model

classmethod load(load_path: str, env: Optional[Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]] = None, **kwargs) → stable_baselines3.common.base_class.BaseAlgorithm

Load the model from a zip-file

Parameters
  • load_path – the location of the saved data

  • env – the new environment to run the loaded model on (can be None if you only need prediction from a trained model); has priority over any saved environment

  • kwargs – extra arguments to change the model when loading

load_replay_buffer(path: Union[str, pathlib.Path, io.BufferedIOBase]) → None

Load a replay buffer from a pickle file.

Parameters

path – (Union[str, pathlib.Path, io.BufferedIOBase]) Path to the pickled replay buffer.

predict(observation: numpy.ndarray, state: Optional[numpy.ndarray] = None, mask: Optional[numpy.ndarray] = None, deterministic: bool = False) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]

Get the model’s action(s) from an observation

Parameters
  • observation – (np.ndarray) the input observation

  • state – (Optional[np.ndarray]) The last states (can be None, used in recurrent policies)

  • mask – (Optional[np.ndarray]) The last masks (can be None, used in recurrent policies)

  • deterministic – (bool) Whether or not to return deterministic actions.

Returns

(Tuple[np.ndarray, Optional[np.ndarray]]) the model’s action and the next state (used in recurrent policies)

save(path: Union[str, pathlib.Path, io.BufferedIOBase], exclude: Optional[Iterable[str]] = None, include: Optional[Iterable[str]] = None) → None

Save all the attributes of the object and the model parameters in a zip-file.

Parameters
  • path – (Union[str, pathlib.Path, io.BufferedIOBase]) Path to the file where the RL agent should be saved

  • exclude – name of parameters that should be excluded in addition to the default ones

  • include – name of parameters that might be excluded but should be included anyway

save_replay_buffer(path: Union[str, pathlib.Path, io.BufferedIOBase]) → None

Save the replay buffer as a pickle file.

Parameters

path – (Union[str, pathlib.Path, io.BufferedIOBase]) Path to the file where the replay buffer should be saved. If path is a str or pathlib.Path, the path is automatically created if necessary.
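A minimal usage sketch of save_replay_buffer and load_replay_buffer; the file names are placeholders:

import gym

from stable_baselines3 import TD3

model = TD3("MlpPolicy", "Pendulum-v0", verbose=0)
model.learn(total_timesteps=1000)

# Persist the collected transitions alongside the model parameters
model.save("td3_pendulum")
model.save_replay_buffer("td3_replay_buffer.pkl")

# Later: restore the model, re-attach an environment and refill the replay buffer
model = TD3.load("td3_pendulum", env=gym.make("Pendulum-v0"))
model.load_replay_buffer("td3_replay_buffer.pkl")
model.learn(total_timesteps=1000, reset_num_timesteps=False)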

set_env(env: Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]) → None

Checks the validity of the environment and, if it is coherent, sets it as the current environment. Furthermore, wraps any non-vectorized env into a vectorized one. Checked parameters: observation_space and action_space.

Parameters

env – The environment for learning a policy

set_random_seed(seed: Optional[int] = None) → None

Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space)

Parameters

seed – (int)

train(gradient_steps: int, batch_size: int = 100) → None[source]

Sample the replay buffer and do the updates (gradient descent and update target networks)

TD3 Policies

stable_baselines3.td3.MlpPolicy

alias of stable_baselines3.td3.policies.TD3Policy