PPO¶
The Proximal Policy Optimization algorithm combines ideas from A2C (having multiple workers) and TRPO (using a trust region to improve the actor).
The main idea is that after an update, the new policy should not be too far from the old policy. For that, PPO uses clipping to avoid too large an update (see the sketch after the note below).
Note
PPO contains several modifications to the original algorithm that are not documented by OpenAI: advantages are normalized and the value function can also be clipped.
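To make the clipping concrete, here is a minimal sketch of the clipped surrogate loss in PyTorch; the function name and arguments are illustrative, not the library's internals:
import torch

def clipped_surrogate_loss(log_prob_new, log_prob_old, advantages, clip_range=0.2):
    # Probability ratio between new and old policies, computed in log space
    ratio = torch.exp(log_prob_new - log_prob_old)
    # Unclipped objective, and objective with the ratio clipped to [1 - eps, 1 + eps]
    policy_loss_1 = advantages * ratio
    policy_loss_2 = advantages * torch.clamp(ratio, 1 - clip_range, 1 + clip_range)
    # PPO maximizes the element-wise minimum of the two, so we minimize its negation
    return -torch.min(policy_loss_1, policy_loss_2).mean()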
Notes¶
Original paper: https://arxiv.org/abs/1707.06347
Clear explanation of PPO on Arxiv Insights channel: https://www.youtube.com/watch?v=5P7I-xPq8u8
OpenAI blog post: https://blog.openai.com/openai-baselines-ppo/
Spinning Up guide: https://spinningup.openai.com/en/latest/algorithms/ppo.html
Can I use?¶
Recurrent policies: ❌
Multi processing: ✔️
Gym spaces:
Space | Action | Observation
---|---|---
Discrete | ✔️ | ✔️
Box | ✔️ | ✔️
MultiDiscrete | ✔️ | ✔️
MultiBinary | ✔️ | ✔️
Example¶
Train a PPO agent on CartPole-v1 using 4 environments.
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy
from stable_baselines3.common.cmd_util import make_vec_env

# Parallel environments
env = make_vec_env('CartPole-v1', n_envs=4)

model = PPO(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=25000)
model.save("ppo_cartpole")

del model  # remove to demonstrate saving and loading

model = PPO.load("ppo_cartpole")

obs = env.reset()
while True:
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
Parameters¶
class stable_baselines3.ppo.PPO(policy: Union[str, Type[stable_baselines3.common.policies.ActorCriticPolicy]], env: Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv, str], learning_rate: Union[float, Callable] = 0.0003, n_steps: int = 2048, batch_size: Optional[int] = 64, n_epochs: int = 10, gamma: float = 0.99, gae_lambda: float = 0.95, clip_range: float = 0.2, clip_range_vf: Optional[float] = None, ent_coef: float = 0.0, vf_coef: float = 0.5, max_grad_norm: float = 0.5, use_sde: bool = False, sde_sample_freq: int = -1, target_kl: Optional[float] = None, tensorboard_log: Optional[str] = None, create_eval_env: bool = False, policy_kwargs: Optional[Dict[str, Any]] = None, verbose: int = 0, seed: Optional[int] = None, device: Union[torch.device, str] = 'auto', _init_setup_model: bool = True)¶
Proximal Policy Optimization algorithm (PPO) (clip version)
Paper: https://arxiv.org/abs/1707.06347
Code: This implementation borrows code from OpenAI Spinning Up (https://github.com/openai/spinningup/), https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail and Stable Baselines (PPO2 from https://github.com/hill-a/stable-baselines).
Introduction to PPO: https://spinningup.openai.com/en/latest/algorithms/ppo.html
- Parameters
policy – (ActorCriticPolicy or str) The policy model to use (MlpPolicy, CnnPolicy, …)
env – (Gym environment or str) The environment to learn from (if registered in Gym, can be str)
learning_rate – (float or callable) The learning rate; it can be a function of the current progress remaining (from 1 to 0). See the schedule sketch after this list.
n_steps – (int) The number of steps to run for each environment per update (i.e. the rollout buffer size is n_steps * n_env, where n_env is the number of environment copies running in parallel)
batch_size – (int) Minibatch size
n_epochs – (int) Number of epochs when optimizing the surrogate loss
gamma – (float) Discount factor
gae_lambda – (float) Factor for trade-off of bias vs variance for Generalized Advantage Estimator
clip_range – (float or callable) Clipping parameter, it can be a function of the current progress remaining (from 1 to 0).
clip_range_vf – (float or callable) Clipping parameter for the value function, it can be a function of the current progress remaining (from 1 to 0). This is a parameter specific to the OpenAI implementation. If None is passed (default), no clipping will be done on the value function. IMPORTANT: this clipping depends on the reward scaling.
ent_coef – (float) Entropy coefficient for the loss calculation
vf_coef – (float) Value function coefficient for the loss calculation
max_grad_norm – (float) The maximum value for the gradient clipping
use_sde – (bool) Whether to use generalized State Dependent Exploration (gSDE) instead of action noise exploration (default: False)
sde_sample_freq – (int) Sample a new noise matrix every n steps when using gSDE. Default: -1 (only sample at the beginning of the rollout)
target_kl – (float) Limit the KL divergence between updates, because the clipping is not enough to prevent large updates; see issue #213 (cf. https://github.com/hill-a/stable-baselines/issues/213). By default, there is no limit on the KL divergence.
tensorboard_log – (str) the log location for tensorboard (if None, no logging)
create_eval_env – (bool) Whether to create a second environment that will be used for evaluating the agent periodically. (Only available when passing string for the environment)
policy_kwargs – (dict) additional arguments to be passed to the policy on creation
verbose – (int) the verbosity level: 0 no output, 1 info, 2 debug
seed – (int) Seed for the pseudo random generators
device – (str or th.device) Device (cpu, cuda, …) on which the code should be run. When set to 'auto', the code will run on the GPU if possible.
_init_setup_model – (bool) Whether or not to build the network at the creation of the instance
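As noted for learning_rate and clip_range above, several parameters accept a callable of the current progress remaining, which decreases from 1 at the start of training to 0 at the end. A minimal sketch of a linear schedule (the linear_schedule helper is ours, not part of the library):
from stable_baselines3 import PPO

def linear_schedule(initial_value: float):
    # progress_remaining decreases from 1 to 0 over the course of training
    def schedule(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return schedule

model = PPO('MlpPolicy', 'CartPole-v1',
            learning_rate=linear_schedule(3e-4),
            clip_range=linear_schedule(0.2),
            verbose=1)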
collect_rollouts(env: stable_baselines3.common.vec_env.base_vec_env.VecEnv, callback: stable_baselines3.common.callbacks.BaseCallback, rollout_buffer: stable_baselines3.common.buffers.RolloutBuffer, n_rollout_steps: int) → bool¶
Collect rollouts using the current policy and fill a RolloutBuffer.
- Parameters
env – (VecEnv) The training environment
callback – (BaseCallback) Callback that will be called at each step (and at the beginning and end of the rollout)
rollout_buffer – (RolloutBuffer) Buffer to fill with rollouts
n_rollout_steps – (int) Number of experiences to collect per environment
- Returns
(bool) True if the function returned with at least n_rollout_steps collected, False if the callback terminated the rollout prematurely.
excluded_save_params() → List[str]¶
Returns the names of the parameters that should be excluded by default when saving the model.
- Returns
([str]) List of parameters that should be excluded from save
get_env() → Optional[stable_baselines3.common.vec_env.base_vec_env.VecEnv]¶
Returns the current environment (can be None if not defined).
- Returns
(Optional[VecEnv]) The current environment
get_torch_variables() → Tuple[List[str], List[str]]¶
See the base class documentation.
get_vec_normalize_env() → Optional[stable_baselines3.common.vec_env.vec_normalize.VecNormalize]¶
Return the VecNormalize wrapper of the training env if it exists.
- Returns
(Optional[VecNormalize]) The VecNormalize env.
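A short usage sketch, assuming the training env was wrapped in VecNormalize:
from stable_baselines3 import PPO
from stable_baselines3.common.cmd_util import make_vec_env
from stable_baselines3.common.vec_env import VecNormalize

env = VecNormalize(make_vec_env('CartPole-v1', n_envs=4))
model = PPO('MlpPolicy', env)
# Should return the VecNormalize wrapper created above
print(model.get_vec_normalize_env() is env)  # expected: True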
learn(total_timesteps: int, callback: Union[None, Callable, List[stable_baselines3.common.callbacks.BaseCallback], stable_baselines3.common.callbacks.BaseCallback] = None, log_interval: int = 1, eval_env: Optional[Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]] = None, eval_freq: int = -1, n_eval_episodes: int = 5, tb_log_name: str = 'PPO', eval_log_path: Optional[str] = None, reset_num_timesteps: bool = True) → stable_baselines3.ppo.ppo.PPO¶
Return a trained model.
- Parameters
total_timesteps – (int) The total number of samples (env steps) to train on
callback – (MaybeCallback) Callback(s) called at every step with the state of the algorithm.
log_interval – (int) The number of timesteps before logging.
tb_log_name – (str) the name of the run for TensorBoard logging
eval_env – (gym.Env) Environment that will be used to evaluate the agent
eval_freq – (int) Evaluate the agent every eval_freq timesteps (this may vary a little)
n_eval_episodes – (int) Number of episodes to evaluate the agent
eval_log_path – (Optional[str]) Path to a folder where the evaluations will be saved
reset_num_timesteps – (bool) whether or not to reset the current timestep number (used in logging)
- Returns
(BaseAlgorithm) the trained model
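For instance, a sketch of a training call with periodic evaluation on a separate environment (the hyperparameter values are illustrative):
import gym
from stable_baselines3 import PPO

model = PPO('MlpPolicy', 'CartPole-v1')
eval_env = gym.make('CartPole-v1')
# Evaluate for 5 episodes every 10000 steps on eval_env
model.learn(total_timesteps=50000, eval_env=eval_env,
            eval_freq=10000, n_eval_episodes=5)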
classmethod load(load_path: str, env: Optional[Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]] = None, **kwargs) → stable_baselines3.common.base_class.BaseAlgorithm¶
Load the model from a zip-file.
- Parameters
load_path – the location of the saved data
env – the new environment to run the loaded model on (can be None if you only need prediction from a trained model); this has priority over any saved environment
kwargs – extra arguments to change the model when loading
predict(observation: numpy.ndarray, state: Optional[numpy.ndarray] = None, mask: Optional[numpy.ndarray] = None, deterministic: bool = False) → Tuple[numpy.ndarray, Optional[numpy.ndarray]]¶
Get the model's action(s) from an observation.
- Parameters
observation – (np.ndarray) the input observation
state – (Optional[np.ndarray]) The last states (can be None, used in recurrent policies)
mask – (Optional[np.ndarray]) The last masks (can be None, used in recurrent policies)
deterministic – (bool) Whether or not to return deterministic actions.
- Returns
(Tuple[np.ndarray, Optional[np.ndarray]]) the model’s action and the next state (used in recurrent policies)
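A short usage sketch; deterministic=True returns the deterministic action (e.g. the mode of the action distribution) instead of sampling:
from stable_baselines3 import PPO

model = PPO('MlpPolicy', 'CartPole-v1').learn(total_timesteps=1000)
env = model.get_env()
obs = env.reset()
# Deterministic actions are usually preferred at evaluation time
action, _states = model.predict(obs, deterministic=True)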
save(path: Union[str, pathlib.Path, io.BufferedIOBase], exclude: Optional[Iterable[str]] = None, include: Optional[Iterable[str]] = None) → None¶
Save all the attributes of the object and the model parameters in a zip-file.
- Parameters
path – (Union[str, pathlib.Path, io.BufferedIOBase]) Path to the file where the RL agent should be saved
exclude – Names of parameters that should be excluded in addition to the default ones
include – Names of parameters that might be excluded by default but should be included anyway
set_env(env: Union[gym.core.Env, stable_baselines3.common.vec_env.base_vec_env.VecEnv]) → None¶
Checks the validity of the environment and, if it is coherent, sets it as the current environment. Furthermore, wraps any non-vectorized env into a vectorized one. Checked parameters: observation_space, action_space.
- Parameters
env – The environment for learning a policy
set_random_seed(seed: Optional[int] = None) → None¶
Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space).
- Parameters
seed – (int) The random seed
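Usage is a one-liner; seeding can also be done at construction via the seed parameter:
from stable_baselines3 import PPO

model = PPO('MlpPolicy', 'CartPole-v1', seed=0)  # seed at construction
model.set_random_seed(42)  # or re-seed the generators later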