Custom Environments

These environments were created for testing purposes.

BitFlippingEnv

class stable_baselines3.common.envs.BitFlippingEnv(n_bits=10, continuous=False, max_steps=None, discrete_obs_space=False, image_obs_space=False, channel_first=True, render_mode='human')[source]

Simple bit flipping env, useful to test HER. The goal is to flip all the bits to get a vector of ones. In the continuous variant, if the ith action component has a value > 0, then the ith bit will be flipped. Uses a MultiBinary observation space by default.

Parameters:
  • n_bits – Number of bits to flip

  • continuous – Whether to use the continuous actions version or not, by default, it uses the discrete one

  • max_steps – Max number of steps, by default, equal to n_bits

  • discrete_obs_space – Whether to use the discrete observation version or not, i.e. a one-hot encoding of all possible states

  • image_obs_space – Whether to use an image observation version or not, i.e. a greyscale image of the state

  • channel_first – Whether to use a channel-first or channel-last image.

  • render_mode – The render mode used by render().
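
For orientation, a minimal usage sketch testing HER on this env (the SAC + HerReplayBuffer combination and all hyperparameters here are illustrative choices, not prescribed by this class):

    from stable_baselines3 import SAC, HerReplayBuffer
    from stable_baselines3.common.envs import BitFlippingEnv

    # Continuous-action variant so the env can be driven by SAC
    env = BitFlippingEnv(n_bits=10, continuous=True, max_steps=10)

    model = SAC(
        "MultiInputPolicy",  # dict obs: observation / achieved_goal / desired_goal
        env,
        replay_buffer_class=HerReplayBuffer,
        # Illustrative HER settings: relabel with 4 "future" goals per transition
        replay_buffer_kwargs=dict(n_sampled_goal=4, goal_selection_strategy="future"),
        verbose=1,
    )
    model.learn(total_timesteps=1_000)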

close()[source]

After the user has finished using the environment, close contains the code necessary to “clean up” the environment.

This is critical for closing rendering windows, database or HTTP connections. Calling close on an already closed environment has no effect and won’t raise an error.

Return type:

None

convert_if_needed(state)[source]

Convert to discrete space if needed.

Parameters:

state (ndarray) –

Returns:

The state converted to the discrete observation space if needed, otherwise unchanged.

Return type:

int | ndarray

convert_to_bit_vector(state, batch_size)[source]

Convert to bit vector if needed.

Parameters:
  • state (int | ndarray) – The state to be converted, which can be either an integer or a numpy array.

  • batch_size (int) – The batch size.

Returns:

The state converted into a bit vector.

Return type:

ndarray
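
As a round-trip illustration of the two helpers above (a sketch; the exact integer encoding of the discrete space is an implementation detail not specified here):

    import numpy as np
    from stable_baselines3.common.envs import BitFlippingEnv

    env = BitFlippingEnv(n_bits=3, discrete_obs_space=True)
    state = np.array([1, 0, 1])
    index = env.convert_if_needed(state)  # an int index into the 2**n_bits possible states
    vector = env.convert_to_bit_vector(index, batch_size=1)  # back to a bit-vector ndarray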

render()[source]

Compute the render frames as specified by render_mode during the initialization of the environment.

The environment’s metadata render modes (env.metadata[“render_modes”]) should contain the possible ways to implement the render modes. In addition, list versions of most render modes are achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.

Note:

As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.

By convention, if the render_mode is:

  • None (default): no render is computed.

  • “human”: The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn’t need to be called. Returns None.

  • “rgb_array”: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.

  • “ansi”: Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).

  • “rgb_array_list” and “ansi_list”: List-based versions of render modes are possible (except human) through the wrapper gymnasium.wrappers.RenderCollection, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The collected frames are popped after render() or reset() is called.

Note:

Make sure that your class’s metadata "render_modes" key includes the list of supported modes.

Changed in version 0.25.0: The render function no longer accepts parameters; these should instead be specified when the environment is initialised, i.e., gymnasium.make("CartPole-v1", render_mode="human")

Return type:

ndarray | None
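
A short sketch of the “rgb_array” convention with this env (whether this env produces an actual image frame, rather than None, under this mode is an assumption; the return type allows both):

    from stable_baselines3.common.envs import BitFlippingEnv

    env = BitFlippingEnv(n_bits=4, render_mode="rgb_array")
    env.reset(seed=0)
    frame = env.render()  # ndarray under "rgb_array", None under "human"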

reset(*, seed=None, options=None)[source]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy about the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.

Changed in version v0.25: The return_info parameter was removed and now info is expected to be returned.

Args:
seed (optional int): The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default option) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again (see the sketch after this method).

options (optional dict): Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:
observation (ObsType): Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

info (dictionary): This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().

Parameters:
  • seed (int | None) –

  • options (Dict | None) –

Return type:

Tuple[Dict[str, ndarray | int], Dict]
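
A minimal sketch of the seeding paradigm described above:

    from stable_baselines3.common.envs import BitFlippingEnv

    env = BitFlippingEnv(n_bits=10)
    obs, info = env.reset(seed=42)  # seed once, right after creation
    obs, info = env.reset()         # later resets reuse the existing RNG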

step(action)[source]

Take a single step in the environment.

Parameters:

action (ndarray | int) –

Returns:

A tuple (observation, reward, terminated, truncated, info).

Return type:

Tuple[Tuple | Dict[str, Any] | ndarray | int, float, bool, bool, Dict]
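
Combining reset() and step(), a standard Gymnasium-style rollout loop (random actions, purely for illustration):

    from stable_baselines3.common.envs import BitFlippingEnv

    env = BitFlippingEnv(n_bits=4)
    obs, info = env.reset(seed=0)
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # random policy, illustration only
        obs, reward, terminated, truncated, info = env.step(action)
    env.close()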

SimpleMultiObsEnv

class stable_baselines3.common.envs.SimpleMultiObsEnv(num_col=4, num_row=4, random_start=True, discrete_actions=True, channel_last=True)[source]

Base class for GridWorld-based MultiObs environments; by default, a 4x4 grid world.

 ____________
| 0  1  2   3|
| 4|¯5¯¯6¯| 7|
| 8|_9_10_|11|
|12 13  14 15|
 ¯¯¯¯¯¯¯¯¯¯¯¯

The start state is 0; states 5, 6, 9, and 10 are blocked; the goal state is 15. The actions are [left, down, right, up].

A simple grid env whose state is encoded with both a vector and an image observation: each column is represented by a random vector and each row is represented by a random image, both sampled once at creation time.

Parameters:
  • num_col – Number of columns in the grid

  • num_row – Number of rows in the grid

  • random_start – If true, agent starts in random position

  • discrete_actions – Whether to use discrete or continuous actions

  • channel_last – If true, the image observation will be channel-last, else channel-first
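
For orientation, a minimal training sketch on the dict observations this env produces (PPO and the timestep budget are illustrative choices; MultiInputPolicy is the standard SB3 policy for dict observations):

    from stable_baselines3 import PPO
    from stable_baselines3.common.envs import SimpleMultiObsEnv

    env = SimpleMultiObsEnv(random_start=False)
    model = PPO("MultiInputPolicy", env, verbose=1)  # handles the {'vec': ..., 'img': ...} dict obs
    model.learn(total_timesteps=1_000)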

get_state_mapping()[source]

Uses the current state to retrieve the corresponding observation.

Returns:

observation dict {‘vec’: …, ‘img’: …}

Return type:

Dict[str, ndarray]

init_possible_transitions()[source]

Initializes the transitions of the environment. The environment exploits the cardinal directions of the grid by noting that they correspond to simple addition and subtraction of the cell id within the grid:

  • up => means moving up a row => means subtracting the length of a column

  • down => means moving down a row => means adding the length of a column

  • left => means moving left by one => means subtracting 1

  • right => means moving right by one => means adding 1

Thus, one only needs to specify in which states each action is possible in order to define the transitions of the environment (illustrated in the sketch after this method).

Return type:

None
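
For illustration, the cell-id arithmetic described above, written out for a 4-column grid (a sketch, not code from the library):

    num_col = 4      # row stride of the default 4x4 grid
    state = 9
    up    = state - num_col  # 5
    down  = state + num_col  # 13
    left  = state - 1        # 8
    right = state + 1        # 10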

init_state_mapping(num_col, num_row)[source]

Initializes the state_mapping array, which holds the observation values for each state.

Parameters:
  • num_col (int) – Number of columns.

  • num_row (int) – Number of rows.

Return type:

None

render(mode='human')[source]

Prints the log of the environment.

Parameters:

mode (str) –

Return type:

None

reset(*, seed=None, options=None)[source]

Resets the environment state and step count and returns reset observation.

Parameters:
  • seed (int | None) –

  • options (Dict | None) –

Returns:

observation dict {‘vec’: …, ‘img’: …}

Return type:

Tuple[Dict[str, ndarray], Dict]

step(action)[source]

Run one timestep of the environment’s dynamics. When end of episode is reached, you are responsible for calling reset() to reset this environment’s state. Accepts an action and returns a tuple (observation, reward, terminated, truncated, info).

Parameters:

action (int | ndarray) –

Returns:

tuple (observation, reward, terminated, truncated, info).

Return type:

Tuple[Tuple | Dict[str, Any] | ndarray | int, float, bool, bool, Dict]
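
Finally, a short interaction sketch with the dict observations (the observation shapes are printed rather than assumed):

    from stable_baselines3.common.envs import SimpleMultiObsEnv

    env = SimpleMultiObsEnv()
    obs, info = env.reset(seed=0)
    print(obs["vec"].shape, obs["img"].shape)  # vector and image components
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())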