```mermaid
graph LR
    rlbench_environment_Environment["rlbench.environment.Environment"]
    rlbench_task_environment_TaskEnvironment["rlbench.task_environment.TaskEnvironment"]
    rlbench_gym_RLBenchEnv["rlbench.gym.RLBenchEnv"]
    rlbench_environment_Environment -- "uses" --> rlbench_task_environment_TaskEnvironment
    rlbench_gym_RLBenchEnv -- "wraps" --> rlbench_environment_Environment
```
The RLBench Environment API subsystem is a core part of the RLBench project, serving as the primary interface for external agents to interact with the simulation. It encapsulates the environment's lifecycle, task management, and the standardized interaction points for reinforcement learning algorithms.
The central component of the subsystem and the main entry point for external agents. It manages the simulation lifecycle (launch and shutdown), vends task instances, and orchestrates the flow of observations and actions between the agent and the simulation. It also handles task loading and state management, embodying the "Environment Abstraction Layer" pattern.
Related Classes/Methods: `rlbench.environment.Environment`
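The lifecycle pattern described above can be sketched in a few lines. This is an illustrative, self-contained sketch, not RLBench code: `SimBackend` and `DemoTask` are stand-ins for the simulator backend and a concrete task, though the `launch()`/`shutdown()`/`get_task()` method names mirror those documented on `rlbench.environment.Environment`.

```python
# Sketch of the "Environment Abstraction Layer" pattern:
# one object owns the simulator lifecycle and vends task instances.
# SimBackend and DemoTask are hypothetical stand-ins, not RLBench classes.

class SimBackend:
    """Stand-in for the underlying simulator process."""
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class Environment:
    """Owns simulation launch/shutdown and hands out task instances."""
    def __init__(self, headless=True):
        self._sim = SimBackend()
        self._headless = headless

    def launch(self):
        self._sim.start()

    def shutdown(self):
        self._sim.stop()

    def get_task(self, task_class):
        # Tasks are only valid against a running simulation.
        if not self._sim.running:
            raise RuntimeError("launch() must be called before get_task()")
        return task_class(self._sim)


class DemoTask:
    """Hypothetical task bound to the shared simulator."""
    def __init__(self, sim):
        self.sim = sim


env = Environment(headless=True)
env.launch()
task = env.get_task(DemoTask)
env.shutdown()
```

The point of the pattern is that agents never touch the simulator directly: every task instance is created by, and shares state with, the single `Environment` that controls the simulation's lifetime.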
Encapsulates the specific logic, state, and variations for an individual reinforcement learning task. It manages task-specific resets and handles different task configurations (variations). This component implements the "Modular Task Design" pattern, allowing for distinct and pluggable learning problems within the RLBench framework.
Related Classes/Methods: `rlbench.task_environment.TaskEnvironment`
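The "Modular Task Design" idea, with per-task resets and selectable variations, can be sketched as below. This is a hedged, stand-alone sketch rather than RLBench's implementation; the `reset()`/`set_variation()`/`sample_variation()`/`variation_count()` names echo methods on `rlbench.task_environment.TaskEnvironment`, but the bodies here are illustrative only.

```python
# Sketch of the "Modular Task Design" pattern: a task bundles its own
# reset logic and a set of variations (task configurations).
# The internal state and returned values are hypothetical.
import random


class TaskEnvironment:
    def __init__(self, variation_count):
        self._variation_count = variation_count
        self._variation = 0

    def variation_count(self):
        return self._variation_count

    def set_variation(self, v):
        if not 0 <= v < self._variation_count:
            raise ValueError("variation out of range")
        self._variation = v

    def sample_variation(self):
        # Pick a random configuration of the same underlying task.
        self._variation = random.randrange(self._variation_count)
        return self._variation

    def reset(self):
        # Re-initialise scene state for the active variation and return
        # (descriptions, observation), mirroring the shape of
        # TaskEnvironment.reset() in RLBench.
        descriptions = [f"perform the task (variation {self._variation})"]
        observation = {"variation": self._variation, "step": 0}
        return descriptions, observation


task = TaskEnvironment(variation_count=3)
task.set_variation(2)
descriptions, obs = task.reset()
```

Because each task owns its reset and variation logic, new learning problems plug into the framework without changes to the surrounding environment code.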
Provides a standardized Gymnasium (formerly OpenAI Gym) API for interacting with the RLBench environment. It translates RLBench's internal observation and action spaces into the widely adopted Gym format, ensuring compatibility with existing reinforcement learning algorithms and frameworks and making integration with the broader RL ecosystem straightforward.
Related Classes/Methods: `rlbench.gym.RLBenchEnv`