This document briefly describes how to build custom environments for rearrange.
To create a holdout environment using teleoperation, follow the steps below:

- Have the xml definition ready for your objects; they should be placed under `robogym/assets/xmls`. An object is defined as a MuJoCo body. See `block.xml` as an example.
- Create a jsonnet config for the environment under `robogym/envs/rearrange/holdouts/configs` which defines all objects in the environment. You can specify which xml file to use for each object and override some properties. See `sample.jsonnet` as an example.
- Run `scripts/create_holdout.py <path_to_your_config>`, where you can:
  - Teleoperate the robot to move the objects.
  - Save the current state as the initial state by pressing `K`.
  - Save the current state as a goal state by pressing `L`.
  - Revert to the state from 1 second ago by pressing `<` (this pauses the simulation; press `SPACE` to resume).
- Set `initial_state_path` and `goal_state_paths` in your jsonnet config to the paths generated by `scripts/create_holdout.py`. The state files are automatically saved to the right place, and you can copy their paths from the console output.
- Verify that your env looks reasonable by running `scripts/examine.py <path_to_your_config>::make_env`.
- Commit all files generated during this process.
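The config wired up in the steps above might be sketched roughly as follows. Only the `initial_state_path` and `goal_state_paths` field names come from the text; every other field name and path here is an assumption for illustration, so check `sample.jsonnet` for the real schema:

```jsonnet
// Hypothetical sketch only -- field names other than initial_state_path and
// goal_state_paths are assumptions; see sample.jsonnet for the real schema.
{
  // Objects in the scene, each referencing an xml under robogym/assets/xmls
  // (the actual object-definition schema is shown in sample.jsonnet).

  // Paths printed to the console by scripts/create_holdout.py after
  // pressing K (initial state) and L (goal state):
  initial_state_path: 'holdouts/my_env/initial_state.npz',
  goal_state_paths: ['holdouts/my_env/goal_state_0.npz'],
}
```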
To use the new environment in code, we need to create a new environment by subclassing `RearrangeEnv` in `robogym/envs/rearrange/common/base.py`. `RearrangeEnv` (and its related simulation and goal provider) provides the majority of what you'll need (glue code, observation code, setting up a robot with a table, ...).

There are two things you'll likely need to do yourself when extending it:

- Change the objects themselves. We currently have environments that use very simple geometries (`BlocksRearrangeEnv` and `BlocksRearrangeSimulation`) and mesh geometries (`MeshRearrangeEnv` and `MeshRearrangeSimulation`). In both cases, you need to modify `RearrangeSimulationInterface::make_objects_xml` (to return the objects you'd like) and `RearrangeSimulationInterface::get_object_bounding_boxes` (to compute their bounding boxes for object placement).
- Change the placement of objects and goals. The former is done in `RearrangeEnv::_randomize_object_initial_positions` and the latter in the goal generators (you can create a new goal generator and replace the default by overriding `RearrangeEnv::build_goal_generation`). By default, we randomly place objects and goals so that they don't overlap.
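To make the two extension points concrete, here is a minimal sketch of overriding `make_objects_xml` and `get_object_bounding_boxes` for two box objects. It uses a hypothetical stub base class so it runs without robogym; the real `RearrangeSimulationInterface` signatures and return conventions may differ, so treat the shapes below as assumptions:

```python
from typing import List, Tuple

import numpy as np


class RearrangeSimulationInterfaceStub:
    """Hypothetical stand-in for robogym's RearrangeSimulationInterface,
    so this sketch is self-contained; the real class lives in robogym."""

    def make_objects_xml(self) -> List[Tuple[str, str]]:
        raise NotImplementedError

    def get_object_bounding_boxes(self) -> np.ndarray:
        raise NotImplementedError


class TwoBlockSimulation(RearrangeSimulationInterfaceStub):
    """Sketch: two box objects, illustrating the two extension points
    described in the text."""

    BLOCK_HALF_EXTENT = 0.03  # meters; illustrative value

    def make_objects_xml(self):
        # Return (name, xml) pairs, each object defined as a MuJoCo body.
        template = (
            '<body name="{name}"><geom type="box" size="{s} {s} {s}"/></body>'
        )
        return [
            (f"object{i}",
             template.format(name=f"object{i}", s=self.BLOCK_HALF_EXTENT))
            for i in range(2)
        ]

    def get_object_bounding_boxes(self):
        # One (center, half-extents) bounding box per object; the placement
        # code uses these to avoid overlap between objects.
        n = len(self.make_objects_xml())
        centers = np.zeros((n, 3))
        half_extents = np.full((n, 3), self.BLOCK_HALF_EXTENT)
        return np.stack([centers, half_extents], axis=1)  # shape (n, 2, 3)
```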
To give a concrete example, let's say you want to build an environment that sets a table with a fork, knife, and plate. To do so, you'd do the following:

- Create `TableSettingRearrangeEnv` in `envs/rearrange/table_setting.py`.
- Use `MeshRearrangeSimulation` to load the meshes of the knife, fork, and plate from STL files (e.g. from YCB, which contains those objects).
- Because the target states of the objects are fixed, we can simply call `ObjectFixedStateGoal` with the desired positions and rotations in `TableSettingRearrangeEnv.build_goal_generation`.
- In this case, you wouldn't need to change the initial placement, which simply defaults to random.

You can visualize and debug your new env by running `scripts/examine.py envs/rearrange/table_setting.py`.
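The goal-generation step above can be sketched as follows. The `ObjectFixedStateGoal` constructor arguments and the target poses are assumptions made so the example is self-contained (a stub stands in for the real robogym class), not robogym's actual API:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ObjectFixedStateGoalStub:
    """Hypothetical stand-in for robogym's ObjectFixedStateGoal;
    the real constructor arguments may differ."""

    positions: np.ndarray  # (n_objects, 3) target xyz per object
    rotations: np.ndarray  # (n_objects, 3) target euler angles per object


class TableSettingRearrangeEnvSketch:
    """Sketch of the table-setting env's goal generation: fork, knife, and
    plate at fixed poses, as described in the text."""

    def build_goal_generation(self):
        # Fixed target layout (values are illustrative, not from robogym):
        positions = np.array([
            [-0.10, 0.0, 0.0],  # fork, left of the plate
            [0.10, 0.0, 0.0],   # knife, right of the plate
            [0.0, 0.0, 0.0],    # plate, centered
        ])
        rotations = np.zeros_like(positions)  # no rotation targets
        return ObjectFixedStateGoalStub(positions=positions,
                                        rotations=rotations)
```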