# Building New Environments for Rearrange

This document briefly describes how to build custom environments for rearrange.

## Integrating new environments by teleoperation

To create a holdout environment using teleoperation, follow the steps below:

1. Have the XML definition ready for your objects; they should be placed under `robogym/assets/xmls`. An object is defined as a MuJoCo body. See `block.xml` as an example.
2. Create a jsonnet config for the environment under `robogym/envs/rearrange/holdouts/configs` which defines all objects in the environment. You can specify which XML file to use for each object and override some of its properties. See `sample.jsonnet` as an example.
3. Run `scripts/create_holdout.py <path_to_your_config>`, where you can:
   - Teleoperate the robot to move the objects.
   - Save the current state as the initial state by pressing `K`.
   - Save the current state as a goal state by pressing `L`.
   - Revert to the state from 1 second ago by pressing `<` (this will pause the simulation; press `SPACE` to resume).
4. Set `initial_state_path` and `goal_state_paths` in your jsonnet config to the files generated by `scripts/create_holdout.py`. They are automatically saved to the right place, and you can copy the paths from the console output.
5. Verify that your env looks reasonable by running `scripts/examine.py <path_to_your_config>::make_env`.
6. Commit all files generated during this process.
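To illustrate step 1, a minimal object definition might look like the following. This is only a sketch: the exact attributes and structure used by `block.xml` may differ, and the names here are illustrative.

```xml
<!-- Hypothetical object definition: a single MuJoCo body with one box geom.
     See robogym/assets/xmls/block.xml for the actual conventions used. -->
<mujoco>
  <worldbody>
    <body name="object">
      <geom type="box" size="0.025 0.025 0.025" rgba="0.8 0.2 0.2 1.0"/>
    </body>
  </worldbody>
</mujoco>
```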

## Creating a new environment class

To use the new environment in code, we need to create a new environment by subclassing `RearrangeEnv` in `robogym/envs/rearrange/common/base.py`. `RearrangeEnv` (and its related simulation and goal provider) provides the majority of things you'll need (glue code, observation code, setting up a robot with a table, ...).

There are two things you’ll likely need to do yourself when extending it:

- Change the objects themselves. We currently have environments that use very simple geometries (`BlocksRearrangeEnv` and `BlocksRearrangeSimulation`) and mesh geometries (`MeshRearrangeEnv` and `MeshRearrangeSimulation`). In both cases, you need to modify `RearrangeSimulationInterface::make_objects_xml` (to return the objects you'd like) and `RearrangeSimulationInterface::get_object_bounding_boxes` (to compute their bounding boxes for object placement).
- Change the placement of objects and goals. The former is done in `RearrangeEnv::_randomize_object_initial_positions` and the latter in the goal generators (you can create a new goal generator and replace the default by overriding `RearrangeEnv::build_goal_generation`). By default, we randomly place objects and goals so that they don't overlap.
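To make the override pattern concrete, here is a minimal, self-contained sketch. The class and method names mirror the ones above, but the base class, signatures, and return types are illustrative stand-ins, not robogym's actual API:

```python
# Hypothetical sketch of the two overrides described above. The stand-in base
# class below only mimics the shape of robogym's RearrangeSimulationInterface.

class RearrangeSimulationInterface:
    """Stand-in base class; robogym's real interface differs."""

    def make_objects_xml(self):
        raise NotImplementedError

    def get_object_bounding_boxes(self):
        raise NotImplementedError


class TableSettingSimulation(RearrangeSimulationInterface):
    OBJECTS = ["fork", "knife", "plate"]

    def make_objects_xml(self):
        # Return one MuJoCo <body> snippet per object.
        return [
            f'<body name="{name}"><geom type="mesh" mesh="{name}"/></body>'
            for name in self.OBJECTS
        ]

    def get_object_bounding_boxes(self):
        # (center, half-extent) per object, used for collision-free placement.
        return [((0.0, 0.0, 0.0), (0.05, 0.05, 0.01)) for _ in self.OBJECTS]


sim = TableSettingSimulation()
print(len(sim.make_objects_xml()))  # 3
```

The key point is that the simulation class owns both the object XML and the bounding boxes, so the placement code can remain generic.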

To give a concrete example, let’s say you want to build an environment that sets a table with a fork, knife, and plate. To do so, you’d do the following:

- Create `TableSettingRearrangeEnv` in `envs/rearrange/table_setting.py`.
- Use `MeshRearrangeSimulation` to load the meshes of the knife, fork, and plate from some STL files (e.g. from YCB, which contains those objects).
- Because the target states of the objects are fixed, we can simply use `ObjectFixedStateGoal` with the desired positions and rotations in `TableSettingRearrangeEnv.build_goal_generation`.
- In this case, you wouldn't need to change the initial placement, which simply defaults to random.
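A sketch of what the fixed-state goal generation could look like, assuming a goal generator that takes one target pose per object. `ObjectFixedStateGoal`'s real constructor in robogym may differ; the stand-in dataclass and the pose values below are purely illustrative:

```python
# Hypothetical sketch of build_goal_generation with a fixed-state goal.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectFixedStateGoal:
    """Stand-in goal generator: one fixed target pose per object."""
    positions: List[Tuple[float, float, float]]  # (x, y, z) per object
    rotations: List[Tuple[float, float, float]]  # Euler angles per object

class TableSettingRearrangeEnv:
    def build_goal_generation(self):
        # Fixed target poses for fork, knife, and plate (illustrative values).
        return ObjectFixedStateGoal(
            positions=[(-0.1, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.0, 0.0)],
            rotations=[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)],
        )
```

Since the goal never changes, there is no randomization to do here; the goal generator just returns the same target poses every episode.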

You can visualize and debug your new env by running `scripts/examine.py envs/rearrange/table_setting.py`.