ValueError: high - low < 0 for env.reset() #37

LeoHink commented Mar 20, 2024

When I run `env.reset()`, I occasionally get the error `ValueError: high - low < 0`. It is hard to reproduce and occurs somewhat randomly, and I have no idea why. In particular, it seems to happen inside `self.init_robot_pose()` when `env.reset()` is called, and it also occurs when a random seed is set. I'm testing with the scratch-itch environment. Any help would be much appreciated.

Edit: This error does not occur in the Feeding environments. I have yet to test the others, so it might be isolated to scratch-itch.
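
For context, a loop along these lines eventually triggers it for me. This is only a minimal sketch, not my exact training script; the environment id `ScratchItchJaco-v1`, the seed, and the episode count are placeholder assumptions:

```python
import gym
import assistive_gym  # registers the ScratchItch* environments

# Placeholder id: I am running a scratch-itch task; swap in whichever robot you use.
env = gym.make("ScratchItchJaco-v1")
env.seed(0)

# Resetting repeatedly eventually raises ValueError: high - low < 0
# from init_robot_pose() during env.reset().
for episode in range(500):
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
```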

Full error:

```
ValueError                                Traceback (most recent call last)
/var/folders/_l/435lwsyd56n753cy_66n91xh0000gn/T/ipykernel_9937/1077114707.py in <module>
     21 
     22         # TRY NOT TO MODIFY: execute the game and log data.
---> 23         next_obs, reward, terminations, infos = envs.step(action.cpu().numpy())
     24         # print(terminations)
     25         # print(f"next_obs: {next_obs} \n dtype: {next_obs.dtype}")

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/vector/vector_env.py in step(self, actions)
    110 
    111         self.step_async(actions)
--> 112         return self.step_wait()
    113 
    114     def call_async(self, name, *args, **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/vector/sync_vector_env.py in step_wait(self)
    139             if self._dones[i]:
    140                 info["terminal_observation"] = observation
--> 141                 observation = env.reset()
    142             observations.append(observation)
    143             infos.append(info)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    322 class RewardWrapper(Wrapper):
    323     def reset(self, **kwargs):
--> 324         return self.env.reset(**kwargs)
    325 
    326     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    281 
    282     def reset(self, **kwargs) -> Union[ObsType, tuple[ObsType, dict]]:
--> 283         return self.env.reset(**kwargs)
    284 
    285     def render(self, mode="human", **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    309             return self.observation(obs), info
    310         else:
--> 311             return self.observation(self.env.reset(**kwargs))
    312 
    313     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/normalize.py in reset(self, **kwargs)
     69             obs, info = self.env.reset(**kwargs)
     70         else:
---> 71             obs = self.env.reset(**kwargs)
     72         if self.is_vector_env:
     73             obs = self.normalize(obs)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    335 class ActionWrapper(Wrapper):
    336     def reset(self, **kwargs):
--> 337         return self.env.reset(**kwargs)
    338 
    339     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/record_episode_statistics.py in reset(self, **kwargs)
     20 
     21     def reset(self, **kwargs):
---> 22         observations = super().reset(**kwargs)
     23         self.episode_returns = np.zeros(self.num_envs, dtype=np.float32)
     24         self.episode_lengths = np.zeros(self.num_envs, dtype=np.int32)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    281 
    282     def reset(self, **kwargs) -> Union[ObsType, tuple[ObsType, dict]]:
--> 283         return self.env.reset(**kwargs)
    284 
    285     def render(self, mode="human", **kwargs):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/core.py in reset(self, **kwargs)
    309             return self.observation(obs), info
    310         else:
--> 311             return self.observation(self.env.reset(**kwargs))
    312 
    313     def step(self, action):

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/time_limit.py in reset(self, **kwargs)
     24     def reset(self, **kwargs):
     25         self._elapsed_steps = 0
---> 26         return self.env.reset(**kwargs)

~/anaconda3/envs/assistive_robotics/lib/python3.7/site-packages/gym/wrappers/order_enforcing.py in reset(self, **kwargs)
     16     def reset(self, **kwargs):
     17         self._has_reset = True
---> 18         return self.env.reset(**kwargs)

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/scratch_itch.py in reset(self)
    116         target_ee_pos = np.array([-0.6, 0, 0.8]) + self.np_random.uniform(-0.05, 0.05, size=3)
    117         target_ee_orient = self.get_quaternion(self.robot.toc_ee_orient_rpy[self.task])
--> 118         self.init_robot_pose(target_ee_pos, target_ee_orient, [(target_ee_pos, target_ee_orient)], [(shoulder_pos, None), (elbow_pos, None), (wrist_pos, None)], arm='left', tools=[self.tool], collision_objects=[self.human, self.furniture])
    119 
    120         # Open gripper to hold the tool

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/env.py in init_robot_pose(self, target_ee_pos, target_ee_orient, start_pos_orient, target_pos_orients, arm, tools, collision_objects, wheelchair_enabled, right_side, max_iterations)
    294             elif self.robot.wheelchair_mounted and wheelchair_enabled:
    295                 # Use IK to find starting joint angles for mounted robots
--> 296                 self.robot.ik_random_restarts(right=(arm == 'right'), target_pos=target_ee_pos, target_orient=target_ee_orient, max_iterations=1000, max_ik_random_restarts=1000, success_threshold=0.01, step_sim=False, check_env_collisions=False, randomize_limits=True, collision_objects=collision_objects)
    297             else:
    298                 # Use TOC with JLWKI to find an optimal base position for the robot near the person

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/agents/robot.py in ik_random_restarts(self, right, target_pos, target_orient, max_iterations, max_ik_random_restarts, success_threshold, step_sim, check_env_collisions, randomize_limits, collision_objects)
     89         best_ik_distance = 0
     90         for r in range(max_ik_random_restarts):
---> 91             target_joint_angles = self.ik(self.right_end_effector if right else self.left_end_effector, target_pos, target_orient, ik_indices=self.right_arm_ik_indices if right else self.left_arm_ik_indices, max_iterations=max_iterations, half_range=self.half_range, randomize_limits=(randomize_limits and r >= 10))
     92             self.set_joint_angles(self.right_arm_joint_indices if right else self.left_arm_joint_indices, target_joint_angles)
     93             gripper_pos, gripper_orient = self.get_pos_orient(self.right_end_effector if right else self.left_end_effector)

~/Documents/1_PhD/assistive_gym/assistive-gym/assistive_gym/envs/agents/agent.py in ik(self, target_joint, target_pos, target_orient, ik_indices, max_iterations, half_range, use_current_as_rest, randomize_limits)
    253         if target_orient is not None and len(target_orient) < 4:
    254             target_orient = self.get_quaternion(target_orient)
--> 255         ik_lower_limits = self.ik_lower_limits if not randomize_limits else self.np_random.uniform(0, self.ik_lower_limits)
    256         ik_upper_limits = self.ik_upper_limits if not randomize_limits else self.np_random.uniform(0, self.ik_upper_limits)
    257         ik_joint_ranges = ik_upper_limits - ik_lower_limits

_generator.pyx in numpy.random._generator.Generator.uniform()

_common.pyx in numpy.random._common.cont()

_common.pyx in numpy.random._common.cont_broadcast_2()

_common.pyx in numpy.random._common.check_array_constraint()

ValueError: high - low < 0
```
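
Possibly relevant: the last frames show the sample being drawn through numpy's new-style `Generator.uniform`, which raises `ValueError: high - low < 0` whenever any element of `high` is below `low` (the legacy `RandomState.uniform` did not check this). A standalone numpy sketch, just my assumption about what `self.np_random.uniform(0, self.ik_lower_limits)` hits when a lower limit is negative, reproduces the same message:

```python
import numpy as np

# Standalone demonstration, not assistive-gym code.
rng = np.random.default_rng(0)

# Hypothetical joint lower limits; negative values are typical for revolute joints.
ik_lower_limits = np.array([-3.1, -2.0, 1.5])

# OK: every element of `high` is >= low.
print(rng.uniform(0, np.abs(ik_lower_limits)))

# Raises ValueError: high - low < 0, because some elements of `high` are negative.
print(rng.uniform(0, ik_lower_limits))
```

Since the randomized limits only kick in after several failed IK restarts (`randomize_limits=(randomize_limits and r >= 10)` in the traceback), that would also explain why the error only shows up now and then, but I haven't confirmed this is the actual cause.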