Recording on multiple cameras at once #132

Open
collinabidi opened this issue Nov 20, 2019 · 6 comments

@collinabidi

I'm having trouble recording data when running the baseline_racer.py simulation. I am trying to collect RGB-D data, so I added a second camera to the drone to collect depth frames that are synchronized with the RGB frames.

When I run the basic baseline_racer.py with modified settings for two cameras, it runs for about 5 seconds and then crashes. I tried creating multiple clients and individual threads for polling RGB and depth separately, but it didn't help.

Issue #7 states that there is a bottleneck with Unreal, but I was wondering if there is a way around it that doesn't break the simulation.

@madratman
Contributor

Can you share your settings.json and the simGetImages call snippet so we can repro this? It should not crash with RGB and depth by itself.

@collinabidi
Author

collinabidi commented Nov 20, 2019

Before generating settings.json, I added the following code to the write_airsim_neurips_baseline_settings_file() function in utils.py:

instance.add_camera(vehicle_name = "drone_1", camera_name = 'depth_cam', relative_pose=Pose(Position(0.25, 0.0, 0.0), Rotation()), image_type = 1, image_width = 320, image_height = 240, fov_horizontal_degrees = 90)

Settings.json:

{
  "ClockSpeed": 1,
  "SeeDocsAt": "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md",
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "Vehicles": {
    "drone_1": {
      "Cameras": {
        "depth_cam": {
          "CaptureSettings": [
            {
              "FOV_Degrees": 90,
              "Height": 240,
              "ImageType": 1,
              "Width": 320
            }
          ],
          "Pitch": 0.0,
          "Roll": 0.0,
          "X": 0.25,
          "Y": 0.0,
          "Yaw": 0.0,
          "Z": 0.0
        }
      },
      "Pitch": 0.0,
      "Roll": 0.0,
      "VehicleType": "SimpleFlight",
      "X": 0.0,
      "Y": 0.0,
      "Yaw": 0.0,
      "Z": 0.0
    },
    "drone_2": {
      "Pitch": 0.0,
      "Roll": 0.0,
      "VehicleType": "SimpleFlight",
      "X": 0.0,
      "Y": 0.0,
      "Yaw": 0.0,
      "Z": 0.0
    }
  }
}

Here are the callback functions (one for RGB, one for depth) that poll their respective AirSim clients:

    def image_callback(self):
        # get uncompressed fpv cam image
        request = [airsim.ImageRequest('fpv_cam', 0, False, False)]
        response = self.airsim_client_images.simGetImages(request)
        img_1d = np.fromstring(response[0].image_data_uint8, dtype=np.uint8) 
        img_rgb = img_1d.reshape(response[0].height, response[0].width, 3)

        if self.viz_image_cv2:
            cv2.imshow("img_rgb", img_rgb)        
            cv2.waitKey(1)

        # save if specified
        if self.record_cam_data:
            # write image
            timestamp = round(time.time(),6)
            filename = 'color_ims/' + "{0:.6f}".format(timestamp) + '.png'
            cv2.imwrite(self.output_dir + '/' + filename, img_rgb)
            # save timestamp and filename
            self.rgb_data_associations[timestamp] = filename
    def depth_image_callback(self):
        # get uncompressed depth cam image
        request = [airsim.ImageRequest('depth_cam', 3, False, False)]
        response = self.airsim_client_depth_images.simGetImages(request)
        img_1d = np.fromstring(response[0].image_data_uint8, dtype=np.uint8) 
        img_depth = img_1d.reshape(response[0].height, response[0].width, 3)

        if self.viz_image_cv2:
            cv2.imshow("img_depth", img_depth)        
            cv2.waitKey(1)

        # save if specified
        if self.record_cam_data:
            timestamp = round(time.time(), 6)
            filename = 'depth_ims/' + "{0:.6f}".format(timestamp) + '.png'
            cv2.imwrite(self.output_dir + '/' + filename, img_depth)
            # save timestamp and filename
            self.depth_data_associations[str(timestamp)] = filename

I tried polling a shared client, but that crashed immediately.

@madratman
Contributor

Ok, you need a "capture", not a new "camera".

Use this:

{
  "ClockSpeed": 1,
  "SeeDocsAt": "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md",
  "SettingsVersion": 1.2,
  "SimMode": "Multirotor",
  "Vehicles": {
    "drone_1": {
      "Cameras": {
        "fpv_cam": {
          "CaptureSettings": [
            {
              "FOV_Degrees": 90,
              "Height": 240,
              "ImageType": 0,
              "Width": 320
            },
            {
              "FOV_Degrees": 90,
              "Height": 240,
              "ImageType": 1,
              "Width": 320
            }
          ],
          "Pitch": 0.0,
          "Roll": 0.0,
          "X": 0.25,
          "Y": 0.0,
          "Yaw": 0.0,
          "Z": 0.0
        }
      },
      "Pitch": 0.0,
      "Roll": 0.0,
      "VehicleType": "SimpleFlight",
      "X": 0.0,
      "Y": 0.0,
      "Yaw": 0.0,
      "Z": 0.0
    },
    "drone_2": {
      "Pitch": 0.0,
      "Roll": 0.0,
      "VehicleType": "SimpleFlight",
      "X": 0.0,
      "Y": 0.0,
      "Yaw": 0.0,
      "Z": 0.0
    }
  }
}

In general, here's another example: https://github.com/microsoft/AirSim/blob/master/ros/src/airsim_tutorial_pkgs/settings/front_stereo_and_center_mono.json

Your Python code should be:

responses = self.airsim_client.simGetImages([
     airsim.ImageRequest("fpv_cam", airsim.ImageType.Scene, False, False),
     airsim.ImageRequest("fpv_cam", airsim.ImageType.DepthPlanner, True)
])
if responses[0].height > 0:
    img_rgb_1d = np.fromstring(responses[0].image_data_uint8, dtype=np.uint8)
    img_rgb = img_rgb_1d.reshape(responses[0].height, responses[0].width, 3)
if responses[1].height > 0:
    depth_img = airsim.list_to_2d_float_array(responses[1].image_data_float, responses[1].width, responses[1].height)

request = [airsim.ImageRequest('depth_cam', 3, False, False)] is wrong: you're requesting type 3, but you have type 1 (DepthPlanner) in the JSON. It was also crashing because there was no type 0 (Scene) image. That settings generator doesn't support captures, but anyway, you can use the file above directly.
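
For reference, here is a rough sketch of how the integer "ImageType" values in settings.json map to airsim.ImageType, and one way to save the planar depth floats without losing precision. The enum values beyond Scene and DepthPlanner, and the metres-to-millimetres scaling, are assumptions based on the usual AirSim conventions rather than anything verified in this thread:

import numpy as np
import cv2
import airsimneurips as airsim

# Assumed mapping of the "ImageType" integers in settings.json to airsim.ImageType
# (Scene = 0 and DepthPlanner = 1 are confirmed above; the rest follow the usual AirSim enum):
#   0: Scene (RGB), 1: DepthPlanner, 2: DepthPerspective, 3: DepthVis, 5: Segmentation

def save_depth_as_png16(depth_response, path):
    # depth_response is a simGetImages response requested with pixels_as_float=True
    depth_m = airsim.list_to_2d_float_array(
        depth_response.image_data_float, depth_response.width, depth_response.height)
    # convert metres to millimetres and clip into the uint16 range so cv2 writes a 16-bit PNG
    depth_mm = np.clip(depth_m * 1000.0, 0, 65535).astype(np.uint16)
    cv2.imwrite(path, depth_mm)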

@collinabidi
Author

@madratman ah ok, thank you for the clarification.

I cleaned up my code and used your settings.json file, but it's still crashing, and I'm getting a different error after the crash:

  File "baseline_racer.py", line 336, in <module>
    main(args)
  File "baseline_racer.py", line 311, in main
    baseline_racer.fly_through_all_gates_at_once_with_moveOnSpline().join()
  File "/home/collin/Projects/AirSim-NeurIPS2019-Drone-Racing/airsimenv/lib/python3.6/site-packages/msgpackrpc/future.py", line 22, in join
    self._loop.start()
  File "/home/collin/Projects/AirSim-NeurIPS2019-Drone-Racing/airsimenv/lib/python3.6/site-packages/msgpackrpc/loop.py", line 22, in start
    self._ioloop.start()
  File "/home/collin/Projects/AirSim-NeurIPS2019-Drone-Racing/airsimenv/lib/python3.6/site-packages/tornado/ioloop.py", line 755, in start
    raise RuntimeError("IOLoop is already running")
RuntimeError: IOLoop is already running

@madratman
Contributor

Ensure you are using a single AirSim client per thread; see the comment in the code here: https://github.com/microsoft/AirSim-NeurIPS2019-Drone-Racing/blob/master/baselines/baseline_racer.py#L21
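
A minimal sketch of the one-client-per-thread pattern (names like image_loop and control_client are illustrative, not from the baseline code): each thread constructs and polls its own MultirotorClient, while the main thread keeps a separate client for control calls such as moveOnSplineAsync(...).join().

import threading
import time
import airsimneurips as airsim

def image_loop(stop_event, period=0.05):
    # dedicated client owned by this thread; rpclib is not thread safe,
    # so this client is never touched from any other thread
    image_client = airsim.MultirotorClient()
    image_client.confirmConnection()
    while not stop_event.is_set():
        responses = image_client.simGetImages([
            airsim.ImageRequest("fpv_cam", airsim.ImageType.Scene, False, False),
            airsim.ImageRequest("fpv_cam", airsim.ImageType.DepthPlanner, True)])
        # ... process/save responses here ...
        time.sleep(period)

stop_event = threading.Event()
threading.Thread(target=image_loop, args=(stop_event,), daemon=True).start()

# the main thread keeps its own client for control; .join() on its futures
# then only ever blocks inside this client's own event loop
control_client = airsim.MultirotorClient()
control_client.confirmConnection()

With this layout, the only blocking .join() calls happen on the control client in the main thread, which should avoid the nested IOLoop situation.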

@collinabidi
Author

collinabidi commented Nov 23, 2019

I am using that code, so I'm not sure why it would be crashing.

I tried getting just the depth image to see if it was a bottleneck issue with processing two images at once, but the same RuntimeError: IOLoop ... error occurred.

Not sure if this will help, but here's my baseline_racer.py file.

from argparse import ArgumentParser
import airsimneurips as airsim
import cv2
import threading
import time
import utils
import numpy as np
import math
import os

# drone_name should match the name in ~/Documents/AirSim/settings.json
class BaselineRacer(object):
    def __init__(self, drone_name = "drone_1", viz_traj=True, viz_traj_color_rgba=[1.0, 0.0, 0.0, 1.0], viz_image_cv2=True, record_cam_data=False, output_dir='./output/new'):
        self.drone_name = drone_name
        self.gate_poses_ground_truth = None
        self.viz_image_cv2 = viz_image_cv2
        self.viz_traj = viz_traj
        self.record_cam_data = record_cam_data
        self.output_dir = output_dir
        self.viz_traj_color_rgba = viz_traj_color_rgba

        self.airsim_client = airsim.MultirotorClient()
        self.airsim_client.confirmConnection()
        # we need three airsim MultirotorClient objects because the comm lib we use (rpclib) is not thread safe
        # so we poll images (RGB and depth in the same request) in one thread using one airsim MultirotorClient object
        # poll odometry in another thread using a second airsim MultirotorClient object
        # and use the remaining airsim MultirotorClient for querying state and issuing control commands
        self.airsim_client_images = airsim.MultirotorClient()
        self.airsim_client_images.confirmConnection()
        self.airsim_client_odom = airsim.MultirotorClient()
        self.airsim_client_odom.confirmConnection()
        self.level_name = None

        # create dicts to store depth/rgb filenames and timestamps so we can associate them
        # key is timestamp, value is filename
        self.rgb_data_associations = {}
        self.depth_data_associations = {}

        # image/depth/odometry threading
        self.image_callback_thread = threading.Thread(target=self.repeat_timer_image_callback, args=(self.image_callback, 0.05))
        self.odometry_callback_thread = threading.Thread(target=self.repeat_timer_odometry_callback, args=(self.odometry_callback, 0.02))
        self.is_image_thread_active = False
        self.is_odometry_thread_active = False

        self.MAX_NUMBER_OF_GETOBJECTPOSE_TRIALS = 10 # see https://github.com/microsoft/AirSim-NeurIPS2019-Drone-Racing/issues/38

    # loads desired level
    def load_level(self, level_name, sleep_sec = 2.0):
        self.level_name = level_name
        self.airsim_client.simLoadLevel(self.level_name)
        self.airsim_client.confirmConnection() # failsafe
        time.sleep(sleep_sec) # let the environment load completely

    # Starts an instance of a race in your given level, if valid
    def start_race(self, tier=3):
        self.airsim_client.simStartRace(tier)

    # Resets a current race: moves players to start positions, timer and penalties reset
    def reset_race(self):
        self.airsim_client.simResetRace()

    # arms drone, enable APIs, set default traj tracker gains
    def initialize_drone(self):
        self.airsim_client.enableApiControl(vehicle_name=self.drone_name)
        self.airsim_client.arm(vehicle_name=self.drone_name)

        # set default values for trajectory tracker gains 
        traj_tracker_gains = airsim.TrajectoryTrackerGains(kp_cross_track = 5.0, kd_cross_track = 0.0, 
                                                            kp_vel_cross_track = 3.0, kd_vel_cross_track = 0.0, 
                                                            kp_along_track = 0.4, kd_along_track = 0.0, 
                                                            kp_vel_along_track = 0.04, kd_vel_along_track = 0.0, 
                                                            kp_z_track = 2.0, kd_z_track = 0.0, 
                                                            kp_vel_z = 0.4, kd_vel_z = 0.0, 
                                                            kp_yaw = 3.0, kd_yaw = 0.1)

        self.airsim_client.setTrajectoryTrackerGains(traj_tracker_gains, vehicle_name=self.drone_name)
        time.sleep(0.2)

    def takeoffAsync(self):
        self.airsim_client.takeoffAsync().join()

    # like takeoffAsync(), but with moveOnSpline()
    def takeoff_with_moveOnSpline(self, takeoff_height = 1.0):
        start_position = self.airsim_client.simGetVehiclePose(vehicle_name=self.drone_name).position
        takeoff_waypoint = airsim.Vector3r(start_position.x_val, start_position.y_val, start_position.z_val-takeoff_height)

        self.airsim_client.moveOnSplineAsync([takeoff_waypoint], vel_max=15.0, acc_max=5.0, add_position_constraint=True, add_velocity_constraint=False, 
            add_acceleration_constraint=False, viz_traj=self.viz_traj, viz_traj_color_rgba=self.viz_traj_color_rgba, vehicle_name=self.drone_name).join()

    # stores gate ground truth poses as a list of airsim.Pose() objects in self.gate_poses_ground_truth
    def get_ground_truth_gate_poses(self):
        gate_names_sorted_bad = sorted(self.airsim_client.simListSceneObjects("Gate.*"))
        # gate_names_sorted_bad is of the form `GateN_GARBAGE`. for example:
        # ['Gate0', 'Gate10_21', 'Gate11_23', 'Gate1_3', 'Gate2_5', 'Gate3_7', 'Gate4_9', 'Gate5_11', 'Gate6_13', 'Gate7_15', 'Gate8_17', 'Gate9_19']
        # we sort them by their index of occurrence along the race track (N), and ignore the Unreal garbage number after the underscore (GARBAGE)
        gate_indices_bad = [int(gate_name.split('_')[0][4:]) for gate_name in gate_names_sorted_bad]
        gate_indices_correct = sorted(range(len(gate_indices_bad)), key=lambda k: gate_indices_bad[k])
        gate_names_sorted = [gate_names_sorted_bad[gate_idx] for gate_idx in gate_indices_correct]
        self.gate_poses_ground_truth = []
        for gate_name in gate_names_sorted:
            curr_pose = self.airsim_client.simGetObjectPose(gate_name)
            counter = 0
            while (math.isnan(curr_pose.position.x_val) or math.isnan(curr_pose.position.y_val) or math.isnan(curr_pose.position.z_val)) and (counter < self.MAX_NUMBER_OF_GETOBJECTPOSE_TRIALS):
                print(f"DEBUG: {gate_name} position is nan, retrying...")
                counter += 1
                curr_pose = self.airsim_client.simGetObjectPose(gate_name)
            assert not math.isnan(curr_pose.position.x_val), f"ERROR: {gate_name} curr_pose.position.x_val is still {curr_pose.position.x_val} after {counter} trials"
            assert not math.isnan(curr_pose.position.y_val), f"ERROR: {gate_name} curr_pose.position.y_val is still {curr_pose.position.y_val} after {counter} trials"
            assert not math.isnan(curr_pose.position.z_val), f"ERROR: {gate_name} curr_pose.position.z_val is still {curr_pose.position.z_val} after {counter} trials"
            self.gate_poses_ground_truth.append(curr_pose)

    # this is utility function to get a velocity constraint which can be passed to moveOnSplineVelConstraints() 
    # the "scale" parameter scales the gate facing vector accordingly, thereby dictating the speed of the velocity constraint
    def get_gate_facing_vector_from_quaternion(self, airsim_quat, scale = 1.0):
        import numpy as np
        # convert gate quaternion to rotation matrix. 
        # ref: https://en.wikipedia.org/wiki/Rotation_matrix#Quaternion; https://www.lfd.uci.edu/~gohlke/code/transformations.py.html
        q = np.array([airsim_quat.w_val, airsim_quat.x_val, airsim_quat.y_val, airsim_quat.z_val], dtype=np.float64)
        n = np.dot(q, q)
        if n < np.finfo(float).eps:
            return airsim.Vector3r(0.0, 1.0, 0.0)
        q *= np.sqrt(2.0 / n)
        q = np.outer(q, q)
        rotation_matrix = np.array([[1.0-q[2, 2]-q[3, 3],     q[1, 2]-q[3, 0],     q[1, 3]+q[2, 0]],
                                    [    q[1, 2]+q[3, 0], 1.0-q[1, 1]-q[3, 3],     q[2, 3]-q[1, 0]],
                                    [    q[1, 3]-q[2, 0],     q[2, 3]+q[1, 0], 1.0-q[1, 1]-q[2, 2]]])
        gate_facing_vector = rotation_matrix[:,1]
        return airsim.Vector3r(scale * gate_facing_vector[0], scale * gate_facing_vector[1], scale * gate_facing_vector[2])

    def fly_through_all_gates_one_by_one_with_moveOnSpline(self):
        if self.level_name == "Building99_Hard":
            vel_max = 5.0
            acc_max = 2.0

        if self.level_name in ["Soccer_Field_Medium", "Soccer_Field_Easy", "ZhangJiaJie_Medium"] :
            vel_max = 10.0
            acc_max = 5.0

        return self.airsim_client.moveOnSplineAsync([gate_pose.position], vel_max=vel_max, acc_max=acc_max, 
            add_position_constraint=True, add_velocity_constraint=False, add_acceleration_constraint=False, viz_traj=self.viz_traj, viz_traj_color_rgba=self.viz_traj_color_rgba, vehicle_name=self.drone_name)

    def fly_through_all_gates_at_once_with_moveOnSpline(self):
        if self.level_name in ["Soccer_Field_Medium", "Soccer_Field_Easy", "ZhangJiaJie_Medium"] :
            vel_max = 30.0
            acc_max = 15.0

        if self.level_name == "Building99_Hard":
            vel_max = 4.0
            acc_max = 1.0

        return self.airsim_client.moveOnSplineAsync([gate_pose.position for gate_pose in self.gate_poses_ground_truth], vel_max=vel_max, acc_max=acc_max, 
            add_position_constraint=True, add_velocity_constraint=False, add_acceleration_constraint=False, viz_traj=self.viz_traj, viz_traj_color_rgba=self.viz_traj_color_rgba, vehicle_name=self.drone_name)

    def fly_through_all_gates_one_by_one_with_moveOnSplineVelConstraints(self):
        add_velocity_constraint = True
        add_acceleration_constraint = False

        if self.level_name in ["Soccer_Field_Medium", "Soccer_Field_Easy"] :
            vel_max = 15.0
            acc_max = 3.0
            speed_through_gate = 2.5

        if self.level_name == "ZhangJiaJie_Medium":
            vel_max = 10.0
            acc_max = 3.0
            speed_through_gate = 1.0

        if self.level_name == "Building99_Hard":
            vel_max = 2.0
            acc_max = 0.5
            speed_through_gate = 0.5
            add_velocity_constraint = False

        # scale param scales the gate facing vector by desired speed. 
        return self.airsim_client.moveOnSplineVelConstraintsAsync([gate_pose.position], 
                                                [self.get_gate_facing_vector_from_quaternion(gate_pose.orientation, scale = speed_through_gate)], 
                                                vel_max=vel_max, acc_max=acc_max, 
                                                add_position_constraint=True, add_velocity_constraint=add_velocity_constraint, add_acceleration_constraint=add_acceleration_constraint, 
                                                viz_traj=self.viz_traj, viz_traj_color_rgba=self.viz_traj_color_rgba, vehicle_name=self.drone_name)

    def fly_through_all_gates_at_once_with_moveOnSplineVelConstraints(self):
        if self.level_name in ["Soccer_Field_Easy", "Soccer_Field_Medium", "ZhangJiaJie_Medium"]:
            vel_max = 15.0
            acc_max = 7.5
            speed_through_gate = 2.5

        if self.level_name == "Building99_Hard":
            vel_max = 5.0
            acc_max = 2.0
            speed_through_gate = 1.0

        return self.airsim_client.moveOnSplineVelConstraintsAsync([gate_pose.position for gate_pose in self.gate_poses_ground_truth], 
                [self.get_gate_facing_vector_from_quaternion(gate_pose.orientation, scale = speed_through_gate) for gate_pose in self.gate_poses_ground_truth], 
                vel_max=vel_max, acc_max=acc_max, 
                add_position_constraint=True, add_velocity_constraint=True, add_acceleration_constraint=False, 
                viz_traj=self.viz_traj, viz_traj_color_rgba=self.viz_traj_color_rgba, vehicle_name=self.drone_name)

    def image_callback(self):
        # get uncompressed fpv cam rgb and depth image
        responses = self.airsim_client.simGetImages([
            airsim.ImageRequest("fpv_cam", airsim.ImageType.Scene, False, False),
            airsim.ImageRequest("fpv_cam", airsim.ImageType.DepthPlanner, True)])

        if responses[0].height > 0:
            img_rgb_1d = np.fromstring(responses[0].image_data_uint8, dtype=np.uint8) 
            img_rgb = img_rgb_1d.reshape(responses[0].height, responses[0].width, 3)

        if responses[1].height > 0:
            depth_img = airsim.list_to_2d_float_array(responses[1].image_data_float, responses[1].width, responses[1].height)

        #if self.viz_image_cv2:
        #    cv2.imshow("img_rgb", img_rgb)        
        #    cv2.waitKey(1)

        # save if specified
        if self.record_cam_data:
            # write image
            timestamp = round(time.time(),6)
            filename = 'color_ims/' + "{0:.6f}".format(timestamp) + '.png'
            cv2.imwrite(self.output_dir + '/' + filename, img_rgb)
            self.rgb_data_associations[timestamp] = filename

            # depth image save
            filename = 'depth_ims/' + "{0:.6f}".format(timestamp) + '.png'
            cv2.imwrite(self.output_dir + '/' + filename, depth_img)
            self.depth_data_associations[str(timestamp)] = filename

    def odometry_callback(self):
        # get current drone state
        drone_state = self.airsim_client_odom.getMultirotorState()
        # in world frame:
        position = drone_state.kinematics_estimated.position 
        orientation = drone_state.kinematics_estimated.orientation
        linear_velocity = drone_state.kinematics_estimated.linear_velocity
        angular_velocity = drone_state.kinematics_estimated.angular_velocity

    # call task() method every "period" seconds. 
    def repeat_timer_image_callback(self, task, period):
        while self.is_image_thread_active:
            task()
            time.sleep(period)


    def repeat_timer_odometry_callback(self, task, period):
        while self.is_odometry_thread_active:
            task()
            time.sleep(period)

    def start_image_callback_thread(self):
        if not os.path.exists(self.output_dir):
            os.mkdir(self.output_dir)

        if not os.path.exists(self.output_dir + '/color_ims'):
            os.mkdir(self.output_dir + '/color_ims')

        with open(self.output_dir + "/rgb.txt", "w") as file:
            print("created rgb.txt file for associating timestamp with image")
            file.write("#timestamp filename")

        if not os.path.exists(self.output_dir + '/depth_ims'):
            os.mkdir(self.output_dir + '/depth_ims')

        with open(self.output_dir + "/depth.txt", "w") as file:
            print("created depth.txt file for associating timestamp with image")
            file.write("#timestamp filename")

        if not self.is_image_thread_active:
            self.is_image_thread_active = True
            self.image_callback_thread.start()
            print("Started image callback thread")

    def stop_image_callback_thread(self):
        if self.is_image_thread_active:
            # write out association file
            with open("rgb.txt", "a+") as file:
                file.write(str(self.rgb_data_associations))

            with open("depth.txt", "a+") as file:
                file.write(str(self.depth_data_associations))

            self.is_image_thread_active = False
            self.image_callback_thread.join()
            print("Stopped image callback thread.")

    def start_odometry_callback_thread(self):
        if not self.is_odometry_thread_active:
            self.is_odometry_thread_active = True
            self.odometry_callback_thread.start()
            print("Started odometry callback thread")

    def stop_odometry_callback_thread(self):
        if self.is_odometry_thread_active:
            self.is_odometry_thread_active = False
            self.odometry_callback_thread.join()
            print("Stopped odometry callback thread.")

def main(args):
    # ensure you have generated the neurips planning settings file by running python generate_settings_file.py
    baseline_racer = BaselineRacer(drone_name="drone_1", viz_traj=args.viz_traj, viz_traj_color_rgba=[1.0, 1.0, 0.0, 1.0], viz_image_cv2=args.viz_image_cv2, record_cam_data=args.record_cam_data, output_dir=args.output_dir)
    baseline_racer.load_level(args.level_name)
    baseline_racer.start_race(args.race_tier)
    baseline_racer.initialize_drone()
    baseline_racer.takeoff_with_moveOnSpline()
    baseline_racer.get_ground_truth_gate_poses()
    baseline_racer.start_image_callback_thread()
    baseline_racer.start_odometry_callback_thread()

    if args.planning_baseline_type == "all_gates_at_once" :
        if args.planning_and_control_api == "moveOnSpline":
            baseline_racer.fly_through_all_gates_at_once_with_moveOnSpline().join()
        if args.planning_and_control_api == "moveOnSplineVelConstraints":
            baseline_racer.fly_through_all_gates_at_once_with_moveOnSplineVelConstraints().join()

    if args.planning_baseline_type == "all_gates_one_by_one":
        if args.planning_and_control_api == "moveOnSpline":
            baseline_racer.fly_through_all_gates_one_by_one_with_moveOnSpline().join()
        if args.planning_and_control_api == "moveOnSplineVelConstraints":
            baseline_racer.fly_through_all_gates_one_by_one_with_moveOnSplineVelConstraints().join()

    baseline_racer.stop_image_callback_thread()
    baseline_racer.stop_odometry_callback_thread()
    baseline_racer.reset_race()

if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument('--level_name', type=str, choices=["Soccer_Field_Easy", "Soccer_Field_Medium", "ZhangJiaJie_Medium", "Building99_Hard"], default="ZhangJiaJie_Medium")
    parser.add_argument('--planning_baseline_type', type=str, choices=["all_gates_at_once","all_gates_one_by_one"], default="all_gates_at_once")
    parser.add_argument('--planning_and_control_api', type=str, choices=["moveOnSpline", "moveOnSplineVelConstraints"], default="moveOnSpline")
    parser.add_argument('--enable_viz_traj', dest='viz_traj', action='store_true', default=False)
    parser.add_argument('--enable_viz_image_cv2', dest='viz_image_cv2', action='store_true', default=False)
    parser.add_argument('--enable_record_cam_data', dest='record_cam_data', action='store_true', default=False)
    parser.add_argument('--race_tier', type=int, choices=[1,2,3], default=1)
    parser.add_argument('--output_dir', type=str, dest = 'output_dir', default='./output/new')
    args = parser.parse_args()
    main(args)
