Some sensors do not work #27

Open
podgorki opened this issue Aug 1, 2023 · 7 comments
Labels
bug Something isn't working

podgorki commented Aug 1, 2023

I am running BeamNG on a Windows 11 box and connecting to it over Ethernet from my Ubuntu 20.04 box.

Env details:
Python==3.7.13
beamngpy==1.26
beamng-ros-integration==0.1.3
BeamNG.tech==0.28.2.0

I can successfully run and visualize in RViz the default beamng_control example launch. However, when I try to use the LiDAR or cameras, I am unable to launch the scenario with those sensors from ROS.

Digging into the code, it looks like the argument names used in the sensor getters differ from those in the beamngpy API.

Take the Lidar, for example:

class Lidar:
    """
    An interactive, automated LiDAR sensor, which produces regular LiDAR point clouds, ready for further processing.
    This sensor can be attached to a vehicle, or can be fixed to a position in space. The dir and up parameters are used to set the local coordinate system.
    A requested update rate can be provided, to tell the simulator how often to read measurements for this sensor. If a negative value is provided, the sensor
    will not update automatically at all. However, ad-hoc polling requests can be sent at any time, even for non-updating sensors.

    Args:
        name: A unique name for this LiDAR sensor.
        bng: The BeamNGpy instance, with which to communicate to the simulation.
        vehicle: The vehicle to which this sensor should be attached, if any.
        requested_update_time: The time which should pass between sensor reading updates, in seconds. This is just a suggestion to the manager.
        update_priority: The priority which the sensor should ask for new readings. lowest -> 0, highest -> 1.
        pos: (X, Y, Z) coordinate triplet specifying the position of the sensor, in world space.
        dir: (X, Y, Z) Coordinate triplet specifying the forward direction of the sensor.
        up: (X, Y, Z) Coordinate triplet specifying the up direction of the sensor.
        vertical_resolution: The vertical resolution of this LiDAR sensor.
        vertical_angle: The vertical angle of this LiDAR sensor, in degrees.
        rays_per_second: The number of LiDAR rays per second which this sensor should emit.
        frequency: The frequency of this LiDAR sensor.
        horizontal_angle: The horizontal angle of this LiDAR sensor.
        max_distance: The maximum distance which this LiDAR sensor will detect, in metres.
        is_using_shared_memory: A flag which indicates if we should use shared memory to send/receive the sensor readings data.
        is_visualised: A flag which indicates if this LiDAR sensor should appear visualised or not.
        is_annotated: A flag which indicates if this LiDAR sensor should return annotation data instead of distance data.
        is_static: A flag which indicates whether this sensor should be static (fixed position), or attached to a vehicle.
        is_snapping_desired: A flag which indicates whether or not to snap the sensor to the nearest vehicle triangle (not used for static sensors).
        is_force_inside_triangle: A flag which indicates if the sensor should be forced inside the nearest vehicle triangle (not used for static sensors).

compared with the getter's arguments:

        lidar = bng_sensors.Lidar(offset=position,
                                  direction=direction,
                                  vres=vertical_resolution,
                                  vangle=vertical_angle,
                                  max_dist=max_distance,
                                  shmem=False,
                                  **spec)

Many of the args are incorrect; for example, 'offset' does not appear in the API, which expects 'pos' instead.

I updated these and tried to run again, only to find that 'bng' and 'name' are also missing.

I attempted to pass the bng instance (created, I think, in BeamNGBridge at line 38) through to the Lidar and supplied a dummy name. With this, roslaunch gets past the getter, but the game instance on my Windows box then crashes. I have attached the error log here.
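For reference, the renames I applied can be sketched as a small kwarg-remapping shim. The old-to-new mapping below is inferred from the two excerpts above; it is an assumption based on the beamngpy 1.26 docstring, not the bridge's actual code:

```python
# Sketch: remap the bridge's old Lidar keyword names to the parameter
# names in the beamngpy 1.26 Lidar docstring quoted above. The mapping
# is inferred from the two code excerpts in this issue.
OLD_TO_NEW = {
    'offset': 'pos',
    'direction': 'dir',
    'vres': 'vertical_resolution',
    'vangle': 'vertical_angle',
    'max_dist': 'max_distance',
    'shmem': 'is_using_shared_memory',
}

def remap_lidar_kwargs(old_kwargs):
    """Rename old getter kwargs so they match the current Lidar API."""
    return {OLD_TO_NEW.get(k, k): v for k, v in old_kwargs.items()}

print(remap_lidar_kwargs({'offset': (0, 0, 1.7), 'shmem': False}))
# {'pos': (0, 0, 1.7), 'is_using_shared_memory': False}
```

On top of the renames, 'name' and 'bng' still have to be supplied as the first two positional arguments, which is where the dummy name came in.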

843cd52a-4b9a-488b-8cf8-a20cbe72d39a.zip

I have not attempted to patch the cameras or extensively tested other sensors at this stage.

I am curious how I can use these sensors when there seems to be such a mismatch.

@AbdelrahmanElsaidElsawy
Contributor

Hello @podgorki, thank you for choosing BeamNG. I have noticed that the ROS bridge is using an old BeamNGpy API; we will fix this issue ASAP. Currently the working sensors are the Damage sensor, the IMU, and the G-force sensor. Let us know if you face any issues with those sensors.

Thank you for your feedback.

@AbdelrahmanElsaidElsawy
Contributor

Hello @podgorki, I found that the sensor attachment method was changed. If you want to test the Lidar, camera, or ultrasonic sensor, you may use the initial release of the ROS bridge with BeamNG.tech version 0.24 and BeamNGpy version 1.22; I tested them and they worked fine. Please let me know if anything doesn't work.

Thank you for your feedback.

@podgorki
Author

Hi @AbdelrahmanElsaidElsawy thanks for looking into this. I will have a look at those versions.

Aside from downgrading, I have forked this repo and got the cameras, segmentation, instance segmentation, depth, bounding boxes, and LiDAR working. I have made some changes for the ultrasonic sensor and intend to try to get the other automation sensors working too. In addition, I have added the vehicle position as a tf frame and am in the process of adding tf frames for the added sensors. With the tf frames we can see all the sensors in their correct spots and do coordinate frame transformations with ease. Having a lidar-to-vehicle-to-map frame chain let me adjust the LiDAR point cloud values to be egocentric rather than map-centric, which is what I would expect for a vehicle sensor.

The fork is here: https://github.com/podgorki/beamng-ros-integration. If it is of use, perhaps I could contribute it as a pull request?

@AbdelrahmanElsaidElsawy
Contributor

Hi @podgorki, thank you for the contribution to this repository. I'll test your changes; if everything goes well, you may open a pull request and I'll approve it.

Have a nice day

@AbdelrahmanElsaidElsawy
Contributor

AbdelrahmanElsaidElsawy commented Aug 14, 2023

Hi @podgorki, I've tested the camera and LiDAR, and they are working very well; please open a pull request. I'll close this ticket after all the sensors are fixed.

@AbdelrahmanElsaidElsawy AbdelrahmanElsaidElsawy added the bug Something isn't working label Aug 15, 2023
@podgorki
Author

Hi @AbdelrahmanElsaidElsawy, I am just finalizing the tf frames added to the sensors and will start the pull request then. One thing I am hoping you can clarify is the units of beamngpy's dir parameter:

dir: (X, Y, Z) Coordinate triplet specifying the forward direction of the sensor.

Are the X, Y, Z values rotation angles in degrees, radians, or something else? I am using the dir parameter to compute the RPY/quaternions for the static tf frames.

@AbdelrahmanElsaidElsawy
Contributor

Hi @podgorki, it should be a direction vector in map space.
