Real world application of pylot/perception/point_cloud.py #253

Open
henriklatsch opened this issue Apr 25, 2022 · 2 comments


@henriklatsch

I've been trying to use point_cloud.py and utils.py to locate a point from an RGB image in a point cloud, using global coordinates. I have described the issue further here:
https://stackoverflow.com/questions/71999715/finding-a-point-in-3d-point-cloud-given-a-pixel-in-2d-image

Am I wrong in assuming the code can be used for this real-world application, or are the methods in fact buggy? Namely get_pixel_location() and/or its helper methods. I have tried a number of different inputs, but the output from get_pixel_location() is usually just the transposed pixel matrix plus z = inf.

Should the points around the lidar be oriented to assume the lidar is at the origin (x = 0, y = 0, z = 0)?

I am not getting any sensible output from calls to points._to_camera_coordinates() either; the result is a large array of values like (-inf, -200.9, inf), with a little variation in the middle value.

Hopefully I am just using the methods in the wrong manner and they do in fact work. I would appreciate any feedback. Thanks.

@pschafhalter
Member

Hi, could you try the following?

  1. Double-check the values of the point cloud data. LIDAR readings that are out of range might show up as inf (e.g. when the laser hits the sky as opposed to a building). See the quick numpy check after this list.
  2. The algorithm might be filtering out points which are fine to filter out in our AV setting, but might cause errors for other types of messages. A way to sanity-check this is to set the location and the rotation to (0, 0, 0) for both the LIDAR and the camera setups.
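
For item 1, a quick check along these lines shows how many readings are out of range. This is only a sketch: it assumes the raw cloud is available as an N x 3 numpy array (e.g. via PointCloud.points), which may not match your exact data layout.

import numpy as np

def summarize_cloud(points):
    # points: N x 3 array of (x, y, z) readings in the sensor frame (assumed layout).
    points = np.asarray(points, dtype=float)
    finite = np.isfinite(points).all(axis=1)
    print(f'{finite.sum()} of {len(points)} points are finite')
    if finite.any():
        ranges = np.linalg.norm(points[finite], axis=1)
        print(f'range of finite points: {ranges.min():.1f} m to {ranges.max():.1f} m')

# e.g. summarize_cloud(point_cloud.points)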

We have a test here which might be a useful point of reference:

def test_point_cloud_get_pixel_location(lidar_points, pixel, expected):
    camera_setup = CameraSetup(
        'test_setup',
        'sensor.camera.depth',
        801,
        601,  # width, height
        Transform(location=Location(0, 0, 0), rotation=Rotation(0, 0, 0)),
        fov=90)
    lidar_setup = LidarSetup('lidar', 'sensor.lidar.ray_cast',
                             Transform(Location(), Rotation()))
    point_cloud = PointCloud(lidar_points, lidar_setup)
    location = point_cloud.get_pixel_location(pixel, camera_setup)
    assert np.isclose(location.x,
                      expected.x), 'Returned x value is not the same as expected'
    assert np.isclose(location.y,
                      expected.y), 'Returned y value is not the same as expected'
    assert np.isclose(location.z,
                      expected.z), 'Returned z value is not the same as expected'

@henriklatsch
Author

Hey, thanks for the quick response!

I've produced some semi-usable results now, with the following approach:

I set the lidar location to a real-world location within the point cloud, and then set the point cloud origin at the LidarSetup location (so the sensor is at x = 0, y = 0, z = 0). The CameraSetup is at location x = 0, y = 0, z = 0, and the rotation is all zeros for both the CameraSetup and the LidarSetup.

In get_pixel_location(), I bypass the fwd_points variable and pass self.points to get_closest_point_in_point_cloud(). The fwd_points result seems to be unaffected by rotating either the LidarSetup's or the CameraSetup's yaw values; I've tried inputting 0, 90, and 180 to each separately, and the output fwd_points are still the same. Correct me if I'm wrong, but fwd_points should approximately return the other half of self.points when yaw = +-180 degrees, right?
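
One way to check this independently of the setups is to rotate the raw points yourself and see whether the forward half of the cloud flips. This is plain numpy under the assumption that the cloud is an N x 3 array with x pointing forward; it is not taken from Pylot's internals.

import numpy as np

def rotate_yaw(points, yaw_deg):
    # Rotate an N x 3 cloud about the z axis by yaw_deg degrees.
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    return np.asarray(points) @ rot.T

points = np.array([[ 5.0,  1.0, 0.0],
                   [-5.0,  2.0, 0.0],
                   [ 3.0, -1.0, 0.5],
                   [-2.0, -2.0, 0.3]])

# With x pointing forward, a 180 degree yaw should swap which points lie in front.
print('in front before:', np.where(points[:, 0] > 0)[0])
print('in front after 180 deg yaw:', np.where(rotate_yaw(points, 180)[:, 0] > 0)[0])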

Since lidar_type is set to 'velodyne', the output from get_pixel_location() is in camera coordinates, or rather it is essentially just the x & y from p3d, and z is the distance to the found location in camera coordinate space.

My workaround for getting an actual world-location output is to return the 'closest_index' from get_closest_point_in_point_cloud() and look up that point in PointCloud.global_points. It usually returns a point within a few meters of what I'm looking for.
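
In sketch form, with closest_index standing in for the value returned by the modified get_closest_point_in_point_cloud() (a local change, not part of the released code), the lookup is just:

# closest_index: index of the nearest cloud point, returned by the modified helper.
# global_points is assumed to stay aligned one-to-one with the sensor-frame points.
world_location = point_cloud.global_points[closest_index]
print('estimated world location:', world_location)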

The biggest hurdle now is the seeming inability to "turn" on the yaw axis within the given location: I get the same output point with both yaw = 0 and yaw = 180. In my specific use case I'm looking for road signs, and because of their high reflectivity I can filter the entire point cloud to almost only contain road signs. However, I still cannot be certain of targeting the right sign if more than one sign is present in a small area.
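
For the reflectivity filtering mentioned above, something along these lines is plausible, assuming the raw readings carry an intensity channel as a fourth column (an assumption about the sensor output, not something taken from Pylot):

import numpy as np

def keep_high_intensity(raw, threshold=0.9):
    # raw: N x 4 array of (x, y, z, intensity); the threshold needs tuning per sensor.
    raw = np.asarray(raw, dtype=float)
    return raw[raw[:, 3] >= threshold, :3]

raw_readings = np.array([[10.0,  0.0, 1.0, 0.95],   # likely a retroreflective sign
                         [12.0,  2.0, 0.5, 0.20],   # road surface
                         [ 8.0, -1.0, 1.5, 0.98]])  # another sign return
print(keep_high_intensity(raw_readings))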
