
Re-implement GpuLaser with cube map to lift vFOV limit #3205

Open
wants to merge 3 commits into gazebo11

Conversation

Roboterbastler
Contributor

Motivation

The current implementation of the GpuRaySensor limits the vertical FOV to 90 degrees. This makes it impossible to simulate some types of lidar sensors with it (e.g. dome lidars).

This PR re-implements the GpuRaySensor with a different internal method to lift this limitation. The vertical FOV can now be up to 180 degrees, so that a full sphere can be covered.

How to test this change

It should now be possible to use the GpuRaySensor with vertical FOVs of up to 180 degrees. You can use the provided example world file to test a full-sphere (hFOV=360°, vFOV=180°) lidar sensor:
full_sphere_gpu_lidar.world.txt (remove .txt extension)

Apart from that, also test other common configurations, especially edge cases (e.g. only one vertical ray).

Internal method

Previously, an internal camera was rotated around the spin axis of the sensor to capture depth images in a first pass; in a second pass those images were warped to match the laser ray geometry. Because of the limited vertical FOV of this camera, the overall vertical FOV of the sensor was limited.

The new method uses the internal camera to capture depth images in up to six orientations to build a depth cube map. These up to six depth images are then used for the laser readings. Compared to the previous method, a few things were simplified: the camera aspect ratio is always 1 (square depth images), the camera orientations are fixed, and only a single rendering pass is required (previously two passes were needed).
Each face of this virtual cube corresponds to a depth image taken by the internal camera. For reference, the following figures show the cube map setup; azimuth corresponds to the horizontal angle of a laser ray, elevation to the vertical angle:

[Figures: cube map setup, side view and top view]
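To illustrate the lookup (this is only a minimal sketch of generic cube mapping, not the code from this PR; the function names and the x-forward/y-left/z-up axis convention are assumptions), a ray's azimuth and elevation are converted to a direction, the dominant axis of that direction selects the cube face, and the remaining components select the texel on that face's depth image:

```cpp
#include <cmath>

enum class CubeFace { PosX, NegX, PosY, NegY, PosZ, NegZ };

struct CubeSample
{
  CubeFace face;   // which depth image to sample
  double u, v;     // texture coordinate on that face, in [0, 1]
};

// Map a laser ray, given by azimuth (horizontal) and elevation (vertical)
// in radians, to the cube face it passes through and the location on it.
CubeSample RayToCubeSample(double azimuth, double elevation)
{
  // Unit direction of the ray.
  const double x = std::cos(elevation) * std::cos(azimuth);
  const double y = std::cos(elevation) * std::sin(azimuth);
  const double z = std::sin(elevation);

  // The component with the largest magnitude selects the face; the other two
  // components, divided by it, give coordinates in [-1, 1] on that face.
  // (Per-face u/v orientation conventions are glossed over here.)
  const double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
  CubeFace face;
  double u, v;
  if (ax >= ay && ax >= az)
  {
    face = x > 0.0 ? CubeFace::PosX : CubeFace::NegX;
    u = y / ax;  v = z / ax;
  }
  else if (ay >= az)
  {
    face = y > 0.0 ? CubeFace::PosY : CubeFace::NegY;
    u = x / ay;  v = z / ay;
  }
  else
  {
    face = z > 0.0 ? CubeFace::PosZ : CubeFace::NegZ;
    u = x / az;  v = y / az;
  }
  // Remap [-1, 1] to [0, 1]. The depth stored at this texel then has to be
  // converted to range along the ray (e.g. by dividing by the cosine of the
  // angle between the ray and the face normal).
  return {face, 0.5 * (u + 1.0), 0.5 * (v + 1.0)};
}
```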

Only the required faces of the cube are rendered to avoid unnecessary computation.
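Along the same lines, here is a hedged sketch of how the required faces could be collected by simply mapping every ray of the configured scan pattern to its face (reusing the RayToCubeSample helper from the sketch above; the PR may well determine this analytically from the FOV limits instead):

```cpp
#include <set>

// Collect the cube faces needed for a given scan pattern by mapping every
// ray direction to its face; faces that no ray hits are never rendered.
std::set<CubeFace> RequiredFaces(double azMin, double azMax,
                                 double elMin, double elMax,
                                 unsigned int azSamples,
                                 unsigned int elSamples)
{
  std::set<CubeFace> faces;
  for (unsigned int i = 0; i < azSamples; ++i)
  {
    const double az = azSamples > 1
        ? azMin + (azMax - azMin) * i / (azSamples - 1) : azMin;
    for (unsigned int j = 0; j < elSamples; ++j)
    {
      const double el = elSamples > 1
          ? elMin + (elMax - elMin) * j / (elSamples - 1) : elMin;
      faces.insert(RayToCubeSample(az, el).face);
    }
  }
  return faces;
}
```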

Limitations

As before, this method suffers from resolution issues at very shallow angles of incidence, where you may see "steps" in the scan. This issue is inherent to using a rasterized depth image as the basis for the range measurements. To mitigate it, the resolution of the internally used depth camera can be increased, although that comes with a longer processing time. I introduced a minimum texture size of 1024x1024 to avoid a strong effect when the number of samples is low; I found this value to be a good compromise. A minimum texture size already existed before and was set to 2048, but the previous implementation used fewer cameras.
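As an illustration only (the parameter name is made up and the actual sizing logic in the PR may differ), the minimum could be enforced roughly like this:

```cpp
#include <algorithm>

// Enforce the 1024x1024 minimum for the per-face depth texture while still
// letting it grow when the scan requests more samples per face.
unsigned int DepthTextureSize(unsigned int samplesPerFace)
{
  const unsigned int kMinTextureSize = 1024u;
  return std::max(kMinTextureSize, samplesPerFace);
}
```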

Compare these two images of the VLP-16 sensor (roslaunch velodyne_description example.launch gpu:=true) to see that the results are "similarly good/bad". Note that the angle of incidence on the ground plane is extremely shallow here, so the effect is clearly visible; in the average case it should look much better:

[Images: VLP-16 scan with the current implementation vs. the new cube map implementation]

Code

The changes mainly take place in the GpuRaySensor and GpuLaser classes. I left the public interface of these classes unchanged, but I'm unsure whether that is sufficient to meet the API compatibility criterion. I'm also not really sure how to test ABI compatibility; here I could use some help from the reviewers.

A few unit tests were added, and the existing unit and integration tests seem to pass.

@Roboterbastler
Contributor Author

Roboterbastler commented Apr 11, 2022

I managed to check the compatibility of libgazebo_sensors and libgazebo_rendering with the abi-compliance-checker.
libgazebo_sensors seems fine; for libgazebo_rendering I added back the GpuLaser::cameraCount member.

Now there are only a few issues remaining for members that I removed from GpuLaserPrivate: should I add them back even though they are unused, or is it okay to break the API of GpuLaserPrivate because it is only meant to be used internally?

  /// \param[in] _cameraCount The number of cameras required to generate
  /// the rays
- public: void SetCameraCount(const unsigned int _cameraCount);
+ public: void SetCameraCount(const unsigned int _cameraCount) GAZEBO_DEPRECATED(11.10);
Contributor Author


What version number should I put here? The current one or the next one?

@Roboterbastler
Contributor Author

Anybody available to look at this? :) @scpeters

Looking at the failing checks, does anything look to you as if it was introduced by this PR?

@ernestmc

This is an interesting contribution to extend the vertical FOV of the GPU lidar simulation. Can someone from the Gazebo team take a look? @chapulina @nkoenig

@scpeters
Member

> Anybody available to look at this? :) @scpeters
>
> Looking at the failing checks, does anything look to you as if it was introduced by this PR?

I don't see any issues with CI, but unfortunately I am not familiar enough with the GpuLaser code to review these changes.

@peci1
Contributor

peci1 commented Aug 22, 2023

Thanks for the PR, it looks useful.

I'm also not qualified to assess the GPU parts of the PR. After a quick skim through it, there are, however, some issues I can point out without actually understanding the rendering changes.

First, do not remove or change lines of code that do not need to be changed. There are a lot of substitutions of virtual with override, removals of forward declarations of namespace members, etc. Do not touch these. The PR should be the minimal changeset that achieves what you want.

Changes to structs ending with Private are okay. There's no API/ABI guarantee about them.

Also, I'm not sure whether it is okay to deprecate a function in an already released Gazebo version. And since there will be no other Gazebo classic version, putting a future version number doesn't make sense.

NB: I'm not with Open Robotics and I'm not a maintainer here. I'm just a passerby who likes the proposed changes and would like to help at least a bit to get them merged.

@Roboterbastler
Contributor Author

Thank you for the feedback, peci1.

Yes, I think I could revert some of the changes if that helps get this merged. If I find the time, I will go over the changes again.

Regarding the deprecation of the function: maybe I could also leave it as is, but the fact that it no longer has any effect should at least be documented.

@peci1
Contributor

peci1 commented Aug 29, 2023

Hmm, going through the code once again, I'm more and more convinced it balances on the very edge of needing to break API or ABI compatibility. The contribution guidelines require that PRs do not break API or ABI (the part about PRing against the master branch no longer applies, as there will be no other major release). To get an idea of which changes can break ABI, see this REP (it is withdrawn, but the concepts it describes are more or less still valid). Technically, I think this PR could be changed so that ABI is kept, but I'm a bit skeptical about API. You change and remove some private functions (although they are non-virtual, so ABI should be compatible). Whether private functions are part of the API is a difficult question, and I'm not sure how the Gazebo maintainers see it.

To be on the safe side, I'd suggest a different approach. Let's not touch rendering::GpuLaser and leave it as a renderer with a limited vertical FOV, and instead create a brand new rendering::GpuCubeMapLaser (or a similar name) that can handle larger FOVs. In sensors::GpuRaySensor, you could then dynamically choose between these two implementations based on the required vertical FOV. This would have the benefit of not changing the behavior of existing simulations while adding possibilities that were impossible before.
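A hypothetical sketch of that selection (GpuCubeMapLaser does not exist, and all names here are illustrative; sensors::GpuRaySensor would then instantiate the corresponding rendering class):

```cpp
#include <cmath>

// The classic GpuLaser implementation is limited to a 90 degree vertical FOV;
// anything wider would be handled by the proposed cube-map based renderer.
enum class LaserBackend { ClassicGpuLaser, CubeMapGpuLaser };

LaserBackend ChooseLaserBackend(double verticalFovRad)
{
  return verticalFovRad <= M_PI / 2.0 ? LaserBackend::ClassicGpuLaser
                                      : LaserBackend::CubeMapGpuLaser;
}
```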

I see a single problem with this approach: the public sensors::GpuRaySensor::LaserCamera() function, which is supposed to return an instance of the old GpuLaser class. If the sensor used the cube-map implementation instead, it would have nothing to return. Of course, it returns a pointer, but I'm not sure there isn't code that relies on the instance being non-null whenever the sensor exists (yes, calling code should check it, but you know how easy that is to forget). My view is that returning null is okay. Again, old code with a <90° FOV would work the same; only cases with a higher FOV would be affected, and such cases were not supported before. It may turn out, for example, that a few GUI plugins segfault when given a >90° FOV lidar, but I think that's acceptable and should be easy to find by just running Gazebo and trying as many visualizations as possible.
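For callers, the defensive check would look roughly like this (a sketch assuming the usual Gazebo classic headers and typedefs; the function name is made up):

```cpp
#include <gazebo/common/Console.hh>
#include <gazebo/sensors/GpuRaySensor.hh>

// Do not assume LaserCamera() returns a non-null pointer: with the suggested
// approach, a sensor using the cube-map implementation would have no
// rendering::GpuLaser instance to return.
void VisualizeLaser(const gazebo::sensors::GpuRaySensorPtr &_sensor)
{
  gazebo::rendering::GpuLaserPtr cam = _sensor->LaserCamera();
  if (!cam)
  {
    gzwarn << "GpuRaySensor has no GpuLaser camera; skipping visualization.\n";
    return;
  }
  // ... it is now safe to use cam here ...
}
```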

What do you think about the suggested approach?

@Roboterbastler
Contributor Author

Thanks for your thoughts and suggestions, and I appreciate your interest in this! After re-evaluating the situation, however, I think it does not really make sense at the moment to put more work into this branch, since I expect the chances of it being merged are quite small in the end. The problem is that so far no maintainer seems to have the capacity to look into this change, which is not just a small bugfix.

Maybe the chances would be higher to merge this into the new Gazebo version (if applicable, I haven't looked into its code), where it could also be integrated more properly without workarounds. But personally I'm staying with Gazebo classic at the moment, so I don't think I'm going in that direction very soon.

Unless we find somebody who could review and merge it, I would just keep this PR open for interested people to take the code and build their own Gazebo.
