Experimental support for sensors and depth camera example #249

Draft · wants to merge 16 commits into devel from feature/depth_camera

Conversation

diegoferigo
Member

This PR provides the initial support for rendering-based sensors (#199). The implementation was not straightforward, and it required quite a lot of time to understand the multithreading design of the upstream Sensors system and rendering stack. In particular, the simulator does not seem to guarantee that an image is generated right after a single run, so I had to enhance the synchronization mechanism to block execution when needed.

In more detail, this PR:

  • Vendors the upstream Sensors system.
  • Customizes the vendored Sensors system to be compatible with reproducible execution, i.e. being able to extract sensory data right after a simulator run.
  • Enables loading the Sensors system with World::enableSensors.
  • Adds a new DepthCamera class.
  • Allows extracting a depth camera object from a model with Model::depthCamera (a rough usage sketch follows this list).
  • Adds a new test for the depth camera.
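
To make the new API concrete, here is a rough usage sketch. The enableSensors and depthCamera names come from this PR; everything else (headers, constructor arguments, the image() accessor, the model and sensor names) is an assumption for illustration only.

```cpp
// Hedged sketch only: World::enableSensors and Model::depthCamera are the
// additions of this PR, but the exact signatures, headers, and the image()
// accessor shown here are assumptions.
#include <scenario/gazebo/GazeboSimulator.h>
#include <scenario/gazebo/Model.h>
#include <scenario/gazebo/World.h>

int main()
{
    // 1 ms physics step, real-time factor 1, one step per run (assumed values).
    scenario::gazebo::GazeboSimulator gazebo(0.001, 1.0, 1);
    gazebo.initialize();

    auto world = gazebo.getWorld();
    world->enableSensors();                       // load the vendored Sensors system

    auto model = world->getModel("camera_model"); // hypothetical model name
    auto camera = model->depthCamera("depth");    // hypothetical sensor name

    gazebo.run();                                 // advance the simulation

    // The frame is exposed as a const reference to an internal buffer.
    const auto& image = camera->image();          // assumed accessor name
    (void)image;
    return 0;
}
```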

Caveats and WIPs:

  • In the DepthCamera class we need to know the rate of the server in order to prepare the images. Right now it is hardcoded; we should create a new component for the World entity carrying this information.
  • For efficiency, the image processing runs asynchronously with respect to the caller code. I played with std::promise and std::future to block the caller only when data is actually requested (see the sketch after this list).
  • Caveat: I use an unordered map with the simulation time as key. Using a double as key is a very bad idea, so I convert the time to integer nanoseconds, which should be a decent tradeoff.
  • Comment: I tried to reduce copies of the image as much as possible. The image callback fills a buffer in the DepthCamera class, which is returned to the user as a const reference. The same buffer is moved from promise to promise as time passes. This approach is not bullet-proof and should be tested further.
  • Caveat: multiple image callbacks may be invoked concurrently in different threads. Due to the buffer handling, the callback code is protected with a mutex, but the mutex is not guaranteed to preserve the order of acquisition. We should keep this in mind and look for better solutions.
  • The execution is terribly slow. In the test, compiled in Release, 1 second of simulation with a single falling cube and a depth camera that does not get any image takes roughly 2.5 seconds. It's a start, though :)
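
For clarity, here is a minimal, self-contained sketch of the synchronization pattern described in the bullets above. All names are illustrative (this is not the actual class layout of the PR): the render callback fills a buffer under a mutex and fulfills the promise keyed by the frame's simulation time in nanoseconds, while the caller blocks on the matching future only when it actually requests the data.

```cpp
// Illustrative sketch of the buffer/promise pattern described above; the
// names and layout are hypothetical, not the actual code of this PR.
#include <cstdint>
#include <future>
#include <mutex>
#include <unordered_map>
#include <vector>

class DepthFrameBuffer
{
public:
    // Called from the rendering thread(s) when a new frame is ready.
    void onNewFrame(const double simTimeSeconds, std::vector<float> frame)
    {
        std::lock_guard<std::mutex> lock(mutex);
        buffer = std::move(frame);

        // Using a double as key would be fragile, hence the integer nanoseconds.
        promises[toNanoseconds(simTimeSeconds)].set_value(&buffer);
    }

    // Called from the user thread: blocks only if the frame is not ready yet.
    // Note: the returned reference aliases the internal buffer, which is the
    // "not bullet-proof" part mentioned above.
    const std::vector<float>& imageAt(const double simTimeSeconds)
    {
        std::future<const std::vector<float>*> future;
        {
            std::lock_guard<std::mutex> lock(mutex);
            future = promises[toNanoseconds(simTimeSeconds)].get_future();
        }
        return *future.get();
    }

private:
    static int64_t toNanoseconds(const double seconds)
    {
        return static_cast<int64_t>(seconds * 1e9);
    }

    std::mutex mutex;
    std::vector<float> buffer;
    std::unordered_map<int64_t, std::promise<const std::vector<float>*>> promises;
};
```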

This PR is the first step towards getting a heightmap of the terrain (gazebosim/gz-sim#337). It's not directly related to that issue, being more general with respect to the supported sensors, but it allowed me to study the rendering stack, and it was natural to put in the extra effort and integrate it into gym-ignition.

cc @traversaro early feedback is welcome

@traversaro
Member

traversaro commented Oct 1, 2020

I wanted to open an issue upstream on this but so far I did not have the time. However, the fundamental problem is that the https://github.com/ignitionrobotics/ign-sensors library strongly couples two different functionalities:

  • A: Generation of sensor measurements from lower level abstractions (ign-physics, ign-rendering)
  • B: Publishing of sensor measurements on ignition-transport topics

For gym-ignition, we are actually interested just in A, and ideally we would like to disable B unless strictly necessary. This hard coupling is probably a heritage of the sensors codebase of Classic Gazebo, and it is particularly problematic for sensors such as IMUs and six-axis FT sensors that in real life can reach update rates of ~kHz, and for which the non-determinism introduced by ignition-transport communication can drastically affect controller performance.

@diegoferigo
Member Author

[...] the fundamental problem is that the https://github.com/ignitionrobotics/ign-sensors library strongly couples two different functionalities:

  • A: Generation of sensor measurements from lower level abstractions (ign-physics, ign-rendering)
  • B: Publishing of sensor measurements on ignition-transport topics

For gym-ignition, we are actually interested just in A, and ideally we would like to disable B unless strictly necessary. [...]

Yes, this is the situation I discovered in the sensors / rendering stack. It seems that the current logic about topics is that if you don't define a custom name, a default one is used.

It seems that this behaviour is deeply rooted in the SDF specification, see //sensor/topic which is mandatory. I suspect that asking upstream to make an empty string result in a transport-free topic is out of the question.

is particularly problematic for sensors such as IMUs and six-axis FT sensors that in real life can reach update rates of ~kHz

I'm not sure if this is what you meant, but the slow execution is only related to rendering-based sensors. The stack for IMUs is quite different (see ImuSensor.cc and Imu.cc), and I think that update rates on the order of kHz can already be achieved regardless of transport. Being able to get rid of transport when not needed is of course desirable in any case, I agree.

@traversaro
Member

It seems that this behaviour is deeply rooted in the SDF specification, see //sensor/topic which is mandatory. I suspect that asking upstream to make an empty string result in a transport-free topic is out of the question.

To be honest, I don't know. There is a huge effort to reuse part of the SDF format also for Drake, and in that context I think it does not make a lot of sense to have topic as a mandatory element.

@traversaro
Member

traversaro commented Oct 1, 2020

is particularly problematic for sensors such as IMUs and six-axis FT sensors that in real life can reach update rates of ~kHz

I'm not sure if this is what you meant, but the slow execution is only related to rendering-based sensors. The stack for IMUs is quite different (see ImuSensor.cc and Imu.cc), and I think that update rates on the order of kHz can already be achieved regardless of transport. Being able to get rid of transport when not needed is of course desirable in any case, I agree.

I was referring to cases such as the one described in osrf/subt#307 and https://osrf-migration.github.io/ignition-gh-pages/#!/ignitionrobotics/ign-gazebo/pull-requests/498/page/1. It does not matter how fast an operation is: without explicit synchronization, under heavy operating system load you may get undesirable behaviors.

@diegoferigo
Member Author

diegoferigo commented Oct 1, 2020

It seems that this behaviour is deeply rooted in the SDF specification, see //sensor/topic which is mandatory. I suspect that asking upstream to make an empty string result in a transport-free topic is out of the question.

To be honest, I don't know. There is a huge effort to reuse part of the SDF format also for Drake, and in that context I think it does not make a lot of sense to have topic as a mandatory element.

We could give it a chance; I think we all agree that it would be really beneficial and that publishing to the network should be an optional behaviour. If a change of default behaviour is too much, maybe a specific string that prevents any streaming could be accepted (e.g. <topic>{false,False,0,null}</topic>).

Even more, since the library is advertised as "Ignition Sensors is an open source library that provides a set of sensor and noise models accessible through a C++ interface", it would be really cool if there were native support for getting pointers to the sensors from C++ code, with no transport involved, also from ign-gazebo.

I had to vendor the Sensors system and add a custom component to obtain a pointer to the sensor, and I suspect I would have to do the same for all the other sensors that use their own custom system. The Sensors system only covers the rendering sensors; the IMU and others come with standalone systems. Of course we'd like to avoid the need to vendor all of them.

I was referring to cases such as the one described in osrf/subt#307 and https://osrf-migration.github.io/ignition-gh-pages/#!/ignitionrobotics/ign-gazebo/pull-requests/498/page/1. It does not matter how fast an operation is: without explicit synchronization, under heavy operating system load you may get undesirable behaviors.

Yep, I remember. Sorry if it was not clear: this PR already does not use transport. I didn't specify it in the first post, but the callbacks I mentioned are connected directly to the sensor objects, not to the published topics.
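
For reference, this is roughly what "connected directly to the sensor objects" looks like. It assumes access to the ignition::sensors::DepthCameraSensor instance (which this PR obtains by vendoring the Sensors system) and that it exposes ConnectImageCallback; the exact header and signature are written from memory and should be double-checked.

```cpp
// Sketch only: assumes the vendored Sensors system hands us a pointer to the
// ignition::sensors::DepthCameraSensor, and that ConnectImageCallback has
// roughly this signature (to be double-checked against ign-sensors).
#include <ignition/msgs/image.pb.h>
#include <ignition/sensors/DepthCameraSensor.hh>

// The returned connection must be kept alive by the caller for as long as the
// callback has to stay attached.
auto attachImageCallback(ignition::sensors::DepthCameraSensor& sensor)
{
    // No ignition-transport subscription is involved: the lambda is invoked
    // directly by the rendering pipeline when a new frame is produced.
    return sensor.ConnectImageCallback(
        [](const ignition::msgs::Image& image) {
            // Copy or move the data into the DepthCamera buffer here.
            (void)image;
        });
}
```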

@diegoferigo
Member Author

diegoferigo commented Oct 1, 2020

BTW, I get the following failure in CI:

[Err] [Ogre2RenderEngine.cc:338] Unable to open display:
terminate called after throwing an instance of 'Ogre::RenderingAPIException'
what(): OGRE EXCEPTION(3:RenderingAPIException): Couldn`t open X display in GLXGLSupport::getGLDisplay at /var/lib/jenkins/workspace/ogre-2.1-debbuilder/repo/RenderSystems/GL3Plus/src/windowing/GLX/OgreGLXGLSupport.cpp (line 789)

I suspect we need to use xvfb or something similar if we want the rendering tests to succeed, similarly to what needs to be done for rendering OpenAI Gym environments in Colab: https://stackoverflow.com/a/52600239.

Edit: fixed using https://github.com/The-Compiler/pytest-xvfb.

@traversaro
Member

Yep, I remember. Sorry if it was not clear: this PR already does not use transport. I didn't specify it in the first post, but the callbacks I mentioned are connected directly to the sensor objects, not to the published topics.

Ah cool, I did not realize there is a callback mechanism. In that case (by vendoring the Sensors system) it is possible to avoid the problem.

@diegoferigo force-pushed the feature/depth_camera branch 3 times, most recently from 0a8c39e to d85529c on November 2, 2020 13:03
@diegoferigo
Member Author

diegoferigo commented Jul 7, 2021

This PR has been stale for a long time now, and I have no plans to start working on it again soon. There's some outstanding work upstream in gazebosim/gz-sim#793 that will enable Ignition Fortress to run the server and the GUI in a single process. This change could greatly simplify the implementation of sensors, and Fortress could be a good candidate for the introduction of sensor support.

After a quick look at gazebosim/gz-sim#793, a possible alternative (and much simpler) implementation of sensors is the following:

  • Do not support the upstream Sensors system.
  • Try to get the RenderUtil class somehow, or allocate a new one (check https://github.com/ignitionrobotics/ign-gazebo/pull/793/files#r648756831).
  • Develop our own method to create the objects of the supported sensors, borrowing the logic from Sensors::CreateSensor. This way, we would already have the sensor objects and we would no longer have to pass through the ECM as in this PR.
  • Create our wrapper classes of the sensors like DepthCamera in this PR.

Then, if I understood correctly, a new event could be emitted to trigger the sensor rendering, and we could also control its rate. We should aim to make the sensor rendering rate independent of the simulator rate, because otherwise the simulation would be too slow (as experienced in this PR). Though, rendering at the simulation rate might also be a desired feature (e.g. to generate tight datasets), so I'd keep that use case as valid.
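
A hypothetical sketch of that rate decoupling (nothing here is upstream API): a small gate decides, once per physics step, whether the rendering event should be emitted, so the camera rate can be configured independently of the simulator step.

```cpp
// Hypothetical sketch: decouple the sensor rendering rate from the physics
// rate by rendering only when enough simulated time has elapsed.
#include <cstdint>

class RenderRateGate
{
public:
    RenderRateGate(const double physicsStepSize, const double cameraRateHz)
        : stepNs(static_cast<int64_t>(physicsStepSize * 1e9))
        , periodNs(static_cast<int64_t>(1e9 / cameraRateHz))
    {}

    // Called once per physics step; returns true when a render should be
    // triggered (e.g. by emitting the new single-process rendering event).
    bool step()
    {
        elapsedNs += stepNs;
        if (elapsedNs >= periodNs) {
            elapsedNs -= periodNs;
            return true;
        }
        return false;
    }

private:
    int64_t stepNs;
    int64_t periodNs;
    int64_t elapsedNs = 0;
};
```

Setting cameraRateHz equal to the physics rate would recover the render-every-step behaviour of this PR, i.e. the slow but dataset-friendly use case mentioned above.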

If this works, we wouldn't have to keep a vendored Sensors system as proposed in this PR, which is a pain to maintain. In any case, I don't plan to work on this very soon either (we're still on Dome in master and Edifice in devel, and this proposal requires Fortress). It's just a stream of thoughts I had while reviewing gazebosim/gz-sim#793 that I wanted to log somewhere.
