Inquiry Regarding Grasp Generation Discrepancy in Multi-View Reconstruction Point Cloud #92

Open
Choi-jk opened this issue Jan 16, 2024 · 3 comments

Comments


Choi-jk commented Jan 16, 2024

Description:
Thanks for the excellent work on this project. I am deeply interested in its potential applications. I am currently integrating your model into my multi-view reconstruction method for grasp generation. However, I have observed that the point cloud accumulated from multiple depth images yields far fewer grasps (zero or fewer than 10) than the point cloud generated from a single depth image (on the order of hundreds).

Question:
I am seeking clarification on why this discrepancy is occurring.

Grasp Generation Process for Multi-View Point Cloud:

  1. Rendered depth map from multiple cameras
  2. Converted each depth map to a point cloud (in world coordinates)
  3. Combined all point clouds into one large point cloud
  4. Created virtual cameras facing the center of the point cloud (assuming the object is at the center)
  5. Converted the consolidated point cloud to each virtual camera's coordinates and used it as input for GraspNet
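For reference, here is a rough sketch of steps 2–5; all variable and helper names (`depth_to_world_cloud`, `depths`, `intrinsics`, `cam_poses`, `virtual_cam_to_world`) are illustrative placeholders for my setup, not code from GraspNet itself:

```python
import numpy as np

def depth_to_world_cloud(depth, K, cam_to_world):
    """Back-project a metric depth map into a point cloud in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    pts_cam = pts_cam[depth.reshape(-1) > 0]                  # drop invalid pixels
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (cam_to_world @ pts_h.T).T[:, :3]

# Steps 2-3: accumulate every rendered view into one world-frame cloud.
# `depths`, `intrinsics`, `cam_poses` come from my renderer.
world_cloud = np.concatenate(
    [depth_to_world_cloud(d, K, T) for d, K, T in zip(depths, intrinsics, cam_poses)],
    axis=0)

# Steps 4-5: express the merged cloud in a virtual camera frame that looks at
# the cloud center, then feed it to GraspNet.
world_to_virtual = np.linalg.inv(virtual_cam_to_world)
pts_h = np.concatenate([world_cloud, np.ones((len(world_cloud), 1))], axis=1)
cloud_in_cam = (world_to_virtual @ pts_h.T).T[:, :3]
```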

Grasp Generation Process for Single-View Point Cloud:

  1. Rendered depth map from a single camera
  2. Converted the depth map to a point cloud (in world coordinates)
  3. Converted this point cloud to the camera's coordinates and used it as input for GraspNet
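In both cases, the camera-frame cloud is then passed to the network roughly as in the baseline's demo script. A minimal sketch, assuming `net` is the pretrained GraspNet model and `device` the torch device (based on my reading of demo.py):

```python
import numpy as np
import torch
from models.graspnet import pred_decode   # graspnet-baseline
from graspnetAPI import GraspGroup

# Sample a fixed number of points, as the demo does.
num_point = 20000
replace = len(cloud_in_cam) < num_point
idxs = np.random.choice(len(cloud_in_cam), num_point, replace=replace)
cloud_sampled = cloud_in_cam[idxs]

end_points = {'point_clouds': torch.from_numpy(
    cloud_sampled[np.newaxis].astype(np.float32)).to(device)}

# Forward pass and decoding into a GraspGroup of candidates.
with torch.no_grad():
    end_points = net(end_points)
    grasp_preds = pred_decode(end_points)
gg = GraspGroup(grasp_preds[0].detach().cpu().numpy())
print(f'{len(gg)} grasp candidates')
```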

Any insights or suggestions to address this issue would be invaluable.
Thank you for your time and support.

@chenxi-wang
Collaborator

Are the poses of your virtual cameras similar to those of the original data? The viewpoint may influence performance.


Choi-jk commented Jan 18, 2024

Yes, the virtual camera poses I employed closely resemble the poses used to generate the point clouds. To clarify, I also ran tests using the actual camera poses, and the results were similar.

@chenxi-wang
Collaborator

I have not tested the baseline model on multi-view point clouds. Could the output candidates be getting filtered out by collision detection?
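If you use the collision filtering from demo.py, you can check how many candidates it removes, for example (parameter values below are typical, adjust them to your config):

```python
from utils.collision_detector import ModelFreeCollisionDetector  # graspnet-baseline

# gg: predicted GraspGroup; cloud: (N, 3) scene points in the same frame as the grasps.
mfcdetector = ModelFreeCollisionDetector(cloud, voxel_size=0.01)
collision_mask = mfcdetector.detect(gg, approach_dist=0.05, collision_thresh=0.01)
print(f'{len(gg)} candidates, {collision_mask.sum()} removed by collision detection')
gg = gg[~collision_mask]
```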
