
List of objects in seen and unseen test set #107

Open
1mingW opened this issue May 7, 2024 · 4 comments

Comments


1mingW commented May 7, 2024

Hi,
Do you have a list of the objects used in the seen and unseen test sets? I retrained a model and want to run some experiments on seen and unseen objects; it would be great if there were a document listing which objects belong to which test set.

chenxi-wang (Collaborator) commented:

Hi, you can find the object IDs in object_ids.txt in each scene folder, and the object names can be found here.
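As a rough sketch of that lookup (the dataset root path and scene-folder layout here are assumptions; only the file name object_ids.txt comes from the comment above), collecting the per-scene object IDs could look like:

```python
import os

# Assumed local dataset root -- adjust to where you extracted GraspNet.
GRASPNET_ROOT = "/path/to/graspnet"

def load_scene_object_ids(scene_id, root=GRASPNET_ROOT):
    """Read the object IDs listed for one scene, one integer per line."""
    path = os.path.join(root, "scenes", f"scene_{scene_id:04d}",
                        "object_ids.txt")
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

# Example: union of object IDs over the test scenes (100 onward).
# test_objects = set()
# for sid in range(100, 190):
#     test_objects.update(load_scene_object_ids(sid))
```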

Fang-Haoshu (Member) commented:

@1mingW Hi, you can also get the object lists for both the train and test splits using the graspnet_api. I remember there is a function that returns all object IDs for a given scene number, and the first 100 scenes are the training set.
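A minimal sketch of that bookkeeping, assuming the standard GraspNet-1Billion layout (scenes 0-99 train; 100-129 seen, 130-159 similar, 160-189 novel test — worth double-checking against the dataset paper) and with the graspnetAPI method name written from memory:

```python
def scene_split(scene_id):
    """Map a GraspNet scene index to its split, assuming the usual
    190-scene layout (100 train + 3 x 30 test scenes)."""
    if 0 <= scene_id < 100:
        return "train"
    if 100 <= scene_id < 130:
        return "test_seen"
    if 130 <= scene_id < 160:
        return "test_similar"
    if 160 <= scene_id < 190:
        return "test_novel"
    raise ValueError(f"unknown scene id: {scene_id}")

# With graspnetAPI installed and the dataset on disk, per-split object IDs
# could then be gathered roughly like this (method name from memory --
# verify against the graspnetAPI documentation):
#
# from graspnetAPI import GraspNet
# g = GraspNet(root="/path/to/graspnet", camera="realsense", split="all")
# train_objs = set(g.getObjIds(sceneIds=list(range(0, 100))))
# novel_objs = set(g.getObjIds(sceneIds=list(range(160, 190))))
```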


1mingW commented Jun 2, 2024

@Fang-Haoshu and @chenxi-wang Thank you for the answers!
I have another question about how the depth images were collected. In your real experiments, did you get the point cloud directly, or reconstruct it from the depth image? I was trying to reconstruct point clouds, but the depth images from the RealSense pipeline have dtype uint16 and the depth values were not correct. I noticed the depth images in your dataset have dtype int32. How did you save or collect the images?


1mingW commented Jun 3, 2024

I found I can actually reconstruct point clouds from the uint16 depth images if I drop the values that are too large, but the quality of the point cloud is not as good as in your dataset:
[Screenshot from 2024-06-03 16-41-33]
The RGB images have wrong colors (a RealSense problem) and are shifted relative to the depth images. Did you also run into these problems while collecting data?
And I am still wondering whether you used the reconstructed point cloud or got the point cloud directly from the RealSense topic (if you used ROS).
