What parameters are used when rendering the CAPE dataset? #120

Open
MIkeR794 opened this issue Mar 28, 2024 · 0 comments

@MIkeR794

Hello, thank you very much for your work. I have a question about the rendering process of the CAPE dataset.

I noticed that ICON includes code for processing the Thuman2 data (in render_batch.py). In that code, the camera is set up with the following parameters:

# Camera Center
self.center = np.array([0, 0, 1.6])
self.direction = np.array([0, 0, -1])
self.right = np.array([1, 0, 0])
self.up = np.array([0, 1, 0])

At the same time, there is code to calculate the scale factor:


scan_scale = 1.8 / (vertices.max(0)[up_axis] - vertices.min(0)[up_axis])
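
For completeness, this is roughly how I apply that scale factor on my side before projecting. The trimesh loading, the example path, and the bounding-box re-centering are my own code, not something taken from render_batch.py:

import numpy as np
import trimesh  # I load the scans with trimesh; this is my own choice

# example path, not the actual dataset layout
mesh = trimesh.load("thuman2/0001/0001.obj", process=False)
vertices = np.asarray(mesh.vertices)

up_axis = 1  # y is up, consistent with self.up = [0, 1, 0]

# scale the scan so it spans 1.8 units along the up axis,
# after moving its bounding-box center to the origin
scan_scale = 1.8 / (vertices.max(0)[up_axis] - vertices.min(0)[up_axis])
bbox_center = 0.5 * (vertices.max(0) + vertices.min(0))
vertices = (vertices - bbox_center) * scan_scale
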

I have successfully projected the mesh (or pseudo point cloud) of Thuman2 onto a 2D image using these parameters and obtained results aligned with your RGB images. However, when I project the mesh (or pseudo point cloud) of CAPE using the same parameters, the results do not match the RGB images directly downloaded from cape_3views.
[image attached showing the misaligned CAPE projection]
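
For reference, here is a condensed sketch of the projection step I use (it is what produces the aligned Thuman2 results above). The 512×512 image size and the camera-space-to-pixel mapping are my own assumptions, not something I read out of your code:

import numpy as np

def project_ortho(vertices, img_size=512):
    # orthographic camera built from the parameters quoted above
    center = np.array([0.0, 0.0, 1.6])
    direction = np.array([0.0, 0.0, -1.0])
    right = np.array([1.0, 0.0, 0.0])
    up = np.array([0.0, 1.0, 0.0])

    # world -> camera: rows of the rotation are the camera axes
    # (the camera looks along `direction`, so its z axis is -direction)
    rot = np.stack([right, up, -direction])
    cam_pts = (vertices - center) @ rot.T

    # orthographic mapping: drop the depth, map x/y in [-1, 1] to pixels
    # (y is flipped to follow image conventions)
    u = (cam_pts[:, 0] + 1.0) * 0.5 * img_size
    v = (1.0 - (cam_pts[:, 1] + 1.0) * 0.5) * img_size
    return np.stack([u, v], axis=1)

# e.g. pixels = project_ortho(vertices) with the scaled vertices from above
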

So, I would like to ask: what parameters did you use when rendering the CAPE dataset?
Looking forward to your reply. Thank you!
