Just single Pixel rendering #108
Comments
Hey, hi again, were you able to achieve this?
Hey. I'm not sure what the actual need is here. If you want the rendering for a particular pixel, is there anything preventing you from rendering an image and reading that pixel's color from it?
Yeah, I'll tell you what is stopping us. We don't just need the pixel; we need the ray of Gaussians with different opacities that was projected onto the image at that particular pixel. In short, I need the corresponding (x, y, z) position of the point from the point cloud that produced this particular pixel.
Ah, I see. The current master does not support that, because everything is fused in CUDA. However, we are working on an upgrade to the CUDA backend in PR #172, where a pure Python implementation is supported so that you can easily get that information. In the code below we accumulate all Gaussians that contribute to each pixel in Python: gsplat/gsplat/experimental/cuda/_torch_impl.py Lines 267 to 282 in f4d455e
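As a rough sketch of the per-pixel accumulation described above (this is not gsplat's actual API; the tensor layout here is hypothetical, and it assumes the Gaussians are already projected to 2D and depth-sorted front to back), alpha compositing at a single pixel also yields the per-Gaussian contribution weights, which is exactly what lets you trace a pixel back to the 3D points that produced it:

```python
import torch

def composite_pixel(px, py, means2d, inv_covs, opacities, colors):
    """Alpha-composite depth-sorted 2D Gaussians at one pixel.

    Hypothetical tensor layout (not the gsplat API):
      means2d:   (N, 2) projected centers, sorted front to back
      inv_covs:  (N, 2, 2) inverse 2D covariances
      opacities: (N,)
      colors:    (N, 3)
    Returns the pixel color and the per-Gaussian contribution
    weights; nonzero weights identify the Gaussians (and hence the
    3D points) that produced this pixel.
    """
    d = torch.tensor([px, py]) - means2d               # (N, 2) offsets
    # Mahalanobis distance d^T Sigma^-1 d for each Gaussian
    maha = torch.einsum("ni,nij,nj->n", d, inv_covs, d)
    alphas = (opacities * torch.exp(-0.5 * maha)).clamp(max=0.999)
    # front-to-back transmittance: T_i = prod_{j<i} (1 - alpha_j)
    trans = torch.cat([torch.ones(1), torch.cumprod(1 - alphas, 0)[:-1]])
    weights = alphas * trans                           # per-Gaussian contribution
    color = (weights[:, None] * colors).sum(0)
    return color, weights
```

Because `weights` is returned alongside the color, `weights.argmax()` (or a threshold) gives the index of the dominant Gaussian at that pixel, which you can map back to your point cloud.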
If you are in a hurry, you can just fetch that PR and use it. The PR itself is already thoroughly verified; we are just thinking about the best way of merging it without too much disruption to master.
Thank you so much ❤️ I'll look into it.
I'm facing issues while trying to run the example script alone. Could you please help me solve this?
That is a dependency we have not open-sourced yet, so the example script in this PR is not directly executable. As I mentioned above, we are figuring out the best way of merging it in without the closed-source dependency. However, it should be fairly easy to swap out
Thanks for your reply. I'm working on it, and in the meantime I have a small doubt: is there any
If what you are looking for is a minimal script only to render a splat, we don't have it yet but we will add it together with that PR soon. But it should not be very hard to create one quickly from that standalone example script in this PR. |
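In the spirit of the minimal rendering script discussed above, here is a from-scratch sketch of rendering a whole splat image (again with a hypothetical tensor layout, not the gsplat API, and assuming the Gaussians are already projected and depth-sorted; a real renderer would tile the image rather than composite every Gaussian against every pixel):

```python
import torch

def render_image(H, W, means2d, inv_covs, opacities, colors):
    """Brute-force alpha compositing of depth-sorted 2D Gaussians.

    Hypothetical layout: means2d (N, 2), inv_covs (N, 2, 2),
    opacities (N,), colors (N, 3). Returns an (H, W, 3) image.
    O(H*W*N) memory, so only suitable for small toy scenes.
    """
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs.reshape(-1), ys.reshape(-1)], -1)  # (P, 2)
    d = pix[:, None, :] - means2d[None, :, :]                # (P, N, 2)
    maha = torch.einsum("pni,nij,pnj->pn", d, inv_covs, d)
    alphas = (opacities * torch.exp(-0.5 * maha)).clamp(max=0.999)
    # per-pixel front-to-back transmittance along the sorted Gaussians
    trans = torch.cat([torch.ones(pix.shape[0], 1),
                       torch.cumprod(1 - alphas, 1)[:, :-1]], 1)
    weights = alphas * trans
    return (weights[..., None] * colors).sum(1).reshape(H, W, 3)
```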
Thank you for your work. I was able to train the model successfully, and I have one doubt about where you are calling this
That's used in this function: gsplat/gsplat/experimental/test.py Lines 303 to 305 in f4d455e
Which is an exact substitute of the
Note that if you call
When training is happening, I clearly see that rasterization works; however, when I try to run the rasterization alone, I keep ending up with a big, plain, single-color image. Could you please help me? Rendering script:
But when I plot the render_colors, I end up with the same color on all pixels.
You probably don't want
Yeah, I changed that, but I'm still facing the same issue. Could you please tell me how I can pass my own camera extrinsics from transforms.json? When I try to load it, it says grad_fn is required, so I tried setting the gradient flag to true and to false, and I still can't pass my own camera extrinsics. So far, I'm passing a dummy one.
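One way to load poses from a NeRF-style transforms.json is sketched below. This assumes the common nerfstudio/instant-ngp file layout (a `frames` list with a `transform_matrix` per frame); a plain data tensor with `requires_grad` left `False` is fine for rendering, since a camera pose only needs a `grad_fn` when you are optimizing the pose itself:

```python
import json
import numpy as np
import torch

def load_extrinsics(path="transforms.json"):
    """Load camera-to-world matrices from a transforms.json file.

    Assumed layout (nerfstudio / instant-ngp style):
      {"frames": [{"transform_matrix": 4x4 nested list, ...}, ...]}
    Returns (c2ws, w2cs), each (num_frames, 4, 4) float32 tensors.
    Most rasterizers expect world-to-camera, hence the inverse.
    """
    with open(path) as f:
        meta = json.load(f)
    c2ws = np.stack([np.array(fr["transform_matrix"], dtype=np.float32)
                     for fr in meta["frames"]])
    c2ws = torch.from_numpy(c2ws)          # plain data, no grad needed
    w2cs = torch.linalg.inv(c2ws)          # world-to-camera poses
    return c2ws, w2cs
```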
@liruilong940607 Thank you for your great work. I am wondering if this is something it will do, or information that will be possible to get? I asked for this in another thread, but I do not think the community knows about it yet, as it might not be implemented yet. As I read this, though, it seems that this will be supported soon with this PR? See the comment below if you think it will solve it:
I've created a script for COLMAP that consolidates the points3D.bin, cameras.bin, and images.bin files into a single file. This simplifies interpreting the data, clarifying contributions to pixel coordinates, image filenames, and their indices. For example:
I am looking to do the same with gsplat, if possible. It would be great to know how many times a point has been split, which point each split came from, and which image those split points are based on (or, if I at least know which initial point the splits are based on, I can "backtrack" this to the COLMAP index); then it would be possible for me to create a complete index, all the way from the splats to the very first point from COLMAP and its associated image. In the end, I am looking to project the Gaussian splat points ONTO the images as an overlay, to see how well they match the images and to compare them with the COLMAP sparse keypoints overlaid on the same images. If I can get some sort of index data, that would be golden!
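For the overlay part of the goal above, projecting splat centers onto an image needs only the camera pose and intrinsics. A minimal pinhole-projection sketch (function and parameter names here are illustrative; real COLMAP camera models may additionally apply lens distortion, which this ignores):

```python
import numpy as np

def project_points(points_w, w2c, K):
    """Project 3D world points to pixel coordinates (pinhole model).

    points_w: (N, 3) world-space centers, e.g. Gaussian means
    w2c:      (4, 4) world-to-camera extrinsic matrix
    K:        (3, 3) camera intrinsic matrix
    Returns (N, 2) pixel coords and an (N,) mask of points that lie
    in front of the camera (projection is meaningless behind it).
    """
    pts_h = np.concatenate([points_w, np.ones((len(points_w), 1))], axis=1)
    cam = (w2c @ pts_h.T).T[:, :3]          # camera-space XYZ
    in_front = cam[:, 2] > 1e-6             # positive depth only
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    return uv, in_front
```

Scattering `uv[in_front]` over the photo (e.g. with matplotlib) gives exactly the kind of splat-vs-keypoint overlay described above.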
Hi @abrahamezzeddine, the splitting of the Gaussians is controlled on the Python side, not in CUDA, so it should be very doable to track the IDs of the GSs by hacking the Python code.
Hello @liruilong940607
The gsplat repo will have a big update very soon, in which we will include a standalone training script. For now I can only point you to the nerfstudio repo; the splitting, cloning & pruning happens here in nerfstudio's implementation. So I guess what you want is to maintain a tensor storing the initial IDs, and split/clone/prune that tensor together with all the other attributes of the Gaussians.
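The "maintain an ID tensor" idea can be sketched as follows. The function names and masks are hypothetical stand-ins for nerfstudio's actual densification bookkeeping, and the split is simplified (real 3DGS spawns two perturbed children and then prunes the parent); the point is only that every cat/index operation applied to the Gaussian parameters is applied to the ID tensor too, so ancestry back to the initial COLMAP point survives any number of rounds:

```python
import torch

def densify_with_ids(params, ids, split_mask, clone_mask):
    """Densify Gaussians while carrying an ancestor-ID tensor along.

    ids[i] holds the index of the original (e.g. COLMAP) point that
    Gaussian i descends from. New children inherit their parent's ID.
    params here stands in for any per-Gaussian attribute tensor.
    """
    params = torch.cat([params, params[split_mask], params[clone_mask]])
    ids = torch.cat([ids, ids[split_mask], ids[clone_mask]])
    return params, ids

def prune_with_ids(params, ids, keep_mask):
    # pruning is plain boolean indexing, applied to both tensors alike
    return params[keep_mask], ids[keep_mask]
```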
Hello @vye16, @liruilong940607, @kerrj,
Is it possible to render just a single pixel, returning the color of a single ray from different COLMAP camera positions, using gsplat?
I am interested in getting the final color at a particular location for different view directions, by providing the GS point cloud, the COLMAP camera poses, and either the "location of a target pixel tracked in the first fully rasterized image, using the first COLMAP camera pose from the pose sequence" or the "3D location in the scene (how do I find it?)".