
Questions about relighting effects #5

Open
wangmingyang4 opened this issue Nov 8, 2022 · 6 comments

Comments


wangmingyang4 commented Nov 8, 2022

The algorithm does not produce ideal results on the indoor dataset.
As shown in the figure below, the relighting effect is not very good: there is a "hole" on the surface of the chair.
I guess this is due to some inaccuracies in the geometry estimation.
I would appreciate your reply.
@r00tman

ori_img:
[attached image]

relighting:
[attached images: 000000, 000046, fg_normal_000046]


r00tman commented Nov 8, 2022

Hi, thank you for your interest in the project!

The method was designed primarily for outdoor scene relighting, and we didn't test it on indoor scenes ourselves.
This means that we don't explicitly support view-dependent effects such as reflections, speculars, and non-natural illumination sources.
I'm quite surprised that the method still worked to some degree in your case, where these assumptions don't hold.
In your shared video, video compression even manages to hide most of the artifacts that are visible in the shared images, except for the semi-transparency in the chair seat.
Judging from the normals, it seems like the method learned reflected geometry to emulate reflections, same as it usually is with the original NeRF.
As for the hole in the chair, it could be due either to the strong speculars or to not having enough training views/lighting conditions.
I suggest using data with fewer speculars in the current hole region and adding more training data.
Adding more training data should also reduce artifacts in normals on the floor.

Again, it's quite exciting to see NeRF-OSR almost working on the indoor scenes.
So if you have any progress or more questions, please write here!

Best,
Viktor


r00tman commented Nov 8, 2022

Also, it seems like the rendered image is cropped a bit.
This is a known issue: the code assumes a 1044x775 resolution for views that have no ground-truth image.
A quick fix would be to change these default values here: https://github.com/r00tman/NeRF-OSR/blob/main/data_loader_split.py#L100
Then the renders would cover the whole view, as in your ori_img.
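For illustration, the relevant fallback looks roughly like the sketch below; this is a hypothetical reconstruction, and the actual variable names in data_loader_split.py may differ.

```python
# Hypothetical sketch of the fallback in data_loader_split.py (names may differ):
# views without a ground-truth image fall back to a hard-coded resolution.
if img_file is None:
    # Replace these defaults with the resolution of your own captures,
    # e.g. W, H = 1920, 1080 instead of the assumed 1044x775.
    W, H = 1044, 775
```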

Also, it seems like the rendering camera poses for the videos were generated using the transition_path.py script.
Please note that it only does a linear blend of the transforms instead of proper interpolation.
Hence, the resulting views might look sheared or distorted when the camera rotation changes significantly.
A quick fix would be to add one or two more in-between views to arr in such cases; then it would work better. A sketch of what proper interpolation would look like is shown below.
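For reference, a rigid interpolation between two key poses could look like the following; this is just a minimal sketch (not the repo's transition_path.py), using SciPy's Slerp for the rotation part and a linear blend for the translation.

```python
# Minimal sketch of rigid camera-pose interpolation: slerp the rotation and
# lerp the translation, so in-between views stay rigid instead of sheared.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_a, pose_b, n_frames):
    """pose_a, pose_b: 4x4 camera-to-world matrices; returns (n_frames, 4, 4)."""
    key_rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
    ts = np.linspace(0.0, 1.0, n_frames)
    rots = Slerp([0.0, 1.0], key_rots)(ts).as_matrix()          # rotation: slerp
    poses = np.repeat(np.eye(4)[None], n_frames, axis=0)
    poses[:, :3, :3] = rots
    poses[:, :3, 3] = (1.0 - ts)[:, None] * pose_a[:3, 3] \
                      + ts[:, None] * pose_b[:3, 3]             # translation: lerp
    return poses
```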

@wangmingyang4
Author

Thanks for your reply!

@wangmingyang4
Author

During testing, your technique can synthesise novel images at arbitrary camera viewpoints and scene illuminations; the user directly supplies the desired camera pose and the scene illumination, either from an environment map or directly via SH coefficients.
I see that the code you provided loads SH coefficients for testing. Is there any code for testing by loading an environment map (such as *.hdr or *.exr)?
I am looking for documentation, code, or an example of how to extract spherical harmonic coefficients from a light-probe HDR file.
Can anyone help?


qizipeng commented Nov 25, 2022

Hi, I also think this is wonderful work for generating novel views and relit images from an optimized neural network. However, I have a similar issue: NeRF takes the default values as the environment parameters for the test and validation images and does not optimize them, so if that is the case, the validation and test images are not very meaningful. Still, I find this work very inspiring!


r00tman commented Nov 28, 2022

Hi. Thank you for the kind words!

You can change the environments used for testing with the --test_env argument (implemented here).
There you can provide either a folder with per-view SH environments or a path to a single SH environment written in a .txt file.
Even if you provide a single SH environment, you can still rotate it around the building by using the --rotate_test_env argument in addition to --test_env.

The default value for the environment is taken from one of the runs of the method on our data. When we used completely random initialisation values, the model often diverged; using these coefficients instead resulted in more consistent and better training results on the tested scenes. Coincidentally, they are also used for rendering views where no other envmap is found, which are the validation and test views (when the --test_env argument is not provided).

To use an external LDR/HDR envmap, you would first need to convert it to SH coefficients. The conversion will just fit the closest SH coefficients with least squares. The script for that is not yet in the repo, but I'll upload it soon, as well as the instructions on how to reproduce our numerical results from the paper. The latter involves using external environment maps and this SH conversion step too, so it should be helpful.
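In the meantime, the fit itself is simple enough to sketch independently. Below is a rough example (not the release script) that fits band-0..2 SH coefficients to a lat-long envmap with solid-angle-weighted least squares; the imageio reader, the axis convention, and the coefficient ordering are assumptions here and may not match the repo's .txt format.

```python
# Rough sketch of the SH fit described above (not the release script): load a
# lat-long envmap, evaluate the 9 band-0..2 real SH basis functions per pixel,
# and solve a solid-angle-weighted least squares for the RGB coefficients.
# The HDR reader (imageio) and the axis convention are assumptions.
import numpy as np
import imageio.v3 as iio

def sh_basis(d):
    """Real SH basis up to band 2 for unit directions d of shape (N, 3)."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                 # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # l = 1
        1.092548 * x * y, 1.092548 * y * z,         # l = 2
        0.315392 * (3.0 * z**2 - 1.0),
        1.092548 * x * z, 0.546274 * (x**2 - y**2),
    ], axis=1)                                      # (N, 9)

def envmap_to_sh(path):
    img = iio.imread(path).astype(np.float64)[..., :3]      # (H, W, 3) radiance
    H, W = img.shape[:2]
    theta = (np.arange(H) + 0.5) / H * np.pi                 # polar angle per row
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi             # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1).reshape(-1, 3)
    w = np.sqrt(np.sin(theta)).reshape(-1, 1)                # solid-angle weight
    coeffs, *_ = np.linalg.lstsq(sh_basis(dirs) * w,
                                 img.reshape(-1, 3) * w, rcond=None)
    return coeffs                                            # (9, 3): SH x RGB
```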
