
Poor rendering quality result for custom dynerf-like dataset #119

Open
yangqing-yq opened this issue Apr 14, 2024 · 7 comments

yangqing-yq commented Apr 14, 2024

video_rgb.mp4

yangqing-yq commented Apr 14, 2024

The input is 66 MP4 files recorded from 66 cameras at different angles. Two samples below:
[two sample frames, "0000", attached]

yangqing-yq (Author) commented:

The initial point cloud also looks normal: points3D_downsample2.ply
[point-cloud snapshot attached]

yangqing-yq (Author) commented:

After training, the results are:

[ITER 14000] Evaluating test: L1 0.13829423837801988 PSNR 14.392739352057962 [14/04 19:28:46]

[ITER 14000] Evaluating train: L1 0.07283851898768369 PSNR 18.669121798347025 [14/04 19:28:48]
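The gap between the train and test PSNR above already suggests overfitting to the training views. A minimal sketch of quantifying that gap, assuming the common convention PSNR = −10·log₁₀(MSE) for images normalized to [0, 1] (the logs do not state which convention is used):

```python
# Invert the logged PSNR values to get the implied per-pixel MSE.
# Assumes PSNR = -10*log10(MSE) with pixel values in [0, 1]; the
# numbers are copied from the evaluation logs above.
def implied_mse(psnr):
    return 10.0 ** (-psnr / 10.0)

test_mse = implied_mse(14.392739352057962)
train_mse = implied_mse(18.669121798347025)
ratio = test_mse / train_mse  # test error is roughly 2.7x the train error
print(ratio)
```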

yangqing-yq (Author) commented:

Metric evaluation progress:
Scene: output/dynerf/nb66/ SSIM : 0.3143795
Scene: output/dynerf/nb66/ PSNR : 14.3533792
Scene: output/dynerf/nb66/ LPIPS-vgg: 0.6337457
Scene: output/dynerf/nb66/ LPIPS-alex: 0.6007410
Scene: output/dynerf/nb66/ MS-SSIM: 0.2912356
Scene: output/dynerf/nb66/ D-SSIM: 0.3543822
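As a side note, the reported D-SSIM is exactly (1 − MS-SSIM) / 2 computed from the MS-SSIM line above; that this is the definition used (and that it is applied to MS-SSIM rather than single-scale SSIM) is an inference from the numbers, not confirmed from the evaluation code:

```python
# Reproduce the D-SSIM line from the MS-SSIM line above using the
# common definition D-SSIM = (1 - SSIM) / 2. The match to 7 decimals
# suggests the script derives D-SSIM from MS-SSIM (an inference).
ms_ssim = 0.2912356
d_ssim = (1.0 - ms_ssim) / 2.0
print(round(d_ssim, 7))  # 0.3543822, matching the logged value
```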

guanjunwu (Collaborator) commented:

Hi, I would like to know:

  1. How many cameras do you use? How are your cameras arranged (e.g. forward-facing or on a sphere)? How many frames per camera?
  2. What are the results of the coarse stage? You can try a longer coarse stage to recover the static background. As intended, the coarse stage recovers the static part and trains the 3D Gaussians (useful for debugging); the fine stage then learns the deformation.
  3. Which config file do you use? hypernerf? (which is designed for 150-500 frames per camera)

yangqing-yq (Author) commented:

@guanjunwu

1. How many cameras do you use? How are your cameras arranged (e.g. forward-facing or on a sphere)? How many frames per camera?

66 cameras, arranged on a sphere above the ground. 10 seconds × 30 fps = 300 frames per camera.
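A camera rig like this ("sphere above ground") can be sketched as points on the upper part of a sphere looking at the scene center. The Fibonacci spacing, radius, and minimum elevation below are assumptions for illustration; the thread does not describe the exact layout:

```python
import math

def sphere_cameras(n=66, radius=3.0, min_elevation_deg=10.0):
    """Place n camera positions on the upper part of a sphere.

    Hypothetical layout: golden-angle (Fibonacci) spacing keeps the
    cameras roughly uniform; all of them stay above the ground plane.
    """
    cams = []
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden-angle increment
    min_z = math.sin(math.radians(min_elevation_deg))
    for i in range(n):
        # spread the height uniformly in [min_z, 1]
        z = min_z + (1.0 - min_z) * (i + 0.5) / n
        r_xy = math.sqrt(max(0.0, 1.0 - z * z))
        theta = golden * i
        cams.append((radius * r_xy * math.cos(theta),
                     radius * r_xy * math.sin(theta),
                     radius * z))
    return cams

positions = sphere_cameras()
print(len(positions))  # 66
```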

2. What are the results of the coarse stage? You can try a longer coarse stage to recover the static background. As intended, the coarse stage recovers the static part and trains the 3D Gaussians (useful for debugging); the fine stage then learns the deformation.

Yes, trying a longer coarse stage.

3. Which config file do you use? hypernerf? (which is designed for 150-500 frames per camera)

I use the default one with batch size = 2:
```python
ModelHiddenParams = dict(
    kplanes_config = {
        'grid_dimensions': 2,
        'input_coordinate_dim': 4,
        'output_coordinate_dim': 16,
        'resolution': [64, 64, 64, 150]
    },
    multires = [1, 2],
    defor_depth = 0,
    net_width = 128,
    plane_tv_weight = 0.0002,
    time_smoothness_weight = 0.001,
    l1_time_planes = 0.0001,
    no_do=False,
    no_dshs=False,
    no_ds=False,
    empty_voxel=False,
    render_process=False,
    static_mlp=False
)
OptimizationParams = dict(
    dataloader=True,
    iterations = 14000,
    batch_size=4,
    coarse_iterations = 3000,
    densify_until_iter = 10_000,
    opacity_reset_interval = 60000,
    opacity_threshold_coarse = 0.005,
    opacity_threshold_fine_init = 0.005,
    opacity_threshold_fine_after = 0.005,
    # pruning_interval = 2000
)
```
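One detail worth cross-checking in this config is the k-planes time resolution (150) against the 300 frames per camera reported earlier. A minimal sketch, with the values copied from this thread; reading "frames per time cell" as a potential source of temporal blurring is my interpretation, not the maintainers':

```python
# Cross-check the k-planes time-axis resolution against the dataset
# length. Values are copied from the config and comments above; the
# "frames per time cell" interpretation is an assumption.
kplanes_resolution = [64, 64, 64, 150]  # x, y, z, time grid sizes
frames_per_camera = 300                 # 10 s x 30 fps, from the thread

time_cells = kplanes_resolution[-1]
frames_per_cell = frames_per_camera / time_cells
print(frames_per_cell)  # 2.0 -- each time cell must cover two frames
```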

guanjunwu (Collaborator) commented:

Hi, I think the config is OK.
However, the initial point cloud does not look dense enough. Why aren't there any background points?
