
Neural Volume Super Resolution

Requirements

Begin by setting up the dependencies. You can create a conda environment using conda env create -f environment.yml. Then remove the .example suffix from the local configuration file and update the root path inside it. Install torchsearchsorted by following the instructions in its README.
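A minimal setup sketch; the environment name, the configuration file name, and the torchsearchsorted repository URL are assumptions, so use whatever ships with this repository:

    conda env create -f environment.yml
    conda activate neural-volume-sr                 # hypothetical name; check environment.yml
    cp local_config.yml.example local_config.yml    # hypothetical file name; set the root path inside
    git clone https://github.com/aliutkus/torchsearchsorted
    cd torchsearchsorted && pip install .           # install per the torchsearchsorted README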

Super-resolve volumetric scene(s)

Our framework includes three learned components: a decoder model and a feature-plane super-resolution model shared between all 3D scenes, and an individual set of feature planes per 3D scene. You can experiment with our code at different levels by following the directions from any of the three stages below (directions marked with * should only be performed when starting from the stage in which they appear):

Train everything from scratch

  1. Download our training scenes dataset.
  2. Download the desired (synthetic) test scene from the NeRF dataset and put all scenes in a single dataset folder (see the layout sketch after this list).
  3. Update the configuration file: add the desired test scene name(s) to the training list, update the scene name(s) in the evaluation list, and set the paths to the scenes dataset folder and to the folder where the new models should be stored.
  4. Run python train_nerf.py --config config/TrainModels.yml
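The steps above assume all scenes sit side by side in one dataset folder. A layout sketch, where the archive and scene names are illustrative:

    mkdir -p datasets
    unzip training_scenes.zip -d datasets     # our training scenes; archive name is illustrative
    cp -r nerf_synthetic/lego datasets/lego   # a synthetic test scene from the NeRF dataset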

Super-resolve a new test scene

Use the pre-trained decoder and feature-plane super-resolution models while learning the feature planes for a new 3D scene.

  1. Download our pre-trained models file and unzip it.
  2. *Download our training scenes dataset.
  3. *Download the desired (synthetic) test scene from the NeRF dataset and put all scenes in a dataset folder.
  4. Learn the feature planes representation for a new test scene:
    1. Update the configuration file: add the desired test scene name(s) to the training list, update the scene name(s) in the evaluation list, and set the paths to the scenes dataset folder, to the pre-trained models folder, and to the folder where the new scene's feature planes should be stored.
    2. Run python train_nerf.py --config config/Feature_Planes_Only.yml
  5. Jointly refine all three modules:
    1. In the configuration file, update the desired scene name (training and evaluation), as well as the paths to the scenes dataset folder, the pre-trained decoder and SR models folder, the scene feature planes learned in the previous step, and the folder where the refined models should be stored.
    2. Run python train_nerf.py --config config/RefineOnTestScene.yml
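The two stages run back to back; a recap of the full sequence:

    # Stage 1: fit feature planes for the new scene
    python train_nerf.py --config config/Feature_Planes_Only.yml
    # Stage 2: jointly refine the feature planes, decoder, and SR model
    python train_nerf.py --config config/RefineOnTestScene.yml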

Evaluate a pre-learned test scene

Use the pre-trained decoder and SR models, coupled with the learned feature-plane representation:

  1. *Download one of our pre-trained models and unzip it, then download the corresponding (synthetic or real-world) scene from the NeRF dataset.
  2. Run:
    python train_nerf.py --load-checkpoint <path to pre-trained models folder> --eval video --results_path <path to save output images and video>
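    For example, with illustrative checkpoint and output paths:
    python train_nerf.py --load-checkpoint pretrained/lego --eval video --results_path results/lego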
    

Optionally, to resume training in any of the first two stages, use the --load-checkpoint argument followed by the path to the saved model folder, and omit the --config argument.
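For instance, resuming the from-scratch training run above (the checkpoint path is illustrative):

    python train_nerf.py --load-checkpoint logs/TrainModels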

Contributing / Issues?

Feel free to open a GitHub issue if you find anything concerning. Pull requests adding features are welcome too.

LICENSE

This code is available under the MIT License. The code was forked from the nerf-pytorch repository.
