
KPCN - Kernel Point Completion Network

KPCN is a learning-based system for point cloud completion, built around an autoencoder-structured neural network. The encoder uses a local 3D point convolution operator that takes sphere neighbourhoods as input and processes them with weights spatially located by a small set of kernel points. In this way, local spatial relationships in the data are considered and encoded efficiently, in contrast to previous shape completion encoders that operate on a global scope during feature extraction. Aside from the rigid version, the convolution operator also provides a deformable version that learns local shifts, effectively deforming the convolution kernels to fit the point cloud geometry. In addition, a regular per-layer subsampling strategy is adopted which, in combination with the radius neighbourhoods, provides robustness to noise and varying densities. The decoder of the implemented system is a hybrid structure that combines the advantages of fully-connected layers and folding operators, producing a multi-resolution output.
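To make the encoder's core operation concrete, the following is a minimal NumPy sketch of a rigid kernel point convolution for a single query point, using a linear correlation between neighbours and kernel points. It is a simplified illustration, not the repository's TensorFlow implementation; all names, shapes and the kernel layout are assumptions made for the example.

import numpy as np

def rigid_kp_conv(query, neighbors, neighbor_feats, kernel_points, weights, sigma):
    """Rigid kernel point convolution for one query point (illustrative sketch).

    query:          (3,)             centre of the sphere neighbourhood
    neighbors:      (N, 3)           points inside the radius neighbourhood
    neighbor_feats: (N, D_in)        input features of those points
    kernel_points:  (K, 3)           kernel point positions, relative to the centre
    weights:        (K, D_in, D_out) one learned weight matrix per kernel point
    sigma:          float            influence distance of each kernel point
    """
    # Distance of every centred neighbour to every kernel point
    rel = neighbors - query                                                        # (N, 3)
    dists = np.linalg.norm(rel[:, None, :] - kernel_points[None, :, :], axis=-1)   # (N, K)

    # Linear correlation: a neighbour only contributes to nearby kernel points
    corr = np.maximum(0.0, 1.0 - dists / sigma)                                    # (N, K)

    # Aggregate neighbour features per kernel point, then apply its weight matrix
    per_kernel = corr.T @ neighbor_feats                                           # (K, D_in)
    return np.einsum('kd,kdo->o', per_kernel, weights)                             # (D_out,)

The deformable version described above would additionally predict a small offset for each kernel point from the input features and add it to kernel_points before computing the correlations, letting the kernel adapt to the local geometry.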

The implemented model was trained on eight object categories of the ShapeNetBenchmark2048 dataset, was evaluated extensively on both synthetic models and sensor scan data, and is a candidate for fast completion of extremely sparse real-world sensor data. Additionally, when integrated as a module of a wider system, the model was found to improve the results of applications that require more complete inputs. In this thesis, the method was used to address point cloud registration and similar model retrieval.

Installation

For help with installation, refer to INSTALL.md. Note that KPCN has been tested only on Linux; Windows is currently not supported because the code uses TensorFlow custom operations. CUDA and cuDNN are required.

Datasets

ShapeNetBenchmark2048

Download from this link.

KITTI

Download the KITTI data from the kitti folder on Google Drive.

Common commands

The following common commands use path placeholders, which are explained here:

  • <saving_path>: Log directory of the trained model. It contains the model's config file, model checkpoints, visualisation plots and training/validation/test results. It is named after the timestamp of the creation of the model instance, e.g. /kpcn/results/Log_2019-11-13_13-28-41.
  • <dataset_path>: Directory containing the unprocessed and processed data (pickle files) of a dataset. In the case of ShapeNetBenchmark2048 it should also contain three .list files which list the models used for the training/validation/test splits.

Replace the path placeholders in the commands below with your relevant ones.
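For orientation, an illustrative layout of the two directories is sketched below. Only items mentioned in this README are shown; the exact names inside your own log and dataset directories may differ.

<saving_path>/                        # e.g. /kpcn/results/Log_2019-11-13_13-28-41
    config file                       # the model's configuration
    model checkpoints                 # snapshots, selected with --snap
    visu/                             # visualisation plots, e.g. visu/kitti/completions
    training/validation/test results

<dataset_path>/
    unprocessed data
    processed data (pickle files)
    train/val/test .list files        # ShapeNetBenchmark2048 only: the models of each split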

Train

python train_ShapeNetBenchmark2048.py --saving_path <saving_path> --dataset_path <dataset_path> --snap -1  # use --snap -1 to select the latest model snapshot

Test

python test_model.py --on_val --saving_path <saving_path> --dataset_path <dataset_path> --snap -1
  • The on_val flag selects the validation split for testing; running the command without it uses the test split instead. Note that the test split does not contain any ground truth, so in that case the command effectively performs a "similar model retrieval" task.
  • The calc_tsne flag can also be added to the above command. It enables the calculation and visualisation of the t-SNE embedding of the val/test split's latent space.
  • The noise argument can also be used. It accepts a float defining the standard deviation of the normal distribution used as additive noise during data augmentation, and can be used to evaluate the robustness of the model (see the example after this list).
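For example, a robustness test on the validation split with t-SNE visualisation enabled could look like the command below. The exact flag syntax and the noise value 0.01 are assumptions based on the descriptions above; check the script's help output to confirm them.

python test_model.py --on_val --calc_tsne --noise 0.01 --saving_path <saving_path> --dataset_path <dataset_path> --snap -1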

Test KITTI registration

Before testing KITTI registration with the following command, make sure you have already completed the KITTI models using the above script (test_model.py), with the dataset_path argument pointing to the kitti dataset directory. Upon successful completion, the completed models should reside in the /<saving_path>/visu/kitti/completions directory, and the following command can then be run:

python kitti_registration.py --plot_freq 20 --saving_path <saving_path> --dataset_path <dataset_path>
  • The plot_freq argument specifies how frequently registrations are plotted.
  • The script internally uses the ICP algorithm for registration, so the ICP parameters can also be adjusted (type python kitti_registration.py -h for more options). A rough illustration of the registration step is sketched after this list.
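As a self-contained illustration of what such a registration step does, the sketch below aligns a completed model to a raw scan with point-to-point ICP using Open3D. This is not the repository's implementation: kitti_registration.py may use its own ICP routine and parameters, and the file paths, the 0.2 correspondence threshold and the Open3D dependency are assumptions.

import numpy as np
import open3d as o3d

# Hypothetical inputs: a completed model produced by test_model.py and the raw partial scan
source = o3d.io.read_point_cloud("completions/frame_000_car_0.pcd")  # completed model (example path)
target = o3d.io.read_point_cloud("scans/frame_000_car_0.pcd")        # raw partial scan (example path)

# Point-to-point ICP; the correspondence threshold of 0.2 is an illustrative choice
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.2,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)
print("estimated 4x4 transformation:\n", result.transformation)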

Visualise deformations

visualize_deformations.py is an interactive mini-application for visualising the rigid and deformable kernels of chosen layers on input partial point clouds. The subsampled point cloud of each chosen layer is also displayed.

python visualize_deformations.py --saving_path <saving_path> --dataset_path <dataset_path> --snap -1
