NH-Rep: Implicit Conversion of Manifold B-Rep Solids by Neural Halfspace Representation

This is the official implementation of SIGGRAPH Asia 2022 paper:

Guo H X, Liu Y, Pan H, Guo B N. Implicit Conversion of Manifold B-Rep Solids by Neural Halfspace Representation.

Paper | Project Page

Abstract: We present a novel implicit representation -- neural halfspace representation (NH-Rep) -- to convert manifold B-Rep solids to implicit form. NH-Rep is a Boolean tree built on a set of implicit functions represented by neural networks, and the composite Boolean function is capable of representing solid geometry while preserving sharp features. We propose an efficient algorithm to extract the Boolean tree from a manifold B-Rep solid and devise a neural-network-based optimization approach to compute the implicit functions. We demonstrate the high quality offered by our conversion algorithm on ten thousand manifold B-Rep CAD models containing various curved patches, including NURBS, and the superiority of our learning approach over other representative implicit conversion algorithms in terms of surface reconstruction, sharp feature preservation, signed distance field approximation, and robustness to various surface geometries, as well as applications supported by NH-Rep.

The code has been tested on an Ubuntu 18.04 server with CUDA 10.2 installed.

Installation

Please first clone this repo with its submodules:

    $ git clone --recursive https://github.com/guohaoxiang/NH-Rep.git

Then set up the environment via Docker or Anaconda.

Via Docker

This is the most convenient way to try NH-Rep, as everything is already set up inside the Docker image.

    $ docker pull horaceguo/pytorchigr:isg
    $ docker run --runtime=nvidia --ipc=host --net=host -v PATH_TO_NH-REP/:/workspace -t -i horaceguo/pytorchigr:isg
    $ cd /workspace

Then you can convert the points sampled on the B-Rep model in input_data to an implicit representation:

    $ cd code/conversion
    $ python run.py --conf setup.conf --pt ../data/output_data

The training takes about 8 minutes to finish. Currently we only support training on a single GPU; you can set the GPU id via the --gpu flag. The output neural implicit function (broken_bullet_50k_model_h.pt) is stored in the folder data/output_data, and its zero surface can be extracted with our iso-surface generator:

    $ cd PATH_TO_NH-REP/data/output_data
    $ /usr/myapp/ISG -i broken_bullet_50k_model_h.pt -o broken_bullet_50k.ply -d 8

You will find the feature-preserving zero-surface mesh (broken_bullet_50k.ply) in data/output_data.

Via Conda

You can also set up the environment with conda:

    $ conda env create -f environment.yml
    $ conda activate nhrep

Meanwhile, you need to build the iso-surface generator manually; please refer here. The built executable lies in code/IsoSurfacing/build/App/console_pytorch/ISG_console_pytorch.

After that, you can conduct implicit conversion and iso-surface extraction as mentioned above.

Data downloading

We provide the pre-processed ABC dataset used for training NH-Rep. You can download it from BaiduYun or OneDrive; the archive can be extracted with 7-Zip. Please unzip it under the data folder. For each model, there are 3 input items:

*_50k.xyz: 50,000 points sampled from the input B-Rep; can be visualized with MeshLab.
*_50k_mask.txt: patch ID (patch_id + 1) of each sampled point.
*_50k_csg.conf: Boolean tree built on the patches, stored as nested lists; 'flag_convex' indicates the convexity of the root node.

For example, data/input_data/broken_bullet_50k_csg.conf looks like:

csg{
    list = [0,1,[2,3,4,],],
    flag_convex = 1,
}

The operation of the root node is op(convex) = max. The root node contains two patch leaf nodes, 'p_0' and 'p_1', and one child tree node; the child node contains three patch leaf nodes, 'p_2', 'p_3' and 'p_4'. So the Boolean tree looks like:

     max
   /  |  \
  /   |   \    
p_0  p_1  min 
        /  |  \
       /   |   \ 
      p_2 p_3  p_4
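The mapping from the nested list to a composite implicit function can be sketched as follows. This is an illustrative sketch, not the repo's implementation: an integer is a patch leaf, a list is an internal node, flag_convex = 1 means the root combines its children with max, and (as in the tree above) the operation alternates between max and min at each level. The helper name `eval_tree` and the toy halfspace functions `fs` are hypothetical stand-ins for the learned implicit functions.

```python
def eval_tree(node, fs, point, convex=True):
    """Evaluate the Boolean tree at a point: max at convex nodes, min otherwise."""
    if isinstance(node, int):
        return fs[node](point)  # patch leaf: evaluate its halfspace function
    vals = [eval_tree(child, fs, point, not convex) for child in node]
    return max(vals) if convex else min(vals)

# five toy implicit functions (offset planes) standing in for p_0 .. p_4
fs = [lambda p, a=a: p[0] - 0.1 * a for a in range(5)]
tree = [0, 1, [2, 3, 4]]  # broken_bullet_50k_csg.conf, flag_convex = 1
print(eval_tree(tree, fs, (0.5, 0.0, 0.0)))  # max(0.5, 0.4, min(0.3, 0.2, 0.1)) = 0.5
```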

If you want to generate our training data from the raw ABC dataset, please refer here.
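The per-model input files listed above can be read with a few lines of Python. This is a sketch under two assumptions: columns are whitespace-separated, and the .xyz file may carry extra columns (e.g. normals) beyond the x, y, z position; verify against the actual data before relying on it.

```python
import io

def load_points(fp):
    """Read an .xyz-style file; keep the first three columns as the position."""
    pts = []
    for line in fp:
        vals = [float(v) for v in line.split()]
        if vals:
            pts.append(vals[:3])  # any further columns (e.g. normals) ignored here
    return pts

def load_mask(fp):
    """Read the mask file: one integer (patch_id + 1) per sampled point."""
    return [int(line) for line in fp if line.strip()]

# tiny inline stand-ins for *_50k.xyz and *_50k_mask.txt
pts = load_points(io.StringIO("0.0 0.1 0.2 0.0 0.0 1.0\n0.3 0.4 0.5 0.0 1.0 0.0\n"))
mask = load_mask(io.StringIO("1\n2\n"))
print(len(pts), mask)  # 2 [1, 2]
```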

[Optional] You can also download the output of NH-Rep from BaiduYun or OneDrive, and unzip it under the data folder. For each model, there are 2 outputs:

*_50k_model_h.pt: implicit function of the root node, stored as TorchScript.
*_50k.ply: extracted zero surface of the implicit function.

With the provided output data, you can skip training and directly go to the evaluation part.

Training for the whole dataset

To convert the whole dataset to neural halfspace representation by training from scratch, run:

    $ cd PATH_TO_NH-REP/code/conversion
    $ python run.py --conf setup_all.conf --pt ../data/output_data

As there are over 10,000 models in total, training takes a very long time. We recommend training on multiple GPUs: simply create more *.conf files and distribute the 'fileprefix_list' entries of setup_all.conf among them.
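One way to split the model names across GPUs can be sketched as below. The round-robin helper `split_round_robin` is hypothetical, and how 'fileprefix_list' is embedded in each .conf file is not shown; adapt the output to the format of setup_all.conf.

```python
def split_round_robin(names, n_parts):
    """Distribute model names over n_parts disjoint lists, one per GPU/conf file."""
    parts = [[] for _ in range(n_parts)]
    for i, name in enumerate(names):
        parts[i % n_parts].append(name)
    return parts

# hypothetical model names; in practice these come from setup_all.conf
names = ["model_%05d_50k" % i for i in range(10)]
for k, part in enumerate(split_round_robin(names, 4)):
    print("conf file %d gets %d models" % (k, len(part)))
```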

Evaluation

To conduct the evaluation, you first need to build a point-sampling tool:

    $ cd PATH_TO_NH-REP/code/evaluation/MeshFeatureSample
    $ mkdir build && cd build
    $ cmake ..
    $ make

Then you can evaluate the conversion quality (CD, HD, NAE, FCD, FAE) of the broken_bullet model:

    $ cd PATH_TO_NH-REP/code/evaluation
    $ python evaluation.py 

To evaluate the whole dataset, please download 'eval_data' from BaiduYun or OneDrive, unzip it under the data folder, then run:

    $ python evaluation.py --name_list all_names.txt

Statistics will be stored in eval_results.csv.

The .ptangle file used for evaluation stores the positions and dihedral angles (in degrees) of points uniformly sampled on the sharp features of a model.
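Reading a .ptangle file can be sketched as follows, assuming one sample per line in "x y z angle" order (position followed by the dihedral angle in degrees). That column layout is an assumption, not confirmed here; check the evaluation code for the exact format.

```python
import io

def load_ptangle(fp):
    """Parse position + dihedral angle per line; the 4-column layout is assumed."""
    pts, angles = [], []
    for line in fp:
        vals = [float(v) for v in line.split()]
        if len(vals) < 4:
            continue  # skip blank or malformed lines
        pts.append(vals[:3])
        angles.append(vals[3])
    return pts, angles

sample = io.StringIO("0.1 0.2 0.3 90.0\n0.4 0.5 0.6 120.0\n")
pts, angles = load_ptangle(sample)
print(len(pts), angles)  # 2 [90.0, 120.0]
```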

To evaluate the DE and IoU metrics, you need to download the ground truth mesh data from BaiduYun or OneDrive and unzip it under the root folder. You also need to build code/IsoSurfacing, then switch to the folder PATH_TO_NH-REP/data/output_data and run:

    $ python eval_de_iou.py

DE and IoU will be stored in the *_eval.txt files.

Citation

If you use our code for research, please cite our paper:

@article{Guo2022nhrep,
  title={Implicit Conversion of Manifold B-Rep Solids by Neural Halfspace Representation},
  author={Guo, Hao-Xiang and Liu, Yang and Pan, Hao and Guo, Baining},
  journal={ACM Transactions on Graphics (TOG)},
  year={2022},
  publisher={ACM New York, NY, USA}
}

License

MIT License

Contact

Please contact us (Haoxiang Guo guohaoxiangxiang@gmail.com, Yang Liu yangliu@microsoft.com) if you have any questions about our implementation.

Acknowledgement

This implementation uses IGR as a reference. Our code also includes happly, yaml-cpp, cxxopts and Geometric Tools. We thank the authors for their excellent work.
