
Point-Based Modeling of Human Clothing

Paper | Project page | Video

This is the official PyTorch code repository for the paper "Point-Based Modeling of Human Clothing" (ICCV 2021).

Setup

Build docker

  • Prerequisites: your NVIDIA driver must support CUDA 10.2; Windows and macOS are not supported.
  • Clone repo:
    • git clone https://github.com/izakharkin/point_based_clothing.git
    • cd point_based_clothing
    • git submodule init && git submodule update
  • Docker setup:
    • Download 10_nvidia.json and place it in the docker/ folder;
    • Create the docker image by building it yourself (2 commands; see the sketch after this list);
    • Inside the docker container, activate the environment: source activate pbc
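For reference, here is a minimal sketch of what building and entering the image might look like; the image tag (pbc), the Dockerfile location, the mounted path, and the published port are our assumptions, so prefer the exact commands shipped in the docker/ folder:

docker build -t pbc -f docker/Dockerfile .
docker run --gpus all -it -p 8087:8087 -v $(pwd):/point_based_clothing pbc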

Download data

  • Download the SMPL neutral model from the SMPLify project page:
    • Register, go to the Downloads section, download SMPLIFY_CODE_V2.ZIP, and unpack it;
    • Move smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl to data/smpl_models/SMPL_NEUTRAL.pkl.
  • Download the model checkpoints (~570 Mb): Google Drive, and place them in the checkpoints/ folder;
  • Download the sample data we provide to check the appearance fitting (~480 Mb): Google Drive, unpack it, and place the psp/ folder in the samples/ folder (see the layout sketch after this list).
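After these steps, the relevant part of the repository should look roughly as follows; treat this as an orientation sketch, since the exact contents of checkpoints/ depend on the downloaded archive:

point_based_clothing/
├── checkpoints/              # model checkpoints from Google Drive
├── data/
│   └── smpl_models/
│       └── SMPL_NEUTRAL.pkl
└── samples/
    └── psp/                  # sample data for appearance fitting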

Custom data

To run our pipeline on custom data (images or videos):

  • run our fork of Graphonomy to obtain clothing segmentation masks in our format;
  • run e.g. SMPLify or any other suitable method to obtain the SMPL parameters (3D body pose and shape ground truth).

We recommend running these methods on the internet_images/ test dataset first to make sure that your outputs exactly match the format of internet_images/segmentations/cloth and internet_images/smpl/results.
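For orientation, the expected layout of these outputs is roughly the following; the comments are our reading of the pipeline, so compare against the actual files in internet_images/ rather than this sketch:

internet_images/
├── segmentations/
│   └── cloth/        # per-image clothing segmentation masks (Graphonomy fork output)
└── smpl/
    └── results/      # per-image SMPL parameters (e.g. from SMPLify)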

Run

We provide scripts for geometry fitting and inference, as well as for appearance fitting and inference.

Geometry (outfit code)

Fitting

To fit a style outfit code to a single image one can run:

python fit_outfit_code.py --config_name=outfit_code/psp

The learned outfit codes are saved to out/outfit_code/outfit_codes_<dset_name>.pkl by default. The visualization of the process is in out/outfit_code/vis_<dset_name>/:

  • Coarse fitting stage: four outfit codes are initialized randomly and optimized simultaneously.

[animation: outfit_code_fitting_coarse]

  • Fine fitting stage: the mean of the found outfit codes is optimized further to possibly improve the reconstruction.

[animation: outfit_code_fitting_fine]

Note: the visibility_thr hyperparameter in fit_outfit_code.py may affect the quality of the resulting point cloud (e.g. make it more sparse). Feel free to tune it if the result does not look perfect.

[animation: vis_thr_360]
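Once fitting has finished, the saved codes can be inspected directly. Below is a minimal sketch, assuming the default output path and the psp dataset name; the assumption that the pickle holds a mapping from subject id to a latent code is ours, so adjust it to whatever you actually find:

import pickle

# Default output path of fit_outfit_code.py; the dataset name 'psp' is assumed from the sample config.
with open('out/outfit_code/outfit_codes_psp.pkl', 'rb') as f:
    outfit_codes = pickle.load(f)

# Assumed structure: {subject_or_image_id: outfit_code}; inspect and adapt as needed.
print(type(outfit_codes))
if isinstance(outfit_codes, dict):
    for pid, code in outfit_codes.items():
        print(pid, getattr(code, 'shape', None))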

Inference

[animation: outfit_code_inference]

To infer the fitted outfit style on the training subjects or on new subjects, please see infer_outfit_code.ipynb. To start a Jupyter notebook server from the docker container, run this inside the container:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Appearance (neural descriptors)

Fitting

To fit a clothing appearance to a sequence of frames one can run:

python fit_appearance.py --config_name=appearance/psp_male-3-casual

The learned neural descriptors ntex0_<epoch>.pth and neural rendering network weights model0_<epoch>.pth are saved to out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/ by default. The visualization of the process is in out/appearance/<dset_name>/<subject_id>/<experiment_dir>/visuals/.
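To sanity-check a finished run, the saved files can be loaded directly with PyTorch. This is a minimal sketch: the concrete <dset_name>/<subject_id>/<experiment_dir> path below is a placeholder, and the assumption that both files are plain state dicts is ours, not guaranteed by the repo:

import glob
import torch

def latest(pattern):
    # Pick the checkpoint with the largest numeric epoch suffix (not lexicographic order).
    return max(glob.glob(pattern), key=lambda p: int(p.rsplit('_', 1)[-1].split('.')[0]))

# Placeholder path: substitute your own <dset_name>/<subject_id>/<experiment_dir>.
ckpt_dir = 'out/appearance/psp/male-3-casual/experiment/checkpoints'

ntex = torch.load(latest(f'{ckpt_dir}/ntex0_*.pth'), map_location='cpu')
model_weights = torch.load(latest(f'{ckpt_dir}/model0_*.pth'), map_location='cpu')

# Assumed to be (nested) state dicts; inspect the keys to see what was actually saved.
print(type(ntex), type(model_weights))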

Inference

[animation: appearance_inference]

To infer the fitted clothing point cloud and its appearance on the training subjects or on new subjects, please see infer_appearance.ipynb. To start a Jupyter notebook server from the docker container, run this inside the container:

jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser 

Q&A

Question:

I am trying to obtain the final point cloud generated by the outfit code module. Is there a way to save the 3D point clouds used to generate the output images/videos when running fit_outfit_code.py?

Answer:

There is no such function implemented out-of-the-box, but one can access the point clouds by working with the data dicts:

  • during outfit code fitting, you could start by saving cloth_pcd to a file and check whether it is in the format you need; this should be the point cloud predicted by the draping network from the current outfit_code (see the sketch after this answer).
  • during inference, you could start right from the notebook (the third code cell); you can access the clothing point cloud by simply returning cloth_pcd from the infer_pid() function.

There is only one place where the draping network predicts the clothing point cloud from an outfit_code: inside the forward_pass() function.
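A minimal saving sketch, assuming cloth_pcd is a torch tensor of XYZ coordinates; the tensor shape and the .npy output format are our assumptions, so adapt the function to what the data dict actually holds:

import numpy as np

def save_cloth_pcd(cloth_pcd, path='cloth_pcd.npy'):
    # Assumed: cloth_pcd is a torch tensor of points shaped [N, 3] or [B, N, 3].
    pts = cloth_pcd.detach().cpu().numpy()
    if pts.ndim == 3:       # drop the batch dimension if present
        pts = pts[0]
    np.save(path, pts)      # reload later with np.load(path)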

Citation

If you find our work helpful, please do not hesitate to cite us:

@InProceedings{Zakharkin_2021_ICCV,
    author    = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
    title     = {Point-Based Modeling of Human Clothing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14718-14727}
}

Non-commercial use only.

Related projects

We also thank the authors of the Cloth3D and PeopleSnapshot datasets.