
Evaluate VIBE on own dataset #370

Open
ChennyDeng opened this issue Aug 6, 2023 · 7 comments

Comments

@ChennyDeng

Hi,

Thank you for providing such an amazing platform! It is really helpful, and I truly appreciate the time and effort you put into this well-organized documentation.

I am trying to evaluate VIBE on a new dataset. I wonder whether mmhuman3d has released any instructions on how to preprocess a new dataset into the preprocessed npz files that VIBE needs for storing and loading. If not, could you kindly provide some hints on how to sort the raw data into the format that VIBE can accept?

Thank you in advance for clarifying this! I wish you a lovely day!

Chenny

@Dipankar1997161

You need SMPL values from VIBE, right? Then I think you should check the VIBE repo directly; all you have to do is run VIBE on your own dataset images or video, and the output files will be generated automatically.

That is how I did it.
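
For reference, VIBE's demo writes its results to a pickle per video. A minimal sketch for reading it, assuming the output path and key layout documented in the VIBE repo's README:

import joblib

# demo.py saves results under output/<video name>/vibe_output.pkl (assumed path)
results = joblib.load('output/sample_video/vibe_output.pkl')
for person_id, person in results.items():
    # per the README, 'pose' is (n_frames, 72) axis-angle SMPL pose and
    # 'betas' is (n_frames, 10) shape parameters
    print(person_id, person['pose'].shape, person['betas'].shape)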

@ChennyDeng
Author

Hi,

Thank you for your response. What I am actually looking for is a tutorial on converting a new dataset into the mmhuman3d preprocessed format. Once preprocessed, the data should be in an .npz file (https://mmhuman3d.readthedocs.io/en/main/preprocess_dataset.html).
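
In case it helps anyone else: the preprocessed files are ordinary numpy archives, so you can peek at an existing one (the file name below is just a placeholder for any preprocessed file from the docs) to see which keys a converter is expected to produce:

import numpy as np

data = np.load('pw3d_test.npz', allow_pickle=True)  # placeholder file name
print(data.files)  # lists the arrays stored in the archive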

I understand that mmdetection3d provides relevant documentation (https://mmdetection3d.readthedocs.io/en/dev/tutorials/customize_dataset.html). I am curious whether similar instructions will be released for mmhuman3d. If not, could the authors kindly provide instructions on how to convert a new dataset into the preprocessed npz format that mmhuman3d's VIBE can accept?

Thank you for your clarification.

@Wei-Chen-hub
Collaborator

Hi @ChennyDeng ,
Thanks for your interest in MMHuman3D. For the dataset format, you can check the HumanData format.

I am currently converting various datasets to HumanData to support human perception tasks. My branch convertors has several new converters that convert various datasets into HumanData. As datasets can be very different, there is no standardized way to write a data converter, but you may look into the code for some insight.
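
To make the expected output a bit more concrete, here is a minimal, hypothetical converter skeleton. The key names and array shapes follow my reading of the HumanData docs, so treat them as assumptions to verify against the spec:

import numpy as np
from mmhuman3d.data.data_structures.human_data import HumanData

num_frames = 100  # placeholder: one entry per image frame

human_data = HumanData()
human_data['image_path'] = ['images/%06d.jpg' % i for i in range(num_frames)]
# bbox as (N, 5): x, y, w, h, confidence (assumed layout)
human_data['bbox_xywh'] = np.zeros((num_frames, 5), dtype=np.float32)
# SMPL parameters grouped in a dict (assumed shapes)
human_data['smpl'] = {
    'global_orient': np.zeros((num_frames, 3), dtype=np.float32),
    'body_pose': np.zeros((num_frames, 23, 3), dtype=np.float32),
    'betas': np.zeros((num_frames, 10), dtype=np.float32),
}
human_data.dump('my_dataset_train.npz')  # writes the preprocessed .npz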

Hope this helps.

@ChennyDeng
Author

Hi @Wei-Chen-hub ,

I want to express my appreciation for your incredible work. I've noticed that you've begun developing a converter to transform CIMI4D into the HumanData format (https://github.com/open-mmlab/mmhuman3d/blob/convertors/mmhuman3d/data/data_converters/cimi4d.py). However, it seems that this converter is still a work in progress. I'm curious if you have a timeline in mind for its completion. If you're open to it, I'd be more than willing to assist in creating the script.

Thanks for your time and effort in producing this amazing work :)

Best regards,
Chenny

@Wei-Chen-hub
Collaborator

Hi @ChennyDeng ,

Thanks for your appreciation!

Sadly, for CIMI4D the coding process is paused, as I didn't find official visualization code (or post-processing code) and I have no experience with the raw mocap data (e.g. ".bvh" files). If you have any insights on continuing the conversion of this dataset to HumanData (i.e. extracting SMPL/SMPL-X data, regressing 3D keypoints, projecting to camera space, etc.), please go ahead and push an MR. I would be more than happy to provide support during development as well as 2D-overlay verification of the HumanData.

For CIMI4D, you can send the authors an email to request the raw data. After downloading and unzipping, the dataset should look like this:

D:\datasets\cimi4d\
├── ChangSha_V1.0\
│   ├── 20220707_A_174cm68kg18age_M\
│   │   ├── A001\
│   │   │   ├── A001.bvh
│   │   │   ├── A001.mbx
│   │   │   ├── A001_label.h5py
│   │   │   ├── A001_pos.csv
│   │   │   ├── A001_rot.csv
│   │   │   ├── A_shape.json
│   │   │   ├── Sparse_scene_points.pcd
│   │   │   └── lidar_p.txt
│   │   └── A002\
│   ├── 20220708_B_158cm57kg20age_F\
│   │   ├── B002\
│   ├── ...
└── XMU_V1.0\
    ├── XMU_0930_001_V1_0\
    │   ├── img_fps20\
    │   ├── Dense_scene_mesh.ply
    │   ├── Dense_scene_points.pcd
    │   ├── Sparse_scene_points.pcd
    │   ├── cimi4d_XMU_0930_001_V1.pkl
    │   ├── climbingXMU001_zpc001.bvh
    │   ├── climbingXMU001_zpc001.mbx
    │   ├── climbingXMU001_zpc001_pos.csv
    │   ├── climbingXMU001_zpc001_rot.csv
    │   └── lidar_p.txt
    ├── XMU_0930_002_V1_0\
    ├── ...

And the command to call the converter is:

python tools/convert_datasets.py \
    --datasets cimi4d \
    --root_path <cimi4d dataset path> \
    --output_path <output path> \
    --modes train
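
To sanity-check whatever the converter writes, the result should load back as HumanData. A short sketch (the output file name below is hypothetical; use whatever file appears in <output path>):

from mmhuman3d.data.data_structures.human_data import HumanData

human_data = HumanData.fromfile('output/cimi4d_train.npz')  # hypothetical name
print(sorted(human_data.keys()))      # keys the converter produced
print(len(human_data['image_path']))  # number of samples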

@ChennyDeng
Author

Hi @Wei-Chen-hub,

Thanks for getting back to me so quickly. I'm really excited about the project and would be more than happy to help out. Could we schedule a time to talk about the converter script? If that suits you, please feel free to reach out to me via email. I'm looking forward to our discussion and the possibility of working together.

Best regards,
Chenny

@Wei-Chen-hub
Collaborator

Wei-Chen-hub commented Sep 4, 2023

Hi @ChennyDeng ,

I sent an email on 25 Aug, but would like to check whether it reached you.
If you didn't receive that email, please contact me at my address.
