
Time to execute raw_pose_preprocessing #41

Open
chinnusai25 opened this issue Mar 20, 2023 · 2 comments · May be fixed by #63

Comments

@chinnusai25

chinnusai25 commented Mar 20, 2023

It appears that executing raw_pose_preprocessing.py over all of the AMASS data takes 3-4 days. Is this expected, or am I doing something wrong?

@EricGuo5513
Owner

It does take a long time, especially on CPU. You could try using a GPU and running several threads or processes in parallel.
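For reference, one way to apply this advice is to fan the AMASS files out over worker processes with the standard library. This is only a sketch: `process_file` here is a hypothetical placeholder for the per-file work done in raw_pose_preprocessing.py (loading the .npz, running the body model, saving joints), not the repo's actual code.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def process_file(path):
    # Placeholder for the real per-file preprocessing step
    # (load .npz, run the SMPL body model, save joint positions).
    # Here it just returns the basename so the sketch is runnable.
    return os.path.basename(path)

if __name__ == "__main__":
    paths = ["amass/seq_a.npz", "amass/seq_b.npz", "amass/seq_c.npz"]
    # Each worker handles a different motion file; results come back
    # in the same order as the input paths.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_file, paths))
    print(results)
```

Processes (rather than threads) sidestep the GIL for the CPU-bound parts; with a GPU body model, a small `max_workers` avoids oversubscribing device memory.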

felixdivo added a commit to felixdivo/HumanML3D that referenced this issue Jun 24, 2023
@felixdivo felixdivo linked a pull request Jun 24, 2023 that will close this issue
@fhan235

fhan235 commented Aug 9, 2023

Hi, I think the for loop in 'raw_pose_processing.ipynb' may not be optimal: the SMPL body model can process sequence data in a single batch, which is much more efficient.

with torch.no_grad():
    for fId in range(0, frame_number, down_sample):
        root_orient = torch.Tensor(bdata['poses'][fId:fId+1, :3]).to(comp_device) # controls the global root orientation
        pose_body = torch.Tensor(bdata['poses'][fId:fId+1, 3:66]).to(comp_device) # controls the body
        pose_hand = torch.Tensor(bdata['poses'][fId:fId+1, 66:]).to(comp_device) # controls the finger articulation
        betas = torch.Tensor(bdata['betas'][:10][np.newaxis]).to(comp_device) # controls the body shape
        trans = torch.Tensor(bdata['trans'][fId:fId+1]).to(comp_device)    
        body = bm(pose_body=pose_body, pose_hand=pose_hand, betas=betas, root_orient=root_orient)
        joint_loc = body.Jtr[0] + trans
        pose_seq.append(joint_loc.unsqueeze(0))

I changed the above code to the following, eliminating the for loop:

down_sample = int(fps / ex_fps)
with torch.no_grad():
    root_orient = torch.Tensor(bdata['poses'][::down_sample, :3]).to(comp_device) # controls the global root orientation
    pose_body = torch.Tensor(bdata['poses'][::down_sample, 3:66]).to(comp_device) # controls the body
    pose_hand = torch.Tensor(bdata['poses'][::down_sample, 66:]).to(comp_device) # controls the finger articulation
    betas = torch.Tensor(bdata['betas'][:10][np.newaxis]).repeat((pose_hand.shape[0], 1)).to(comp_device) # controls the body shape
    trans = torch.Tensor(bdata['trans'][::down_sample]).to(comp_device)    
    body = bm(pose_body=pose_body, pose_hand=pose_hand, betas=betas, root_orient=root_orient)
    joint_loc = body.Jtr[:,:22] + trans[:, None]
    pose_seq.append(joint_loc)

This reduces the processing time enormously.
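As a sanity check that the batched version visits the same frames as the loop, the strided slice `[::down_sample]` selects exactly the indices `range(0, frame_number, down_sample)` iterated over. A minimal demonstration with random stand-in data (not the actual AMASS arrays):

```python
import numpy as np

poses = np.random.rand(120, 156)  # stand-in for bdata['poses']
fps, ex_fps = 60, 20
down_sample = int(fps / ex_fps)

# Frames visited by the original per-frame loop ...
loop_frames = np.stack(
    [poses[fId] for fId in range(0, poses.shape[0], down_sample)]
)
# ... are exactly the frames selected by the batched slice.
sliced_frames = poses[::down_sample]
assert np.array_equal(loop_frames, sliced_frames)
print(sliced_frames.shape)
```

The speedup comes from replacing many small forward passes through the body model with one batched pass, which amortizes kernel-launch and Python overhead.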
