Style Transfer Jitter #190

Open
cdasomers opened this issue Oct 8, 2021 · 3 comments

Comments

@cdasomers

I'm experimenting with the pre-trained style-transfer model with little success.

Why can I only get good results with the animations you provided for your demo (and the files in the xia_mocap folder)? Clips edited from the mocap_bfa folder, and other animations we retargeted to the CMU skeleton, don't work: they produce extremely jittery motion, as if the style had been translated into noise.

The attached video shows typical results. This is what happens when I apply a short walking section from one of the mocap_bfa files as style (left) and content (right) to 7 of the animations in the test_data folder.

Is there any special animation preprocessing required, or assumptions about world location/movement direction? Are there inconsistencies in the BVH content between sources that we need to be aware of?
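
For anyone comparing data sources, here is the kind of quick sanity check I run before blaming the model. It's a minimal sketch that assumes standard plain-text BVH; the file names are placeholders, not actual files from the repo:

```python
# Rough diagnostic: compare the skeleton and timing of a working clip against a
# retargeted one. Paths are placeholders; parsing assumes standard plain-text BVH.
def bvh_summary(path):
    joints, channels, frames, frame_time = [], 0, None, None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith(("ROOT", "JOINT")):
                joints.append(line.split()[1])
            elif line.startswith("CHANNELS"):
                channels += int(line.split()[1])
            elif line.startswith("Frames:"):
                frames = int(line.split(":")[1])
            elif line.startswith("Frame Time:"):
                frame_time = float(line.split(":")[1])
    return joints, channels, frames, frame_time

for path in ("xia_test_clip.bvh", "our_retargeted_clip.bvh"):  # placeholder names
    joints, channels, frames, dt = bvh_summary(path)
    print(f"{path}: {len(joints)} joints, {channels} channels, "
          f"{frames} frames @ {dt:.6f}s (~{1.0 / dt:.1f} fps)")
```

If the joint counts, channel counts, or frame rates differ between the working clips and the retargeted ones, that would be the first thing I'd rule out.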

Thank you.

old500_style_content.mp4
@cdasomers (Author) commented Oct 15, 2021

Follow-up:
Having experimented with the pre-trained network for a week, I have realized that it only works well when the content animation (and perhaps the style animation) comes from the original training data. Perhaps it's not expected to generalize? It is still useful to style transfer using content and style animations the system is trained with to fill in combinations that were never captured.

Is this the intended use case?

@miaoYuanyuan

In reply to @cdasomers' follow-up above:

Hi, I'm hitting the same issue: when I use my own data as the style input, all of the results are poor, even showing broken poses. You say that "It is still useful to style transfer using content and style animations the system is trained with to fill in combinations that were never captured." Did you train the network on your own dataset, and after training, does testing on that same dataset give good results?

@cdasomers (Author)

We haven't trained it on our own data yet because we're having trouble getting our animations into the BVH format; we're looking to bypass that step instead. In the meantime I've experimented with training the model on subsets of the xia dataset to see what happens. This seems to work OK even on content animations the model was not trained on, so I'm still unsure why the results are jittery when they are. Perhaps there are incompatibilities in the animation representation; even the BFA animation data they used seems to have an issue in this respect.
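
To be concrete, by "subsets" I just mean a held-out split of the xia clips before retraining, along these lines (the path and the one-file-per-clip assumption are mine, not something the repo guarantees):

```python
# Hypothetical held-out split of the xia clips before retraining.
import os
import random

data_dir = "data/xia_mocap"  # placeholder path
clips = sorted(f for f in os.listdir(data_dir) if f.endswith(".bvh"))

random.seed(0)
random.shuffle(clips)

held_out = set(clips[: len(clips) // 5])          # hold out ~20% of clips
train_clips = [c for c in clips if c not in held_out]
test_clips = sorted(held_out)

print(f"{len(train_clips)} training clips, {len(test_clips)} held-out clips")
```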

When I say "It is still useful to style transfer using content and style animations the system is trained with to fill in combinations that were never captured," I'm referring to experiments I performed on the xia dataset that the provided pre-trained network was trained on. In the video below I transferred each of the demo animations onto each of the others to create a style-transfer matrix. The results are pretty good for locomotion.

matrix_smaller.mp4
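
For context, the matrix was produced by looping over all ordered pairs of the demo clips, roughly like this; run_style_transfer() below is only a stand-in for however you call the repo's inference script, not a function that exists in the codebase:

```python
# Sketch of the style-transfer matrix: every demo clip as content against every
# other demo clip as style. Clip names are illustrative only.
import itertools

demo_clips = ["walk.bvh", "run.bvh", "jump.bvh"]  # hypothetical demo file names

def run_style_transfer(content_bvh, style_bvh, output_bvh):
    """Placeholder: replace the body with the repo's real inference call."""
    print(f"content={content_bvh}  style={style_bvh}  ->  {output_bvh}")

for content, style in itertools.product(demo_clips, repeat=2):
    if content == style:
        continue  # skip the diagonal: a clip styled with itself
    out = f"matrix/{content[:-4]}__{style[:-4]}.bvh"
    run_style_transfer(content, style, out)
```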
