# IM-NET PyTorch

A PyTorch implementation of "Learning Implicit Fields for Generative Shape Modeling" by Zhiqin Chen and Hao Zhang.

## Demo

### Interpolation between digits

The implicit network learns shape boundaries rather than pixel distributions, so interpolating between the latent codes of two digits looks like one digit morphing into another. In a conventional pixel-space autoencoder, the same interpolation would look like the first digit fading out while the second fades in.
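A minimal sketch of how this kind of interpolation can be rendered, assuming an implicit decoder that maps a latent code plus a 2-D query coordinate to an inside/outside value. The `ImplicitDecoder`, `make_grid`, and `interpolate` names are illustrative stand-ins, not this repo's API:

```python
# Hypothetical sketch of latent-space interpolation with an implicit decoder.
import torch

class ImplicitDecoder(torch.nn.Module):
    """Maps a latent code plus a 2-D query point to an inside/outside value."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim + 2, hidden), torch.nn.LeakyReLU(0.02),
            torch.nn.Linear(hidden, hidden), torch.nn.LeakyReLU(0.02),
            torch.nn.Linear(hidden, 1), torch.nn.Sigmoid(),
        )

    def forward(self, z, points):
        # z: (latent_dim,), points: (N, 2) -> occupancy in [0, 1] per point
        z = z.unsqueeze(0).expand(points.shape[0], -1)
        return self.net(torch.cat([z, points], dim=1)).squeeze(1)

def make_grid(resolution):
    """Regular grid of query coordinates in [-0.5, 0.5]^2, shape (res*res, 2)."""
    lin = torch.linspace(-0.5, 0.5, resolution)
    yy, xx = torch.meshgrid(lin, lin, indexing="ij")
    return torch.stack([xx, yy], dim=-1).reshape(-1, 2)

@torch.no_grad()
def interpolate(decoder, z_a, z_b, steps=8, resolution=28):
    """Decode frames along the straight line between two latent codes."""
    grid = make_grid(resolution)
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_a + t * z_b     # linear blend of the two latent codes
        occ = decoder(z, grid)          # evaluate the implicit field per point
        frames.append(occ.reshape(resolution, resolution))
    return torch.stack(frames)          # (steps, resolution, resolution)
```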

### Super-resolution + interpolation

Because the decoder is queried one coordinate at a time, outputs can be sampled at a higher resolution than the training data. Here is an MNIST interpolation rendered at 128x128 pixels instead of the native 28x28; it looks noticeably less pixelated than the interpolations above.
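Continuing the sketch above (same hypothetical helpers), super-resolution needs no change to the model: the decoder is simply evaluated on a denser coordinate grid.

```python
# Same decoder, denser query grid: "super-resolution" is just more sample points.
decoder = ImplicitDecoder()                      # would normally be trained
z_a, z_b = torch.randn(128), torch.randn(128)    # stand-ins for encoded digits

frames_28  = interpolate(decoder, z_a, z_b, steps=8, resolution=28)
frames_128 = interpolate(decoder, z_a, z_b, steps=8, resolution=128)
print(frames_28.shape, frames_128.shape)         # (8, 28, 28) and (8, 128, 128)
```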