[Illustrative; not for merge] How to prefer float16 as the main float type #1802

Open · Birch-san wants to merge 2 commits into main
Conversation

Birch-san

Currently, if your intention is to build a CoreML model which targets float16 (very likely if you're targeting the ANE), the tracing+conversion process is, as I understand it: "trace in float32; casts to float16 will be added during conversion; then we try to optimize away those casts".
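
For reference, here's a minimal sketch of that standard path (the model class and shapes below are just placeholders):

```python
import torch
import coremltools as ct

# Placeholder model/input, purely to illustrate the usual fp32-trace workflow.
model = MyModel().eval()              # hypothetical float32 PyTorch module
example = torch.rand(1, 3, 224, 224)  # float32 example input

traced = torch.jit.trace(model, example)

# Conversion inserts float16 casts, then relies on graph passes to elide them.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 224, 224))],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,
)
```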

This does have an (admittedly minor) downside: you have to load a float32 model (and start with 32-bit weights), only to throw half of those bits away in the end.

It also relies on the optimization passes being effective at eliding all the casts, and you have to wait (slightly) for those passes to run.

The main downside for me was when trying to debug failures: the ops were cluttered with casts, and the graph looked more different from the original TorchScript than it needed to.

To simplify all this, I changed the convention everywhere I could find it, from "Python floats will be interpreted as np.float32" to "Python floats will be interpreted as np.float16".

I'm not proposing to merge this or anything, but it actually took changes in only a few places to successfully compile stable-diffusion from a model that was traced by TorchScript in float16.
Note: on PyTorch, the CPU device doesn't implement float16 operations, so this trick requires tracing the model in float16 via the MPS device instead.
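
A rough sketch of the float16-first path this PR is aimed at (again with a placeholder model; MPS is needed because CPU lacks the fp16 kernels):

```python
import torch

device = torch.device("mps")  # CPU can't run many fp16 ops, so trace on MPS
model = MyModel().eval().to(device=device, dtype=torch.float16)
example = torch.rand(1, 3, 224, 224, device=device, dtype=torch.float16)

with torch.no_grad():
    traced_fp16 = torch.jit.trace(model, example)

# With this PR's changes, converting `traced_fp16` interprets Python float
# literals as np.float16 directly, instead of emitting fp32 constants + casts.
```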

If it turns out it's not just me who finds this useful, I wonder whether this could be exposed somehow as a configurable option, e.g. a "default float width"?
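
Purely to illustrate what I mean (the parameter below is hypothetical and does not exist in coremltools today):

```python
# Hypothetical option, for illustration only; not a real ct.convert argument.
mlmodel = ct.convert(
    traced_fp16,
    inputs=[ct.TensorType(name="x", shape=(1, 3, 224, 224))],
    default_float_width=16,  # interpret untyped Python float literals as float16
)
```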

…loat literal, or how to convert ints to floats), to simplify the conversion process (rather than starting in fp32, spraying fp16 casts everywhere, and trying to remove them during conversion).
@aseemw
Collaborator

aseemw commented Mar 12, 2023

If you start with a float16-traced torch model, does it work out of the box? Or do you need the changes in this PR to convert such a model?
