Support for Float32 #2212

Open
seabbs opened this issue May 1, 2024 · 2 comments
Comments

@seabbs

seabbs commented May 1, 2024

I see issues in various places about using Float32 (e.g. https://discourse.julialang.org/t/issue-with-float32-precision-in-turing-model-sampling/108160), with the problem being that parts of the code hard-code Float64.

I've looked but can't see an issue for this. Is there one? If not, what work would need to be done to make this possible, and could it be itemised so it is easier to chunk out into small parts?

In principle this seems like something that could be tackled by newer contributors, so making it easy to get started seems worth the effort. That being said, I have no idea and it could be far more complicated.

@seabbs
Author

seabbs commented May 1, 2024

Update: I just did a quick scan across the ecosystem in the TuringLang org and I see a couple of issues/PRs addressing this (TuringLang/Bijectors.jl#266), so maybe I am just looking in the wrong place?

@torfjelde
Member

I don't think there is a unifying issue, unfortunately, but we can make this the one :)

It is definitely possible to achieve. For example, it's already possible to evaluate a model using, say, Float32 if one is willing to do some work.
But it requires quite some effort to make this consistent across code-bases to the point where it's easy to run the full inference procedure using Float32.

There are a few aspects to this:

  • The user needs to write everything in a way that's compatible with whatever type we want. Specifically, that means something like @model demo(::Type{T}=Float64) where T and making sure that all the numbers inside the model are of type T, e.g. calling zeros(T, n) and so on (see the sketch after this list).
  • We need to enable ways of specifying that Turing.jl should use some type T everywhere internally, e.g. through either the sampler or the sample method itself.
  • We need to replace all constructions of VarInfo (used internally in Turing.jl to keep track of realizations) with a corresponding one that takes the desired type into account.
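As a concrete illustration of the first point, here is a minimal sketch (not from the thread; the model, data, and distributions are made up for illustration) of a model written with a type parameter so its internal allocations follow whatever element type is passed in:

```julia
using Turing

# A model parameterised by an element type T; Float64 by default, but Float32
# (or another type) can be passed in explicitly.
@model function demo(x, ::Type{T}=Float64) where {T}
    # Allocate internal containers with T instead of hard-coding Float64.
    m = Vector{T}(undef, length(x))
    s ~ truncated(Normal(zero(T), one(T)), zero(T), T(Inf))
    for i in eachindex(x)
        m[i] ~ Normal(zero(T), one(T))
        x[i] ~ Normal(m[i], s)
    end
end

# Evaluation with Float32 data and a Float32 type parameter. Note that this
# only keeps the model's own computations in Float32; running the full
# inference procedure in Float32 still depends on the internals listed above.
model = demo(randn(Float32, 10), Float32)
```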

@torfjelde torfjelde changed the title Question: Steps required to support Float32 Support for Float32 May 7, 2024