I see issues in various places using Float32 (e.g. https://discourse.julialang.org/t/issue-with-float32-precision-in-turing-model-sampling/108160), with the problem being that areas of the code hard-code Float64.

I've looked but can't see an issue for this. Is there one? If not, what work would need to be done to make this possible, and could it be itemised to make it easier to chunk into small parts?

In principle this seems like something that could be tackled by newer contributors, so making that easy seems like it would be worth the effort? That said, I have no idea and it could be far more complicated.
Update: I just did a quick scan across the ecosystem in the TuringLang org and I see a couple of issues/PRs attacking this (TuringLang/Bijectors.jl#266) so maybe I am just looking in the wrong place?
I don't think there is a unifying issue unfortunately, but we can make this the one :)

It is definitely possible to achieve. For example, it's already possible to evaluate a model using, say, Float32 if one is willing to do some work. But it requires quite some effort to make this consistent across the codebases, to the point where it's easy to just run the full inference procedure using Float32.
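To make the "possible with some work" part concrete, here is a minimal sketch of a Float32-compatible model, assuming the current DynamicPPL `@model` syntax; the model name `demo` and the data `x` are illustrative, not existing API:

```julia
using Turing

# The ::Type{T}=Float64 argument lets the caller pick the element type;
# every numeric literal inside the model body is constructed via T.
@model function demo(x, ::Type{T}=Float64) where {T}
    μ ~ Normal(zero(T), one(T))
    for i in eachindex(x)
        x[i] ~ Normal(μ, one(T))
    end
end

# Instantiate with Float32 data and Float32 internals:
model = demo(randn(Float32, 100), Float32)
```

Whether the rest of the pipeline (samplers, VarInfo, chains) then preserves Float32 end-to-end is exactly the gap this issue is about.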
There are a few aspects to this:

1. The user needs to write everything in a way that's compatible with whatever type we want. Specifically, that means something like `@model demo(::Type{T}=Float64) where T` and making sure that all the numbers inside the model are of type `T`, e.g. calling `zeros(T, n)` and so on.
2. We need to enable ways of specifying that Turing.jl should use some type `T` everywhere internally, e.g. through either the sampler or the `sample` method itself.
3. We need to replace all constructions of `VarInfo` (used internally in Turing.jl to keep track of realizations) with a corresponding one that takes the desired type into account.
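The first point can be illustrated without Turing.jl at all — it's just a generic-programming discipline in plain Julia. The helper below is hypothetical, purely to show the pattern of threading `T` through instead of hard-coding Float64:

```julia
# Plain-Julia illustration: take the element type as a parameter and
# never bake in Float64. `accumulate_demo` is a made-up helper, not
# Turing.jl API.
function accumulate_demo(::Type{T}, n::Integer) where {T<:AbstractFloat}
    acc = zeros(T, n)               # zeros(T, n), not zeros(n) (which is Float64)
    for i in 2:n
        acc[i] = acc[i-1] + T(1)/2  # T-valued constants, not the literal 0.5
    end
    return acc
end

eltype(accumulate_demo(Float32, 8))  # → Float32
eltype(accumulate_demo(Float64, 8))  # → Float64
```

A single bare `0.5` or `zeros(n)` in such code silently promotes everything back to Float64, which is exactly the kind of hard-coding the linked Discourse thread runs into.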
torfjelde changed the title *Question: Steps required to support Float32* to *Support for Float32* on May 7, 2024.