mfbalin changed the title from "[GraphBolt] There should be an easy way to cast to fp16 or bf16" to "[GraphBolt] There should be an easy way to use fp16 or bf16" on Apr 20, 2024.
🔨 Work Item
IMPORTANT:
Project tracker: https://github.com/orgs/dmlc/projects/2
Description
In our examples, there should be an easy way to switch to lower precision for feature storage and/or training.
Say we load a dataset and the memory consumption is high. The user may want to cast the features tensor to float16 or bfloat16. However, this is not easy to do with our existing abstractions. The `Feature` or `TorchBasedFeature` class, or the `BuiltinDataset`, should offer a way to specify the feature data type during loading, or to cast it after loading.
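For reference, a minimal sketch of the manual workaround available today: read the full tensor out of the feature store, cast it, and re-wrap it. This assumes `TorchBasedFeature` wraps an in-memory torch tensor and that `read()` with no arguments returns the whole tensor; the random tensor is only a stand-in for features loaded from a real dataset.

```python
import torch
import dgl.graphbolt as gb

# Stand-in for a large fp32 feature tensor loaded from a dataset.
feat_fp32 = torch.randn(1000, 128, dtype=torch.float32)
feature = gb.TorchBasedFeature(feat_fp32)

# Today the cast has to happen outside the Feature abstraction:
# materialize the full tensor, cast it, and wrap a new TorchBasedFeature.
feat_bf16 = feature.read().to(torch.bfloat16)
feature_bf16 = gb.TorchBasedFeature(feat_bf16)

print(feature_bf16.read().dtype)  # torch.bfloat16
```

This briefly materializes both the fp32 and bf16 copies, which defeats the purpose when memory is already tight. What the issue asks for is a supported path (a data type specified at load time, or a cast applied after loading) so the workaround above is not needed; the exact API is left open here.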