Computation simplification with precision reduction #27

Open
SuperSashka opened this issue Mar 20, 2023 · 2 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)
Milestone: Faster speed
@SuperSashka (Member) commented Mar 20, 2023
In some computer vision applications it is viable to reduce precision from, say, float64 to float32/float16 or even to int. It may also be faster to compute a first guess in, for example, float16 and then move to float64 to refine the solution.

Cf. Lazy computation mode #25
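A minimal NumPy sketch of the coarse-to-fine idea on a toy quadratic objective: run a cheap low-precision pass for the first guess, then refine in float64 from that starting point. All names here are illustrative, not the project's actual API.

```python
import numpy as np

def minimize(f_grad, x0, lr, steps):
    # plain gradient descent; precision is inherited from x0's dtype
    x = x0.copy()
    for _ in range(steps):
        x -= lr * f_grad(x)
    return x

# toy objective ||x - target||^2, minimum at target
target = np.array([0.3, -1.7])
grad = lambda x: 2.0 * (x - target.astype(x.dtype))

# coarse pass in float32 gives a cheap first guess...
x_coarse = minimize(grad, np.zeros(2, dtype=np.float32), lr=0.1, steps=50)
# ...then the float64 pass refines it from a good starting point
x_fine = minimize(grad, x_coarse.astype(np.float64), lr=0.1, steps=50)
```

The same pattern would apply to a PDE solve: the low-precision pass is faster and halves memory traffic, and the high-precision pass starts close enough to the solution that it needs far fewer iterations.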

@SuperSashka SuperSashka added enhancement New feature or request good first issue Good for newcomers labels Mar 20, 2023
@SuperSashka SuperSashka added this to the Faster speed milestone Mar 20, 2023
@SuperSashka (Member, Author) commented
We hit a memory issue with tensors of size (52, 125, 125), i.e. ~800k points: the gradients alone take about 16 GB. It seems we have to mitigate this somehow. =)
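For scale, here is the per-tensor footprint of that grid at different precisions (plain arithmetic, nothing project-specific): a single (52, 125, 125) float64 tensor is only ~6 MiB, so a 16 GB gradient implies thousands of such intermediates held in the autograd graph, and dropping to float32 would roughly halve that.

```python
import numpy as np

# size of the problem reported above
shape = (52, 125, 125)
n_points = int(np.prod(shape))  # 812,500 points

# per-tensor footprint at different precisions
for dt in ("float64", "float32", "float16"):
    mib = n_points * np.dtype(dt).itemsize / 2**20
    print(f"{dt}: {mib:.1f} MiB per tensor")
```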

@SuperSashka (Member, Author) commented Aug 16, 2023

We need to bump this issue and take it as the next target.

We have to make sure that floating-point precision can be set as an argument.
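One way to expose precision as an argument, sketched with NumPy: thread a single `dtype` parameter through the entry point so the grid (and everything built from it) inherits the requested precision. The `solve` function and its signature are hypothetical, not the library's actual interface.

```python
import numpy as np

def solve(grid_shape, dtype=np.float64):
    """Hypothetical solver entry point: everything downstream inherits `dtype`."""
    axes = [np.linspace(0.0, 1.0, s, dtype=dtype) for s in grid_shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    # ... build the model and train in `dtype` here ...
    return grid

coarse = solve((16, 16), dtype=np.float32)  # cheap first pass
fine = solve((16, 16), dtype=np.float64)    # full precision
```

In a PyTorch-based solver the equivalent would be casting inputs with `.to(dtype)` and/or calling `torch.set_default_dtype(dtype)` at the start of the run.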
