
Significantly increase precision when using binary floats/doubles #65

Open · dumblob opened this issue Aug 29, 2021 · 1 comment

dumblob commented Aug 29, 2021

Recent research has shown a significant precision improvement for binary float computations when the approach from https://github.com/rutgers-apl/rlibm-32 is used. This new approach is (much) faster than the current state of the art (including Herbie and especially Pherbie) and mostly "hides" the effects of unintuitive binary float behavior.
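For context: rlibm-32 produces correctly rounded results for all float32 inputs by approximating the correctly rounded value itself rather than the real-valued function. The sketch below is not rlibm-32's technique; it only illustrates the underlying problem, namely that evaluating a function entirely in float can disagree (in the last bit) with computing in higher precision and rounding once at the end, which is usually (though, because of double rounding, not always) the correctly rounded result.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Compare evaluating exp() entirely in float with computing in double
    // and rounding once at the end. With a libm whose float exp() is not
    // correctly rounded the two can disagree in the last bit; with a
    // correctly rounded libm (e.g. one built from rlibm-32) they won't,
    // so this loop may print nothing on your system.
    for (float x = 0.1f; x < 4.0f; x += 0.003f) {
        float direct     = std::exp(x);                              // float-only evaluation
        float via_double = static_cast<float>(std::exp(double(x)));  // one final rounding
        if (direct != via_double)
            std::printf("x=%.9g: direct=%.9g via_double=%.9g\n", x, direct, via_double);
    }
    return 0;
}
```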

I'd say vsl should adopt this approach for much (most?) of its functionality (it would matter less for functions accepting decimal floats, but I think vsl will probably stay mostly binary-float-oriented).

The big question is whether this approach is "transferable" and generic enough that it could be used internally by the V compiler in -prod mode for all (i.e. arbitrary) binary float expressions (in addition to having decimal float literals as the default 😉).

dumblob commented Sep 17, 2021

I was thinking again about making "real-like" numbers somehow pluggable, because it seems not many people are interested in changing the current default of binary floats to something saner like decimal floats.
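A quick reminder of why binary floats are the unintuitive default here: decimal literals like 0.1 are rounded to the nearest binary value the moment they are parsed, so arithmetic that looks exact in the source code is not. A decimal float type would store 0.1 exactly. A minimal C++ demonstration of the binary side (C++ has no standard decimal float type, which is part of the problem):

```cpp
#include <cstdio>

int main() {
    double a = 0.1, b = 0.2;
    // 0.1 and 0.2 are rounded to the nearest binary64 values on parsing,
    // so their sum is not the binary64 value nearest to 0.3.
    std::printf("%.17g\n", a + b);                                 // prints 0.30000000000000004
    std::printf("%s\n", (a + b == 0.3) ? "equal" : "not equal");   // prints "not equal"
    return 0;
}
```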

In C++ it's easy: one writes just a single include from https://github.com/stillwater-sc/universal and is done; you get exactly the number type you need. This is currently impossible in V, but it would be a perfect fit for AI, HPC, etc., where each task has specific needs; more importantly, it would match what CPUs actually offer (e.g. POWER seems to provide at least three wildly different floating point formats)!
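To make the "one include and done" point concrete, here is a sketch of the universal library plugging in a different number system. The header path, the sw::universal namespace, and the posit<32, 2> parameters follow the project's README from memory and may differ between library versions, so treat the details as assumptions:

```cpp
// Assumed header path and namespace (older releases used sw::unum);
// check the library version before relying on either.
#include <universal/number/posit/posit.hpp>
#include <cstdio>

int main() {
    // The entire number system is swapped by changing one alias:
    // a 32-bit posit with 2 exponent bits instead of IEEE-754 binary32.
    using real = sw::universal::posit<32, 2>;
    real a = 0.1;
    real b = a * a + real(3);
    std::printf("%g\n", double(b));  // assumes an explicit conversion to double
    return 0;
}
```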

Maybe we need something like the proposal in vlang/v#5180 (comment): (almost*) disallow floats (especially literals and comparison operators) by default and provide a pluggable way to choose the default representation.

*There are many ways to make this work out of the box (especially for the REPL):

  1. issue a warning (except in the REPL) that e.g. decimal floats were chosen by default; to make the warning disappear, you have to choose explicitly yourself (e.g. by importing a given module, by providing a command-line flag, or whatever)
  2. issue no warning and fail compilation instead (resolved the same way, e.g. by importing a given module or by providing a command-line flag)
  3. either (1) or (2), but with the twist that you could "assert" in code that the desired number representation was chosen (hm, actually such an assert could be valuable independently of the discussion about defaults; see the sketch after this list)
  4. a function "tag" designating the required number representation (much like we have tags influencing the representation of structs, we'd have tags influencing the representation of floats)
  5. ...
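Option 3's assert has a rough analogue in today's C++ via <limits>, which suggests the idea is practical: code can verify at compile time that the representation behind a pluggable alias is the one it was written for. A minimal sketch, assuming a project-wide pluggable alias named `real` (the name is hypothetical):

```cpp
#include <limits>

using real = double;  // hypothetical: the pluggable project-wide default

// Fail the build if "real" is not the representation this code was written
// for; a module requiring decimal floats would assert radix == 10 instead.
static_assert(std::numeric_limits<real>::radix == 2,
              "this module assumes binary floats");
static_assert(std::numeric_limits<real>::is_iec559,
              "this module assumes IEEE-754 semantics");

int main() { return 0; }
```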

Again, I'm certain the default should be safe (do what intuitively seems appropriate) and explicit; I know of only one number representation that satisfies this requirement: decimal floats. Everything above is just a convenience layer on top of that default.
