
DecimalNum: reduce DEFAULT_PRECISION from 32 to 16 or less #1086

Open
nimo23 opened this issue Jul 30, 2023 · 2 comments
Labels
enhancement Issue describing or discussing an enhancement for this library

Comments

@nimo23
Contributor

nimo23 commented Jul 30, 2023

Regardless of the discussion in #916, what we can do right now to improve the default performance immediately and noticeably is to decrease DecimalNum#DEFAULT_PRECISION:

public final class DecimalNum implements Num {

    private static final int DEFAULT_PRECISION = 32;

to

public final class DecimalNum implements Num {

    private static final int DEFAULT_PRECISION = 16; // or less?

Reasons:

  • Too high a precision leads to excessive computational effort and has de facto no added value. If so, please prove it!
  • Most cryptocurrency trading platforms and analysis tools operate with limited precision, often 8 or 16 decimal places. (A rough sketch of the cost difference follows this list.)
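
For illustration only (not a rigorous benchmark and not ta4j code): a quick sketch of how plain BigDecimal arithmetic, which DecimalNum delegates to internally, slows down as the MathContext precision grows from 16 to 32. The class name, values, and loop size are made up for this example.

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

// Rough, unscientific sketch (no JMH) comparing the cost of BigDecimal
// arithmetic at precision 32 vs. 16. Numbers and loop size are illustrative.
public class PrecisionCostSketch {

    static BigDecimal work(MathContext mc, int iterations) {
        BigDecimal acc = new BigDecimal("1.2345678901234567890123456789012", mc);
        BigDecimal factor = new BigDecimal("1.0000001", mc);
        for (int i = 0; i < iterations; i++) {
            // multiply, divide and add at the given precision, as DecimalNum would
            acc = acc.multiply(factor, mc).divide(factor, mc).add(factor, mc);
        }
        return acc;
    }

    public static void main(String[] args) {
        int iterations = 1_000_000;
        for (int precision : new int[] { 32, 16 }) {
            MathContext mc = new MathContext(precision, RoundingMode.HALF_UP);
            long start = System.nanoTime();
            BigDecimal result = work(mc, iterations);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("precision %2d: %d ms (result %s)%n", precision, elapsedMs, result);
        }
    }
}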
@nimo23 nimo23 added the enhancement Issue describing or discussing an enhancement for this library label Jul 30, 2023
@nimo23 nimo23 changed the title from "DecimalNum: reduce DEFAULT_PRECISION from 32 to 12 or less" to "DecimalNum: reduce DEFAULT_PRECISION from 32 to 16 or less" Jul 30, 2023
@nimo23
Contributor Author

nimo23 commented Aug 6, 2023

Look, for example, at the smallest unit of ETH (wei), which is equivalent to 10^-18 ETH (= 0.000000000000000001 ETH). So maybe use a precision of 20 instead of 32? I don't know whether it matters to choose a far smaller one. For comparison, DoubleNum only uses a default precision of 6 digits.

It would also be good to use the same default precision for both DecimalNum and DoubleNum, i.e. a precision of 12 (or 20?) for both Num types. With that in place, we could go further and determine where it really matters to use DecimalNum, i.e. where DecimalNum should be preferred, or vice versa. It never made sense to have different default precisions for the two Num types, since that prevents a like-for-like choice between them. To sum up: using the same precision for both kinds of Num types gives the user an equal choice between them.
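
To make the wei example concrete, here is a small illustration (plain BigDecimal and double, not ta4j API): 20 significant digits are enough to keep 1 wei on top of 1 ETH exact, while 16 significant digits, or the double that backs DoubleNum, lose it. The class name is made up for the example.

import java.math.BigDecimal;
import java.math.MathContext;

// Illustration: a MathContext of 20 significant digits represents an ETH
// amount down to 1 wei (10^-18 ETH) exactly; 16 digits or a double do not.
public class WeiPrecisionSketch {
    public static void main(String[] args) {
        String oneEthPlusOneWei = "1.000000000000000001"; // 19 significant digits

        BigDecimal with20 = new BigDecimal(oneEthPlusOneWei, new MathContext(20));
        BigDecimal with16 = new BigDecimal(oneEthPlusOneWei, new MathContext(16));
        double asDouble = Double.parseDouble(oneEthPlusOneWei);

        System.out.println("precision 20: " + with20);   // 1.000000000000000001 (exact)
        System.out.println("precision 16: " + with16);   // 1.000000000000000 (wei lost)
        System.out.println("double      : " + asDouble); // 1.0 (wei lost)
    }
}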

@TheCookieLab
Member

I'm open to exploring this further, although I am confused by the line:

Too high a precision leads to excessive computational effort and has de facto no added value. If so, please prove it!

Is this a prompt to yourself to "prove" the value of this change?
