
Rust does not comply with IEEE 754 floats: arithmetic can produce signaling NaN #107247

Closed
Muon opened this issue Jan 24, 2023 · 22 comments
Labels
A-floating-point Area: Floating point numbers and arithmetic C-bug Category: This is a bug. T-lang Relevant to the language team, which will review and decide on the PR/issue.

Comments

@Muon

Muon commented Jan 24, 2023

The compiler optimizes x * 1.0 to x, which is incorrect whenever x is a signaling NaN. IEEE 754 mandates that the result is a quiet NaN.

Example (rustc 1.66, with -C opt-level=2): https://godbolt.org/z/5qa5Go17M

This is almost certainly an LLVM bug. It's been filed over there under llvm/llvm-project#43070. I don't know which optimization pass is responsible for it.
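The miscompilation can be reproduced from safe Rust by hand-crafting a signaling NaN (a minimal sketch, assuming the standard 2008 convention where a clear mantissa MSB marks a NaN as signaling; the product's bit pattern is target- and optimization-dependent, which is exactly the problem):

```rust
fn main() {
    // An f32 signaling NaN: exponent all ones, quiet bit (mantissa MSB)
    // clear, and a nonzero payload so the value is a NaN, not infinity.
    let snan = f32::from_bits(0x7F80_0001);
    assert!(snan.is_nan());

    let product = snan * 1.0;
    // IEEE 754 requires `product` to be a quiet NaN (quiet bit set), so
    // its bit pattern must differ from `snan`. With `-C opt-level=2` the
    // multiplication is folded away and the signaling pattern survives.
    println!("{:#010x} -> {:#010x}", snan.to_bits(), product.to_bits());
}
```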

@Muon Muon added the C-bug Category: This is a bug. label Jan 24, 2023
@CAD97
Contributor

CAD97 commented Jan 24, 2023

Interestingly enough, MSVC does the same transformation even with /fp:strict.

I have access to the actual text of IEEE 754-2019 through my current university (I suppose I could've grabbed 754-2008 instead, but I grabbed the latest revision), so here are the relevant quotes:

§6.2 ¶3: Signaling NaNs shall be reserved operands that signal the invalid operation exception (see §7.2) for every general-computational and signaling-computational operation except for the conversions described in §5.12.

§6.2 ¶2: Under default exception handling, any operation signaling an invalid operation exception and for which a floating-point result is to be delivered, except as stated otherwise, shall deliver a quiet NaN. For non-default treatment, see §8.

§7.2 ¶2: For operations producing results in floating-point format, the default result of an operation that signals the invalid operation exception shall be a quiet NaN that should provide some diagnostic information (see §6.2). These operations are:
a) any general-computational operation on a signaling NaN (see §6.2), except for some conversions (see §5.12)

§5.12 is about conversions between floating-point data and character sequences, so is not relevant. Multiplication is a "formatOf general-computational operation".

Thus, I agree that the standard mandates that signalingNaN * 1.0 should (raise an invalid operation exception and) evaluate to a quiet NaN, and that the transformation to just return signalingNaN is incorrect.

If we relax the language specification from "default floating point environment" to "implementation defined floating point environment" and allow the language to arbitrarily change the exception handling mode, then we can justify the transformation, by first setting the exception handling mode for invalid operation to substitute(signalingNaN), doing signalingNaN * 1.0 (signaling an invalid operation exception, handling that by substituting signalingNaN as the result), and then setting the exception handling mode back to whatever it was previously (probably mayRaiseFlag in this world).

FWIW, alive2 times out for float and considers the transformation valid for half. From LLVM LangRef:

The default LLVM floating-point environment assumes that floating-point instructions do not have side effects. Results assume the round-to-nearest rounding mode. No floating-point exception state is maintained in this environment. Therefore, there is no attempt to create or preserve invalid operation (SNaN) or division-by-zero exceptions [emphasis mine].

The benefit of this exception-free assumption is that floating-point operations may be speculated freely without any other fast-math relaxations to the floating-point model.

I tried using the (experimental?) constrained floating point intrinsics, but I don't think alive2 supports the strictfp attribute. As far as I can tell, we need !"fpexcept.strict" in order to avoid removing operations which may raise a floating point exception. But even with !"fpexcept.ignore", that's just defined as "may assume that the exception status flags will not be read and that floating-point exceptions will be masked", which I don't think is strong enough to assume signaling NaNs don't exist (invalid operation exceptions won't be raised), and the change from signaling to quiet should need to be preserved under those semantics.

Of note from the Rust docs, though, is that a bunch of f32 methods say roughly

This follows the IEEE 754-2008 semantics for operation, except for handling of signaling NaNs; this function handles all NaNs the same way [...]

I think there's a fair chance that the Rust opsem will make no distinction between signaling and quiet NaNs. I can somewhat justify a perverse reading of IEEE 754-1985 that allows all binary encoded NaNs to be quiet; 754-2008 and 754-2019 of course mandate a specific interpretation of the mantissa MSB as indicating quiet (set) or signaling (cleared), but Rust does support targets such as MIPS which do not agree with the 2008 definition.

@CAD97
Contributor

CAD97 commented Jan 24, 2023

I would cc T-opsem and/or A-mir-formality but I don't think either have a pingable role.

For lack of a better classification,

@rustbot modify labels +T-lang

@rustbot rustbot added the T-lang Relevant to the language team, which will review and decide on the PR/issue. label Jan 24, 2023
@workingjubilee workingjubilee added the A-floating-point Area: Floating point numbers and arithmetic label Jan 27, 2023
@workingjubilee
Contributor

As much as I am usually in favor of pedantic adherence to IEEE 754, it is extremely dubious that we would want to guarantee that this transformation does not occur, as Rust does not meaningfully support floating point exceptions and thus does not support the utility this exception provides to the identity.

@thomcc
Member

thomcc commented Jan 27, 2023

I think the argument would be that sNaN vs qNaN is still a meaningful difference even if exceptions don't exist.

@workingjubilee
Contributor

Plausible.

@CAD97
Contributor

CAD97 commented Jan 27, 2023

Yes, that's exactly it. It's not merely observable via f32::total_cmp like the sign bit; it's actually prescribed by the spec, unlike the sign bit[1].

It's very likely that the strictest bound we'll get for Rust is that floating point operations returning NaN return a nondeterministic[2] quiet NaN. This is (almost[3]) compliant with the IEEE spec in the mayRaiseFlag[4] exception handling mode, so long as you allow implementer's discretion to choose quiet NaN payloads as permitting nondeterminism[5].

We're not concerned about the rest of the NaN payload here (though that's also its own unresolved question), just the signaling bit. Unlike the rest of the NaN payload and even the sign, the IEEE standard does explicitly require that no operation produce a NaN with the signaling bit left unquieted; all operations must produce quiet NaNs.

Wasm is brought up as a target with nondeterministic NaNs, but it actually does only ever produce quiet NaNs. The spec states that "There is no observable difference between quiet and signalling NaNs," but this holds only because exception flags are not exposed in wasm. Separately, wasm defines the concept of an arithmetic NaN, which exactly matches the IEEE definition of quiet NaNs. It then requires all arithmetic operations to produce arithmetic NaNs (and if all NaN inputs are canonical NaNs, for the output to be a canonical NaN[6]).

Speaking on behalf of myself only, I would place slim odds favoring LLVM and subsequently Rust deciding to treat all NaNs as being quiet, willfully ignoring the signaling bit and violating the spec in a minor way (producing sNaN from arithmetic operations) in the name of performance. This seems to me the simplest way to reconcile the current behavior[7] with the IEEE spec.

LLVM's documentation on the constrained floating-point operations states that mixing the unconstrained and constrained operations is not allowed, and that when in strictfp mode you can get the normal behavior by setting the appropriate metadata arguments. This isn't quite accurate in practice: by my testing, the strict intrinsics don't (currently) perform this reduction from x * 1.0 to x. It would be an interesting experiment to apply strictfp annotation everywhere and use the default-modality constrained floating-point operation intrinsics instead of the builtins, to determine the current overhead of actually asking LLVM for the correct behavior. I have my suspicions that it'll be higher than necessary just for sNaN quieting, unfortunately, as strictfp function calls have no way to be weakened down from fully strict.

More recent discussion on the LLVM discourse forum: https://discourse.llvm.org/t/semantics-of-nan/66729

Footnotes

  1. Which makes it somewhat odd imho that the floating point total order sorts quiet/signalling NaN next to each other but positive/negative to opposite extremes. You'd think the bit prescribed by the standard would be more important than the one left fully unspecified.

  2. As with freeze undef: demonic nondeterminism each time an operation producing NaN is evaluated, but using the result of a single operation is consistent with itself. This forbids duplicating floating point operations.

  3. Technically LLVM optimizations are also allowed to raise floating point exceptions spuriously, even when no floating point operations are executed, so that's nonconforming: per the IEEE spec, status flags are raised without an exception being signaled only at the user's request.

  4. This is the mode which allows a floating point exception to nondeterministically raise or not raise exception flags.

  5. Brought up in LLVM discussion is that you could NaN-box the instruction pointer to implement the error introspection suggested by the spec, which is effectively nondeterministic at any higher level than true bare metal machine code. This has convinced me that allowing nondeterminism is a reasonable interpretation of the IEEE spec. The spec similarly does not specify the sign of a NaN result.

  6. This terminology is quite unfortunate. To the IEEE spec, all binary floats are canonical. To wasm, a canonical NaN is one with the first payload bit set (making the NaN quiet) and no other bits. In other places, a canonical NaN is required to have an unset sign bit, but wasm allows both positive and negative NaNs to be canonical.

  7. w.r.t. normalizing sNaN to qNaN, anyway. I have no proposal for the exception flags better than stating that entering Rust code immediately sets all fp exception flags to a demonically nondeterministic value.

@Muon
Author

Muon commented Jan 27, 2023

Floating-point operations returning a nondeterministically chosen quiet NaN is perfectly compliant. Also bear in mind that Rust itself does not implement the full scope (in particular, flags) of IEEE 754, it only claims that floating-point arithmetic operations follow it.

I do not imagine there being a significant performance hit to disabling that "optimization". The only thing multiplication by 1 does is quiet a signaling NaN. Same with x + -0.0, which also gets miscompiled to x. Luckily, it doesn't transform x + 0.0 to x as well; that one would be a much more egregious error, since -0 + +0 = +0.
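The asymmetry between the two additive identities can be checked directly (a minimal sketch; `is_sign_positive` on a zero reports its sign bit):

```rust
fn main() {
    let x = -0.0_f32;
    // Folding x + 0.0 to x would be wrong even for non-NaN inputs:
    // under round-to-nearest, -0.0 + +0.0 evaluates to +0.0.
    assert!((x + 0.0).is_sign_positive());
    // x + (-0.0) preserves every non-NaN x exactly (including the sign
    // of zero), so the only IEEE-visible effect left is quieting sNaNs.
    assert!((x + -0.0).is_sign_negative());
}
```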

Which makes it somewhat odd imho that the floating point total order sorts quiet/signalling NaN next to each other but positive/negative to opposite extremes. You'd think the bit prescribed by the standard would be more important than the one left fully unspecified.

That's so totalOrder can be implemented efficiently! The ordering you get from it is the same as the natural order from interpreting the float as a sign-magnitude integer (assuming you're using the recommended [but not mandated] choice of signaling bit).
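The sign-magnitude trick can be sketched as follows (this mirrors how `f32::total_cmp` is commonly implemented in practice, assuming the recommended signaling-bit convention; the key function here is an illustration, not code from this thread):

```rust
use std::cmp::Ordering;

// Map an f32's bit pattern to an i32 whose natural two's-complement
// order matches the IEEE 754 totalOrder predicate: for negative floats,
// flip all non-sign bits so more-negative values map to smaller keys.
fn total_order_key(x: f32) -> i32 {
    let bits = x.to_bits() as i32;
    bits ^ ((((bits >> 31) as u32) >> 1) as i32)
}

fn main() {
    let cmp = |a: f32, b: f32| total_order_key(a).cmp(&total_order_key(b));
    assert_eq!(cmp(-0.0, 0.0), Ordering::Less);
    assert_eq!(cmp(f32::NEG_INFINITY, -1.0), Ordering::Less);
    // Agrees with the standard library's implementation of the predicate:
    assert_eq!(cmp(-0.0, 0.0), (-0.0_f32).total_cmp(&0.0));
}
```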

@CAD97
Contributor

CAD97 commented Jan 27, 2023

Floating-point operations returning a nondeterministically chosen quiet NaN is perfectly compliant.

Yes, but it's been expressed a couple times that it'd be nice to have deterministic results.

I do not imagine there being a significant performance hit to disabling that "optimization". The only thing multiplication by 1 does is quiet a signaling NaN.

The reason I expect it to be more expensive is not due to what we actually want to disable, but what gets prevented in the crossfire. strictfp in LLVM is a heavy and experimental hammer meant for supporting floating point environment access; that it also suppresses the transformation which fails to quiet the NaN even when ignoring floating point exceptions is due more to it disabling basically all optimizations around floating point operations than to it actually respecting IEEE semantics better.

recommended [but not mandated] choice of signaling bit

Huh yeah, you're right; I somehow got it in my head that the latter standard revisions prescribed the interpretation of the signaling bit rather than just providing a recommendation for interchange.

This means the (questionable) interpretation where your binary format for floats contains no encodings for signaling NaN ("should"), but your operations do "support" signaling NaN ("shall") should they be given one (impossible) is still (arguably) somewhat remotely viable.

Providing such an implementation of IEEE floating point would I think be an accurate description of the current behavior (modulo FFI visibility of floating point exception flags).

@Muon
Author

Muon commented Jan 28, 2023

Yes, but it's been expressed a couple times that it'd be nice to have deterministic results.

It's not possible to guarantee that a specific qNaN is produced everywhere, even just on the current tier 1 platforms. In particular, the default qNaN on x86 always has the sign bit set (0xFFC00000 for f32), but it's always cleared on aarch64 (0x7FC00000 for f32).
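The target-dependence is easy to observe by printing the bits of a computed NaN (a sketch; the printed patterns vary by target and may also differ when the division is constant-folded, so only the NaN-ness is asserted):

```rust
fn main() {
    // Which qNaN does this target produce for an invalid operation?
    // Only NaN-ness is portable; the sign bit and payload may differ,
    // e.g. between x86 (sign typically set by hardware) and AArch64
    // (sign clear), and constant folding may pick yet another pattern.
    let computed = 0.0_f32 / 0.0_f32;
    assert!(computed.is_nan());
    println!("0.0 / 0.0 = {:#010x}", computed.to_bits());
}
```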

This means the (questionable) interpretation where your binary format for floats contains no encodings for signaling NaN ("should"), but your operations do "support" signaling NaN ("shall") should they be given one (impossible) is still (arguably) somewhat remotely viable.

Firstly, we shouldn't consider ignoring sNaNs, because they are in fact something that exists and are in fact supported by the targets. This miscompilation causes the following to behave differently between debug and release: https://godbolt.org/z/e7qvYEPhc EDIT: simpler version: https://godbolt.org/z/j9h44d979. Secondly, encoding support for them is mandated by IEEE 754-2019, section 3.4:

The representation r of the floating-point datum, and value v of the floating-point datum represented, are inferred from the constituent fields as follows:
a) If E = 2ʷ − 1 and T ≠ 0, then r is qNaN or sNaN and v is NaN regardless of S and then d₁ shall exclusively distinguish between qNaN and sNaN (see 6.2.1).

That is, the first bit of the significand must always determine whether it is an sNaN or a qNaN, but the standard doesn't mandate that a set bit means it is a qNaN.

@CAD97
Contributor

CAD97 commented Jan 28, 2023

It's not possible to guarantee that a specific qNaN is produced everywhere

miscompilation causes the following to behave differently between debug and release

So is nondeterminism okay or not?

By the standard[1], qNaN * 1.0 may nondeterministically return any qNaN. Restating, bits(qNaN * 1.0) == bits(qNaN * 1.0) may be nondeterministically true or false. The same goes for any NaN-producing operation. The standard treats the sign and payload bits of NaN identically: no restrictions.

I'm not talking about whether results are portable between different targets; I'm talking about whether results are deterministic within a single execution.

And this is in fact a very important condition to discuss. NaN selection being either "deterministic across all targets" or "nondeterministic on all targets" are fine for optimizations, because all targets have the same behavior, and thus our target independent IR has a single behavior we can optimize over. If NaN selection must be deterministic within a single target but changes between targets, this changes the set of optimizations we're able to do.

Consider again the example of z * 1.0. We don't want to optimize this to just z, because this is incorrect when z is sNaN. But we can optimize it to quiet(z), where quiet is "if NaN, set $d_1$ to quiet"... but not if we have to respect the target's unknown NaN selection behavior. The target could only ever produce a single qNaN, or it could follow the NaN propagation rules suggested by the standard; we have no way to optimize floating-point operations that could produce NaN in a target independent manner. A transformation from z * 1.0 * 1.0 to z * 1.0 produces the same abstract value according to the standard, but also could change the concrete representation produced according to the standard.
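A sketch of the hypothetical `quiet` helper described above, assuming the IEEE 754-2008 convention that a set mantissa MSB marks a NaN as quiet (NaN inputs get the bit set; everything else passes through unchanged):

```rust
// Quiet a possibly-signaling NaN; the legal lowering of z * 1.0 under
// default IEEE exception handling would be quiet(z), not z itself.
fn quiet(z: f32) -> f32 {
    const QUIET_BIT: u32 = 1 << 22; // mantissa MSB of an f32
    if z.is_nan() {
        f32::from_bits(z.to_bits() | QUIET_BIT)
    } else {
        z
    }
}

fn main() {
    let snan = f32::from_bits(0x7F80_0001);
    assert_eq!(quiet(snan).to_bits(), 0x7FC0_0001); // now a quiet NaN
    assert_eq!(quiet(1.5), 1.5); // non-NaN values are untouched
}
```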

This is why I say that the strongest semantic I think Rust will get is that every time a NaN is produced by a computational operation, that NaN shall be nondeterministically[2] selected from the set of all qNaN[3]. This gives us a single semantic to optimize over which is validly refined by every correct implementation of the IEEE standard.

supported by the targets

If you consider Rust as targeting LLVM, no, not really. LLVM makes no effort to preserve the signaling quality of NaN. Whether the disclaimer just covers fp exceptions or also covers the signaling bit is debatable, but in practice this miscompilation is exactly due to LLVM pretending sNaN doesn't exist.

The strictfp operations are still experimental, and llvm.canonicalize still causes an instselect error on most codegen backends. For most intents and purposes, LLVM doesn't support sNaN.

That is, the first bit of the significand must always determine whether it is an sNaN or a qNaN

I'm not saying that this reading of the standard is good. But so long as no bit other than $d_1$ influences whether $r$ is sNaN or qNaN, it's possible to argue that this clause is satisfied. If that's the case, I can say that both set and unset mean qNaN and be technically conforming by extremely questionable rules lawyering.

Footnotes

  1. And very explicitly on wasm.

  2. This nondeterminism is probably demonic, meaning that if a selection that causes UB exists, that is the selection made. As a result, your program must be defined for all possible selections in order to be defined.

  3. Defining what the set of qNaN is is the other question. I maintain that all NaN is a possible choice, despite likely being in violation of the IEEE standard. It's essentially a given that Rust requires tonearest rounding mode and non-stop exception handling, and makes no guarantees about fp exception status flags. Under this environment, the only way to observe sNaN vs qNaN is by observing the signaling bit directly.

@Muon
Author

Muon commented Jan 29, 2023

Nondeterminism is okay; the standard does not in any way forbid it. However, the examples I gave do not rely on it. On a compliant implementation, they must succeed on all executions. The original example I gave simply iterated through every NaN until it found one which is changed by multiplication by 1, which must necessarily happen once it finds an sNaN. It fails to find one under optimizations. My second example generates a qNaN, turns it into an sNaN by toggling the signaling bit (this always works on any compliant platform), and then multiplies it by 1, and then asserts that it is in fact changed (it must be, since it must be quieted). This also fails under optimizations.

Assuming that we're not promising anything more than IEEE 754 does, we can optimize z * 1.0 to quiet(z). (Note that we can also optimize (z * 1.0) * 2.0 to z * 2.0.) When optimizing nondeterministic programs, all we care about is that the final set of executions (or rather, observable behaviors) is a subset of the initial set of executions. (In fact, we're already committed to that when we generate hardware FPU instructions. The hardware is not a nondeterministic IEEE 754 machine, so preserving full equivalence under that model isn't happening anyway.) Although multiplying by 1 a second time is permitted to change the representation, it is equally possible for that to not happen, so we have the freedom to choose.

For most intents and purposes, LLVM doesn't support sNaN.

Yes, this is an LLVM bug, as I've mentioned previously. LLVM is non-compliant, which is a serious problem.

@RalfJung
Member

RalfJung commented Jan 30, 2023

To my knowledge, LLVM makes no attempt to guarantee signalling NaN correctness. Is there a bug report on their side to confirm whether they even see this as a miscompilation?

EDIT: Oh, the LangRef is actually explicit about this point. So yeah doesn't look like Rust has a lot of choice short-term here; this would require a long-term project to make LLVM IEEE-compliant and possibly expose more target-level NaN guarantees along the way.

My current rationalization of this behavior is that Rust simply does not implement the parts of the spec that distinguish signalling and non-signalling NaNs. IOW, x * 1.0, on a NaN input, is allowed to return any NaN output, with any sign, and any signalling bit. Under that model, the optimization is correct. (That's basically what @CAD97 wrote, if we also apply the footnotes.)

Yes, this means Rust would not be IEEE 754 compliant -- though only in ways that popular C compilers are also already non-compliant, for whatever that is worth.

LLVM is non-compliant, which is a serious problem.

Why is it a serious problem? (Not a rhetorical question. Given that this behavior is widespread among C compilers, and hard to observe in the floating point environment Rust programs pretty much have to run in, I do not see what the serious practical issues are that are being caused by Rust implementing a weaker spec, where NaN outputs are picked non-deterministically from the set of all NaNs, signalling or quiet. The vast majority of floating point code does not care about signalling vs quiet NaNs, so making them pay for the niche code that does -- by having fewer optimizations -- is not an obvious win either.)

@CAD97
Contributor

CAD97 commented Jan 30, 2023

Is there a bugreport on their [LLVM's] side to confirm whether they even see this as a miscompilation?

That's

At the moment it seems to be leaning slightly towards being a documentation bug, i.e. the disclaimer about fp exceptions should also state that sNaN and qNaN are not distinguished; but it's also possible that the other resolution, handling NaN more carefully, will be taken. (Unfortunately, the llvm.canonicalize intrinsic causing an instselect error on most backends makes this a bit difficult...)

@Muon
Author

Muon commented Jan 31, 2023

EDIT: Oh, the LangRef is actually explicit about this point. So yeah doesn't look like Rust has a lot of choice short-term here; this would require a long-term project to make LLVM IEEE-compliant and possibly expose more target-level NaN guarantees along the way.

Yes, it's an LLVM problem for now. I am not so sure how long-term it would be in terms of implementation. At the very least we should document our noncompliance.

My current rationalization of this behavior is that Rust simply does not implement the parts of the spec that distinguish signalling and non-signalling NaNs. IOW, x * 1.0, on a NaN input, is allowed to return any NaN output, with any sign, and any signalling bit. Under that model, the optimization is correct. (That's basically what @CAD97 wrote, if we also apply the footnotes.)

Certainly. However, it is noncompliant.

Why is it a serious problem? (Not a rhetorical question. Given that this behavior is widespread among C compilers, and hard to observe in the floating point environment Rust programs pretty much have to run in, I do not see what the serious practical issues are that are being caused by Rust implementing a weaker spec, where NaN outputs are picked non-deterministically from the set of all NaNs, signalling or quiet. The vast majority of floating point code does not care about signalling vs quiet NaNs, so making them pay for the niche code that does -- by having fewer optimizations -- is not an obvious win either.)

It's a serious problem for a number of reasons. From a practical perspective, signaling NaNs (with trapping exceptions) are in principle very useful debugging tools, but no one can use them because compilers don't implement them properly. There are many times I have personally wanted to use them for their intended purpose (debugging and sentinel values), only to be thwarted by gcc and clang. Any compliant FPU can be used this way, it's just a matter of language support. (I understand that LLVM is a long way away from actually letting you control the FPU properly, but they're working on it.) Performance-wise, losing this optimization/miscompilation is negligible, since it only affects cases where you are

  1. Operating on a possibly-sNaN value, and
  2. Just multiplying it by 1.

As soon as you do anything else, you can delete the multiplication by 1 (or division by 1, or addition of -0.0).

Also, C compilers are hardly a good role model for what to do with floating-point. They have been implementing invalid floating-point optimizations and causing numerical errors and reproducibility headaches since forever. It's only recently (with much suffering on part of my seniors in the verification arena) that they've been somewhat tamed. Surely Rust can do better?

@CAD97
Contributor

CAD97 commented Jan 31, 2023

Even if the incorrect production of sNaN is fixed, Rust is unlikely to ever guarantee that floating point exceptions are not raised spuriously. Providing that guarantee would be a significant cost, as the freedom to raise exceptions spuriously is required to allow speculative evaluation of floating point operations. Speculative evaluation is the simplest way to justify loop invariant code motion, where a computation invariant across loop iterations is hoisted outside the loop. This is in fact one of the major benefits achieved by marking references as dereferenceable noalias.
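As an illustration of why speculation matters, consider a loop-invariant floating point expression (a generic sketch, not code from this issue):

```rust
// `num / den` is invariant across iterations, so LICM wants to hoist it
// above the loop. Hoisting evaluates the division even when `data` is
// empty, i.e. speculatively; that is only legal if the division cannot
// trap or otherwise have observable side effects (exception flags).
fn scale(data: &mut [f32], num: f32, den: f32) {
    for x in data.iter_mut() {
        *x *= num / den; // candidate for loop-invariant code motion
    }
}

fn main() {
    let mut v = [2.0_f32, 4.0];
    scale(&mut v, 1.0, 2.0);
    assert_eq!(v, [1.0, 2.0]);
}
```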

Clarifying whether we acknowledge that sNaN is a thing is a reasonable medium term goal. Supporting optional alternative floating point exception handling and environment access is such a far long term aspiration that it's not really worth considering until LLVM has support for it. Yes, it'd be nice to have, but ultimately IEEE 754 is more a spec for chip FPU behavior (w.r.t. fp environment and exception status flags) than for higher level languages, which want to do larger scale optimizations than are possible when the fp environment isn't static.

@RalfJung
Member

RalfJung commented Jan 31, 2023

Also, C compilers are hardly a good role model for what to do with floating-point.

Is there a language that is a good role model, in the sense that it guarantees full IEEE compliance and even supports fp environments and exceptions?

@Muon
Author

Muon commented Jan 31, 2023

Even if the incorrect production of sNaN is fixed, Rust is unlikely to ever guarantee that floating point exceptions are not raised spuriously.

Although I'm unsure under which circumstance a spurious FP exception would be raised, this isn't really a dealbreaker? Currently there's no handling for it at all.

Yes, it'd be nice to have, but ultimately IEEE-754 is more a spec for chip FPU behavior (w.r.t. fp environment and exception status flags) than it is higher level languages which want to do larger scale optimizations than possible when the fp environment isn't static.

All of the functionality of IEEE 754 is intended for use by software. What else would be using it? The FP environment only matters inasmuch as software is interacting with it.

Is there a language that is a good role model, in the sense that it guarantees full IEEE compliance and even supports fp environments and exceptions?

Language? C99 and later support the whole thing with #pragma STDC FENV_ACCESS set. Compiler? I think some proprietary compilers do it (Intel, HP, IBM, Oracle). (I can't personally vet them I'm afraid.)

@RalfJung
Member

At the very least we should document our noncompliance.

Fully agreed on that one. The way I view this, better support for these special float features is a feature addition to Rust that will need some design work, to be considered together with things that go beyond what IEEE promises (e.g. making NaNs less non-deterministic). rust-lang/unsafe-code-guidelines#237 attempts to keep an overview. Not sure what to do with all the individual issues that consider various aspects of this and are littered with confusion caused by a lack of overview...

@RalfJung RalfJung changed the title Invalid floating-point optimization of x * 1.0 to x Rust does not comply with IEEE 754 floats: arithmetic can produce signaling NaN Jan 31, 2023
@workingjubilee
Contributor

Yes, I have been told there is a C-or-Fortran-or-both compiler with software support that is fully FP compliant or at least near enough to return very precise errors (the "main purpose" of NaN existing is to allow you to record such errors and roughly where they are in software)... unfortunately, it's closed-source, as far as I am aware.

@paulabrudanandrei

My dumb question is: why are we trying to fully support the IEEE floating point standard? PartialOrd and PartialEq were introduced just for floats, and most languages aren't fully IEEE float compatible, e.g. the C++ standard isn't.

@RalfJung
Member

RalfJung commented Aug 4, 2023

Closing as a duplicate of #73328, where we are tracking documenting our NaN guarantees. (We will almost certainly document that operations with sNaN inputs can produce sNaN outputs. That is LLVM semantics, and they are unlikely to be willing to change that without a solid use case.)

@RalfJung RalfJung closed this as completed Aug 4, 2023