NVIDIA, ARM, and Intel recently introduced two new FP8 formats, as described in the paper: https://arxiv.org/abs/2209.05433. The first of the two FP8 dtypes, E5M2, was added in https://reviews.llvm.org/D133823. This change adds the second of the two: E4M3.
There is an RFC for adding the FP8 dtypes here: https://discourse.llvm.org/t/rfc-add-apfloat-and-mlir-type-support-for-fp8-e5m2/65279. I spoke with the RFC's author, Stella, and she gave me the go-ahead to implement the E4M3 type. The name of the E4M3 type in APFloat is Float8E4M3FN, as discussed in the RFC. The "FN" means only Finite and NaN values are supported.
Unlike E5M2, E4M3 behaves differently from IEEE types with regard to Inf and NaN values. There are no Inf values, and NaN is represented when the exponent and mantissa bits are all 1s. To represent these differences in APFloat, I added an enum field, fltNonfiniteBehavior, to the fltSemantics struct. The possible enum values are IEEE754 and NanOnly. Only Float8E4M3FN has the NanOnly behavior.
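As an illustration, a minimal sketch of what the NanOnly encoding means in practice (assuming the `APFloat::Float8E4M3FN()` semantics accessor added by this change; the constants follow from the 4/3 exponent/mantissa split):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void float8E4M3FNDemo() {
  const fltSemantics &Sem = APFloat::Float8E4M3FN();
  // NaN is the all-ones exponent/mantissa pattern: 0.1111.111 == 0x7F.
  assert(APFloat::getNaN(Sem).bitcastToAPInt() == 0x7F);
  // With no Inf encoding, the neighboring patterns stay finite, so the
  // largest finite value is 0.1111.110 == 1.75 * 2^8 == 448.
  assert(APFloat::getLargest(Sem).convertToDouble() == 448.0);
}
```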
After this change is submitted, I plan on adding the Float8E4M3FN type to MLIR, in the same way as E5M2 was added in https://reviews.llvm.org/D133823.
Reviewed By: bkramer
Differential Revision: https://reviews.llvm.org/D137760
(Re-Apply with fixes to clang MicrosoftMangle.cpp)
This is a first step towards a high-level representation for fp8 types
that have been built into hardware with near-term roadmaps. Like the
BFLOAT16 type, the family of fp8 types is inspired by IEEE-754 binary
floating point formats but, due to the size limits, has been tweaked in
various ways to make maximal use of the available range/precision in
different scenarios. The list of variants is small/finite and bounded by
real hardware.
This patch introduces the E5M2 FP8 format as proposed by Nvidia, ARM,
and Intel in the paper: https://arxiv.org/pdf/2209.05433.pdf
As the more conformant of the two implemented datatypes, we are plumbing
it through LLVM's APFloat type and MLIR's type system first as a
template. It will be followed by the range-optimized E4M3 FP8 format
described in the paper. Since that format deviates further from the
IEEE-754 norms, it may require more debate and implementation
complexity.
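As a sketch of what "following the same rules as FP16" means for E5M2 (using the `APFloat::Float8E5M2()` accessor introduced here; the constants are derived from the 5/2 exponent/mantissa split):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void float8E5M2Demo() {
  const fltSemantics &Sem = APFloat::Float8E5M2();
  // IEEE-style Inf and NaN encodings exist, exactly as in FP16.
  assert(APFloat::getInf(Sem).isInfinity());
  // Largest finite value: 0.11110.11 == 1.75 * 2^15 == 57344.
  assert(APFloat::getLargest(Sem).convertToDouble() == 57344.0);
}
```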
Given that these cases represent two parts of the FP8 implementation
space, we recommend the following naming:
* `F8M<N>` : For FP8 types that can be conceived of as following the
same rules as FP16 but with a smaller number of mantissa/exponent
bits. Including the number of mantissa bits in the type name is enough
to fully specify the type. This naming scheme is used to represent
the E5M2 type described in the paper.
* `F8M<N>F` : For FP8 types such as E4M3 which only support finite
values.
The first of these (this patch) seems fairly non-controversial. The
second is previewed here to illustrate options for extending to the
other known variant (but can be discussed in detail in the patch
which implements it).
Many conversations about these types focus on the Machine-Learning
ecosystem where they are used to represent mixed-datatype computations
at a high level. At that level (which is why we also expose them in
MLIR), it is important to retain the actual type definition so that when
lowering to actual kernels or target specific code, the correct
promotions, casts and rescalings can be done as needed. We expect that
most LLVM backends will only experience these types as opaque `I8`
values that are applicable to some instructions.
MLIR does not make it particularly easy to add new floating point types
(i.e. the FloatType hierarchy is not open). Given the need to fully
model FloatTypes and make them interop with tooling, such types will
always be "heavy-weight" and it is not expected that a highly open type
system will be particularly helpful. There is also a bounded number of
floating point types in use for current and upcoming hardware, and we
can just implement them like this (perhaps looking for some cosmetic
ways to reduce the number of places that need to change). Creating a
more generic mechanism for extending floating point types does not seem
worth it; we should just define them one by one, on an as-needed basis,
as real hardware implements new schemes.
Hopefully, with some additional production use and complete software
stacks, hardware makers will converge on a set of such types that is not
terribly divergent at the level that the compiler cares about.
(I cleaned up some old formatting and sorted some items for this case:
if we converge on landing this in some form, I will commit the
format-only changes separately as an NFC commit.)
Differential Revision: https://reviews.llvm.org/D133823
LLVM contains a helpful function for getting the size of a C-style
array: `llvm::array_lengthof`. This is useful prior to C++17, but not as
helpful for C++17 or later: `std::size` already has support for C-style
arrays.
Change call sites to use `std::size` instead.
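For example (a trivial sketch):

```cpp
#include <cstddef>
#include <iterator> // std::size

static const int Table[] = {1, 2, 3, 5, 8};

// Before: llvm::array_lengthof(Table)
// After:  std::size(Table), which supports C-style arrays since C++17.
static_assert(std::size(Table) == 5, "length is deduced from the array type");
```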
Differential Revision: https://reviews.llvm.org/D133429
This reverts commit ef82063207.
- It conflicts with the existing llvm::size in STLExtras, which will now
never be called.
- Calling it without llvm:: breaks C++17 compat
Previously APFloat::convertToDouble could be called only for APFloats that
were built using double semantics. Other semantics like single precision
were not allowed, although the corresponding numbers could be converted to
double without loss of precision. A similar restriction applied to
APFloat::convertToFloat.
With this change, any APFloat that can be precisely represented by double
can be handled with convertToDouble. The behavior of convertToFloat was
updated similarly. This makes the conversion operations more convenient and
adds support for formats like half and bfloat.
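A minimal sketch of the relaxed behavior:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void convertToDoubleDemo() {
  // Previously these calls required IEEEdouble/IEEEsingle semantics; now
  // any value that is exactly representable in the target type works.
  APFloat Half(APFloat::IEEEhalf(), "2.5");
  assert(Half.convertToDouble() == 2.5);

  APFloat BF(APFloat::BFloat(), "1.5");
  assert(BF.convertToFloat() == 1.5f);
}
```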
Differential Revision: https://reviews.llvm.org/D102671
This is an alternate fix (see D87835) for a bug where a NaN constant
gets wrongly transformed into Infinity via truncation.
In this patch, we uniformly convert any SNaN to QNaN while raising
'invalid op'.
But we don't have a way to directly specify a 32-bit SNaN value in LLVM IR,
so those are always encoded/decoded by calling convert from/to 64-bit hex.
See D88664 for a clang fix needed to allow this change.
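A sketch of the resulting behavior, using the public conversion API:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void quietSNaNDemo() {
  APFloat SNan = APFloat::getSNaN(APFloat::IEEEdouble());
  bool LosesInfo;
  APFloat::opStatus S = SNan.convert(
      APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
  // The SNaN is quieted and 'invalid op' is raised; the result must
  // never decay into an Infinity.
  assert((S & APFloat::opInvalidOp) && SNan.isNaN() && !SNan.isSignaling());
}
```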
Differential Revision: https://reviews.llvm.org/D88238
Patch IEEEFloat::isSignificandAllZeros and IEEEFloat::isSignificandAllOnes to behave correctly in the case that the size of the significand is a multiple of the width of the integerParts making up the significand.
The patch to IEEEFloat::isSignificandAllOnes fixes bug 34579, and the patch to IEEEFloat::isSignificandAllZeros fixes the unit test "APFloatTest.x87Next" added here. I have included both in this diff since the changes are very similar.
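The x87 format is the interesting case here because its 64-bit significand exactly fills one 64-bit integerPart. A sketch of the behavior the new test exercises (a plausible reconstruction via the public API, not the test's literal code):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void x87NextDemo() {
  // getLargest has an all-ones significand; stepping past it goes through
  // isSignificandAllOnes, which previously misfired when the significand
  // width is an exact multiple of the integerPart width.
  APFloat F = APFloat::getLargest(APFloat::x87DoubleExtended());
  F.next(/*nextDown=*/false);
  assert(F.isInfinity());
}
```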
Patch by Andrew Briand
We shift the significand right on a truncation, but that needs to be made NaN-safe:
always set at least 1 bit in the significand.
https://llvm.org/PR43907
See D88238 for the likely follow-up (but needs some plumbing fixes before it can proceed).
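A sketch of the failure mode this guards against (the payload here is illustrative, chosen so that truncation discards every payload bit):

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/APInt.h"
#include <cassert>
using namespace llvm;

void nanSafeTruncationDemo() {
  // All payload bits sit below the 29 significand bits dropped by a
  // double->float truncation; without the fix the result became Inf.
  APInt Payload(64, 1);
  APFloat Nan = APFloat::getSNaN(APFloat::IEEEdouble(), false, &Payload);
  bool LosesInfo;
  Nan.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &LosesInfo);
  assert(Nan.isNaN() && "truncation must keep NaN a NaN");
}
```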
Differential Revision: https://reviews.llvm.org/D87835
Behavior of IEEEFloat::roundToIntegral is aligned with IEEE-754
operation roundToIntegralExact. In particular, this function now:
- returns opInvalid for signaling NaNs,
- returns opInexact if the result of rounding differs from argument.
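A minimal sketch of the new status codes:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void roundToIntegralDemo() {
  // 2.5 rounds to 2.0 under ties-to-even; the result differs from the
  // argument, so the operation reports opInexact.
  APFloat X(APFloat::IEEEdouble(), "2.5");
  assert(X.roundToIntegral(APFloat::rmNearestTiesToEven) ==
         APFloat::opInexact);

  // Signaling NaNs now report opInvalidOp.
  APFloat SNan = APFloat::getSNaN(APFloat::IEEEdouble());
  assert(SNan.roundToIntegral(APFloat::rmNearestTiesToEven) ==
         APFloat::opInvalidOp);
}
```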
Differential Revision: https://reviews.llvm.org/D75246
Summary:
These implement the usual IEEE-style floating point comparison
semantics, e.g. +0.0 == -0.0 and all operators except != return false
if either argument is NaN.
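A sketch of those semantics:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void comparisonDemo() {
  APFloat PosZero = APFloat::getZero(APFloat::IEEEdouble());
  APFloat NegZero = APFloat::getZero(APFloat::IEEEdouble(), /*Negative=*/true);
  assert(PosZero == NegZero && "signed zeros compare equal");

  APFloat Nan = APFloat::getNaN(APFloat::IEEEdouble());
  assert(!(Nan == Nan) && !(Nan < Nan) && (Nan != Nan));
}
```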
Subscribers: arsenm, jvesely, nhaehnle, hiraditya, dexonsmith, kerbowa, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75237
Summary:
We already have overloaded binary arithmetic operators so you can write
A+B etc. This patch lets you write -A instead of neg(A).
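For example:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void negateDemo() {
  APFloat A(2.5);
  APFloat B = -A; // equivalent to neg(A)
  assert(B.isNegative() && B.convertToDouble() == -2.5);
}
```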
Subscribers: hiraditya, dexonsmith, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D75236
Add support for converting a signaling NaN, and a NaN payload, from a string.
The NaN strings ("nan" or "NaN") may be prefixed with 's' or 'S' to denote a signaling NaN.
A payload for the NaN can be specified as a suffix.
It may be an octal/decimal/hexadecimal number, in parentheses or without.
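A hedged sketch of the accepted forms (the exact payload syntax is defined by this patch's tests; the strings below assume parenthesized hexadecimal and decimal payloads are among them):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void nanFromStringDemo() {
  // The 's' prefix selects a signaling NaN; the suffix is the payload.
  APFloat SNan(APFloat::IEEEdouble(), "snan(0x55)");
  assert(SNan.isNaN() && SNan.isSignaling());

  APFloat QNan(APFloat::IEEEdouble(), "nan(123)"); // decimal payload
  assert(QNan.isNaN() && !QNan.isSignaling());
}
```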
Differential Revision: https://reviews.llvm.org/D69773
`APFloat::convertFromString` returns an `Expected` result, which must be
"checked" if the LLVM_ENABLE_ABI_BREAKING_CHECKS preprocessor flag is
set.
To mark an `Expected` result as "checked" we must consume the `Error`
within.
In many cases, we are only interested in knowing whether an error occurred,
without the need to examine the error info. This is easily achieved with
the `errorToBool()` API.
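For example:

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
using namespace llvm;

bool parseDouble(StringRef Str, APFloat &Out) {
  APFloat Tmp(APFloat::IEEEdouble());
  Expected<APFloat::opStatus> StatusOrErr =
      Tmp.convertFromString(Str, APFloat::rmNearestTiesToEven);
  // errorToBool() consumes the Error, marking the Expected as checked.
  if (errorToBool(StatusOrErr.takeError()))
    return false; // invalid input; the error details are not needed
  Out = Tmp;
  return true;
}
```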
Up until now, the arguments to `fusedMultiplyAdd` were passed by
reference. We must save the `Addend` value at the beginning of the
function, before we modify `this`, as the two may be the same reference.
To fix this, we now pass the `addend` parameter of `multiplySignificand`
by value (instead of by reference), with a default value of zero.
Fix PR44051.
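The aliasing case, as a sketch:

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void fmaAliasDemo() {
  APFloat A(3.0), B(2.0);
  // Computes A = A * B + A, where the addend aliases 'this'. Before the
  // fix, 'A' could be modified before the addend was read (PR44051).
  A.fusedMultiplyAdd(B, A, APFloat::rmNearestTiesToEven);
  assert(A.convertToDouble() == 9.0); // 3 * 2 + 3
}
```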
Differential Revision: https://reviews.llvm.org/D70422
Constructor invocations such as `APFloat(APFloat::IEEEdouble(), 0.0)`
may seem like they accept an FP (floating point) value, but the overload
they reach is actually the `integerPart` one, not a `float` or `double`
overload (which only exists when `fltSemantics` isn't passed).
This may lead to a possible loss of data, from the conversion of `float`
or `double` to `integerPart`.
To prevent future mistakes, a new constructor overload that accepts any
FP value has been added and marked as deleted, to prevent its usage.
Fixes PR34095.
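In other words (a sketch):

```cpp
#include "llvm/ADT/APFloat.h"
using namespace llvm;

void ctorDemo() {
  // APFloat F(APFloat::IEEEdouble(), 0.5); // now ill-formed: the deleted
  // overload catches what used to silently convert 0.5 to integerPart.
  APFloat G(0.5);                          // double overload, no semantics
  APFloat H(APFloat::IEEEdouble(), "0.5"); // or parse with explicit semantics
  (void)G;
  (void)H;
}
```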
Differential Revision: https://reviews.llvm.org/D70425
Enlarge the size of ExponentType from a 16-bit integer to 32 bits. This
is required to prevent exponent overflow/underflow.
Note that IEEEFloat's size and alignment don't change on 64-bit or
32-bit compilation targets (and in turn, neither does APFloat's).
Fixes PR34851.
Differential Revision: https://reviews.llvm.org/D69771
Fix incorrect determination of which of the two subtracted numbers is
bigger when subnormal numbers are involved.
Fixes PR44010.
Differential Revision: https://reviews.llvm.org/D69772
This patch has three related fixes to improve float literal lexing:
1. Make AsmLexer::LexDigit handle floats without a decimal point more
consistently.
2. Make AsmLexer::LexFloatLiteral print an error for floats which are
apparently missing an "e".
3. Make APFloat::convertFromString use binutils-compatible exponent
parsing.
Together, this fixes some cases where a float would be incorrectly
rejected, fixes some cases where the compiler would crash, and improves
diagnostics in some cases.
Patch by Brandon Jones.
Differential Revision: https://reviews.llvm.org/D57321
llvm-svn: 357214
Update the file headers across all of the LLVM projects in the monorepo
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
llvm-svn: 351636
Summary: The APFloat and Constant APIs taking an APInt allow arbitrary payloads,
and that's great. There's a convenience API which takes an unsigned, and that's
silly because it then directly creates a 64-bit APInt. Just change it to take
64 bits directly.
At the same time, add ConstantFP NaN getters which match the APFloat ones (with
getQNaN / getSNaN and APInt parameters).
Improve the APFloat testing to set more payload bits.
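A sketch of the now-matching APIs:

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
using namespace llvm;

void nanPayloadDemo() {
  LLVMContext Ctx;
  APInt Payload(64, 0x123456789AULL); // would not fit in an unsigned
  APFloat QNan = APFloat::getQNaN(APFloat::IEEEdouble(),
                                  /*Negative=*/false, &Payload);
  Constant *C = ConstantFP::getQNaN(Type::getDoubleTy(Ctx),
                                    /*Negative=*/false, &Payload);
  (void)QNan;
  (void)C;
}
```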
Reviewers: scanon, rjmccall
Subscribers: jkorous, dexonsmith, kristina, llvm-commits
Differential Revision: https://reviews.llvm.org/D55460
llvm-svn: 348791
Summary:
These new intrinsics have the semantics of the `minimum` and `maximum`
operations specified by the latest draft of IEEE 754-2018. Unlike
llvm.minnum and llvm.maxnum, these new intrinsics propagate NaNs and
always treat -0.0 as less than 0.0. `minimum` and `maximum` lower
directly to the existing `fminnan` and `fmaxnan` ISel DAG nodes. It is
safe to reuse these DAG nodes because, before this patch, they were only
emitted in situations where there were known to be no NaN arguments or
where NaN propagation was correct and there were known to be no zero
arguments. I know of only four backends that lower fminnan and
fmaxnan: WebAssembly, ARM, AArch64, and SystemZ, and each of these
lowers fminnan and fmaxnan to instructions that are compatible with
the IEEE 754-2018 semantics.
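A sketch of those semantics, expressed with the APFloat helpers of the same name (assuming the constant-folding helpers that accompany the intrinsics):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void minimumMaximumDemo() {
  APFloat PosZero = APFloat::getZero(APFloat::IEEEdouble());
  APFloat NegZero = APFloat::getZero(APFloat::IEEEdouble(), /*Negative=*/true);
  // Unlike minnum/maxnum, -0.0 is treated as strictly less than +0.0...
  assert(minimum(PosZero, NegZero).isNegative());
  // ...and NaNs propagate instead of being dropped.
  APFloat Nan = APFloat::getNaN(APFloat::IEEEdouble());
  assert(minimum(Nan, PosZero).isNaN());
}
```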
Reviewers: aheejin, dschuff, sunfish, javed.absar
Subscribers: kristof.beyls, dexonsmith, kristina, llvm-commits
Differential Revision: https://reviews.llvm.org/D52764
llvm-svn: 344437
Summary:
Unnormal values are a feature of some very old x87 processors. We handle
them correctly for the most part -- the only exception was an unnormal
value whose significand happened to be zero. In this case the APFloat
was still initialized as a normal number (category = fcNormal), but a
subsequent toString operation would assert because the math would
produce nonsensical values for the zero significand.
During review, it was decided that the correct way to fix this is to
treat all unnormal values as NaNs (as that is what any >=386 processor
will do).
The issue was discovered because LLDB would crash when trying to print
some "long double" values.
Reviewers: skatkov, scanon, gottesmm
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D41868
llvm-svn: 331884
The method IEEEFloat::convertFromStringSpecials() does not recognize
the "+Inf" and "-Inf" strings, but these are the strings that
IEEEFloat::toString() prints for the double Infinities.
This patch adds the "+Inf" and "-Inf" strings to the list of recognized
patterns in IEEEFloat::convertFromStringSpecials().
Re-landing after fix.
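The round trip that used to fail, as a sketch:

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/SmallString.h"
#include <cassert>
using namespace llvm;

void infRoundTripDemo() {
  APFloat Inf = APFloat::getInf(APFloat::IEEEdouble());
  SmallString<16> Str;
  Inf.toString(Str); // prints "+Inf"
  APFloat Parsed(APFloat::IEEEdouble(), Str.str());
  assert(Parsed.isInfinity() && !Parsed.isNegative());
}
```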
Reviewers: sberg, bogner, majnemer, timshen, rnk, skatkov, gottesmm, bkramer, scanon, anna
Reviewed By: anna
Subscribers: mkazantsev, FlameTop, llvm-commits, reames, apilipenko
Differential Revision: https://reviews.llvm.org/D38030
llvm-svn: 321054
The fmod specification requires that the sign of the remainder be the
same as that of the numerator when the remainder is zero.
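For example (a sketch using APFloat::mod):

```cpp
#include "llvm/ADT/APFloat.h"
#include <cassert>
using namespace llvm;

void fmodSignDemo() {
  APFloat A(-6.0), B(3.0);
  A.mod(B); // the remainder is exactly zero
  // fmod(-6.0, 3.0) must be -0.0: the zero takes the numerator's sign.
  assert(A.isZero() && A.isNegative());
}
```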
Reviewers: gottesmm, scanon, arsenm, davide, craig.topper
Reviewed By: scanon
Subscribers: wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D39225
llvm-svn: 317081
The method IEEEFloat::convertFromStringSpecials() does not recognize
the "+Inf" and "-Inf" strings, but these are the strings that
IEEEFloat::toString() prints for the double Infinities.
This patch adds the "+Inf" and "-Inf" strings to the list of recognized
patterns in IEEEFloat::convertFromStringSpecials().
Reviewers: sberg, bogner, majnemer, timshen, rnk, skatkov, gottesmm, bkramer, scanon
Reviewed By: skatkov
Subscribers: apilipenko, reames, llvm-commits
Differential Revision: https://reviews.llvm.org/D38030
llvm-svn: 316156
This should fix the bug https://bugs.llvm.org/show_bug.cgi?id=12906
To print an FP constant, AsmWriter does the following:
1) Convert the FP value to a string (actually using the snprintf function, which is locale dependent).
2) Convert the string back to an FP value.
3) Compare the original and recovered FP values. If they are not equal, just dump the value as hex.
The problem happens in the 2nd step, when APFloat does not expect a group
delimiter, or a fraction delimiter other than the period symbol, and so on,
which the first step can produce if the LLVM library is used in an
environment with a corresponding locale set.
To fix this issue, the locale-independent APFloat::toString function is used.
However, it prints FP values slightly differently than snprintf does.
Specifically, it suppresses trailing zeros in the significand, uses a capital
'E', and so on. This resulted in 117 test failures during make check.
To avoid this, I've also updated APFloat::toString a bit, to at least pass
make check.
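A sketch of the round-trip check using the locale-independent path (the helper name here is illustrative, not AsmWriter's literal code):

```cpp
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/SmallString.h"
using namespace llvm;

// Decide whether V can be printed as a decimal string and re-read
// exactly; if not, the caller falls back to a hex dump.
bool printsExactly(const APFloat &V, SmallString<32> &Out) {
  V.toString(Out); // locale-independent, unlike snprintf
  APFloat RoundTrip(V.getSemantics(), Out.str());
  return RoundTrip.bitwiseIsEqual(V);
}
```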
Reviewers: sberg, bogner, majnemer, sanjoy, timshen, rnk
Reviewed By: timshen, rnk
Subscribers: rnk, llvm-commits
Differential Revision: https://reviews.llvm.org/D32276
llvm-svn: 300943
Summary:
This patch changes the layout of DoubleAPFloat, and adjusts all
operations to do either:
1) (IEEEdouble, IEEEdouble) -> (uint64_t, uint64_t) -> PPCDoubleDoubleImpl,
then run the old algorithm.
2) Do the right thing directly.
1) includes multiply, divide, remainder, mod, fusedMultiplyAdd, roundToIntegral,
convertFromString, next, convertToInteger, convertFromAPInt,
convertFromSignExtendedInteger, convertFromZeroExtendedInteger,
convertToHexString, toString, getExactInverse.
2) includes makeZero, makeLargest, makeSmallest, makeSmallestNormalized,
compare, bitwiseIsEqual, bitcastToAPInt, isDenormal, isSmallest,
isLargest, isInteger, ilogb, scalbn, frexp, hash_value, Profile.
I could split this into two patches, e.g. use
1) for all operations first, then incrementally change some of them to
2). I didn't do that, because 1) involves code that converts data between
PPCDoubleDoubleImpl and (IEEEdouble, IEEEdouble) back and forth, and may
pessimize the compiler. Instead, I find easy functions and use
approach 2) for them directly.
The next step is to move multiply and divide from 1) to 2). I don't
have plans for the other functions in 1).
Differential Revision: https://reviews.llvm.org/D27872
llvm-svn: 292839