This adds a small hasSVEorSME() helper and, as an NFC change, updates existing
code to use it. The assertions in get[Min|Max]SVEVectorSizeInBits() are also
corrected to use hasSVEorSME() rather than just hasSVE().
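A minimal sketch of the shape of such a helper on AArch64Subtarget (illustrative
only; the in-tree definition may differ):

  // Convenience query combining the two existing feature getters, so callers
  // no longer have to spell out the disjunction themselves.
  bool hasSVEorSME() const { return hasSVE() || hasSME(); }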
Differential Revision: https://reviews.llvm.org/D138575
When the SME attributes indicate that a function is, or may be, executed in
Streaming SVE mode, we currently need to be conservative and disable _any_
vectorization (fixed or scalable), because the code generator does not yet
support generating streaming-compatible code.
Scalable auto-vec will be gradually enabled in the future when we have
confidence that the loop-vectorizer won't use any SVE or NEON instructions
that are illegal in Streaming SVE mode.
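A rough sketch of the kind of guard this implies in the cost model; the helper
name is hypothetical and the attribute spellings shown are the SME function
attributes as I understand them, so treat them as illustrative:

  #include "llvm/IR/Function.h"
  // Illustrative only: vectorization (fixed or scalable) is refused when a
  // function is, or may be, executed in Streaming SVE mode.
  static bool mayRunInStreamingMode(const llvm::Function &F) {
    return F.hasFnAttribute("aarch64_pstate_sm_enabled") ||
           F.hasFnAttribute("aarch64_pstate_sm_compatible") ||
           F.hasFnAttribute("aarch64_pstate_sm_body");
  }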
Reviewed By: paulwalker-arm
Differential Revision: https://reviews.llvm.org/D135950
Add a compile-time flag for enabling streaming mode.
When streaming mode is enabled, lower basic loads and stores of fixed-width
vectors to generate code that is compatible with streaming mode.
Differential Revision: https://reviews.llvm.org/D133433
They're roughly ARMv8.6. This works in the .td file, but in
AArch64TargetParser.def, marking them v8.6 brings in support for the SM4
cryptographic extension, which we don't actually have. So on the TargetParser
side they're marked as v8.5, with the extra features (BF16 and I8MM) added
manually.
Finally, A16 supports the HCX extension in addition to v8.6. This has no
TargetParser implications.
This patch adds an option, --reserve-regs-for-regalloc, so we can reserve a
list of physical registers. These registers will not be used by the register
allocator, but can still be used where the ABI requires it, such as for passing
arguments to a function call.
Its main purpose is to simulate high register pressure by reserving many
physical registers, which makes it much easier to test and debug register
allocation changes.
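A sketch of how the option could be wired up (the declaration below is a
simplification; each named register would then be added to the set returned by
AArch64RegisterInfo::getReservedRegs):

  #include "llvm/Support/CommandLine.h"
  // Illustrative: a comma-separated list of register names; the allocator
  // never assigns a reserved register, while ABI-mandated uses still work.
  static llvm::cl::list<std::string> ReservedRegsForRegAlloc(
      "reserve-regs-for-regalloc", llvm::cl::CommaSeparated, llvm::cl::Hidden,
      llvm::cl::desc("Reserve physical registers, so they can't be used by "
                     "the register allocator"));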
Differential Revision: https://reviews.llvm.org/D132717
Part of initial Arm64EC patchset.
Arm64EC code needs to use functions with a different name, to avoid
using the x64 versions.
Differential Revision: https://reviews.llvm.org/D125417
Part of patchset to add initial support for ARM64EC.
I'm not completely sure I understand the reason for this restriction,
but Microsoft documentation says that asynchronous signals clobber these
registers, so we can't ever use them.
As far as I know, none of these registers have any hardcoded meaning, so
reserving them shouldn't have any significant side-effects.
Differential Revision: https://reviews.llvm.org/D125413
The new flag -aarch64-insert-extract-base-cost can be used to
set the value of AArch64Subtarget::getVectorInsertExtractBaseCost(),
for the purposes of experimentation.
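The flag could plausibly be plumbed through as an override of the subtarget's
tuned value, along these lines (sketch only; the option variable and member
names are assumptions):

  #include "llvm/Support/CommandLine.h"
  static llvm::cl::opt<unsigned> OverrideVectorInsertExtractBaseCost(
      "aarch64-insert-extract-base-cost", llvm::cl::Hidden,
      llvm::cl::desc("Base cost of vector insert/extract element"));

  // Illustrative: prefer the experimental override when the flag was given,
  // otherwise fall back to the per-CPU tuned base cost.
  unsigned AArch64Subtarget::getVectorInsertExtractBaseCost() const {
    if (OverrideVectorInsertExtractBaseCost.getNumOccurrences() > 0)
      return OverrideVectorInsertExtractBaseCost;
    return VectorInsertExtractBaseCost;
  }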
Differential Revision: https://reviews.llvm.org/D124835
Add support for the Ampere Computing Ampere1 core.
Ampere1 implements the AArch64 state and is compatible with ARMv8.6-A.
Differential Revision: https://reviews.llvm.org/D117112
This diff splits the fuse-literals feature and enables fuse-adrp-add by
default; in particular, it adjusts instruction scheduling to place ADRP+ADD
pairs together.
This also enables the linker to apply the relaxations described in
d2ca58c54b.
Differential Revision: https://reviews.llvm.org/D120104
Test plan: make check-all
Reland of D120906 after sanitizer failures.
This patch aims to reduce a lot of the boilerplate around adding new subtarget
features. From the SubtargetFeatures tablegen definitions, a series of calls to
the macro GET_SUBTARGETINFO_MACRO are generated in
ARM/AArch64GenSubtargetInfo.inc. ARMSubtarget/AArch64Subtarget can then use
this macro to define bool members and the corresponding getter methods.
Some naming inconsistencies have been fixed to allow this, and one unused
member removed.
This implementation only applies to boolean members; in future both BitVector
and enum members could also be generated.
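The intended usage pattern in the subtarget headers is roughly the following
(shape only; the generated .inc expands one GET_SUBTARGETINFO_MACRO invocation
per tablegen SubtargetFeature):

  // Bool members corresponding to the SubtargetFeatures defined in tablegen.
  #define GET_SUBTARGETINFO_MACRO(ATTRIBUTE, DEFAULT, GETTER) \
    bool ATTRIBUTE = DEFAULT;
  #include "AArch64GenSubtargetInfo.inc"

  // Getter methods for those members.
  #define GET_SUBTARGETINFO_MACRO(ATTRIBUTE, DEFAULT, GETTER) \
    bool GETTER() const { return ATTRIBUTE; }
  #include "AArch64GenSubtargetInfo.inc"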
Differential Revision: https://reviews.llvm.org/D120906
Discussed extensively on D98232. The functionality introduced in D35816
never worked correctly. In D98232 it was fixed, but since the fix introduced a
large compile-time regression and the value of the original patch was called
into doubt, we disabled it by default everywhere. A year later, it appears that
this caused no grief, so it seems safe to remove the disabled code.
This should be accompanied by re-opening bug 26810.
Differential Revision: https://reviews.llvm.org/D121128
This wraps up the work from D119053. The two headers are moved as described,
the file headers and include guards are fixed, all files where the old paths
were detected (via a simple grep through the repo) are updated, and everything
is `clang-format`-ed.
Differential Revision: https://reviews.llvm.org/D119876
This removes `HasPAUTH` from `AArch64Subtarget`, as it seems to be a
redundant, unused copy of `HasPAuth`.
Differential Revision: https://reviews.llvm.org/D117782
Given how small the function is and how often it gets used, it makes more
sense for it to live in the header file.
Differential Revision: https://reviews.llvm.org/D117883
This family of instructions includes CPYF (copy forward), CPYB (copy
backward), SET (memset) and SETG (memset + initialise MTE tags), with
some sub-variants to indicate whether address translation is done in a
privileged or unprivileged way. For the copy instructions, you can
separately specify the read and write translations (so that kernels
can safely use these instructions in syscall handlers, to memcpy
between the calling process's user-space memory map and the kernel's
own privileged one).
The unusual thing about these instructions is that they write back to
multiple registers, because they perform an implementation-defined
amount of copying each time they run, and write back to _all_ the
address and size registers to indicate how much remains to be done
(and the code is expected to loop on them until the size register
becomes zero). But this is no problem in LLVM - you just define each
instruction to have multiple outputs, multiple inputs, and a set of
constraints tying their register numbers together appropriately.
This commit introduces a special subtarget feature called MOPS (after
the name the spec gives to the CPU id field), which is a dependency of
the top-level 8.8-A feature, and uses that to enable most of the new
instructions. The SETG instructions also depend on MTE (and the test
checks that).
Differential Revision: https://reviews.llvm.org/D116157
This patch introduces support for targeting the Armv9.3-A architecture,
which should map to the existing Armv8.8-A extensions.
Differential Revision: https://reviews.llvm.org/D116158
This is the first commit in a series that implements support for
"armv8.8-a" architecture. This should contain all the necessary
boilerplate to make the 8.8-A architecture exist from LLVM and Clang's
point of view: it adds the new arch as a subtarget feature, a definition
in TargetParser, a name on the command line, an appropriate set of
predefined macros, and adds appropriate tests. The new architecture name
is supported in both AArch32 and AArch64.
However, in this commit, no actual _functionality_ is added as part of
the new architecture. If you specify -march=armv8.8a, the compiler will accept
it and set the right predefines, but will not generate code any differently.
Differential Revision: https://reviews.llvm.org/D115694
When this pass was originally implemented, the fix pass was enabled
using an LLVM command-line flag. This works fine, except in the case of
LTO, where the flag is not passed into the linker plugin in order to
enable the function pass in the LTO backend.
Now that LTO exists, the expectation is to use target features rather than
command-line arguments to control code generation, as this ensures that
different command-line arguments in different files are correctly represented,
and target features always reach the LTO plugin because they are encoded into
LLVM IR.
The fallout of this change is that the fix pass always has to be added to the
backend pass pipeline, so it now makes no changes if the function does not have
the right target feature to enable it. This should make a minimal difference to
compile time.
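Roughly, the pass now gates itself on the subtarget (class, getter, and helper
names here are approximations, shown only to illustrate the shape of the
check):

  // Illustrative: the pass is always in the pipeline, but does nothing unless
  // the function's subtarget carries the fix-cortex-a53-835769 feature.
  bool AArch64A53Fix835769::runOnMachineFunction(MachineFunction &MF) {
    if (!MF.getSubtarget<AArch64Subtarget>().fixCortexA53_835769())
      return false;        // feature not set: make no changes
    return fixFunction(MF); // pre-existing fix logic, unchanged (name assumed)
  }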
One advantage is that it's now much easier to enable when compiling for a
Cortex-A53, as CPUs imply their own individual sets of target features in a
more fine-grained way. I haven't done this yet, but it is an option if the fix
should be enabled in more places.
Existing tests of the user interface are unaffected; the changes reflect that
the argument is now turned into a target feature.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D114703
The support for neoverse-512tvb mirrors the option of the same name available in GCC [1].
There is no functional effect for this option yet.
This patch ensures the driver accepts "-mcpu=neoverse-512tvb", and enough
plumbing is in place to allow the new option to be used in the future.
[1] https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
Differential Revision: https://reviews.llvm.org/D112406
This change introduces subtarget features to predicate certain
instructions and system registers that are available only on
'A' profile targets. Those features are not present when
targeting a generic CPU, which is the default processor.
In other words, the generic CPU now means the intersection of the 'A' and 'R'
profiles. To maintain backwards compatibility we
enable the features that correspond to -march=armv8-a when the
architecture is not explicitly specified on the command line.
References: https://developer.arm.com/documentation/ddi0600/latest
Differential Revision: https://reviews.llvm.org/D110065
This patch introduces a new function:
AArch64Subtarget::getVScaleForTuning
that returns a value for vscale that can be used for tuning the cost
model when using scalable vectors. The VScaleForTuning option in
AArch64Subtarget is initialised according to the following rules:
1. If the user has specified the CPU to tune for we use that, else
2. If the target CPU was specified we use that, else
3. The tuning is set to "generic".
For CPUs of type "generic" I have assumed that vscale=2.
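An illustrative sketch of how the value might be chosen per tuning CPU (the CPU
cases and numbers below are examples based on known SVE vector widths, not the
full in-tree table):

  // Illustrative: pick vscale for tuning from the CPU being tuned for.
  switch (ARMProcFamily) {
  case NeoverseV1:
    VScaleForTuning = 2; // 256-bit SVE implementation => vscale = 2
    break;
  case A64FX:
    VScaleForTuning = 4; // 512-bit SVE implementation => vscale = 4
    break;
  default:
    VScaleForTuning = 2; // "generic" assumption described above
    break;
  }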
New tests added here:
Analysis/CostModel/AArch64/sve-gather.ll
Analysis/CostModel/AArch64/sve-scatter.ll
Transforms/LoopVectorize/AArch64/sve-strict-fadd-cost.ll
Differential Revision: https://reviews.llvm.org/D110259
Following on from an earlier patch that introduced support for -mtune
for AArch64 backends, this patch splits out the tuning features
from the processor features. This gives us the ability to enable
architectural feature set A for a given processor with "-mcpu=A"
and define the set of tuning features B with "-mtune=B".
It's quite difficult to write a test that proves we select the
right features according to the tuning attribute because most
of these relate to scheduling. I have created a test here:
CodeGen/AArch64/misched-fusion-addr-tune.ll
that demonstrates the different scheduling choices based upon
the tuning.
Differential Revision: https://reviews.llvm.org/D111551
This patch ensures that we always tune for a given CPU on AArch64
targets when the user specifies the "-mtune=xyz" flag. In the
AArch64Subtarget, if the tune flag is unset we use the CPU value instead.
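The fallback amounts to something like the following when the subtarget is
constructed (sketch only; variable names assumed):

  // Illustrative: an empty tune-CPU string means "tune for the CPU we are
  // compiling for", so -mcpu=X behaves like -mcpu=X -mtune=X.
  if (TuneCPUString.empty())
    TuneCPUString = CPUString;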
I've updated the release notes here:
llvm/docs/ReleaseNotes.rst
and added tests here:
clang/test/Driver/aarch64-mtune.c
Differential Revision: https://reviews.llvm.org/D110258
This patch adds patterns to match the following with INC/DEC:
- @llvm.aarch64.sve.cnt[b|h|w|d] intrinsics + ADD/SUB
- vscale + ADD/SUB
For some implementations of SVE, INC/DEC VL is not as cheap as ADD/SUB and
so this behaviour is guarded by the "use-scalar-inc-vl" feature flag, which for SVE
is off by default. There are no known issues with SVE2, so this feature is
enabled by default when targeting SVE2.
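As a small (assumed) source-level example of the kind of pattern being
targeted, using the SVE ACLE:

  #include <arm_sve.h>
  #include <stdint.h>
  // With use-scalar-inc-vl (implied by +sve2), the CNTW + ADD sequence below
  // can instead be selected as a single INCW of the scalar register.
  uint64_t bump(uint64_t x) { return x + svcntw(); }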
Reviewed By: david-arm
Differential Revision: https://reviews.llvm.org/D111441
armv9-a, armv9.1-a and armv9.2-a can be targeted using the -march option
both in ARM and AArch64.
- Armv9-A maps to Armv8.5-A.
- Armv9.1-A maps to Armv8.6-A.
- Armv9.2-A maps to Armv8.7-A.
- The SVE2 extension is enabled by default on these architectures.
- The cryptographic extensions are disabled by default on these
architectures.
The Armv9-A architecture is described in the Arm® Architecture Reference
Manual Supplement Armv9, for Armv9-A architecture profile
(https://developer.arm.com/documentation/ddi0608/latest).
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D109517
v8.4 says that normal loads/stores of 128 bits are single-copy atomic if
they're properly aligned (which all LLVM atomics are), so we no longer need to
do a full RMW operation to guarantee we get a clean read.
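For illustration (not part of the patch), this is the kind of access affected,
written with the GCC/Clang atomic builtins:

  // With v8.4's guarantee, an aligned 128-bit atomic load like this can be
  // lowered to a plain load pair instead of an exclusive-load/store RMW loop.
  __int128 read128(__int128 *p) {
    return __atomic_load_n(p, __ATOMIC_ACQUIRE);
  }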
When back-deploying Swift async code we can't always toggle the flag showing an
extended frame is present because it will confuse unwinders on systems released
before this feature. So in cases where the code might run there, we `or` in a
mask provided by the runtime (as an absolute symbol) telling us whether the
unwinders can cope.
When deploying only for newer OSs, we can still hard-code the bit-set for
greater efficiency.
FeaturePMU was created in AArch64 to accommodate one missing system
register, PMMIR_EL1, in commit ffcd7698ae.
However, the Performance Monitors extension already had a target
feature, which is called FeaturePerfMon. Therefore, FeaturePMU is
redundant.
This patch removes FeaturePMU and merges its contents into
FeaturePerfMon.
Reviewed By: dnsampaio
Differential Revision: https://reviews.llvm.org/D109246
The Scalable Matrix Extension (SME) introduces a new execution mode
called Streaming SVE mode. In streaming mode a substantial subset of the
SVE and SVE2 instruction set is available, along with new outer product,
load, store, extract and insert instructions that operate on the new
architectural register state for the matrix.
To support streaming mode this patch introduces a new subtarget feature,
+streaming-sve. If enabled, this subset of SVE(2) instructions is available.
The existing behaviour for SVE(2) remains unchanged; the subset of instructions
that is legal in streaming mode is enabled if either +sve[2] or +streaming-sve
is specified. Instructions that are illegal in streaming mode remain predicated
on +sve[2].
The SME target feature has been updated to imply +streaming-sve rather
than +sve.
The following changes are made to the SVE(2) tests:
* For instructions that are legal in streaming mode:
- added RUN line to verify +streaming-sve enables the instruction.
- updated diagnostic to 'instruction requires: streaming-sve or sve'.
* For instructions that are illegal in streaming mode:
- added RUN line to verify +streaming-sve does not enable the
instruction.
SVE(2) instructions that are legal in streaming mode have:
if !HaveSVE[2]() && !HaveSME() then UNDEFINED;
at the top of the pseudocode in the XML.
The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06/SVE-Instructions
Reviewed By: sdesmalen, david-arm
Differential Revision: https://reviews.llvm.org/D106272
First patch in a series adding MC layer support for the Arm Scalable
Matrix Extension.
This patch adds the following features:
sme, sme-i64, sme-f64
The sme-i64 and sme-f64 flags are for the optional I16I64 and F64F64
features.
If a target supports I16I64 then the following instructions are
implemented:
* 64-bit integer ADDHA and ADDVA variants (D105570).
* SMOPA, SMOPS, SUMOPA, SUMOPS, UMOPA, UMOPS, USMOPA, and USMOPS
instructions that accumulate 16-bit integer outer products into 64-bit
integer tiles.
If a target supports F64F64 then the FMOPA and FMOPS instructions that
accumulate double-precision floating-point outer products into
double-precision tiles are implemented.
Outer products are implemented in D105571.
The reference can be found here:
https://developer.arm.com/documentation/ddi0602/2021-06
Reviewed By: CarolineConcatto
Differential Revision: https://reviews.llvm.org/D105569