author     Hal Finkel <hfinkel@anl.gov>    2014-03-13 07:58:58 +0000
committer  Hal Finkel <hfinkel@anl.gov>    2014-03-13 07:58:58 +0000
commit     ab849adec4467646aaf25239dc78f47fe5076479 (patch)
tree       0bb1eb26f4ea30d566593283719045e2075ff7b5 /lib/Target/PowerPC/PPCRegisterInfo.td
parent     79c15b23c9c67f306d4d4514b46b2d006d2049d4 (diff)
[PowerPC] Initial support for the VSX instruction set
VSX is an ISA extension supported on the POWER7 and later cores that enhances
floating-point vector and scalar capabilities. Among other things, this adds
<2 x double> support and generally helps to reduce register pressure.
The interesting part of this ISA feature is the register configuration: there
are 64 new 128-bit vector registers, the first 32 of which are super-registers
of the existing 32 scalar floating-point registers, and the second 32 of which
overlap with the 32 Altivec vector registers. This makes things like vector
insertion and extraction tricky: these operations can be free, but only if we
force a restriction to the right register subclass when needed. A new
"minipass", PPCVSXCopy, takes care of this (although it could do a more optimal
job of it; see the comment about unnecessary copies below).
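The overlap scheme described above can be sketched numerically. This is a minimal illustration, not LLVM code: the helper names are hypothetical, but the encodings follow the patch's VSRL/VSRH register classes, where the high 32 VSX registers set bit 5 of the hardware encoding (HWEncoding{5} = 1) on top of the Altivec encoding.

```python
def vsx_from_fpr(fpr: int) -> int:
    """vs0..vs31 are super-registers of f0..f31: same 5-bit encoding."""
    assert 0 <= fpr < 32
    return fpr

def vsx_from_vr(vr: int) -> int:
    """vs32..vs63 overlap v0..v31: the VR encoding with bit 5 set."""
    assert 0 <= vr < 32
    return vr | 0x20

print(vsx_from_fpr(7))   # f7  aliases vs7
print(vsx_from_vr(7))    # v7  aliases vs39
print(vsx_from_vr(31))   # v31 aliases vs63
```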
Please note that, currently, VSX is not enabled by default when targeting
anything because it is not yet ready for that. The assembler and disassembler
are fully implemented and tested. However:
- CodeGen support causes miscompiles; test-suite runtime failures:
MultiSource/Benchmarks/FreeBench/distray/distray
MultiSource/Benchmarks/McCat/08-main/main
MultiSource/Benchmarks/Olden/voronoi/voronoi
MultiSource/Benchmarks/mafft/pairlocalalign
MultiSource/Benchmarks/tramp3d-v4/tramp3d-v4
SingleSource/Benchmarks/CoyoteBench/almabench
SingleSource/Benchmarks/Misc/matmul_f64_4x4
- The lowering currently falls back to using Altivec instructions far more
than it should. Worse, there are some things that are scalarized through the
stack that shouldn't be.
- A lot of unnecessary copies make it past the optimizers, and this needs to
be fixed.
- Many more regression tests are needed.
Normally, I'd fix these things prior to committing, but there are some
students and other contributors who would like to work on this, and so it
makes sense to move this development process upstream, where it can be subject
to the regular code-review procedures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@203768 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'lib/Target/PowerPC/PPCRegisterInfo.td')
-rw-r--r--  lib/Target/PowerPC/PPCRegisterInfo.td | 43
1 file changed, 43 insertions(+), 0 deletions(-)
diff --git a/lib/Target/PowerPC/PPCRegisterInfo.td b/lib/Target/PowerPC/PPCRegisterInfo.td
index f1ecda198f..339d4e4d71 100644
--- a/lib/Target/PowerPC/PPCRegisterInfo.td
+++ b/lib/Target/PowerPC/PPCRegisterInfo.td
@@ -16,6 +16,8 @@ def sub_gt : SubRegIndex<1, 1>;
 def sub_eq : SubRegIndex<1, 2>;
 def sub_un : SubRegIndex<1, 3>;
 def sub_32 : SubRegIndex<32>;
+def sub_64 : SubRegIndex<64>;
+def sub_128 : SubRegIndex<128>;
 }
@@ -52,6 +54,23 @@ class VR<bits<5> num, string n> : PPCReg<n> {
   let HWEncoding{4-0} = num;
 }
+
+// VSRL - One of the 32 128-bit VSX registers that overlap with the scalar
+// floating-point registers.
+class VSRL<FPR SubReg, string n> : PPCReg<n> {
+  let HWEncoding = SubReg.HWEncoding;
+  let SubRegs = [SubReg];
+  let SubRegIndices = [sub_64];
+}
+
+// VSRH - One of the 32 128-bit VSX registers that overlap with the vector
+// registers.
+class VSRH<VR SubReg, string n> : PPCReg<n> {
+  let HWEncoding{4-0} = SubReg.HWEncoding{4-0};
+  let HWEncoding{5} = 1;
+  let SubRegs = [SubReg];
+  let SubRegIndices = [sub_128];
+}
+
 // CR - One of the 8 4-bit condition registers
 class CR<bits<3> num, string n, list<Register> subregs> : PPCReg<n> {
   let HWEncoding{2-0} = num;
@@ -86,6 +105,16 @@ foreach Index = 0-31 in {
             DwarfRegNum<[!add(Index, 77), !add(Index, 77)]>;
 }
+// VSX registers
+foreach Index = 0-31 in {
+  def VSL#Index : VSRL<!cast<FPR>("F"#Index), "vs"#Index>,
+                  DwarfRegAlias<!cast<FPR>("F"#Index)>;
+}
+foreach Index = 0-31 in {
+  def VSH#Index : VSRH<!cast<VR>("V"#Index), "vs" # !add(Index, 32)>,
+                  DwarfRegAlias<!cast<VR>("V"#Index)>;
+}
+
 // The reprsentation of r0 when treated as the constant 0.
 def ZERO : GPR<0, "0">;
 def ZERO8 : GP8<ZERO, "0">;
@@ -204,6 +233,20 @@ def VRRC : RegisterClass<"PPC", [v16i8,v8i16,v4i32,v4f32], 128,
                          V12, V13, V14, V15, V16, V17, V18, V19,
                          V31, V30, V29, V28, V27, V26, V25, V24,
                          V23, V22, V21, V20)>;
+// VSX register classes (the allocation order mirrors that of the corresponding
+// subregister classes).
+def VSLRC : RegisterClass<"PPC", [v4i32,v4f32,f64,v2f64], 128,
+                          (add (sequence "VSL%u", 0, 13),
+                               (sequence "VSL%u", 31, 14))>;
+def VSHRC : RegisterClass<"PPC", [v4i32,v4f32,f64,v2f64], 128,
+                          (add VSH2, VSH3, VSH4, VSH5, VSH0, VSH1, VSH6, VSH7,
+                               VSH8, VSH9, VSH10, VSH11, VSH12, VSH13, VSH14,
+                               VSH15, VSH16, VSH17, VSH18, VSH19, VSH31, VSH30,
+                               VSH29, VSH28, VSH27, VSH26, VSH25, VSH24, VSH23,
+                               VSH22, VSH21, VSH20)>;
+def VSRC : RegisterClass<"PPC", [v4i32,v4f32,f64,v2f64], 128,
+                          (add VSLRC, VSHRC)>;
+
 def CRBITRC : RegisterClass<"PPC", [i1], 32,
   (add CR2LT, CR2GT, CR2EQ, CR2UN,
        CR3LT, CR3GT, CR3EQ, CR3UN,
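The VSLRC allocation order in the diff above puts vs0 through vs13 first and then allocates downward from vs31 to vs14, mirroring the FPR allocation order. A quick sketch of how the two `sequence` ranges expand (assumption: standard TableGen `sequence` semantics, with inclusive bounds and downward counting, expanded here in Python purely for illustration):

```python
# Expand the VSLRC allocation order from the patch:
#   (add (sequence "VSL%u", 0, 13), (sequence "VSL%u", 31, 14))
ascending = [f"VSL{i}" for i in range(0, 14)]        # VSL0 .. VSL13
descending = [f"VSL{i}" for i in range(31, 13, -1)]  # VSL31 .. VSL14
order = ascending + descending

print(order[0], order[13], order[14], order[-1])  # VSL0 VSL13 VSL31 VSL14
print(len(order))                                 # 32
```

Registers later in the list (here vs14 through the middle of the callee-saved-style range) are touched last by the allocator, which keeps register pressure off the same physical registers the FPR class prefers.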