author    Nadav Rotem <nrotem@apple.com>    2012-12-24 09:40:33 +0000
committer Nadav Rotem <nrotem@apple.com>    2012-12-24 09:40:33 +0000
commit    ace0c2fad7c581367cc2519e1d773bca37fc9fec (patch)
tree      74ec5cfe661003e97716f8c9ef848ef45ff58ed9 /test/CodeGen/X86/fold-vex.ll
parent    9e5329d77e590f757dbd8384f418e44df9dbf91a (diff)
Some x86 instructions can take one of their operands directly from memory. Under SSE, that memory operand
must be aligned. When these instructions are VEX-encoded (under AVX), there is no such requirement. This
change updates the folding tables and removes the alignment restrictions from VEX-encoded instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@171024 91177308-0d34-0410-b5e6-96231b3b80d8
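The effect the message describes can be illustrated with a small IR sketch (a hypothetical function, not part of this commit, written in the typed-pointer load syntax of this LLVM vintage): the under-aligned load below can be folded into the memory operand of VEX-encoded `vandps`, whereas the SSE `andps` memory form would demand 16-byte alignment and force a separate unaligned load.

```llvm
; Hypothetical example: an under-aligned vector load feeding a logical op.
define <8 x i32> @fold_sketch(<8 x i32>* %p, <8 x i32> %v) nounwind {
entry:
  ; "align 1" rules out the aligned-only SSE memory form; the VEX-encoded
  ; vandps imposes no alignment, so the load can still be folded.
  %m = load <8 x i32>* %p, align 1
  %r = and <8 x i32> %m, %v
  ret <8 x i32> %r
}
```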
Diffstat (limited to 'test/CodeGen/X86/fold-vex.ll')
 test/CodeGen/X86/fold-vex.ll | 16 ++++++++++++++++
 1 file changed, 16 insertions(+), 0 deletions(-)
diff --git a/test/CodeGen/X86/fold-vex.ll b/test/CodeGen/X86/fold-vex.ll
new file mode 100644
index 0000000000..60e500b419
--- /dev/null
+++ b/test/CodeGen/X86/fold-vex.ll
@@ -0,0 +1,16 @@
+; RUN: llc < %s -mcpu=corei7-avx -march=x86-64 | FileCheck %s
+
+;CHECK: @test
+; No need to load from memory. The operand will be loaded as part of the AND instruction.
+;CHECK-NOT: vmovaps
+;CHECK: vandps
+;CHECK: ret
+
+define void @test1(<8 x i32>* %p0, <8 x i32> %in1) nounwind {
+entry:
+  %in0 = load <8 x i32>* %p0, align 2
+  %a = and <8 x i32> %in0, %in1
+  store <8 x i32> %a, <8 x i32>* undef
+  ret void
+}