author	Chris Lattner <sabre@nondot.org>	2011-02-13 22:25:43 +0000
committer	Chris Lattner <sabre@nondot.org>	2011-02-13 22:25:43 +0000
commit	0a9481f44fe4fc76e59109992940a76b2a3f9b3b (patch)
tree	58e330925b67825f38c827f416eb9dc2e5d9ee1e	/test/CodeGen/MSP430
parent	eafbe659f8cd88584bef5f7ad2500b42227d02ab (diff)
Enhance ComputeMaskedBits to know that aligned frame indexes have their low bits set to zero.

This allows us to optimize out explicit stack-alignment code, like stack-align.ll:test4, when it is redundant. Doing this causes the code generator to start turning FI+cst into FI|cst all over the place, which is general goodness (that is the canonical form), except that various pieces of the code generator don't handle OR aggressively. Fix this by introducing a new SelectionDAG::isBaseWithConstantOffset predicate and using it in places that are looking for ADD(X,CST). The ARM backend in particular was missing a lot of addressing-mode folding opportunities around OR.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@125470 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'test/CodeGen/MSP430')
-rw-r--r--	test/CodeGen/MSP430/Inst16mm.ll	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/CodeGen/MSP430/Inst16mm.ll b/test/CodeGen/MSP430/Inst16mm.ll
index d4ae811ac8..2337c2c0f2 100644
--- a/test/CodeGen/MSP430/Inst16mm.ll
+++ b/test/CodeGen/MSP430/Inst16mm.ll
@@ -64,6 +64,6 @@ entry:
  %0 = load i16* %retval          ; <i16> [#uses=1]
  ret i16 %0
 ; CHECK: mov2:
-; CHECK: mov.w 2(r1), 6(r1)
 ; CHECK: mov.w 0(r1), 4(r1)
+; CHECK: mov.w 2(r1), 6(r1)
 }