path: root/test/CodeGen/X86/memset64-on-x86-32.ll
author	Evan Cheng <evan.cheng@apple.com>	2011-01-07 19:35:30 +0000
committer	Evan Cheng <evan.cheng@apple.com>	2011-01-07 19:35:30 +0000
commit	a5e1362f968568d66d76ddcdcff4ab98e203a48c (patch)
tree	53e266c315432b49be8ad6f3a2d2a5873265ab53	/test/CodeGen/X86/memset64-on-x86-32.ll
parent	1434f66b2e132a707e2c8ccb3350ea13fb5aa051 (diff)
Revert r122955. It seems using movups to lower memcpy can cause massive regression (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'test/CodeGen/X86/memset64-on-x86-32.ll')
-rw-r--r--	test/CodeGen/X86/memset64-on-x86-32.ll	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/test/CodeGen/X86/memset64-on-x86-32.ll b/test/CodeGen/X86/memset64-on-x86-32.ll
index 5a0e893e3b..3f069b4a1a 100644
--- a/test/CodeGen/X86/memset64-on-x86-32.ll
+++ b/test/CodeGen/X86/memset64-on-x86-32.ll
@@ -1,5 +1,6 @@
 ; RUN: llc < %s -mtriple=i386-apple-darwin -mcpu=nehalem | grep movups | count 5
-; RUN: llc < %s -mtriple=x86_64-apple-darwin -mcpu=core2 | grep movups | count 5
+; RUN: llc < %s -mtriple=i386-apple-darwin -mcpu=core2 | grep movl | count 20
+; RUN: llc < %s -mtriple=x86_64-apple-darwin -mcpu=core2 | grep movq | count 10
 define void @bork() nounwind {
 entry:
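The updated RUN lines rely on the `grep | count N` idiom, where `count` is LLVM's small test utility that succeeds only if its input has exactly N lines. A minimal sketch of the counting idea, using plain `grep -c` on a few hypothetical sample assembly lines (not real llc output):

```shell
# Simulated assembly: two 64-bit stores plus a return (sample text, not codegen).
# grep -c prints the number of matching lines, which is the quantity the RUN
# lines above gate on via LLVM's 'count' helper.
printf 'movq %%rax, (%%rdi)\nmovq %%rax, 8(%%rdi)\nretq\n' | grep -c movq
# → 2
```

The test thus pins down not just that the expected instruction appears, but exactly how many times the 64-bit memset is expanded into stores for each target/CPU combination.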