path: root/lib/Target/X86/X86ISelLowering.cpp
Commit message | Author | Age
* [x86] Add handling for splat-like widenings of v16i8 shuffles.Chandler Carruth2014-06-28
| | | | | | | | | | | | | | | | These show up really frequently, not the least with actual splats. =] We lowered these quite badly before. The new code path tries to widen i8 shuffles to i16 shuffles in a splat-like way. There are still some inefficiencies in our i16 splat logic though, so we aren't really done here. Also, for certain patterns (bit of a gather-and-splat) we still generate pretty silly code, and I've left a fixme for addressing it. However, I'm not actually worried about this code pattern as much. The old shuffle lowering generates a 29 instruction monstrosity for it that should execute much more slowly. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211974 91177308-0d34-0410-b5e6-96231b3b80d8
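For reference, a minimal IR sketch (my own example, not taken from the commit or its tests) of the kind of splat-like v16i8 shuffle this path now widens instead of scalarizing:
  ; Hypothetical case: broadcast byte 0 of %a into all sixteen lanes.
  define <16 x i8> @splat_v16i8(<16 x i8> %a) {
    %s = shufflevector <16 x i8> %a, <16 x i8> undef, <16 x i32> zeroinitializer
    ret <16 x i8> %s
  }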
* [x86] Fix another bug hit when bootstrapping with the new shuffleChandler Carruth2014-06-27
| | | | | | | | | | | | | lowering. For maximum irony, I had already discovered this bug, diagnosed it, and left FIXMEs about it in the test cases. =[ I just failed to go back over those until after I had reduced a bootstrap miscompile down to a single TU, stared at the assembly for an hour, and figured out the bug. Again. Oh well. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211955 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Fix a miscompile in the new shuffle lowering uncovered byChandler Carruth2014-06-27
| | | | | | | | | a bootstrap. I managed to mis-remember how PACKUS worked on x86, and was using undef for the high bytes instead of zero. The fix is fairly obvious. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211922 91177308-0d34-0410-b5e6-96231b3b80d8
* Clean up unused variable warning in release build.Alexander Kornienko2014-06-27
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211902 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Clean up some unused variables, especially in release builds.Chandler Carruth2014-06-27
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211894 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Teach the target combine step to aggressively fold pshufd instructions.Chandler Carruth2014-06-27
| | | | | | | | | | | | | Summary: This allows it to fold pshufd instructions across intervening half-shuffles and other noise. This pattern actually shows up in the generic lowering tests, but I've also added direct tests using intrinsics to make sure that the specific desired functionality is working even if the lowering stuff changes in the future. Differential Revision: http://reviews.llvm.org/D4292 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211892 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Teach the target-specific combining how to aggressively foldChandler Carruth2014-06-27
| | | | | | | | | | | | | | | | | | half-shuffles, even looking through intervening instructions in a chain. Summary: This doesn't happen to show up with any test cases I've found for the current shuffle lowering, but previous attempts would benefit from this and it seems generally useful. I've tested it directly using intrinsics, which also shows that it will work with hand vectorized code as well. Note that even though pshufd isn't directly used in these tests, it gets exercised because we combine some of the half shuffles into a pshufd first, and then merge them. Differential Revision: http://reviews.llvm.org/D4291 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211890 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Teach the X86 backend to DAG-combine SSE2 shuffles that areChandler Carruth2014-06-27
| | | | | | | | | | | | | | | | | trivially redundant. This fixes several cases in the new vector shuffle lowering algorithm which would generate redundant shuffle instructions for the sake of simplicity. I'm also deleting a testcase which was somewhat ridiculous. It was checking for a bug in 2007 about incorrectly transforming shuffles by looking for the string "-86" in the output of a pretty substantial function. This test case doesn't seem to have any value at this point. Differential Revision: http://reviews.llvm.org/D4240 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211889 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Begin a significant overhaul of how vector lowering is done in theChandler Carruth2014-06-27
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | x86 backend. This sketches out a new code path for vector lowering, hidden behind an off-by-default flag while it is under development. The fundamental idea behind the new code path is to aggressively break down the problem space in ways that ease selecting the odd set of instructions available on x86, and carefully avoid scalarizing code even when forced to use older ISAs. Notably, this starts off restricting itself to SSE2 and implements the complete vector shuffle and blend space for 128-bit vectors in SSE2 without scalarizing. The plan is to layer on top of this ISA extensions where we can bail out of the complex SSE2 lowering and opt for a cheaper, specialized instruction (or set of instructions). It also needs to be generalized to AVX and AVX512 vector widths. Currently, this does a decent but not perfect job for SSE2. There are some specific shortcomings that I plan to address: - We need a peephole combine to fold together shuffles where possible. There are cases where a previous shuffle could be modified slightly to arrange for elements to be in the correct position and a later shuffle eliminated. Doing this eagerly added quite a bit of complexity, and so my plan is to combine away these redundancies afterward. - There are a lot more clever ways to use unpck and pack that need to be added. This is essential for real world shuffles as it turns out... Once SSE2 is polished a bit I should be able to get interesting numbers on performance improvements on benchmarks conducive to vectorization. All of this will be off by default until it is functionally equivalent of course. Differential Revision: http://reviews.llvm.org/D4225 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211888 91177308-0d34-0410-b5e6-96231b3b80d8
* Silence a warning due to a comparison between signed and unsigned.Andrea Di Biagio2014-06-26
| | | | | | | | No functional change intended. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211782 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Improve the selection of SSE3/AVX addsub instructions. Andrea Di Biagio2014-06-26
| | | | | | | | | | | | | | | | | | | | | | | | | | | This patch teaches the backend how to canonicalize vector shuffles according to the rule: - (shuffle (FADD A, B), (FSUB A, B), Mask) -> (shuffle (FSUB A, -B), (FADD A, -B), Mask) Where 'Mask' is: <0,5,2,7> ;; for v4f32 and v4f64 shuffles. <0,3> ;; for v2f64 shuffles. <0,9,2,11,4,13,6,15> ;; for v8f32 shuffles. In general, ISel only knows how to pattern-match a canonical 'fadd + fsub + blendi' dag node sequence into an ADDSUB instruction. This new rule makes it possible to convert a non-canonical dag sequence into a canonical one that will be matched by a single ADDSUB at ISel stage. The idea of converting a non-canonical ADDSUB into a canonical one by swapping the first two operands of the shuffle, and then negating the second operand of the FADD and FSUB, was originally proposed by Hal Finkel. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211771 91177308-0d34-0410-b5e6-96231b3b80d8
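To make the canonical form concrete, here is a small IR sketch (my own illustration, not part of the patch) of the v4f32 pattern that ISel matches to ADDSUBPS: the even result lanes come from the FSUB and the odd lanes from the FADD.
  define <4 x float> @addsub_canonical(<4 x float> %A, <4 x float> %B) {
    %sub = fsub <4 x float> %A, %B
    %add = fadd <4 x float> %A, %B
    ; Lanes 0 and 2 are taken from %sub, lanes 1 and 3 from %add -> addsubps.
    %r = shufflevector <4 x float> %sub, <4 x float> %add, <4 x i32> <i32 0, i32 5, i32 2, i32 7>
    ret <4 x float> %r
  }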
* [X86] Always prefer to lower a VECTOR_SHUFFLE into a BLENDI instead of SHUFP (or VPERM2X128).Andrea Di Biagio2014-06-25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch teaches method 'LowerVECTOR_SHUFFLE' to give higher precedence to the check for 'isBlendMask'; the idea is that, when possible, we should first check if a shuffle performs a blend and, if so, try to lower it into a BLENDI instead of selecting a SHUFP or (worse) a VPERM2X128. In general: - AVX VBLENDPS/D always have better latency and throughput than VPERM2F128; - BLENDPS/D instructions tend to always have better 'reciprocal throughput' than the equivalent SHUFPS/D; - Both BLENDPS/D and SHUFPS/D are often decoded into the same number of m-ops; however, an m-op obtained from a BLENDPS/D can be scheduled to more than one execution port. This patch: - Moves the check for 'isBlendMask' immediately before the check for 'isSHUFPMask' within method 'LowerVECTOR_SHUFFLE'; - Updates existing tests for sse/avx shuffle/blend instructions to verify that we select (v)blendps/d when possible (instead of (v)shufps/d or vperm2f128). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211720 91177308-0d34-0410-b5e6-96231b3b80d8
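As a rough illustration (my own example, not from the patch), a shuffle in which every result lane keeps its position and merely chooses between the two sources satisfies isBlendMask and can now be lowered to (v)blendps instead of (v)shufps:
  define <4 x float> @blend_mask(<4 x float> %a, <4 x float> %b) {
    ; Lanes 0 and 3 come from %a, lanes 1 and 2 from %b, all staying in place.
    %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>
    ret <4 x float> %r
  }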
* [x86] Add intrinsics for the pshufd, pshuflw, and pshufhw instructions.Chandler Carruth2014-06-25
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211694 91177308-0d34-0410-b5e6-96231b3b80d8
* Re-apply r211399, "Generate native unwind info on Win64" with a fix to ignore SEH pseudo ops in X86 JIT emitter.NAKAMURA Takumi2014-06-25
| | | | | | | | | | | | | | | | | | | | | | | -- This patch enables LLVM to emit Win64-native unwind info rather than DWARF CFI. It handles all corner cases (I hope), including stack realignment. Because the unwind info is not flexible enough to describe stack frames with a gap of unknown size in the middle, such as the one caused by stack realignment, I modified register spilling code to place all spills into the fixed frame slots, so that they can be accessed relative to the frame pointer. Patch by Vadim Chugunov! Reviewed By: rnk Differential Revision: http://reviews.llvm.org/D4081 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211691 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Add target combine rule to select ADDSUB instructions from a build_vectorAndrea Di Biagio2014-06-25
| | | | | | | | | | | | | | | | | | | This patch teaches the backend how to combine a build_vector that implements an 'addsub' between packed float vectors into a sequence of vector add and vector sub followed by a VSELECT. The new VSELECT is expected to be lowered into a BLENDI. At ISel stage, the sequence 'vector add + vector sub + BLENDI' is pattern-matched against ISel patterns added at r211427 to select 'addsub' instructions. Added three more ISel patterns for ADDSUB. Added test sse3-avx-addsub-2.ll to verify that we correctly emit 'addsub' instructions. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211679 91177308-0d34-0410-b5e6-96231b3b80d8
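A minimal sketch of the build_vector idiom the new rule recognizes (an illustrative v2f64 example written in the spirit of sse3-avx-addsub-2.ll, not copied from it):
  define <2 x double> @addsub_from_build_vector(<2 x double> %A, <2 x double> %B) {
    %a0 = extractelement <2 x double> %A, i32 0
    %b0 = extractelement <2 x double> %B, i32 0
    %sub = fsub double %a0, %b0          ; even lane: subtract
    %a1 = extractelement <2 x double> %A, i32 1
    %b1 = extractelement <2 x double> %B, i32 1
    %add = fadd double %a1, %b1          ; odd lane: add
    %v0 = insertelement <2 x double> undef, double %sub, i32 0
    %v1 = insertelement <2 x double> %v0, double %add, i32 1
    ret <2 x double> %v1                 ; can now be selected as a single addsubpd
  }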
* Fix vpblend intrinsics being combined as shift intrinsics due to a missing return stmt between them.Robert Khasanov2014-06-24
| | | | | | | | | | | Fixes PR20088. Differential Revision: http://reviews.llvm.org/D4277 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211617 91177308-0d34-0410-b5e6-96231b3b80d8
* Revert r211399, "Generate native unwind info on Win64"NAKAMURA Takumi2014-06-22
| | | | | | It broke Legacy JIT Tests on x86_64-{mingw32|msvc}, aka Windows x64. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211480 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix PR20087 by using the source index when changing the vector loadFilipe Cabecinhas2014-06-22
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211472 91177308-0d34-0410-b5e6-96231b3b80d8
* Generate native unwind info on Win64Reid Kleckner2014-06-20
| | | | | | | | | | | | | | | | | | | | This patch enables LLVM to emit Win64-native unwind info rather than DWARF CFI. It handles all corner cases (I hope), including stack realignment. Because the unwind info is not flexible enough to describe stack frames with a gap of unknown size in the middle, such as the one caused by stack realignment, I modified register spilling code to place all spills into the fixed frame slots, so that they can be accessed relative to the frame pointer. Patch by Vadim Chugunov! Reviewed By: rnk Differential Revision: http://reviews.llvm.org/D4081 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211399 91177308-0d34-0410-b5e6-96231b3b80d8
* [x86] Make the x86 PACKSSWB, PACKSSDW, PACKUSWB, and PACKUSDWChandler Carruth2014-06-20
| | | | | | | | | | | | | | | instructions available as synthetic SDNodes PACKSS and PACKUS that will select to the correct instruction variants based on the return type. This allows us to use these rather important instructions when lowering vector shuffles. Also moves the relevant instruction definitions to be split out from the fully generic multiclasses to allow them to match these new SDNodes in the same way that the UNPCK instructions do. No functionality should actually be changed here. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211332 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Teach how to combine horizontal binop even in the presence of undefs.Andrea Di Biagio2014-06-19
| | | | | | | | | | | | | | | Before this change, the backend was unable to fold a build_vector dag node with UNDEF operands into a single horizontal add/sub. This patch teaches the backend how to combine a build_vector with UNDEF operands into a horizontal add/sub when possible. The algorithm conservatively avoids combining a build_vector with only a single non-UNDEF operand. Added test haddsub-undef.ll to verify that we correctly fold horizontal binops even in the presence of UNDEFs. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211265 91177308-0d34-0410-b5e6-96231b3b80d8
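A sketch of a partially-defined horizontal add that can now be folded (an illustrative example in the spirit of haddsub-undef.ll, not copied from it):
  define <4 x float> @hadd_with_undef_lanes(<4 x float> %a, <4 x float> %b) {
    %a0 = extractelement <4 x float> %a, i32 0
    %a1 = extractelement <4 x float> %a, i32 1
    %s0 = fadd float %a0, %a1            ; lane 0 of haddps %a, %b
    %b2 = extractelement <4 x float> %b, i32 2
    %b3 = extractelement <4 x float> %b, i32 3
    %s3 = fadd float %b2, %b3            ; lane 3 of haddps %a, %b
    ; Lanes 1 and 2 stay undef; a single haddps still covers this build_vector.
    %v0 = insertelement <4 x float> undef, float %s0, i32 0
    %v1 = insertelement <4 x float> %v0, float %s3, i32 3
    ret <4 x float> %v1
  }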
* Hook up vector int_ctlz for AVX512.Cameron McInally2014-06-16
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@211024 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: lower ATOMIC_CMP_SWAP_WITH_SUCCESS directlyTim Northover2014-06-13
| | | | | | | | | | | Lowering this new node allows us to fold the almost universal comparison for success before it's even formed. Instead we can create a copy from EFLAGS and an X86ISD::SETCC operation since all "cmpxchg" instructions set the zero-flag to the correct value. rdar://problem/13201607 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210923 91177308-0d34-0410-b5e6-96231b3b80d8
* IR: add "cmpxchg weak" variant to support permitted failure.Tim Northover2014-06-13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit adds a weak variant of the cmpxchg operation, as described in C++11. A cmpxchg instruction with this modifier is permitted to fail to store, even if the comparison indicated it should. As a result, cmpxchg instructions must return a flag indicating success in addition to their original iN value loaded. Thus, for uniformity *all* cmpxchg instructions now return "{ iN, i1 }". The second flag is 1 when the store succeeded. At the DAG level, a new ATOMIC_CMP_SWAP_WITH_SUCCESS node has been added as the natural representation for the new cmpxchg instructions. It is a strong cmpxchg. By default this gets Expanded to the existing ATOMIC_CMP_SWAP during Legalization, so existing backends should see no change in behaviour. If they wish to deal with the enhanced node instead, they can call setOperationAction on it. Beware: as a node with 2 results, it cannot be selected from TableGen. Currently, no use is made of the extra information provided in this patch. Test updates are almost entirely adapting the input IR to the new scheme. Summary for out of tree users: ------------------------------ + Legacy Bitcode files are upgraded during read. + Legacy assembly IR files will be invalid. + Front-ends must adapt to different type for "cmpxchg". + Backends should be unaffected by default. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210903 91177308-0d34-0410-b5e6-96231b3b80d8
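A minimal example of the new IR form (using the typed-pointer syntax of the time):
  define i1 @try_cas(i32* %p, i32 %old, i32 %new) {
    ; cmpxchg now returns { iN, i1 }; the 'weak' form is allowed to fail to
    ; store even when the comparison succeeds, so callers must check the flag.
    %pair = cmpxchg weak i32* %p, i32 %old, i32 %new seq_cst seq_cst
    %success = extractvalue { i32, i1 } %pair, 1
    ret i1 %success
  }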
* [X86] Teach how to dump the name of target node RDTSCP_DAG.Andrea Di Biagio2014-06-12
| | | | | | | | | | | When I originally added node RDTSCP_DAG (r207127) I forgot to add a string name for it in method 'getTargetNodeName'. No functional change intended. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210769 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Teach how to combine AVX and AVX2 horizontal binop on packed 256-bit vectors.Andrea Di Biagio2014-06-12
| | | | | | | | | | | | | This patch adds target combine rules to match: - [AVX] Horizontal add/sub of packed single/double precision floating point values from 256-bit vectors; - [AVX2] Horizontal add/sub of packed integer values from 256-bit vectors. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210761 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: add stringy name for X86ISD::LCMPXCHG16_DAGTim Northover2014-06-11
| | | | | | | I don't know what "target specific node #383" is, and I don't want to have to. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210663 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Refactor the logic to select horizontal adds/subs to a helper function.Andrea Di Biagio2014-06-11
| | | | | | | | | | | | | | | | | This patch moves part of the logic implemented by the target specific combine rules added at r210477 to a separate helper function. This should make it easier to add more rules for matching AVX/AVX2 horizontal adds/subs. This patch also fixes a problem caused by a wrong check on the indices of the extract_vector_elt dag nodes feeding the scalar adds/subs. New tests have been added to verify that we correctly check indices of extract_vector_elt dag nodes when selecting a horizontal operation. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210644 91177308-0d34-0410-b5e6-96231b3b80d8
* Use the TargetMachine on the DAG or the MachineFunction insteadEric Christopher2014-06-10
| | | | | | of using the cached TargetMachine. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210589 91177308-0d34-0410-b5e6-96231b3b80d8
* Add a FIXME.Eric Christopher2014-06-10
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210559 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Improved target combine rules for selecting horizontal add/sub.Andrea Di Biagio2014-06-10
| | | | | | | | | | | | | | | This patch slightly changes the algorithm introduced at revision 210477 to fix a problem where the algorithm was producing incorrect code for the VEX.256 encoded versions of horizontal add/sub. For these cases, we now try to split the two 256-bit vectors into 128-bit chunks before emitting horizontal add/sub dag nodes. Added a new test case into haddsub-2.ll. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210545 91177308-0d34-0410-b5e6-96231b3b80d8
* SelectionDAG: Don't use MVT::Other to determine legality of ISD::SELECT_CCTom Stellard2014-06-10
| | | | | | | | | | | | | The SelectionDAG had a special case for ISD::SELECT_CC, where it would allow targets to specify: setOperationAction(ISD::SELECT_CC, MVT::Other, Expand); to indicate that they wanted to expand ISD::SELECT_CC for all types. This wasn't applied correctly everywhere, and it makes writing new DAG patterns with ISD::SELECT_CC difficult. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210541 91177308-0d34-0410-b5e6-96231b3b80d8
* Revert "X86: elide comparisons after cmpxchg instructions."Tim Northover2014-06-10
| | | | | | | This reverts commit r210523. It was committed prematurely without waiting for review. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210524 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: elide comparisons after cmpxchg instructions.Tim Northover2014-06-10
| | | | | | | | | | | | | | | The C++ and C semantics of the compare_and_swap operations actually require us to return a boolean "success" value. In LLVM terms this means a second comparison of the output of "cmpxchg" against the input desired value. However, x86's "cmpxchg" instruction sets all flags for the comparison formed, so we can skip any secondary comparison. (N.b. this isn't true for cmpxchg8b/16b, which only set ZF). rdar://problem/13201607 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210523 91177308-0d34-0410-b5e6-96231b3b80d8
* Move all of the x86 subtarget initialized variables down into the x86 subtargetEric Christopher2014-06-09
| | | | | | from the x86 target machine. Should be no functional change. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210479 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Add target combine rules for horizontal add/sub.Andrea Di Biagio2014-06-09
| | | | | | | | | | | | | | | | | | | | | | This patch adds new target specific combine rules to identify horizontal add/sub idioms from BUILD_VECTOR dag nodes. This patch also teaches the DAGCombiner how to canonicalize sequences of insert_vector_elt dag nodes according to the following rule: (insert_vector_elt (insert_vector_elt A, I0), I1) -> (insert_vector_elt (insert_vector_elt A, I1), I0) This new canonicalization rule only triggers if the inner insert_vector_elt dag node has exactly one use; also, both indices must be known constants, and I1 < I0. This last rule made it possible to write a simpler algorithm to identify horizontal add/sub patterns because now we don't have to worry about the ordering of insert_vector_elt dag nodes. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210477 91177308-0d34-0410-b5e6-96231b3b80d8
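To make the combined idiom concrete, here is an IR sketch (my own example, not from the patch) of a v4f32 horizontal add built lane by lane; the insertelement chain is deliberately out of lane order, the sort of thing the new canonicalization rule is meant to straighten out before the horizontal-add matching runs:
  define <4 x float> @hadd_from_build_vector(<4 x float> %a, <4 x float> %b) {
    %a0 = extractelement <4 x float> %a, i32 0
    %a1 = extractelement <4 x float> %a, i32 1
    %a2 = extractelement <4 x float> %a, i32 2
    %a3 = extractelement <4 x float> %a, i32 3
    %b0 = extractelement <4 x float> %b, i32 0
    %b1 = extractelement <4 x float> %b, i32 1
    %b2 = extractelement <4 x float> %b, i32 2
    %b3 = extractelement <4 x float> %b, i32 3
    %s0 = fadd float %a0, %a1
    %s1 = fadd float %a2, %a3
    %s2 = fadd float %b0, %b1
    %s3 = fadd float %b2, %b3
    ; haddps %a, %b computes <a0+a1, a2+a3, b0+b1, b2+b3>.
    ; Note the inserts below are not in ascending lane order.
    %v0 = insertelement <4 x float> undef, float %s2, i32 2
    %v1 = insertelement <4 x float> %v0, float %s0, i32 0
    %v2 = insertelement <4 x float> %v1, float %s3, i32 3
    %v3 = insertelement <4 x float> %v2, float %s1, i32 1
    ret <4 x float> %v3
  }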
* [X86] Avoid emitting unnecessary test instructions.Andrea Di Biagio2014-06-09
| | | | | | | | | | | | | | | This patch teaches the backend how to check for the 'NoSignedWrap' flag on binary operations to improve the emission of 'test' instructions. If the result of a binary operation is known not to overflow we know that resetting the Overflow flag is unnecessary and so we can avoid emitting the test instruction. Patch by Marcello Maggioni. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210468 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Use ADD/SUB instead of INC/DEC for SilvermontAlexey Volkov2014-06-09
| | | | | | | | | | | | | | According to the Intel Software Optimization Manual, on Silvermont INC or DEC instructions require an additional uop to merge the flags. As a result, a branch instruction depending on an INC or a DEC instruction incurs a 1 cycle penalty. Differential Revision: http://reviews.llvm.org/D3990 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210466 91177308-0d34-0410-b5e6-96231b3b80d8
* [C++11] Use 'nullptr'.Craig Topper2014-06-08
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210442 91177308-0d34-0410-b5e6-96231b3b80d8
* X86: Don't turn shifts into ands if there's another use that may not check for equality.Benjamin Kramer2014-06-06
| | | | | | | | Fixes PR19964. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210371 91177308-0d34-0410-b5e6-96231b3b80d8
* Fixed a bug in lowering shuffle_vectors to insertpsFilipe Cabecinhas2014-06-06
| | | | | | | | | | | | | | Summary: We were being too strict and not accounting for undefs. Added a test case and fixed another one where we improved codegen. Reviewers: grosbach, nadav, delena Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D4039 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210361 91177308-0d34-0410-b5e6-96231b3b80d8
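A rough sketch (my own example, not the added test) of a shuffle with an undef mask element that should still match the insertps lowering:
  define <4 x float> @insertps_with_undef(<4 x float> %a, <4 x float> %b) {
    ; Insert element 0 of %b into lane 2 of %a; lane 1 is undef and should be
    ; treated as a don't-care rather than defeating the insertps match.
    %r = shufflevector <4 x float> %a, <4 x float> %b, <4 x i32> <i32 0, i32 undef, i32 4, i32 3>
    ret <4 x float> %r
  }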
* [X86] Fix checked arithmetic for i8 on X86.Andrea Di Biagio2014-06-02
| | | | | | | | | | | | | When lowering a ISD::BRCOND into a test+branch, make sure that we always use the correct condition code to emit the test operation. This fixes PR19858: "i8 checked mul is wrong on x86". Patch by Keno Fisher! git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@210032 91177308-0d34-0410-b5e6-96231b3b80d8
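An illustrative reduction of the kind of i8 checked-multiply pattern involved (my own sketch, not the PR19858 reproducer):
  declare { i8, i1 } @llvm.smul.with.overflow.i8(i8, i8)
  declare void @llvm.trap()

  define i8 @checked_mul_i8(i8 %a, i8 %b) {
  entry:
    %res = call { i8, i1 } @llvm.smul.with.overflow.i8(i8 %a, i8 %b)
    %ov  = extractvalue { i8, i1 } %res, 1
    ; The brcond on %ov must be lowered with a condition code that matches the
    ; test emitted for the i8 multiply's overflow result.
    br i1 %ov, label %overflow, label %ok
  ok:
    %val = extractvalue { i8, i1 } %res, 0
    ret i8 %val
  overflow:
    call void @llvm.trap()
    unreachable
  }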
* Have the TLOF creation take a Triple rather than needing a subtarget.Eric Christopher2014-05-31
| | | | git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209937 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Add two combine rules to simplify dag nodes introduced during type legalization when promoting nodes with illegal vector type.Andrea Di Biagio2014-05-30
| | | | | | | | | | | | | | | | | | | | | | | | | This patch teaches the backend how to simplify/canonicalize dag node sequences normally introduced by the backend when promoting certain dag nodes with illegal vector type. This patch adds two new combine rules: 1) fold (shuffle (bitcast (BINOP A, B)), Undef, <Mask>) -> (shuffle (BINOP (bitcast A), (bitcast B)), Undef, <Mask>) 2) fold (BINOP (shuffle (A, Undef, <Mask>)), (shuffle (B, Undef, <Mask>))) -> (shuffle (BINOP A, B), Undef, <Mask>). Both rules are only triggered on the type-legalized DAG. In particular, rule 1. is a target specific combine rule that attempts to sink a bitconvert into the operands of a binary operation. Rule 2. is a target independent rule that attempts to move a shuffle immediately after a binary operation. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209930 91177308-0d34-0410-b5e6-96231b3b80d8
* Separate the check for blend shuffle_vector masksFilipe Cabecinhas2014-05-30
| | | | | | | | | | | | | | | | | Summary: Separate the check for blend shuffle_vector masks into isBlendMask. This function will also be used to check if a vector shuffle is legal. No change in functionality was intended, but we ended up improving codegen on two tests, which were being (more) optimized only if the resulting shuffle was legal. Reviewers: nadav, delena, andreadb Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D3964 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209923 91177308-0d34-0410-b5e6-96231b3b80d8
* Delete dead code.Rafael Espindola2014-05-23
| | | | | | GV is never used past this point. This was probably a copy and paste error. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209518 91177308-0d34-0410-b5e6-96231b3b80d8
* [X86] Improve the lowering of BITCAST from MVT::f64 to MVT::v4i16/MVT::v8i8.Andrea Di Biagio2014-05-22
| | | | | | | | | | | | | | | This patch teaches the x86 backend how to efficiently lower ISD::BITCAST dag nodes from MVT::f64 to MVT::v4i16 (and vice versa), and from MVT::f64 to MVT::v8i8 (and vice versa). This patch extends the logic from revision 208107 to also handle MVT::v4i16 and MVT::v8i8. Also, this patch correctly propagates Undef values when performing the widening of a vector (example: when widening from v2i32 to v4i32, the upper 64bits of the resulting vector are 'undef'). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209451 91177308-0d34-0410-b5e6-96231b3b80d8
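Two minimal examples (my own, not from the added tests) of the conversions now handled:
  define <4 x i16> @f64_to_v4i16(double %d) {
    %r = bitcast double %d to <4 x i16>
    ret <4 x i16> %r
  }

  define double @v8i8_to_f64(<8 x i8> %v) {
    %r = bitcast <8 x i8> %v to double
    ret double %r
  }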
* [X86] Fix a bug in the lowering of BLENDI introduced in r209043.Quentin Colombet2014-05-21
| | | | | | | | | | | | | | | | | | | | ISD::VSELECT mask uses 1 to identify the first argument and 0 to identify the second argument. On the other hand, BLENDI uses 0 to identify the first argument and 1 to identify the second argument. Fix the generation of the blend mask to account for this difference. The bug did not show up with r209043, because we were not checking for the actual arguments of the blend instruction! This commit also fixes the test cases. Note: The same mask works for the BLENDr variant because the arguments are swapped during instruction selection (see the BLENDXXrr patterns). <rdar://problem/16975435> git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209324 91177308-0d34-0410-b5e6-96231b3b80d8
* Add parentheses to suppress the gcc warning '-Wparentheses'.Simon Atanasyan2014-05-20
| | | | | | No functional changes. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209203 91177308-0d34-0410-b5e6-96231b3b80d8
* Added more insertps optimizationsFilipe Cabecinhas2014-05-19
| | | | | | | | | | | | | | | | | | | | Summary: When inserting an element that's coming from a vector load or a broadcast of a vector (or scalar) load, combine the load into the insertps instruction. Added PerformINSERTPSCombine for the case where we need to fix the load (load of a vector + insertps with a non-zero CountS). Added patterns for the broadcasts. Also added tests for SSE4.1, AVX, and AVX2. Reviewers: delena, nadav, craig.topper Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D3581 git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@209156 91177308-0d34-0410-b5e6-96231b3b80d8
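A rough sketch (my own example, in the typed-pointer IR syntax of the time) of an insert fed by a vector load that can now fold the load into insertps's memory operand:
  define <4 x float> @insertps_from_load(<4 x float> %a, <4 x float>* %p) {
    %v = load <4 x float>* %p
    %e = extractelement <4 x float> %v, i32 0
    ; With this change the load can be folded into the insertps instruction
    ; instead of first being materialized in a register.
    %r = insertelement <4 x float> %a, float %e, i32 2
    ret <4 x float> %r
  }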