path: root/test/CodeGen/X86/avx-sext.ll
* X86: Custom lower sext v16i8 to v16i16, and the corresponding truncate. (Benjamin Kramer, 2013-10-23)

  Also update the cost model.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193270 91177308-0d34-0410-b5e6-96231b3b80d8
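
  A minimal sketch (function names are hypothetical, not taken from the test
  file) of the IR shapes this lowering targets; such functions can be fed to
  llc with an AVX-enabled CPU to inspect the generated code:

    ; sign-extend each byte lane to 16 bits
    define <16 x i16> @sext_16i8_to_16i16(<16 x i8> %a) {
      %r = sext <16 x i8> %a to <16 x i16>
      ret <16 x i16> %r
    }

    ; the corresponding truncate back to byte lanes
    define <16 x i8> @trunc_16i16_to_16i8(<16 x i16> %a) {
      %r = trunc <16 x i16> %a to <16 x i8>
      ret <16 x i8> %r
    }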
* Cleanup: test source files do not need to be executable (Arnaud A. de Grandmaison, 2013-04-22)

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180003 91177308-0d34-0410-b5e6-96231b3b80d8
* Optimize sext <4 x i8> and <4 x i16> to <4 x i64>. (Nadav Rotem, 2013-03-19)

  Patch by Ahmad, Muhammad T <muhammad.t.ahmad@intel.com>

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@177421 91177308-0d34-0410-b5e6-96231b3b80d8
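
  A minimal sketch (hypothetical function name) of one of the extensions this
  change targets:

    ; sign-extend four byte lanes all the way to 64 bits
    define <4 x i64> @sext_4i8_to_4i64(<4 x i8> %a) {
      %r = sext <4 x i8> %a to <4 x i64>
      ret <4 x i64> %r
    }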
* I optimized the following patterns: (Elena Demikhovsky, 2013-02-20)

    sext <4 x i1>  to <4 x i64>
    sext <4 x i8>  to <4 x i64>
    sext <4 x i16> to <4 x i64>

  I run a combine on SIGN_EXTEND_IN_REG and rewrite the SEXT pattern:

    (sext_in_reg (v4i64 anyext (v4i32 x)), ExtraVT)
      -> (v4i64 sext (v4i32 sext_in_reg (v4i32 x, ExtraVT)))

  The sext_in_reg (v4i32 x) may be lowered to shl+sar operations, whereas
  there is no vector "sar" for 64-bit elements, so sext_in_reg (v4i64 x)
  has no vector lowering. I also added the cost of these operations to the
  AVX cost table.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175619 91177308-0d34-0410-b5e6-96231b3b80d8
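
  The shl+sar lowering mentioned above can be written directly in IR; a
  minimal sketch (hypothetical function name, 8-bit source width chosen for
  illustration):

    ; sign-extend the low 8 bits of each 32-bit lane in-register:
    ; shift the byte up to the top of the lane, then arithmetic-shift it back
    define <4 x i32> @sext_in_reg_i8_in_v4i32(<4 x i32> %x) {
      %shl = shl  <4 x i32> %x,   <i32 24, i32 24, i32 24, i32 24>
      %sra = ashr <4 x i32> %shl, <i32 24, i32 24, i32 24, i32 24>
      ret <4 x i32> %sra
    }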
* Revert 172708. (Nadav Rotem, 2013-01-20)

  The optimization handles esoteric cases but adds a lot of complexity both
  to the X86 backend and to other backends. It disables an important
  canonicalization of chains of SEXT nodes and makes SEXT and ZEXT
  asymmetrical. Disabling the canonicalization of consecutive SEXT nodes
  into a single node disables other DAG optimizations that assume there is
  only one SEXT node; the AVX mask optimization is one example. Additionally,
  this optimization does not update the cost model.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172968 91177308-0d34-0410-b5e6-96231b3b80d8
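
  The canonicalization referred to is the folding of consecutive sign
  extensions into a single node; a minimal sketch (hypothetical function
  name):

    define i32 @double_sext(i8 %x) {
      %a = sext i8 %x to i16
      %b = sext i16 %a to i32   ; canonically folded into a single sext i8 -> i32
      ret i32 %b
    }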
* On Sandy Bridge, split unaligned 256-bit stores into two xmm-sized stores. (Nadav Rotem, 2013-01-19)

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172894 91177308-0d34-0410-b5e6-96231b3b80d8
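
  A sketch of the kind of store affected (hypothetical function name,
  typed-pointer .ll syntax matching the era of these commits):

    ; an unaligned 256-bit store; on Sandy Bridge this is now emitted as two
    ; 128-bit (xmm) stores instead of one 256-bit store
    define void @store_unaligned_256(<8 x float> %v, <8 x float>* %p) {
      store <8 x float> %v, <8 x float>* %p, align 1
      ret void
    }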
* Optimization for the following SIGN_EXTEND pairs: (Elena Demikhovsky, 2013-01-17)

    v8i8  -> v8i64
    v8i8  -> v8i32
    v4i8  -> v4i64
    v4i16 -> v4i64

  for AVX and AVX2. Bug 14865.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@172708 91177308-0d34-0410-b5e6-96231b3b80d8
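
  A sketch of one of the listed pairs (hypothetical function name):

    define <8 x i32> @sext_8i8_to_8i32(<8 x i8> %a) {
      %r = sext <8 x i8> %a to <8 x i32>
      ret <8 x i32> %r
    }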
* X86: Emit vector sext as shuffle + sra if vpmovsx is not available. (Benjamin Kramer, 2012-12-22)

  Also loosen the SSSE3 dependency a bit; an expanded pshufb + psra is still
  better than scalarized loads. Fixes PR14590.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170984 91177308-0d34-0410-b5e6-96231b3b80d8
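
  A rough IR-level illustration of the shuffle + sra idea (hypothetical
  function name; assumes x86's little-endian lane layout for the bitcast, and
  uses a 16-to-32-bit widening purely as an example):

    ; widen the four low i16 elements to i32 without pmovsx: duplicate each
    ; element so a copy lands in the high half of a 32-bit lane, then
    ; arithmetic-shift right by 16 to sign-extend
    define <4 x i32> @sext_4i16_via_shuffle_sra(<8 x i16> %v) {
      %shuf = shufflevector <8 x i16> %v, <8 x i16> %v,
              <8 x i32> <i32 0, i32 0, i32 1, i32 1, i32 2, i32 2, i32 3, i32 3>
      %wide = bitcast <8 x i16> %shuf to <4 x i32>
      %sext = ashr <4 x i32> %wide, <i32 16, i32 16, i32 16, i32 16>
      ret <4 x i32> %sext
    }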
* Optimized load + SIGN_EXTEND patterns in the X86 backend. (Elena Demikhovsky, 2012-12-19)

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@170506 91177308-0d34-0410-b5e6-96231b3b80d8
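
  A sketch of the load + sign-extend shape this targets (hypothetical
  function name; load syntax as in LLVM IR of this era):

    ; a sign-extending vector load: the load and the sext can fold into a
    ; single sign-extending load instruction on targets that have one
    define <4 x i32> @sextload_4i8_to_4i32(<4 x i8>* %p) {
      %ld = load <4 x i8>* %p, align 4
      %r  = sext <4 x i8> %ld to <4 x i32>
      ret <4 x i32> %r
    }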
* Optimization for SIGN_EXTEND operation on AVX. (Elena Demikhovsky, 2012-02-02)

  Special handling was added for v4i32 -> v4i64 and v8i16 -> v8i32 extensions.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@149600 91177308-0d34-0410-b5e6-96231b3b80d8
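
  A sketch of the extensions given special handling (hypothetical function
  names):

    define <4 x i64> @sext_4i32_to_4i64(<4 x i32> %a) {
      %r = sext <4 x i32> %a to <4 x i64>
      ret <4 x i64> %r
    }

    define <8 x i32> @sext_8i16_to_8i32(<8 x i16> %a) {
      %r = sext <8 x i16> %a to <8 x i32>
      ret <8 x i32> %r
    }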