path: root/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
...
* Add Constant Hoisting Pass (Juergen Ributzka, 2014-01-24)
  This pass identifies expensive constants to hoist and coalesces them to better
  prepare the function for SelectionDAG-based code generation. This works around
  the limitations of the basic-block-at-a-time approach.

  First it scans all instructions for integer constants and calculates their
  cost. If a constant can be folded into the instruction (the cost is TCC_Free)
  or the cost is just a simple operation (TCC_Basic), then we don't consider it
  expensive and leave it alone. This is the default behavior, and the default
  implementation of getIntImmCost will always return TCC_Free.

  If the cost is more than TCC_Basic, then the integer constant can't be folded
  into the instruction and it might be beneficial to hoist the constant. Similar
  constants are coalesced to reduce register pressure and materialization code.

  When a constant is hoisted, it is also hidden behind a bitcast to force it to
  be live-out of the basic block. Otherwise the constant would just be
  duplicated and each basic block would have its own copy in the SelectionDAG.
  The SelectionDAG recognizes such constants as opaque and doesn't perform
  certain transformations on them that would create a new expensive constant.

  This optimization is only applied to integer constants in instructions and to
  simple (that is, not nested) constant cast expressions. For example:

    %0 = load i64* inttoptr (i64 big_constant to i64*)

  Reviewed by Eric

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@200022 91177308-0d34-0410-b5e6-96231b3b80d8

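  A minimal IR sketch (2013-era syntax) of the transformation described above;
  the function names and the "expensive" constant value are placeholders chosen
  purely for illustration:

    ; Before: the expensive constant appears in several users and would be
    ; materialized separately for each during SelectionDAG construction.
    define void @example(i64* %p, i64* %q) {
    entry:
      store i64 2945024795283255274, i64* %p
      store i64 2945024795283255274, i64* %q
      ret void
    }

    ; After hoisting (conceptually): the constant is materialized once and
    ; hidden behind a bitcast so SelectionDAG treats it as opaque.
    define void @example.hoisted(i64* %p, i64* %q) {
    entry:
      %const = bitcast i64 2945024795283255274 to i64
      store i64 %const, i64* %p
      store i64 %const, i64* %q
      ret void
    }
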
* Fix non-deterministic SDNodeOrder-dependent codegen (Nico Rieck, 2014-01-12)
  Reset SelectionDAGBuilder's SDNodeOrder to ensure deterministic code generation.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@199050 91177308-0d34-0410-b5e6-96231b3b80d8

* [stackprotector] Use analysis from the StackProtector pass for stack layout in the PEI and LocalStackSlot passes (Josh Magee, 2013-12-19)
  This changes the MachineFrameInfo API to use the new SSPLayoutKind information
  produced by the StackProtector pass (instead of a boolean flag) and updates a
  few pass dependencies (to preserve the SSP analysis).

  The stack layout follows the same approach used prior to this change - i.e.,
  only LargeArray stack objects will be placed near the canary and everything
  else will be laid out normally. After this change, structures containing large
  arrays will also be placed near the canary - a case previously missed by the
  old implementation.

  Out-of-tree targets will need to update their usage of
  MachineFrameInfo::CreateStackObject to remove the MayNeedSP argument.

  The next patch will implement the rules for sspstrong and sspreq. The end goal
  is to support ssp-strong stack layout rules. WIP.

  Differential Revision: http://llvm-reviews.chandlerc.com/D2158

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197653 91177308-0d34-0410-b5e6-96231b3b80d8

* [Stackmap] Liveness Analysis Pass (Juergen Ributzka, 2013-12-14)
  This optional register liveness analysis pass can be enabled with either
  -enable-stackmap-liveness, -enable-patchpoint-liveness, or both. The pass
  traverses each basic block in a machine function. For each basic block the
  instructions are processed in reverse order, and if a patchpoint or stackmap
  instruction is encountered the current live-out register set is encoded as a
  register mask and attached to the instruction.

  Later on, during stackmap generation, the live-out register mask is processed
  and also emitted as part of the stackmap. This information is optional and
  intended for optimization purposes only. It enables a client of the stackmap
  to reason about which registers it can use and which registers need to be
  preserved.

  Reviewed by Andy

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197317 91177308-0d34-0410-b5e6-96231b3b80d8

* Revert "Liveness Analysis Pass"Andrew Trick2013-12-13
| | | | | | | | | | | | | | This reverts commit r197254. This was an accidental merge of Juergen's patch. It will be checked in shortly, but wasn't meant to go in quite yet. Conflicts: include/llvm/CodeGen/StackMaps.h lib/CodeGen/StackMaps.cpp test/CodeGen/X86/stackmap-liveness.ll git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197260 91177308-0d34-0410-b5e6-96231b3b80d8
* Grow the stackmap/patchpoint format to hold 64-bit IDs (Andrew Trick, 2013-12-13)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197255 91177308-0d34-0410-b5e6-96231b3b80d8

* Liveness Analysis Pass (Andrew Trick, 2013-12-13)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197254 91177308-0d34-0410-b5e6-96231b3b80d8

* SelectionDAG: Fix a typo (Benjamin Kramer, 2013-12-11)
  Found by "cppcheck". PR18208.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197047 91177308-0d34-0410-b5e6-96231b3b80d8

* Reland "Fix miscompile of MS inline assembly with stack realignment"Reid Kleckner2013-12-10
| | | | | | | | | | | This re-lands commit r196876, which was reverted in r196879. The tests have been fixed to pass on platforms with a stack alignment larger than 4. Update to clang side tests will land shortly. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196939 91177308-0d34-0410-b5e6-96231b3b80d8
* Add TargetLowering::prepareVolatileOrAtomicLoad (Richard Sandiford, 2013-12-10)
  One unusual feature of the z architecture is that the result of a previous
  load can be reused indefinitely for subsequent loads, even if a cache-coherent
  store to that location is performed by another CPU. A special serializing
  instruction must be used if you want to force a load to be reattempted.

  Since volatile loads are not supposed to be omitted in this way, we should
  insert a serializing instruction before each such load. The same goes for
  atomic loads.

  The patch implements this at the IR->DAG boundary, in a similar way to atomic
  fences. It is a no-op for targets other than SystemZ.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196905 91177308-0d34-0410-b5e6-96231b3b80d8

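  A small IR example (2013-era syntax) of the loads affected by the hook
  described above; the function and value names are placeholders. On SystemZ a
  serializing instruction is inserted before each of these loads, while other
  targets lower them unchanged:

    define i32 @reload(i32* %p, i32* %q) {
    entry:
      %v = load volatile i32* %p
      %a = load atomic i32* %q seq_cst, align 4
      %r = add i32 %v, %a
      ret i32 %r
    }
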
* Revert "Fix miscompile of MS inline assembly with stack realignment"Reid Kleckner2013-12-10
| | | | | | | This reverts commit r196876. Its tests failed on the bots, so I'll figure it out tomorrow. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196879 91177308-0d34-0410-b5e6-96231b3b80d8
* Fix miscompile of MS inline assembly with stack realignment (Reid Kleckner, 2013-12-10)
  For stack frames requiring realignment, three pointers may be needed:
  - ebp to address incoming arguments
  - esi (could be any callee-saved register) to address locals
  - esp to address outgoing arguments

  We would use esi unconditionally without verifying that it did not conflict
  with inline assembly. This change doesn't do that verification; it simply
  emits a fatal error on functions that use stack realignment, dynamic SP
  adjustments, and inline assembly.

  Because stack realignment is common on Windows, we also no longer assume that
  MS inline assembly clobbers esp. Instead, we analyze the inline instructions
  for implicit definitions and check whether esp is among them. If so, we
  require the use of a base pointer and consider it in the condition above.

  Mostly fixes PR16830, but we could try harder to find a non-conflicting base
  pointer.

  Reviewers: sunfish

  Differential Revision: http://llvm-reviews.chandlerc.com/D1317

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196876 91177308-0d34-0410-b5e6-96231b3b80d8

* Try harder to get consistent floating point results (Rafael Espindola, 2013-12-05)
  This just extends the existing hack. It should be enough to get a reproducible
  bootstrap on 32 bits. I will open a bug to track getting a real fix for this.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196462 91177308-0d34-0410-b5e6-96231b3b80d8

* StackMap: Implement support for DirectMemRefOp (Andrew Trick, 2013-11-26)
  A Direct stack map location records the address of a frame index. This address
  is itself the value that the runtime requested. This differs from
  IndirectMemRefOp locations, which refer to stack locations from which the
  requested values must be loaded. Direct locations can directly communicate the
  address of an alloca, while IndirectMemRefOp locations handle register spills.

  For example:

    entry:
      %a = alloca i64...
      llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)

  Since both the alloca and the stackmap intrinsic are in the entry block, and
  the intrinsic takes the address of the alloca, the runtime can assume that
  LLVM will not substitute the alloca with any intervening value. This must be
  verified by the runtime by checking that the stack map's location is a Direct
  location type. The runtime can then determine the alloca's relative location
  on the stack immediately after compilation, or at any time thereafter. This
  differs from Register and Indirect locations, because the runtime can only
  read the values in those locations when execution reaches the instruction
  address of the stack map.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195712 91177308-0d34-0410-b5e6-96231b3b80d8

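  A self-contained version of the fragment above (2013-era IR syntax); the id
  and shadow-byte values are placeholders, and the i32 id matches the
  intrinsic's signature at the time of this commit (it was later widened to
  i64, see the r197255 entry above):

    declare void @llvm.experimental.stackmap(i32, i32, ...)

    define void @direct_location() {
    entry:
      %a = alloca i64
      call void (i32, i32, ...)* @llvm.experimental.stackmap(i32 11, i32 8, i64* %a)
      ret void
    }
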
* patchpoint: factor SD builder code for live vars. Plain stackmap also optimizes Constant values now. (Andrew Trick, 2013-11-22)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195488 91177308-0d34-0410-b5e6-96231b3b80d8

* patchpoint: eliminate hard-coded operand indices (Andrew Trick, 2013-11-22)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195487 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix codegen for null pointers of a different size (Matt Arsenault, 2013-11-16)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194932 91177308-0d34-0410-b5e6-96231b3b80d8

* Add addrspacecast instruction (Matt Arsenault, 2013-11-15)
  Patch by Michele Scandale!

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194760 91177308-0d34-0410-b5e6-96231b3b80d8

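  A minimal IR example of the new instruction (2013-era syntax); the address
  space numbers and the function name are arbitrary placeholders:

    define i32 addrspace(1)* @to_other_space(i32 addrspace(3)* %p) {
      %q = addrspacecast i32 addrspace(3)* %p to i32 addrspace(1)*
      ret i32 addrspace(1)* %q
    }
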
* Minor extension to llvm.experimental.patchpoint: don't require a call (Andrew Trick, 2013-11-14)
  If a null call target is provided, don't emit a dummy call. This allows the
  runtime to reserve as little nop space as it needs without the requirement of
  emitting a call.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194676 91177308-0d34-0410-b5e6-96231b3b80d8

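  A sketch of what a call-free patchpoint looks like in IR (2013-era syntax);
  the id and byte count are placeholders, and the i64 id reflects the later
  r197255 widening described above:

    declare void @llvm.experimental.patchpoint.void(i64, i32, i8*, i32, ...)

    define void @reserve_nop_space() {
    entry:
      ; Null target, zero call arguments: just reserve a 16-byte nop slide.
      call void (i64, i32, i8*, i32, ...)* @llvm.experimental.patchpoint.void(i64 7, i32 16, i8* null, i32 0)
      ret void
    }
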
* [Stackmap] Materialize the jump address within the patchpoint noop slide (Juergen Ributzka, 2013-11-09)
  This patch moves the jump address materialization inside the noop slide. This
  enables patching of the materialization itself or its complete removal.

  This patch also adds the ability to define scratch registers that can be used
  safely by the code called from the patchpoint intrinsic. At least one scratch
  register is required, because that one is used for the materialization of the
  jump address. This patch depends on D2009.

  Differential Revision: http://llvm-reviews.chandlerc.com/D2074

  Reviewed by Andy

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194306 91177308-0d34-0410-b5e6-96231b3b80d8

* [Stackmap] Add AnyReg calling convention support for the patchpoint intrinsic (Juergen Ributzka, 2013-11-08)
  The idea of the AnyReg calling convention is to provide the call arguments in
  registers, but not to force them to be placed in a particular order into a
  specified set of registers. Instead it is up to the register allocator to
  assign any register as it sees fit. The same applies to the return value (if
  applicable).

  Differential Revision: http://llvm-reviews.chandlerc.com/D2009

  Reviewed by Andy

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194293 91177308-0d34-0410-b5e6-96231b3b80d8

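  A sketch of a patchpoint call using the anyregcc convention (2013-era
  syntax); the id, byte count, and argument values are placeholders, and the
  i64 id again reflects the later r197255 widening:

    declare i64 @llvm.experimental.patchpoint.i64(i64, i32, i8*, i32, ...)

    define i64 @anyreg_call(i8* %target, i64 %a, i64 %b) {
    entry:
      ; The register allocator may pick any registers for %a, %b and the
      ; result; the chosen assignments are recorded in the stack map.
      %r = call anyregcc i64 (i64, i32, i8*, i32, ...)* @llvm.experimental.patchpoint.i64(i64 3, i32 15, i8* %target, i32 2, i64 %a, i64 %b)
      ret i64 %r
    }
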
* Slightly change the way stackmap and patchpoint intrinsics are lowered (Andrew Trick, 2013-11-05)
  MorphNodeTo is not safe to call during DAG building. It eagerly deletes
  dependent DAG nodes, which invalidates the NodeMap. We could expose a safe
  interface for morphing nodes, but I don't think it's worth it. Just create a
  new MachineNode and replaceAllUsesWith.

  My understanding of the SD design has been that we want to support early
  target opcode selection. That isn't very well supported, but generally works.
  It seems reasonable to rely on this feature even if it isn't widely used.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@194102 91177308-0d34-0410-b5e6-96231b3b80d8

* [Stackmap] Remove erroneous assert (Juergen Ributzka, 2013-11-01)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193871 91177308-0d34-0410-b5e6-96231b3b80d8

* Commenting out this assert because it is causing the build bots to fail (Aaron Ballman, 2013-11-01)
  This effectively reverts r193861, but needs to be fixed as part of r193769.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193862 91177308-0d34-0410-b5e6-96231b3b80d8

* Fixing an order of evaluation error in an assert (Aaron Ballman, 2013-11-01)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193861 91177308-0d34-0410-b5e6-96231b3b80d8

* Add support for stack map generation in the X86 backend (Andrew Trick, 2013-10-31)
  Originally implemented by Lang Hames.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193811 91177308-0d34-0410-b5e6-96231b3b80d8

* Lower stackmap intrinsics directly to their target opcode in the DAG builder (Andrew Trick, 2013-10-31)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193769 91177308-0d34-0410-b5e6-96231b3b80d8

* SelectionDAG: Pass along the original argument/element type in ISD::InputArg (Tom Stellard, 2013-10-23)
  For some targets, it is useful to be able to look at the original type of an
  argument without having to dig through the original IR.

  This also fixes a bug in SelectionDAGBuilder where InputArg.PartOffset was not
  taking into account the offset of structure elements.

  Patch by: Justin Holewinski

  Tom Stellard:
  - Changed the type of ArgVT to EVT, so it can store non-simple types like v3i32.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193214 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix CodeGen for different size address space GEPs (Matt Arsenault, 2013-10-21)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193111 91177308-0d34-0410-b5e6-96231b3b80d8

* Reuse variable (Matt Arsenault, 2013-10-21)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@193107 91177308-0d34-0410-b5e6-96231b3b80d8

* Revert patches to add case-range support for PR1255 (Bob Wilson, 2013-09-09)
  The work on this project was left in an unfinished and inconsistent state.
  Hopefully someone will eventually get a chance to implement this feature, but
  in the meantime, it is better to put things back the way they were. I have
  left support in the bitcode reader to handle the case-range bitcode format, so
  that we do not lose bitcode compatibility with the llvm 3.3 release.

  This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
  156757, 156804, 156808, 156985, 157046, 157112, 157183, 157315, 157384,
  157575, 157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881,
  157882, 157884, 157887, 157901, 158979, 157987, 157989, 158986, 158997,
  159076, 159101, 159100, 159200, 159201, 159207, 159527, 159532, 159540,
  159583, 159618, 159658, 159659, 159660, 159661, 159703, 159704, 160076,
  167356, 172025, 186736

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@190328 91177308-0d34-0410-b5e6-96231b3b80d8

* SelectionDAG: Remove unnecessary uses of TargetLowering::getPointerTy() (Tom Stellard, 2013-08-26)
  If we have a binary operation like ISD::ADD, we can set the result type equal
  to the result type of one of its operands rather than using
  TargetLowering::getPointerTy().

  Also, any use of DAG.getIntPtrConstant(C) as an operand for a binary operation
  can be replaced with:

    DAG.getConstant(C, OtherOperand.getValueType());

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189227 91177308-0d34-0410-b5e6-96231b3b80d8

* SelectionDAG: Use correct pointer size when lowering function arguments (v2) (Tom Stellard, 2013-08-26)
  This adds minimal support to the SelectionDAG for handling address spaces with
  different pointer sizes. The SelectionDAG should now correctly lower pointer
  function arguments to the correct size as well as generate the correct code
  when lowering getelementptr.

  This patch also updates the R600 DataLayout to use 32-bit pointers for the
  local address space.

  v2:
  - Add more helper functions to TargetLoweringBase
  - Use CHECK-LABEL for tests

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189221 91177308-0d34-0410-b5e6-96231b3b80d8

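  A small IR illustration of the kind of code this affects (2013-era syntax);
  the use of address space 3 as R600's 32-bit local space is an assumption made
  for the example:

    define i32 @load_local(i32 addrspace(3)* %in) {
    entry:
      ; Both the pointer argument and the GEP below should now be lowered with
      ; the 32-bit pointer size of the local address space.
      %ptr = getelementptr i32 addrspace(3)* %in, i32 4
      %val = load i32 addrspace(3)* %ptr
      ret i32 %val
    }
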
* [stack protector] Work around an issue with the BMOVPCB_CALL instruction on ARM by disabling "does not return" on __stack_chk_fail (Michael Gottesman, 2013-08-22)
  This is to fix the bots while I look to see if there is something I can do
  here.

  rdar://14811848

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@189076 91177308-0d34-0410-b5e6-96231b3b80d8

* [SystemZ] Use SRST to optimize memchr (Richard Sandiford, 2013-08-20)
  SystemZTargetLowering::emitStringWrapper() previously loaded the character
  into R0 before the loop and made R0 live on entry. I'd forgotten that
  allocatable registers weren't allowed to be live across blocks at this stage,
  and it confused LiveVariables enough to cause a miscompilation of f3 in
  memchr-02.ll.

  This patch instead loads R0 in the loop and leaves LICM to hoist it after RA.
  This is actually what I'd tried originally, but I went for the manual
  optimisation after noticing that R0 often wasn't being hoisted. This bug
  forced me to go back and look at why, now fixed as r188774.

  We should also try to optimize null checks so that they test the CC result of
  the SRST directly. The select between null and the SRST GPR result could then
  usually be deleted as dead.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188779 91177308-0d34-0410-b5e6-96231b3b80d8

* Teach SelectionDAG how to handle the stackprotectorcheck intrinsic (Michael Gottesman, 2013-08-20)
  Previously, generation of stack protectors was done exclusively in the
  pre-SelectionDAG codegen LLVM IR pass "Stack Protector". This necessitated
  splitting basic blocks at the IR level to create the success/failure basic
  blocks in the tail of the basic block in question. As a result, calls that
  would have qualified for the sibling call optimization were no longer eligible
  for optimization, since said calls were no longer right in the "tail position"
  (i.e. the immediate predecessor of a ReturnInst instruction).

  Then it was noticed that since the sibling call optimization causes the callee
  to reuse the caller's stack, if we could delay the generation of the stack
  protector check until later in CodeGen, after the sibling call decision was
  made, we get both the tail call optimization and the stack protector check!

  A few goals in solving this problem were:

  1. Preserve the architecture independence of stack protector generation.
  2. Preserve the normal IR level stack protector check for platforms like
     OpenBSD for which we support platform-specific stack protector generation.

  The main problem that guided the present solution is that one cannot solve
  this problem in an architecture-independent manner at the IR level only. This
  is because:

  1. The decision on whether or not to perform a sibling call on certain
     platforms (for instance i386) requires lower-level information related to
     available registers that cannot be known at the IR level.
  2. Even if the previous point were not true, the decision on whether to
     perform a tail call is done in LowerCallTo in SelectionDAG, which occurs
     after the Stack Protector pass. As a result, one would need to put the
     relevant callinst into the stack protector check success basic block
     (where the return inst is placed) and then move it back later at
     SelectionDAG/MI time before the stack protector check if the tail call
     optimization failed.

  The MI level option was nixed immediately since it would require
  platform-specific pattern matching. The SelectionDAG level option was nixed
  because SelectionDAG only processes one IR level basic block at a time,
  implying one could not create a DAG combine to move the callinst.

  To get around this problem a few things were realized:

  1. While one cannot handle multiple IR level basic blocks at the SelectionDAG
     level, one can generate multiple machine basic blocks for one IR level
     basic block. This is how we handle bit tests and switches.
  2. At the MI level, tail calls are represented via a special return MIInst
     called "tcreturn". Thus if we know the basic block in which we wish to
     insert the stack protector check, we get the correct behavior by always
     inserting the stack protector check right before the return statement.
     This is a "magical transformation" since no matter where the stack
     protector check intrinsic is, we always insert the stack protector check
     code at the end of the BB.

  Given the aforementioned constraints, the following solution was devised:

  1. On platforms that do not support SelectionDAG stack protector check
     generation, allow the normal IR level stack protector check generation to
     continue.
  2. On platforms that do support SelectionDAG stack protector check generation:
     a. Use the IR level stack protector pass to decide whether a stack
        protector is required and which BB we insert the stack protector check
        in, by reusing the logic already therein. If we wish to generate a
        stack protector check in a basic block, we place a special IR intrinsic
        called llvm.stackprotectorcheck right before the BB's returninst or, if
        there is a callinst that could potentially be sibling call optimized,
        before the call inst.
     b. Then when a BB with said intrinsic is processed, we codegen the BB
        normally via SelectBasicBlock. In said process, when we visit the stack
        protector check, we do not actually emit anything into the BB. Instead,
        we just initialize the stack protector descriptor class (which involves
        stashing information/creating the success MBB and the failure MBB if we
        have not created one for this function yet) and export the guard
        variable that we are going to compare.
     c. After we finish selecting the basic block, in FinishBasicBlock, if the
        StackProtectorDescriptor attached to the SelectionDAGBuilder is
        initialized, we first find a splice point in the parent basic block
        before the terminator and then splice the terminator of said basic
        block into the success basic block. Then we code-gen a new tail for the
        parent basic block consisting of the two loads, the comparison, and
        finally two branches to the success/failure basic blocks. We conclude
        by code-gening the failure basic block if we have not code-gened it
        already (all stack protector checks we generate in the same function
        use the same failure basic block).

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188755 91177308-0d34-0410-b5e6-96231b3b80d8

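  A hedged IR sketch of where the intrinsic described above ends up; the
  intrinsic signature is assumed from the LLVM 3.4-era LangRef, and the guard
  symbol name is platform dependent and used only for illustration:

    @__stack_chk_guard = external global i8*

    declare void @llvm.stackprotectorcheck(i8**)

    define void @needs_protector() ssp {
    entry:
      ; ...function body that triggered the stack protector...
      ; Placed by the IR-level Stack Protector pass right before the return (or
      ; before a potential sibling call); SelectionDAG expands it into the
      ; compare-and-branch to the success/failure MBBs described above.
      call void @llvm.stackprotectorcheck(i8** @__stack_chk_guard)
      ret void
    }
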
* Add an llvm.copysign intrinsic (Hal Finkel, 2013-08-19)
  This adds an llvm.copysign intrinsic; we already have Libfunc recognition for
  copysign (which is turned into the FCOPYSIGN SDAG node). In order to
  autovectorize calls to copysign in the loop vectorizer, we need a
  corresponding intrinsic as well.

  In addition to the expected changes to the language reference, the loop
  vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
  an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
  few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
  expanded.

  In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
  be Expand for vector types. This seems correct for all in-tree targets, and I
  think it is the right thing to do because, previously, there was no way to
  generate vector-valued FCOPYSIGN nodes (and most targets don't specify an
  action for vector-typed FCOPYSIGN).

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188728 91177308-0d34-0410-b5e6-96231b3b80d8

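  Minimal IR showing the new intrinsic for both a scalar and a vector type; the
  function names are placeholders:

    declare double @llvm.copysign.f64(double, double)
    declare <4 x float> @llvm.copysign.v4f32(<4 x float>, <4 x float>)

    define double @scalar_copysign(double %mag, double %sgn) {
      %r = call double @llvm.copysign.f64(double %mag, double %sgn)
      ret double %r
    }

    define <4 x float> @vector_copysign(<4 x float> %mag, <4 x float> %sgn) {
      ; Lowered to an FCOPYSIGN node; expanded for vector types by default.
      %r = call <4 x float> @llvm.copysign.v4f32(<4 x float> %mag, <4 x float> %sgn)
      ret <4 x float> %r
    }
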
* [SystemZ] Use SRST to implement strlen and strnlen (Richard Sandiford, 2013-08-16)
  It would also make sense to use it for memchr; I'm working on that now.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188547 91177308-0d34-0410-b5e6-96231b3b80d8

* [SystemZ] Use MVST to implement strcpy and stpcpy (Richard Sandiford, 2013-08-16)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188546 91177308-0d34-0410-b5e6-96231b3b80d8

* [SystemZ] Use CLST to implement strcmp (Richard Sandiford, 2013-08-16)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188544 91177308-0d34-0410-b5e6-96231b3b80d8

* [SystemZ] Fix handling of 64-bit memcmp results (Richard Sandiford, 2013-08-16)
  Generalize r188163 to cope with return types other than MVT::i32, just as the
  existing visitMemCmpCall code did. I've split this out into a subroutine so
  that it can be used for other upcoming patches.

  I also noticed that I'd used the wrong API to record the out chain. It's a
  load that uses DAG.getRoot() rather than getRoot(), so the out chain should go
  on PendingLoads. I don't have a testcase for that because we don't do any
  interesting scheduling on z yet.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188540 91177308-0d34-0410-b5e6-96231b3b80d8

* Replace getValueType().getSimpleVT() with getSimpleValueType() (Craig Topper, 2013-08-15)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188442 91177308-0d34-0410-b5e6-96231b3b80d8

* [SystemZ] Use CLC and IPM to implement memcmp (Richard Sandiford, 2013-08-12)
  For now this is restricted to fixed-length comparisons with a length in the
  range [1, 256], as for memcpy() and MVC.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@188163 91177308-0d34-0410-b5e6-96231b3b80d8

* Add ISD::FROUND for libm round() (Hal Finkel, 2013-08-07)
  All libm floating-point rounding functions, except for round(), had their own
  ISD nodes. Recent PowerPC cores have an instruction for round(), and so here
  I'm adding ISD::FROUND so that round() can be custom lowered as well.

  For the most part, this is straightforward. I've added an intrinsic and a
  matching ISD node, just like those for nearbyint() and friends. I've named the
  SelectionDAG pattern frnd (because fround has already been claimed by
  ISD::FP_ROUND).

  This will be used by the PowerPC backend in a follow-up commit.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187926 91177308-0d34-0410-b5e6-96231b3b80d8

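  A minimal IR example of the new intrinsic; the function name is a placeholder:

    declare double @llvm.round.f64(double)

    define double @round_to_nearest(double %x) {
      ; Lowered to an ISD::FROUND node in the SelectionDAG builder.
      %r = call double @llvm.round.f64(double %x)
      ret double %r
    }
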
* TargetLowering: Add getVectorIdxTy() function (v2) (Tom Stellard, 2013-08-05)
  This virtual function can be implemented by targets to specify the type to use
  for the index operand of INSERT_VECTOR_ELT, EXTRACT_VECTOR_ELT,
  INSERT_SUBVECTOR, EXTRACT_SUBVECTOR. The default implementation returns the
  result from TargetLowering::getPointerTy().

  The previous code was using TargetLowering::getPointerTy() for vector indices,
  because this is guaranteed to be legal on all targets. However, using
  TargetLowering::getPointerTy() can be a problem for targets with pointer sizes
  that differ across address spaces. On such targets, when vectors need to be
  loaded or stored to an address space other than the default 'zero' address
  space (which is the address space assumed by TargetLowering::getPointerTy()),
  having an index that is a different size than the pointer can lead to
  inefficient pointer calculations (e.g. 64-bit adds for a 32-bit address
  space).

  There is no intended functionality change with this patch.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187748 91177308-0d34-0410-b5e6-96231b3b80d8

* Fix crashing on invalid inline asm with matching constraints (Eric Christopher, 2013-07-31)
  For a testcase like the following:

    typedef unsigned long uint64_t;

    typedef struct {
      uint64_t lo;
      uint64_t hi;
    } blob128_t;

    void add_128_to_128(const blob128_t *in, blob128_t *res) {
      asm ("PAND %1, %0" : "+Q"(*res) : "Q"(*in));
    }

  where we'll fail to allocate the register for the output constraint, our
  matching input constraint will not find a register to match, and could try to
  search past the end of the current operands array.

  On the idea that we'd like to attempt to keep compilation going to find more
  errors in the module, change the error cases when we're visiting inline asm IR
  to return immediately and avoid trying to create a node in the DAG. This
  leaves us with only a single error message per inline asm instruction, but
  allows us to safely keep going in the general case.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187470 91177308-0d34-0410-b5e6-96231b3b80d8

* Reflow this to be easier to read (Eric Christopher, 2013-07-30)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187459 91177308-0d34-0410-b5e6-96231b3b80d8

* Document a known limitation of the status quo (Adrian Prantl, 2013-07-10)
  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185992 91177308-0d34-0410-b5e6-96231b3b80d8

* Reapply an improved version of r180816/180817 (Adrian Prantl, 2013-07-09)
  Change the informal convention of DBG_VALUE machine instructions so that we
  can express a register-indirect address with an offset of 0. The old
  convention was that a DBG_VALUE is a register-indirect value if the offset
  (operand 1) is nonzero. The new convention is that a DBG_VALUE is
  register-indirect if the first operand is a register and the second operand is
  an immediate. For plain register values the combination reg, reg is used.
  MachineInstrBuilder::BuildMI knows how to build the new DBG_VALUEs.

  rdar://problem/13658587

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185966 91177308-0d34-0410-b5e6-96231b3b80d8

* AArch64/PowerPC/SystemZ/X86: Fix the interface, usage, and all in-tree implementations of TargetLoweringBase::isFMAFasterThanMulAndAdd (Stephen Lin, 2013-07-09)
  This resolves the following issues with fmuladd (i.e. optional FMA)
  intrinsics:

  1. On X86(-64) targets, ISD::FMA nodes are formed when lowering fmuladd
     intrinsics even if the subtarget does not support FMA instructions, leading
     to laughably bad code generation in some situations.
  2. On AArch64 targets, ISD::FMA nodes are formed for operations on fp128,
     resulting in a call to a software fp128 FMA implementation.
  3. On PowerPC targets, FMAs are not generated from fmuladd intrinsics on types
     like v2f32, v8f32, v4f64, etc., even though they promote, split, scalarize,
     etc. to types that support hardware FMAs.

  The function has also been slightly renamed for consistency and to force a
  merge/build conflict for any out-of-tree target implementing it. To resolve,
  see comments and the fixed in-tree examples.

  git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185956 91177308-0d34-0410-b5e6-96231b3b80d8