author     Dan Gohman <gohman@apple.com>    2010-05-28 17:07:41 +0000
committer  Dan Gohman <gohman@apple.com>    2010-05-28 17:07:41 +0000
commit     3dfb3cfb383b64e2b5db30ec429fc130ac02e45d (patch)
tree       8dc842287c3fb44fada024860a6b70093bca5775 /docs/LangRef.html
parent     90a23220235ad037d7c65ec5c2bb27d87d482b6c (diff)
Fix whitespace to be more consistent with AsmPrinter's style.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@104962 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'docs/LangRef.html')
-rw-r--r--  docs/LangRef.html   204
1 file changed, 102 insertions, 102 deletions
diff --git a/docs/LangRef.html b/docs/LangRef.html
index bfb4256973..5ae6f99233 100644
--- a/docs/LangRef.html
+++ b/docs/LangRef.html
@@ -2475,104 +2475,104 @@ end:
supported). The following is the syntax for constant expressions:</p>
<dl>
- <dt><b><tt>trunc ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>trunc (CST to TYPE)</tt></b></dt>
<dd>Truncate a constant to another type. The bit size of CST must be larger
than the bit size of TYPE. Both types must be integers.</dd>
- <dt><b><tt>zext ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>zext (CST to TYPE)</tt></b></dt>
<dd>Zero extend a constant to another type. The bit size of CST must be
smaller or equal to the bit size of TYPE. Both types must be
integers.</dd>
- <dt><b><tt>sext ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>sext (CST to TYPE)</tt></b></dt>
<dd>Sign extend a constant to another type. The bit size of CST must be
smaller or equal to the bit size of TYPE. Both types must be
integers.</dd>
- <dt><b><tt>fptrunc ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>fptrunc (CST to TYPE)</tt></b></dt>
<dd>Truncate a floating point constant to another floating point type. The
size of CST must be larger than the size of TYPE. Both types must be
floating point.</dd>
- <dt><b><tt>fpext ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>fpext (CST to TYPE)</tt></b></dt>
<dd>Floating point extend a constant to another type. The size of CST must be
smaller or equal to the size of TYPE. Both types must be floating
point.</dd>
- <dt><b><tt>fptoui ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>fptoui (CST to TYPE)</tt></b></dt>
<dd>Convert a floating point constant to the corresponding unsigned integer
constant. TYPE must be a scalar or vector integer type. CST must be of
scalar or vector floating point type. Both CST and TYPE must be scalars,
or vectors of the same number of elements. If the value won't fit in the
integer type, the results are undefined.</dd>
- <dt><b><tt>fptosi ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>fptosi (CST to TYPE)</tt></b></dt>
<dd>Convert a floating point constant to the corresponding signed integer
constant. TYPE must be a scalar or vector integer type. CST must be of
scalar or vector floating point type. Both CST and TYPE must be scalars,
or vectors of the same number of elements. If the value won't fit in the
integer type, the results are undefined.</dd>
- <dt><b><tt>uitofp ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>uitofp (CST to TYPE)</tt></b></dt>
<dd>Convert an unsigned integer constant to the corresponding floating point
constant. TYPE must be a scalar or vector floating point type. CST must be
of scalar or vector integer type. Both CST and TYPE must be scalars, or
vectors of the same number of elements. If the value won't fit in the
floating point type, the results are undefined.</dd>
- <dt><b><tt>sitofp ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>sitofp (CST to TYPE)</tt></b></dt>
<dd>Convert a signed integer constant to the corresponding floating point
constant. TYPE must be a scalar or vector floating point type. CST must be
of scalar or vector integer type. Both CST and TYPE must be scalars, or
vectors of the same number of elements. If the value won't fit in the
floating point type, the results are undefined.</dd>
- <dt><b><tt>ptrtoint ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>ptrtoint (CST to TYPE)</tt></b></dt>
<dd>Convert a pointer typed constant to the corresponding integer constant
<tt>TYPE</tt> must be an integer type. <tt>CST</tt> must be of pointer
type. The <tt>CST</tt> value is zero extended, truncated, or unchanged to
make it fit in <tt>TYPE</tt>.</dd>
- <dt><b><tt>inttoptr ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>inttoptr (CST to TYPE)</tt></b></dt>
<dd>Convert a integer constant to a pointer constant. TYPE must be a pointer
type. CST must be of integer type. The CST value is zero extended,
truncated, or unchanged to make it fit in a pointer size. This one is
<i>really</i> dangerous!</dd>
- <dt><b><tt>bitcast ( CST to TYPE )</tt></b></dt>
+ <dt><b><tt>bitcast (CST to TYPE)</tt></b></dt>
<dd>Convert a constant, CST, to another TYPE. The constraints of the operands
are the same as those for the <a href="#i_bitcast">bitcast
instruction</a>.</dd>
- <dt><b><tt>getelementptr ( CSTPTR, IDX0, IDX1, ... )</tt></b></dt>
- <dt><b><tt>getelementptr inbounds ( CSTPTR, IDX0, IDX1, ... )</tt></b></dt>
+ <dt><b><tt>getelementptr (CSTPTR, IDX0, IDX1, ...)</tt></b></dt>
+ <dt><b><tt>getelementptr inbounds (CSTPTR, IDX0, IDX1, ...)</tt></b></dt>
<dd>Perform the <a href="#i_getelementptr">getelementptr operation</a> on
constants. As with the <a href="#i_getelementptr">getelementptr</a>
instruction, the index list may have zero or more indexes, which are
required to make sense for the type of "CSTPTR".</dd>
- <dt><b><tt>select ( COND, VAL1, VAL2 )</tt></b></dt>
+ <dt><b><tt>select (COND, VAL1, VAL2)</tt></b></dt>
<dd>Perform the <a href="#i_select">select operation</a> on constants.</dd>
- <dt><b><tt>icmp COND ( VAL1, VAL2 )</tt></b></dt>
+ <dt><b><tt>icmp COND (VAL1, VAL2)</tt></b></dt>
<dd>Performs the <a href="#i_icmp">icmp operation</a> on constants.</dd>
- <dt><b><tt>fcmp COND ( VAL1, VAL2 )</tt></b></dt>
+ <dt><b><tt>fcmp COND (VAL1, VAL2)</tt></b></dt>
<dd>Performs the <a href="#i_fcmp">fcmp operation</a> on constants.</dd>
- <dt><b><tt>extractelement ( VAL, IDX )</tt></b></dt>
+ <dt><b><tt>extractelement (VAL, IDX)</tt></b></dt>
<dd>Perform the <a href="#i_extractelement">extractelement operation</a> on
constants.</dd>
- <dt><b><tt>insertelement ( VAL, ELT, IDX )</tt></b></dt>
+ <dt><b><tt>insertelement (VAL, ELT, IDX)</tt></b></dt>
<dd>Perform the <a href="#i_insertelement">insertelement operation</a> on
constants.</dd>
- <dt><b><tt>shufflevector ( VEC1, VEC2, IDXMASK )</tt></b></dt>
+ <dt><b><tt>shufflevector (VEC1, VEC2, IDXMASK)</tt></b></dt>
<dd>Perform the <a href="#i_shufflevector">shufflevector operation</a> on
constants.</dd>
- <dt><b><tt>OPCODE ( LHS, RHS )</tt></b></dt>
+ <dt><b><tt>OPCODE (LHS, RHS)</tt></b></dt>
<dd>Perform the specified operation of the LHS and RHS constants. OPCODE may
be any of the <a href="#binaryops">binary</a>
or <a href="#bitwiseops">bitwise binary</a> operations. The constraints
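As an illustrative sketch (not part of this patch), constant expressions of the forms listed above commonly appear in global initializers. The globals @g, @p, and @q below are hypothetical, and the i64 result of the ptrtoint example assumes a 64-bit target:

    @g = global i32 42
    @p = global i64 ptrtoint (i32* @g to i64)                             ; ptrtoint (CST to TYPE)
    @q = global i8* getelementptr (i8* bitcast (i32* @g to i8*), i32 1)   ; getelementptr (CSTPTR, IDX0)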
@@ -5992,7 +5992,7 @@ LLVM</a>.</p>
<h5>Syntax:</h5>
<pre>
- declare i64 @llvm.readcyclecounter( )
+ declare i64 @llvm.readcyclecounter()
</pre>
<h5>Overview:</h5>
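A minimal usage sketch (assumed, not from this patch): the intrinsic takes no arguments and returns the cycle counter as an i64:

    %t = call i64 @llvm.readcyclecounter()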
@@ -6938,13 +6938,13 @@ LLVM</a>.</p>
<pre>
%tramp = alloca [10 x i8], align 4 ; size and alignment only correct for X86
%tramp1 = getelementptr [10 x i8]* %tramp, i32 0, i32 0
- %p = call i8* @llvm.init.trampoline( i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval )
+ %p = call i8* @llvm.init.trampoline(i8* %tramp1, i8* bitcast (i32 (i8* nest , i32, i32)* @f to i8*), i8* %nval)
%fp = bitcast i8* %p to i32 (i32, i32)*
</pre>
</div>
-<p>The call <tt>%val = call i32 %fp( i32 %x, i32 %y )</tt> is then equivalent
- to <tt>%val = call i32 %f( i8* %nval, i32 %x, i32 %y )</tt>.</p>
+<p>The call <tt>%val = call i32 %fp(i32 %x, i32 %y)</tt> is then equivalent
+ to <tt>%val = call i32 %f(i8* %nval, i32 %x, i32 %y)</tt>.</p>
</div>
@@ -7024,7 +7024,7 @@ LLVM</a>.</p>
<div class="doc_text">
<h5>Syntax:</h5>
<pre>
- declare void @llvm.memory.barrier( i1 &lt;ll&gt;, i1 &lt;ls&gt;, i1 &lt;sl&gt;, i1 &lt;ss&gt;, i1 &lt;device&gt; )
+ declare void @llvm.memory.barrier(i1 &lt;ll&gt;, i1 &lt;ls&gt;, i1 &lt;sl&gt;, i1 &lt;ss&gt;, i1 &lt;device&gt;)
</pre>
<h5>Overview:</h5>
@@ -7081,7 +7081,7 @@ LLVM</a>.</p>
store i32 4, %ptr
%result1 = load i32* %ptr <i>; yields {i32}:result1 = 4</i>
- call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false )
+ call void @llvm.memory.barrier(i1 false, i1 true, i1 false, i1 false)
<i>; guarantee the above finishes</i>
store i32 8, %ptr <i>; before this begins</i>
</pre>
@@ -7101,10 +7101,10 @@ LLVM</a>.</p>
support all bit widths however.</p>
<pre>
- declare i8 @llvm.atomic.cmp.swap.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;cmp&gt;, i8 &lt;val&gt; )
- declare i16 @llvm.atomic.cmp.swap.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;cmp&gt;, i16 &lt;val&gt; )
- declare i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;cmp&gt;, i32 &lt;val&gt; )
- declare i64 @llvm.atomic.cmp.swap.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;cmp&gt;, i64 &lt;val&gt; )
+ declare i8 @llvm.atomic.cmp.swap.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;cmp&gt;, i8 &lt;val&gt;)
+ declare i16 @llvm.atomic.cmp.swap.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;cmp&gt;, i16 &lt;val&gt;)
+ declare i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;cmp&gt;, i32 &lt;val&gt;)
+ declare i64 @llvm.atomic.cmp.swap.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;cmp&gt;, i64 &lt;val&gt;)
</pre>
<h5>Overview:</h5>
@@ -7133,13 +7133,13 @@ LLVM</a>.</p>
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 4, %val1 )
+%result1 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 4, %val1)
<i>; yields {i32}:result1 = 4</i>
%stored1 = icmp eq i32 %result1, 4 <i>; yields {i1}:stored1 = true</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = 8</i>
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32( i32* %ptr, i32 5, %val2 )
+%result2 = call i32 @llvm.atomic.cmp.swap.i32.p0i32(i32* %ptr, i32 5, %val2)
<i>; yields {i32}:result2 = 8</i>
%stored2 = icmp eq i32 %result2, 5 <i>; yields {i1}:stored2 = false</i>
@@ -7159,10 +7159,10 @@ LLVM</a>.</p>
integer bit width. Not all targets support all bit widths however.</p>
<pre>
- declare i8 @llvm.atomic.swap.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;val&gt; )
- declare i16 @llvm.atomic.swap.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;val&gt; )
- declare i32 @llvm.atomic.swap.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;val&gt; )
- declare i64 @llvm.atomic.swap.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;val&gt; )
+ declare i8 @llvm.atomic.swap.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;val&gt;)
+ declare i16 @llvm.atomic.swap.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;val&gt;)
+ declare i32 @llvm.atomic.swap.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;val&gt;)
+ declare i64 @llvm.atomic.swap.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;val&gt;)
</pre>
<h5>Overview:</h5>
@@ -7189,13 +7189,13 @@ LLVM</a>.</p>
store i32 4, %ptr
%val1 = add i32 4, 4
-%result1 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val1 )
+%result1 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val1)
<i>; yields {i32}:result1 = 4</i>
%stored1 = icmp eq i32 %result1, 4 <i>; yields {i1}:stored1 = true</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = 8</i>
%val2 = add i32 1, 1
-%result2 = call i32 @llvm.atomic.swap.i32.p0i32( i32* %ptr, i32 %val2 )
+%result2 = call i32 @llvm.atomic.swap.i32.p0i32(i32* %ptr, i32 %val2)
<i>; yields {i32}:result2 = 8</i>
%stored2 = icmp eq i32 %result2, 8 <i>; yields {i1}:stored2 = true</i>
@@ -7217,10 +7217,10 @@ LLVM</a>.</p>
any integer bit width. Not all targets support all bit widths however.</p>
<pre>
- declare i8 @llvm.atomic.load.add.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.add.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.add.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.add.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.add.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.add.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.add.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.add.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<h5>Overview:</h5>
@@ -7243,11 +7243,11 @@ LLVM</a>.</p>
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 4, %ptr
-%result1 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 4)
<i>; yields {i32}:result1 = 4</i>
-%result2 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 2)
<i>; yields {i32}:result2 = 8</i>
-%result3 = call i32 @llvm.atomic.load.add.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %ptr, i32 5)
<i>; yields {i32}:result3 = 10</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = 15</i>
</pre>
@@ -7268,10 +7268,10 @@ LLVM</a>.</p>
support all bit widths however.</p>
<pre>
- declare i8 @llvm.atomic.load.sub.i8.p0i32( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.sub.i16.p0i32( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.sub.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.sub.i64.p0i32( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.sub.i8.p0i32(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.sub.i16.p0i32(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.sub.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.sub.i64.p0i32(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<h5>Overview:</h5>
@@ -7295,11 +7295,11 @@ LLVM</a>.</p>
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 8, %ptr
-%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 4 )
+%result1 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 4)
<i>; yields {i32}:result1 = 8</i>
-%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 2 )
+%result2 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 2)
<i>; yields {i32}:result2 = 4</i>
-%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32( i32* %ptr, i32 5 )
+%result3 = call i32 @llvm.atomic.load.sub.i32.p0i32(i32* %ptr, i32 5)
<i>; yields {i32}:result3 = 2</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = -3</i>
</pre>
@@ -7324,31 +7324,31 @@ LLVM</a>.</p>
widths however.</p>
<pre>
- declare i8 @llvm.atomic.load.and.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.and.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.and.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.and.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.and.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.and.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.and.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.and.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.or.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.or.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.or.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.or.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.or.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.or.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.or.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.or.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.nand.i8.p0i32( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.nand.i16.p0i32( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.nand.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.nand.i64.p0i32( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.nand.i8.p0i32(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.nand.i16.p0i32(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.nand.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.nand.i64.p0i32(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.xor.i8.p0i32( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.xor.i16.p0i32( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.xor.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.xor.i64.p0i32( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.xor.i8.p0i32(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.xor.i16.p0i32(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.xor.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.xor.i64.p0i32(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<h5>Overview:</h5>
@@ -7373,13 +7373,13 @@ LLVM</a>.</p>
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 0x0F0F, %ptr
-%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32( i32* %ptr, i32 0xFF )
+%result0 = call i32 @llvm.atomic.load.nand.i32.p0i32(i32* %ptr, i32 0xFF)
<i>; yields {i32}:result0 = 0x0F0F</i>
-%result1 = call i32 @llvm.atomic.load.and.i32.p0i32( i32* %ptr, i32 0xFF )
+%result1 = call i32 @llvm.atomic.load.and.i32.p0i32(i32* %ptr, i32 0xFF)
<i>; yields {i32}:result1 = 0xFFFFFFF0</i>
-%result2 = call i32 @llvm.atomic.load.or.i32.p0i32( i32* %ptr, i32 0F )
+%result2 = call i32 @llvm.atomic.load.or.i32.p0i32(i32* %ptr, i32 0F)
<i>; yields {i32}:result2 = 0xF0</i>
-%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32( i32* %ptr, i32 0F )
+%result3 = call i32 @llvm.atomic.load.xor.i32.p0i32(i32* %ptr, i32 0F)
<i>; yields {i32}:result3 = FF</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = F0</i>
</pre>
@@ -7403,31 +7403,31 @@ LLVM</a>.</p>
address spaces. Not all targets support all bit widths however.</p>
<pre>
- declare i8 @llvm.atomic.load.max.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.max.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.max.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.max.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.max.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.max.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.max.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.max.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.min.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.min.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.min.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.min.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.min.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.min.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.min.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.min.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.umax.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.umax.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.umax.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.umax.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.umax.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.umax.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.umax.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.umax.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<pre>
- declare i8 @llvm.atomic.load.umin.i8.p0i8( i8* &lt;ptr&gt;, i8 &lt;delta&gt; )
- declare i16 @llvm.atomic.load.umin.i16.p0i16( i16* &lt;ptr&gt;, i16 &lt;delta&gt; )
- declare i32 @llvm.atomic.load.umin.i32.p0i32( i32* &lt;ptr&gt;, i32 &lt;delta&gt; )
- declare i64 @llvm.atomic.load.umin.i64.p0i64( i64* &lt;ptr&gt;, i64 &lt;delta&gt; )
+ declare i8 @llvm.atomic.load.umin.i8.p0i8(i8* &lt;ptr&gt;, i8 &lt;delta&gt;)
+ declare i16 @llvm.atomic.load.umin.i16.p0i16(i16* &lt;ptr&gt;, i16 &lt;delta&gt;)
+ declare i32 @llvm.atomic.load.umin.i32.p0i32(i32* &lt;ptr&gt;, i32 &lt;delta&gt;)
+ declare i64 @llvm.atomic.load.umin.i64.p0i64(i64* &lt;ptr&gt;, i64 &lt;delta&gt;)
</pre>
<h5>Overview:</h5>
@@ -7452,13 +7452,13 @@ LLVM</a>.</p>
%mallocP = tail call i8* @malloc(i32 ptrtoint (i32* getelementptr (i32* null, i32 1) to i32))
%ptr = bitcast i8* %mallocP to i32*
store i32 7, %ptr
-%result0 = call i32 @llvm.atomic.load.min.i32.p0i32( i32* %ptr, i32 -2 )
+%result0 = call i32 @llvm.atomic.load.min.i32.p0i32(i32* %ptr, i32 -2)
<i>; yields {i32}:result0 = 7</i>
-%result1 = call i32 @llvm.atomic.load.max.i32.p0i32( i32* %ptr, i32 8 )
+%result1 = call i32 @llvm.atomic.load.max.i32.p0i32(i32* %ptr, i32 8)
<i>; yields {i32}:result1 = -2</i>
-%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32( i32* %ptr, i32 10 )
+%result2 = call i32 @llvm.atomic.load.umin.i32.p0i32(i32* %ptr, i32 10)
<i>; yields {i32}:result2 = 8</i>
-%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32( i32* %ptr, i32 30 )
+%result3 = call i32 @llvm.atomic.load.umax.i32.p0i32(i32* %ptr, i32 30)
<i>; yields {i32}:result3 = 8</i>
%memval1 = load i32* %ptr <i>; yields {i32}:memval1 = 30</i>
</pre>
@@ -7613,7 +7613,7 @@ LLVM</a>.</p>
<h5>Syntax:</h5>
<pre>
- declare void @llvm.var.annotation(i8* &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
+ declare void @llvm.var.annotation(i8* &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
</pre>
<h5>Overview:</h5>
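A hedged usage sketch (not part of this patch; @.ann and @.file are hypothetical globals holding the annotation string and file name, and i32 10 stands for a source line number):

    @.ann  = private constant [7 x i8] c"my_var\00"
    @.file = private constant [7 x i8] c"test.c\00"

    %x  = alloca i32
    %xp = bitcast i32* %x to i8*
    call void @llvm.var.annotation(i8* %xp,
                                   i8* getelementptr ([7 x i8]* @.ann, i32 0, i32 0),
                                   i8* getelementptr ([7 x i8]* @.file, i32 0, i32 0),
                                   i32 10)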
@@ -7644,11 +7644,11 @@ LLVM</a>.</p>
any integer bit width.</p>
<pre>
- declare i8 @llvm.annotation.i8(i8 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
- declare i16 @llvm.annotation.i16(i16 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
- declare i32 @llvm.annotation.i32(i32 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
- declare i64 @llvm.annotation.i64(i64 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
- declare i256 @llvm.annotation.i256(i256 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt; )
+ declare i8 @llvm.annotation.i8(i8 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
+ declare i16 @llvm.annotation.i16(i16 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
+ declare i32 @llvm.annotation.i32(i32 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
+ declare i64 @llvm.annotation.i64(i64 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
+ declare i256 @llvm.annotation.i256(i256 &lt;val&gt;, i8* &lt;str&gt;, i8* &lt;str&gt;, i32 &lt;int&gt;)
</pre>
<h5>Overview:</h5>
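A minimal call sketch (assumed, not from this patch; %val is the i32 value being annotated and %str and %file are hypothetical i8* pointers to annotation and file-name strings, as in @llvm.var.annotation above):

    %annotated = call i32 @llvm.annotation.i32(i32 %val, i8* %str, i8* %file, i32 42)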
@@ -7702,7 +7702,7 @@ LLVM</a>.</p>
<h5>Syntax:</h5>
<pre>
- declare void @llvm.stackprotector( i8* &lt;guard&gt;, i8** &lt;slot&gt; )
+ declare void @llvm.stackprotector(i8* &lt;guard&gt;, i8** &lt;slot&gt;)
</pre>
<h5>Overview:</h5>
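A minimal sketch (assumed, not from this patch): the guard value is loaded from a hypothetical global @__guard and stored into an alloca slot that the backend places on the stack:

    @__guard = external global i8*

    %slot  = alloca i8*
    %guard = load i8** @__guard
    call void @llvm.stackprotector(i8* %guard, i8** %slot)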
@@ -7736,8 +7736,8 @@ LLVM</a>.</p>
<h5>Syntax:</h5>
<pre>
- declare i32 @llvm.objectsize.i32( i8* &lt;object&gt;, i1 &lt;type&gt; )
- declare i64 @llvm.objectsize.i64( i8* &lt;object&gt;, i1 &lt;type&gt; )
+ declare i32 @llvm.objectsize.i32(i8* &lt;object&gt;, i1 &lt;type&gt;)
+ declare i64 @llvm.objectsize.i64(i8* &lt;object&gt;, i1 &lt;type&gt;)
</pre>
<h5>Overview:</h5>
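A minimal call sketch (assumed, not from this patch; %obj is a hypothetical i8* pointing into the object being queried):

    %sz = call i32 @llvm.objectsize.i32(i8* %obj, i1 false)   ; i1 flag as described in the Overview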