.. _gmir-opcodes:

Generic Opcodes
===============

.. contents::
   :local:

.. note::

  This documentation does not yet fully account for vectors. Many of the
  scalar/integer/floating-point operations can also take vectors.

Constants
---------

G_IMPLICIT_DEF
^^^^^^^^^^^^^^

An undefined value.

.. code-block:: none

  %0:_(s32) = G_IMPLICIT_DEF

G_CONSTANT
^^^^^^^^^^

An integer constant.

.. code-block:: none

  %0:_(s32) = G_CONSTANT i32 1

G_FCONSTANT
^^^^^^^^^^^

A floating point constant.

.. code-block:: none

  %0:_(s32) = G_FCONSTANT float 1.0

G_FRAME_INDEX
^^^^^^^^^^^^^

The address of an object in the stack frame.

.. code-block:: none

  %1:_(p0) = G_FRAME_INDEX %stack.0.ptr0

G_GLOBAL_VALUE
^^^^^^^^^^^^^^

The address of a global value.

.. code-block:: none

  %0:_(p0) = G_GLOBAL_VALUE @var_local

G_BLOCK_ADDR
^^^^^^^^^^^^

The address of a basic block.

.. code-block:: none

  %0:_(p0) = G_BLOCK_ADDR blockaddress(@test_blockaddress, %ir-block.block)

Integer Extension and Truncation
--------------------------------

G_ANYEXT
^^^^^^^^

Extend the underlying scalar type of an operation, leaving the high bits
unspecified.

.. code-block:: none

  %1:_(s32) = G_ANYEXT %0:_(s16)

G_SEXT
^^^^^^

Sign extend the underlying scalar type of an operation, copying the sign bit
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_SEXT %0:_(s16)

G_SEXT_INREG
^^^^^^^^^^^^

Sign extend the value from an arbitrary bit position, copying the sign bit
into all bits above it. This is equivalent to a shl + ashr pair with an
appropriate shift amount. $sz is an immediate (MachineOperand::isImm()
returns true) to allow targets to have some bitwidths legal and others
lowered. This opcode is particularly useful if the target has sign-extension
instructions that are cheaper than the constituent shifts, as the optimizer
can then decide whether it is better to keep the G_SEXT_INREG or to lower it
and optimize the individual shifts.

.. code-block:: none

  %1:_(s32) = G_SEXT_INREG %0:_(s32), 16

G_ZEXT
^^^^^^

Zero extend the underlying scalar type of an operation, putting zero bits
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_ZEXT %0:_(s16)

G_TRUNC
^^^^^^^

Truncate the underlying scalar type of an operation. This is equivalent to
G_EXTRACT for scalar types, but acts elementwise on vectors.

.. code-block:: none

  %1:_(s16) = G_TRUNC %0:_(s32)

Type Conversions
----------------

G_INTTOPTR
^^^^^^^^^^

Convert an integer to a pointer.

.. code-block:: none

  %1:_(p0) = G_INTTOPTR %0:_(s32)

G_PTRTOINT
^^^^^^^^^^

Convert a pointer to an integer.

.. code-block:: none

  %1:_(s32) = G_PTRTOINT %0:_(p0)

G_BITCAST
^^^^^^^^^

Reinterpret a value as a new type. This is usually done without
changing any bits, but this is not always the case due to a subtlety in the
definition of the :ref:`LLVM-IR Bitcast Instruction <i_bitcast>`. It
is allowed to bitcast between pointers with the same size, but
different address spaces.

.. code-block:: none

  %1:_(s64) = G_BITCAST %0:_(<2 x s32>)

G_ADDRSPACE_CAST
^^^^^^^^^^^^^^^^

Convert a pointer from one address space to a pointer in another address space.

.. code-block:: none

  %1:_(p1) = G_ADDRSPACE_CAST %0:_(p0)

.. caution::

  :ref:`i_addrspacecast` doesn't mention what happens if the cast is simply
  invalid (i.e. if the address spaces are disjoint).

Scalar Operations
-----------------

G_EXTRACT
^^^^^^^^^

Extract a register of the specified size, starting from the block given by
index. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.

.. code-block:: none

  %3:_(s32) = G_EXTRACT %2:_(s64), 32

G_INSERT
^^^^^^^^

Insert a smaller register into a larger one at the specified bit-index.

.. code-block:: none

  %2:_(s64) = G_INSERT %0:_(s64), %1:_(s32), 0

G_MERGE_VALUES
^^^^^^^^^^^^^^

Concatenate multiple registers of the same size into a wider register.
The input operands are always ordered from lowest bits to highest:

.. code-block:: none

  %0:(s32) = G_MERGE_VALUES %bits_0_7:(s8), %bits_8_15:(s8),
                            %bits_16_23:(s8), %bits_24_31:(s8)

G_UNMERGE_VALUES
^^^^^^^^^^^^^^^^

Extract multiple registers of the specified size, starting from blocks given by
indexes. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.
The output operands are always ordered from lowest bits to highest:

.. code-block:: none

  %bits_0_7:(s8), %bits_8_15:(s8),
      %bits_16_23:(s8), %bits_24_31:(s8) = G_UNMERGE_VALUES %0:(s32)

G_BSWAP
^^^^^^^

Reverse the order of the bytes in a scalar.

.. code-block:: none

  %1:_(s32) = G_BSWAP %0:_(s32)

G_BITREVERSE
^^^^^^^^^^^^

Reverse the order of the bits in a scalar.

.. code-block:: none

  %1:_(s32) = G_BITREVERSE %0:_(s32)

G_SBFX, G_UBFX
^^^^^^^^^^^^^^

Extract a range of bits from a register.

The source operands are registers as follows:

- Source
- The least-significant bit for the extraction
- The width of the extraction

The least-significant bit (lsb) and width operands are in the range:

::

      0 <= lsb < lsb + width <= source bitwidth, where all values are unsigned

G_SBFX sign-extends the result, while G_UBFX zero-extends the result.

.. code-block:: none

  ; Extract 5 bits starting at bit 1 from %x and store them in %a.
  ; Sign-extend the result.
  ;
  ; Example:
  ; %x = 0...0000[10110]1 ---> %a = 1...111111[10110]
  %lsb_one = G_CONSTANT i32 1
  %width_five = G_CONSTANT i32 5
  %a:_(s32) = G_SBFX %x, %lsb_one, %width_five

  ; Extract 3 bits starting at bit 2 from %x and store them in %b. Zero-extend
  ; the result.
  ;
  ; Example:
  ; %x = 1...11111[100]11 ---> %b = 0...00000[100]
  %lsb_two = G_CONSTANT i32 2
  %width_three = G_CONSTANT i32 3
  %b:_(s32) = G_UBFX %x, %lsb_two, %width_three

Integer Operations
------------------

G_ADD, G_SUB, G_MUL, G_AND, G_OR, G_XOR, G_SDIV, G_UDIV, G_SREM, G_UREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These each perform their respective integer arithmetic on a scalar.

.. code-block:: none

  %dst:_(s32) = G_ADD %src0:_(s32), %src1:_(s32)

The above example adds %src1 to %src0 and stores the result in %dst.

G_SDIVREM, G_UDIVREM
^^^^^^^^^^^^^^^^^^^^

Perform integer division and remainder, producing two results.

.. code-block:: none

  %div:_(s32), %rem:_(s32) = G_SDIVREM %0:_(s32), %1:_(s32)

G_SADDSAT, G_UADDSAT, G_SSUBSAT, G_USUBSAT, G_SSHLSAT, G_USHLSAT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Signed and unsigned addition, subtraction and left shift with saturation.

.. code-block:: none

  %2:_(s32) = G_SADDSAT %0:_(s32), %1:_(s32)

G_SHL, G_LSHR, G_ASHR
^^^^^^^^^^^^^^^^^^^^^

Shift the bits of a scalar left or right, inserting zeros (the sign bit for
G_ASHR).
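
For example, shifting a 32-bit value by an amount held in another 32-bit
register might look like this (the shift amount type may differ from the
shifted type on some targets):

.. code-block:: none

  %2:_(s32) = G_SHL %0:_(s32), %1:_(s32)
  %3:_(s32) = G_LSHR %0:_(s32), %1:_(s32)
  %4:_(s32) = G_ASHR %0:_(s32), %1:_(s32)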

G_ROTR, G_ROTL
^^^^^^^^^^^^^^

Rotate the bits right (G_ROTR) or left (G_ROTL).
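
A possible example, using a 32-bit rotate amount:

.. code-block:: none

  %2:_(s32) = G_ROTR %0:_(s32), %1:_(s32)
  %3:_(s32) = G_ROTL %0:_(s32), %1:_(s32)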

G_ICMP
^^^^^^

Perform an integer comparison, producing a non-zero (true) or zero (false)
result. It is target-specific whether a true value is 1, ~0U, or some other
non-zero value.
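
For example, an equality comparison of two 32-bit values might look like:

.. code-block:: none

  %2:_(s1) = G_ICMP intpred(eq), %0:_(s32), %1:_(s32)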

G_SELECT
^^^^^^^^

Select between two values depending on a zero/non-zero value.

.. code-block:: none

  %5:_(s32) = G_SELECT %4(s1), %6, %2

G_PTR_ADD
^^^^^^^^^

Add a scalar offset in addressable units to a pointer. Addressable units are
typically bytes, but this may vary between targets.

.. code-block:: none

  %2:_(p0) = G_PTR_ADD %0:_(p0), %1:_(s32)

.. caution::

  There are currently no in-tree targets that use this with addressable units
  not equal to 8 bits.

G_PTRMASK
^^^^^^^^^

Zero out an arbitrary mask of bits of a pointer. The mask type must be
an integer, and the number of vector elements must match for all
operands. This corresponds to `i_intr_llvm_ptrmask`.

.. code-block:: none

  %2:_(p0) = G_PTRMASK %0, %1

G_SMIN, G_SMAX, G_UMIN, G_UMAX
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Take the minimum/maximum of two values.

.. code-block:: none

  %5:_(s32) = G_SMIN %6, %2

G_ABS
^^^^^

Take the absolute value of a signed integer. The absolute value of the minimum
negative value (e.g. the 8-bit value `0x80`) is defined to be itself.

.. code-block:: none

  %1:_(s32) = G_ABS %0

G_UADDO, G_SADDO, G_USUBO, G_SSUBO, G_SMULO, G_UMULO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and produce a carry output in addition to the
normal result.

.. code-block:: none

  %3:_(s32), %4:_(s1) = G_UADDO %0, %1

G_UADDE, G_SADDE, G_USUBE, G_SSUBE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and consume a carry input in addition to the
normal input. Also produce a carry output in addition to the normal result.

.. code-block:: none

  %4:_(s32), %5:_(s1) = G_UADDE %0, %1, %3:_(s1)

G_UMULH, G_SMULH
^^^^^^^^^^^^^^^^

Multiply two numbers at twice the incoming bit width (unsigned or signed) and
return the high half of the result.

.. code-block:: none

  %3:_(s32) = G_UMULH %0, %1

G_CTLZ, G_CTTZ, G_CTPOP
^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros, trailing zeros, or number of set bits.

.. code-block:: none

  %2:_(s33) = G_CTLZ %1
  %2:_(s33) = G_CTTZ %1
  %2:_(s33) = G_CTPOP %1

G_CTLZ_ZERO_UNDEF, G_CTTZ_ZERO_UNDEF
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros or trailing zeros. If the value is zero, then the result is
undefined.

.. code-block:: none

  %2:_(s33) = G_CTLZ_ZERO_UNDEF %1
  %2:_(s33) = G_CTTZ_ZERO_UNDEF %1

Floating Point Operations
-------------------------

G_FCMP
^^^^^^

Perform a floating-point comparison, producing a non-zero (true) or zero
(false) result. It is target-specific whether a true value is 1, ~0U, or some
other non-zero value.
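
For example, an ordered greater-than-or-equal comparison might look like:

.. code-block:: none

  %2:_(s1) = G_FCMP floatpred(oge), %0:_(s32), %1:_(s32)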

G_FNEG
^^^^^^

Floating point negation.

G_FPEXT
^^^^^^^

Convert a floating point value to a larger type.
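
For example, extending a 32-bit value to 64 bits:

.. code-block:: none

  %1:_(s64) = G_FPEXT %0:_(s32)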

G_FPTRUNC
^^^^^^^^^

Convert a floating point value to a narrower type.
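
For example, truncating a 64-bit value to 32 bits:

.. code-block:: none

  %1:_(s32) = G_FPTRUNC %0:_(s64)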

G_FPTOSI, G_FPTOUI, G_SITOFP, G_UITOFP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Convert between integer and floating point.
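
For example, with illustrative 32-bit and 64-bit types:

.. code-block:: none

  %1:_(s32) = G_FPTOSI %0:_(s32)
  %2:_(s32) = G_FPTOUI %0:_(s32)
  %4:_(s64) = G_SITOFP %3:_(s32)
  %5:_(s64) = G_UITOFP %3:_(s32)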

G_FABS
^^^^^^

Take the absolute value of a floating point value.

G_FCOPYSIGN
^^^^^^^^^^^

Copy the value of the first operand, replacing the sign bit with that of the
second operand.
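
For example, copying the sign of %1 onto the value of %0:

.. code-block:: none

  %2:_(s32) = G_FCOPYSIGN %0:_(s32), %1:_(s32)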

G_FCANONICALIZE
^^^^^^^^^^^^^^^

See :ref:`i_intr_llvm_canonicalize`.

G_IS_FPCLASS
^^^^^^^^^^^^

Tests if the first operand, which must be a floating-point scalar or vector, has
the floating-point class specified by the second operand. The third operand
specifies the floating-point semantics of the tested value. Returns non-zero
(true) or zero (false). It is target-specific whether a true value is 1, ~0U, or
some other non-zero value. If the first operand is a vector, the returned value
is a vector of the same length.

G_FMINNUM
^^^^^^^^^

Perform floating-point minimum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMINNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMAXNUM
^^^^^^^^^

Perform floating-point maximum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMAXNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMINNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point minimum on two values, following the IEEE-754 2008
definition. This differs from FMINNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMAXNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point maximum on two values, following the IEEE-754 2008
definition. This differs from FMAXNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMINIMUM
^^^^^^^^^^

NaN-propagating minimum that also treats -0.0 as less than 0.0. While
FMINNUM_IEEE follows IEEE 754-2008 semantics, FMINIMUM follows IEEE 754-2018
draft semantics.

G_FMAXIMUM
^^^^^^^^^^

NaN-propagating maximum that also treats -0.0 as less than 0.0. While
FMAXNUM_IEEE follows IEEE 754-2008 semantics, FMAXIMUM follows IEEE 754-2018
draft semantics.

G_FADD, G_FSUB, G_FMUL, G_FDIV, G_FREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the specified floating point arithmetic.
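
For example, a 32-bit floating point addition might look like:

.. code-block:: none

  %2:_(s32) = G_FADD %0:_(s32), %1:_(s32)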

G_FMA
^^^^^

Perform a fused multiply add (i.e. without the intermediate rounding step).
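
For example, computing %0 * %1 + %2 with a single rounding:

.. code-block:: none

  %3:_(s32) = G_FMA %0:_(s32), %1:_(s32), %2:_(s32)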

G_FMAD
^^^^^^

Perform a non-fused multiply add (i.e. with the intermediate rounding step).

G_FPOW
^^^^^^

Raise the first operand to the power of the second.

G_FEXP, G_FEXP2
^^^^^^^^^^^^^^^

Calculate the base-e or base-2 exponential of a value.

G_FLOG, G_FLOG2, G_FLOG10
^^^^^^^^^^^^^^^^^^^^^^^^^

Calculate the base-e, base-2, or base-10 logarithm of a value, respectively.

G_FCEIL, G_FCOS, G_FSIN, G_FSQRT, G_FFLOOR, G_FRINT, G_FNEARBYINT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These correspond to the standard C functions of the same name.

G_INTRINSIC_TRUNC
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer not larger in magnitude than
the operand.

G_INTRINSIC_ROUND
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer.

G_LROUND, G_LLROUND
^^^^^^^^^^^^^^^^^^^

Returns the source operand rounded to the nearest integer with ties away from
zero.

See the LLVM LangRef entry on ``llvm.lround.*`` for details on behaviour.

.. code-block:: none

  %rounded_32:_(s32) = G_LROUND %round_me:_(s64)
  %rounded_64:_(s64) = G_LLROUND %round_me:_(s64)

Vector Specific Operations
--------------------------

G_CONCAT_VECTORS
^^^^^^^^^^^^^^^^

Concatenate two vectors to form a longer vector.
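
For example, concatenating two <2 x s32> vectors into a <4 x s32> vector:

.. code-block:: none

  %2:_(<4 x s32>) = G_CONCAT_VECTORS %0:_(<2 x s32>), %1:_(<2 x s32>)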

G_BUILD_VECTOR, G_BUILD_VECTOR_TRUNC
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Create a vector from multiple scalar registers. No implicit
conversion is performed (i.e. the result element type must be the
same as that of all source operands).

The _TRUNC version truncates the larger operand types to fit the
destination vector element type.
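
For example, building a <4 x s32> vector from four s32 scalars:

.. code-block:: none

  %4:_(<4 x s32>) = G_BUILD_VECTOR %0:_(s32), %1:_(s32), %2:_(s32), %3:_(s32)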

G_INSERT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^

Insert an element into a vector.
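
For example, inserting an s32 element at an index held in a scalar register
(the index type shown here is illustrative):

.. code-block:: none

  %3:_(<4 x s32>) = G_INSERT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s32), %2:_(s64)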

G_EXTRACT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^^

Extract an element from a vector.
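
For example, extracting an s32 element at an index held in a scalar register
(the index type shown here is illustrative):

.. code-block:: none

  %2:_(s32) = G_EXTRACT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s64)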

G_SHUFFLE_VECTOR
^^^^^^^^^^^^^^^^

Concatenate two vectors and shuffle the elements according to the mask operand.
The mask operand should be an IR Constant which exactly matches the
corresponding mask for the IR shufflevector instruction.
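
A possible example, assuming the MIR shufflemask syntax; the mask indexes into
the concatenation of the two sources:

.. code-block:: none

  ; Interleave the elements of %0 and %1.
  %2:_(<4 x s32>) = G_SHUFFLE_VECTOR %0:_(<2 x s32>), %1:_(<2 x s32>), shufflemask(0, 2, 1, 3)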

Vector Reduction Operations
---------------------------

These operations represent horizontal vector reduction, producing a scalar result.

G_VECREDUCE_SEQ_FADD, G_VECREDUCE_SEQ_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The SEQ variants perform reductions in sequential order. The first operand is
an initial scalar accumulator value, and the second operand is the vector to reduce.
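
For example, a sequential floating-point add reduction might look like:

.. code-block:: none

  %2:_(s32) = G_VECREDUCE_SEQ_FADD %0:_(s32), %1:_(<4 x s32>)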

G_VECREDUCE_FADD, G_VECREDUCE_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These reductions are relaxed variants which may reduce the elements in any order.

G_VECREDUCE_FMAX, G_VECREDUCE_FMIN
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

FMIN/FMAX nodes can have flags for NaN/NoNaN variants.


Integer/bitwise reductions
^^^^^^^^^^^^^^^^^^^^^^^^^^

* G_VECREDUCE_ADD
* G_VECREDUCE_MUL
* G_VECREDUCE_AND
* G_VECREDUCE_OR
* G_VECREDUCE_XOR
* G_VECREDUCE_SMAX
* G_VECREDUCE_SMIN
* G_VECREDUCE_UMAX
* G_VECREDUCE_UMIN

Integer reductions may have a result type larger than the vector element type.
However, the reduction is performed using the vector element type, and the value
in the top bits is unspecified.

Memory Operations
-----------------

G_LOAD, G_SEXTLOAD, G_ZEXTLOAD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic load. Expects a MachineMemOperand in addition to explicit
operands. If the result size is larger than the memory size, the
high bits are undefined, sign-extended, or zero-extended, respectively.

Only G_LOAD is valid if the result is a vector type. If the result is larger
than the memory size, the high elements are undefined (i.e. this is not a
per-element, vector anyextload).

Unlike in SelectionDAG, atomic loads are expressed with the same
opcodes as regular loads. G_LOAD, G_SEXTLOAD and G_ZEXTLOAD may all
have atomic memory operands.
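
For example, a 32-bit load and two 16-bit extending loads might look like this
(the exact MachineMemOperand syntax can vary, and %ir.addr is an illustrative
IR value name):

.. code-block:: none

  %1:_(s32) = G_LOAD %0:_(p0) :: (load (s32) from %ir.addr)
  %2:_(s32) = G_SEXTLOAD %0:_(p0) :: (load (s16) from %ir.addr)
  %3:_(s32) = G_ZEXTLOAD %0:_(p0) :: (load (s16) from %ir.addr)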

G_INDEXED_LOAD
^^^^^^^^^^^^^^

Generic indexed load. Combines a GEP with a load. $newaddr is set to $base + $offset.
If $am is 0 (post-indexed), then the value is loaded from $base; if $am is 1
(pre-indexed), then the value is loaded from $newaddr.

G_INDEXED_SEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is sign-extending, as with G_SEXTLOAD.

G_INDEXED_ZEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is zero-extending, as with G_ZEXTLOAD.

G_STORE
^^^^^^^

Generic store. Expects a MachineMemOperand in addition to explicit
operands. If the stored value size is greater than the memory size,
the high bits are implicitly truncated. If this is a vector store, the
high elements are discarded (i.e. this does not function as a per-lane
vector, truncating store).
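
For example, a 32-bit store might look like this (the exact MachineMemOperand
syntax can vary, and %ir.addr is an illustrative IR value name):

.. code-block:: none

  G_STORE %0:_(s32), %1:_(p0) :: (store (s32) into %ir.addr)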

G_INDEXED_STORE
^^^^^^^^^^^^^^^

Combines a store with a GEP. See the description of G_INDEXED_LOAD for indexing
behaviour.

G_ATOMIC_CMPXCHG_WITH_SUCCESS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomic cmpxchg with internal success check. Expects a
MachineMemOperand in addition to explicit operands.
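
A sketch of its form, with the MachineMemOperand omitted for brevity:

.. code-block:: none

  %old:_(s32), %success:_(s1) = G_ATOMIC_CMPXCHG_WITH_SUCCESS %addr(p0), %cmp, %newval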

G_ATOMIC_CMPXCHG
^^^^^^^^^^^^^^^^

Generic atomic cmpxchg. Expects a MachineMemOperand in addition to explicit
operands.

G_ATOMICRMW_XCHG, G_ATOMICRMW_ADD, G_ATOMICRMW_SUB, G_ATOMICRMW_AND,
G_ATOMICRMW_NAND, G_ATOMICRMW_OR, G_ATOMICRMW_XOR, G_ATOMICRMW_MAX,
G_ATOMICRMW_MIN, G_ATOMICRMW_UMAX, G_ATOMICRMW_UMIN, G_ATOMICRMW_FADD,
G_ATOMICRMW_FSUB, G_ATOMICRMW_FMAX, G_ATOMICRMW_FMIN
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomicrmw. Expects a MachineMemOperand in addition to explicit
operands.
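
A sketch of an atomic add, with the MachineMemOperand omitted for brevity:

.. code-block:: none

  %old:_(s32) = G_ATOMICRMW_ADD %addr(p0), %val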

G_FENCE
^^^^^^^

.. caution::

  I couldn't find any documentation on this at the time of writing.

G_MEMCPY
^^^^^^^^

Generic memcpy. Expects two MachineMemOperands covering the store and load,
respectively, in addition to explicit operands.

G_MEMCPY_INLINE
^^^^^^^^^^^^^^^

Generic inlined memcpy. Like G_MEMCPY, but it is guaranteed that this version
will not be lowered as a call to an external function. Currently the size
operand is required to evaluate as a constant (not an immediate), though that is
expected to change when llvm.memcpy.inline is taught to support dynamic sizes.

G_MEMMOVE
^^^^^^^^^

Generic memmove. Similar to G_MEMCPY, but the source and destination memory
ranges are allowed to overlap.

G_MEMSET
^^^^^^^^

Generic memset. Expects a MachineMemOperand in addition to explicit operands.

G_BZERO
^^^^^^^

Generic bzero. Expects a MachineMemOperand in addition to explicit operands.

Control Flow
------------

G_PHI
^^^^^

Implement the φ node in the SSA graph representing the function.

.. code-block:: none

  %dst(s8) = G_PHI %src1(s8), %bb.<id1>, %src2(s8), %bb.<id2>

G_BR
^^^^

Unconditional branch.

.. code-block:: none

  G_BR %bb.<id>

G_BRCOND
^^^^^^^^

Conditional branch.

.. code-block:: none

  G_BRCOND %condition, %bb.<id>

G_BRINDIRECT
^^^^^^^^^^^^

Indirect branch.

.. code-block:: none

  G_BRINDIRECT %src(p0)

G_BRJT
^^^^^^

Indirect branch to a jump table entry.

.. code-block:: none

  G_BRJT %ptr(p0), %jti, %idx(s64)

G_JUMP_TABLE
^^^^^^^^^^^^

Generates a pointer to the address of the jump table specified by the source
operand. The source operand is a jump table index.
G_JUMP_TABLE can be used in conjunction with G_BRJT to support jump table
codegen with GlobalISel.

.. code-block:: none

  %dst:_(p0) = G_JUMP_TABLE %jump-table.0

The above example generates a pointer to the source jump table index.


G_INTRINSIC, G_INTRINSIC_W_SIDE_EFFECTS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Call an intrinsic.

The _W_SIDE_EFFECTS version is considered to have unknown side-effects and
as such cannot be reordered across other side-effecting instructions.

.. note::

  Unlike SelectionDAG, there is no _VOID variant. Both of these are permitted
  to have zero, one, or multiple results.
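
An illustrative example, using AMDGPU intrinsics (the operand is an
``intrinsic(...)`` reference followed by the call arguments, if any):

.. code-block:: none

  %0:_(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.workitem.id.x)
  G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.s.barrier)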

Variadic Arguments
------------------

G_VASTART
^^^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

G_VAARG
^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

Other Operations
----------------

G_DYN_STACKALLOC
^^^^^^^^^^^^^^^^

Dynamically allocates stack space of the specified size with the specified
alignment, returning a pointer to the allocated space. An alignment value of
`0` or `1` means no specific alignment.

.. code-block:: none

  %8:_(p0) = G_DYN_STACKALLOC %7(s64), 32

Optimization Hints
------------------

These instructions do not correspond to any target instructions. They act as
hints for various combines.

G_ASSERT_SEXT, G_ASSERT_ZEXT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This signifies that the contents of a register were previously extended from a
smaller type.

The smaller type is denoted using an immediate operand. For scalars, this is the
width of the entire smaller type. For vectors, this is the width of the smaller
element type.

.. code-block:: none

  %x_was_zexted:_(s32) = G_ASSERT_ZEXT %x(s32), 16
  %y_was_zexted:_(<2 x s32>) = G_ASSERT_ZEXT %y(<2 x s32>), 16

  %z_was_sexted:_(s32) = G_ASSERT_SEXT %z(s32), 8

G_ASSERT_SEXT and G_ASSERT_ZEXT act like copies, albeit with some restrictions.

The source and destination registers must:

- Be virtual
- Belong to the same register class
- Belong to the same register bank

It should always be safe to:

- Look through the source register
- Replace the destination register with the source register