.. _gmir-opcodes:

Generic Opcodes
===============

.. contents::
   :local:

.. note::

  This documentation does not yet fully account for vectors. Many of the
  scalar/integer/floating-point operations can also take vectors.

Constants
---------

G_IMPLICIT_DEF
^^^^^^^^^^^^^^

An undefined value.

.. code-block:: none

  %0:_(s32) = G_IMPLICIT_DEF

G_CONSTANT
^^^^^^^^^^

An integer constant.

.. code-block:: none

  %0:_(s32) = G_CONSTANT i32 1

G_FCONSTANT
^^^^^^^^^^^

A floating point constant.

.. code-block:: none

  %0:_(s32) = G_FCONSTANT float 1.0

G_FRAME_INDEX
^^^^^^^^^^^^^

The address of an object in the stack frame.

.. code-block:: none

  %1:_(p0) = G_FRAME_INDEX %stack.0.ptr0

G_GLOBAL_VALUE
^^^^^^^^^^^^^^

The address of a global value.

.. code-block:: none

  %0(p0) = G_GLOBAL_VALUE @var_local

G_BLOCK_ADDR
^^^^^^^^^^^^

The address of a basic block.

.. code-block:: none

  %0:_(p0) = G_BLOCK_ADDR blockaddress(@test_blockaddress, %ir-block.block)

Integer Extension and Truncation
--------------------------------

G_ANYEXT
^^^^^^^^

Extend the underlying scalar type of an operation, leaving the high bits
unspecified.

.. code-block:: none

  %1:_(s32) = G_ANYEXT %0:_(s16)

G_SEXT
^^^^^^

Sign extend the underlying scalar type of an operation, copying the sign bit
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_SEXT %0:_(s16)

G_SEXT_INREG
^^^^^^^^^^^^

Sign extend the value from an arbitrary bit position, copying the sign bit
into all bits above it. This is equivalent to a shl + ashr pair with an
appropriate shift amount. $sz is an immediate (MachineOperand::isImm()
returns true) to allow targets to have some bitwidths legal and others
lowered. This opcode is particularly useful if the target has sign-extension
instructions that are cheaper than the constituent shifts as the optimizer is
able to make decisions on whether it's better to hang on to the G_SEXT_INREG
or to lower it and optimize the individual shifts.

.. code-block:: none

  %1:_(s32) = G_SEXT_INREG %0:_(s32), 16

G_ZEXT
^^^^^^

Zero extend the underlying scalar type of an operation, putting zero bits
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_ZEXT %0:_(s16)

G_TRUNC
^^^^^^^

Truncate the underlying scalar type of an operation. This is equivalent to
G_EXTRACT for scalar types, but acts elementwise on vectors.

.. code-block:: none

  %1:_(s16) = G_TRUNC %0:_(s32)

Type Conversions
----------------

G_INTTOPTR
^^^^^^^^^^

Convert an integer to a pointer.

.. code-block:: none

  %1:_(p0) = G_INTTOPTR %0:_(s32)

G_PTRTOINT
^^^^^^^^^^

Convert a pointer to an integer.

.. code-block:: none

  %1:_(s32) = G_PTRTOINT %0:_(p0)

G_BITCAST
^^^^^^^^^

Reinterpret a value as a new type. This is usually done without
changing any bits, but this is not always the case due to a subtlety in the
definition of the :ref:`LLVM-IR Bitcast Instruction <i_bitcast>`. It
is allowed to bitcast between pointers with the same size, but
different address spaces.

.. code-block:: none

  %1:_(s64) = G_BITCAST %0:_(<2 x s32>)

G_ADDRSPACE_CAST
^^^^^^^^^^^^^^^^

Convert a pointer to an address space to a pointer to another address space.

.. code-block:: none

  %1:_(p1) = G_ADDRSPACE_CAST %0:_(p0)

.. caution::

  :ref:`i_addrspacecast` doesn't mention what happens if the cast is simply
  invalid (i.e. if the address spaces are disjoint).

Scalar Operations
-----------------

G_EXTRACT
^^^^^^^^^

Extract a register of the specified size, starting from the block given by
index. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.

.. code-block:: none

  %3:_(s32) = G_EXTRACT %2:_(s64), 32

G_INSERT
^^^^^^^^

Insert a smaller register into a larger one at the specified bit-index.

.. code-block:: none

  %2:_(s64) = G_INSERT %0:_(s64), %1:_(s32), 0

G_MERGE_VALUES
^^^^^^^^^^^^^^

Concatenate multiple registers of the same size into a wider register.
The input operands are always ordered from lowest bits to highest:

.. code-block:: none

  %0:(s32) = G_MERGE_VALUES %bits_0_7:(s8), %bits_8_15:(s8),
                            %bits_16_23:(s8), %bits_24_31:(s8)

G_UNMERGE_VALUES
^^^^^^^^^^^^^^^^

Extract multiple registers of the specified size, starting from blocks given by
indexes. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.
The output operands are always ordered from lowest bits to highest:

.. code-block:: none

  %bits_0_7:(s8), %bits_8_15:(s8),
      %bits_16_23:(s8), %bits_24_31:(s8) = G_UNMERGE_VALUES %0:(s32)

G_BSWAP
^^^^^^^

Reverse the order of the bytes in a scalar.

.. code-block:: none

  %1:_(s32) = G_BSWAP %0:_(s32)

G_BITREVERSE
^^^^^^^^^^^^

Reverse the order of the bits in a scalar.

.. code-block:: none

  %1:_(s32) = G_BITREVERSE %0:_(s32)

G_SBFX, G_UBFX
^^^^^^^^^^^^^^

Extract a range of bits from a register.

The source operands are registers as follows:

- Source
- The least-significant bit for the extraction
- The width of the extraction

The least-significant bit (lsb) and width operands are in the range:

::

      0 <= lsb < lsb + width <= source bitwidth, where all values are unsigned

G_SBFX sign-extends the result, while G_UBFX zero-extends the result.

.. code-block:: none

  ; Extract 5 bits starting at bit 1 from %x and store them in %a.
  ; Sign-extend the result.
  ;
  ; Example:
  ; %x = 0...0000[10110]1 ---> %a = 1...111111[10110]
  %lsb_one = G_CONSTANT i32 1
  %width_five = G_CONSTANT i32 5
  %a:_(s32) = G_SBFX %x, %lsb_one, %width_five

  ; Extract 3 bits starting at bit 2 from %x and store them in %b. Zero-extend
  ; the result.
  ;
  ; Example:
  ; %x = 1...11111[100]11 ---> %b = 0...00000[100]
  %lsb_two = G_CONSTANT i32 2
  %width_three = G_CONSTANT i32 3
  %b:_(s32) = G_UBFX %x, %lsb_two, %width_three

Integer Operations
-------------------

G_ADD, G_SUB, G_MUL, G_AND, G_OR, G_XOR, G_SDIV, G_UDIV, G_SREM, G_UREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These each perform their respective integer arithmetic on a scalar.

.. code-block:: none

  %dst:_(s32) = G_ADD %src0:_(s32), %src1:_(s32)

The above example adds %src1 to %src0 and stores the result in %dst.

G_SDIVREM, G_UDIVREM
^^^^^^^^^^^^^^^^^^^^

Perform integer division and remainder, producing two results.

.. code-block:: none

  %div:_(s32), %rem:_(s32) = G_SDIVREM %0:_(s32), %1:_(s32)

G_SADDSAT, G_UADDSAT, G_SSUBSAT, G_USUBSAT, G_SSHLSAT, G_USHLSAT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Signed and unsigned addition, subtraction and left shift with saturation.

.. code-block:: none

  %2:_(s32) = G_SADDSAT %0:_(s32), %1:_(s32)

G_SHL, G_LSHR, G_ASHR
^^^^^^^^^^^^^^^^^^^^^

Shift the bits of a scalar left or right, inserting zeros (the sign bit in the
case of G_ASHR).
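
For example, a left shift might look like the following (register numbers and
bit widths are illustrative):

.. code-block:: none

  %2:_(s32) = G_SHL %0:_(s32), %1:_(s32)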

G_ROTR, G_ROTL
^^^^^^^^^^^^^^

Rotate the bits right (G_ROTR) or left (G_ROTL).
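
For example (the rotate amount is a register operand; types shown are
illustrative):

.. code-block:: none

  %2:_(s32) = G_ROTR %0:_(s32), %1:_(s32)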

G_ICMP
^^^^^^

Perform integer comparison producing non-zero (true) or zero (false). It's
target specific whether a true value is 1, ~0U, or some other non-zero value.
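
For example, an equality comparison might look like the following (the
predicate and types are illustrative):

.. code-block:: none

  %2:_(s1) = G_ICMP intpred(eq), %0:_(s32), %1:_(s32)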

G_SELECT
^^^^^^^^

Select between two values depending on a zero/non-zero value.

.. code-block:: none

  %5:_(s32) = G_SELECT %4(s1), %6, %2

G_PTR_ADD
^^^^^^^^^

Add a scalar offset in addressable units to a pointer. Addressable units are
typically bytes but this may vary between targets.

.. code-block:: none

  %2:_(p0) = G_PTR_ADD %0:_(p0), %1:_(s32)

.. caution::

  There are currently no in-tree targets that use this with addressable units
  not equal to 8 bit.

G_PTRMASK
^^^^^^^^^^

Zero out an arbitrary mask of bits of a pointer. The mask type must be
an integer, and the number of vector elements must match for all
operands. This corresponds to `i_intr_llvm_ptrmask`.

.. code-block:: none

  %2:_(p0) = G_PTRMASK %0, %1

G_SMIN, G_SMAX, G_UMIN, G_UMAX
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Take the minimum/maximum of two values.

.. code-block:: none

  %5:_(s32) = G_SMIN %6, %2

G_ABS
^^^^^

Take the absolute value of a signed integer. The absolute value of the minimum
negative value (e.g. the 8-bit value `0x80`) is defined to be itself.

.. code-block:: none

  %1:_(s32) = G_ABS %0

G_UADDO, G_SADDO, G_USUBO, G_SSUBO, G_SMULO, G_UMULO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and produce a carry output in addition to the
normal result.

.. code-block:: none

  %3:_(s32), %4:_(s1) = G_UADDO %0, %1

G_UADDE, G_SADDE, G_USUBE, G_SSUBE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and consume a carry input in addition to the
normal input. Also produce a carry output in addition to the normal result.

.. code-block:: none

  %4:_(s32), %5:_(s1) = G_UADDE %0, %1, %3:_(s1)

G_UMULH, G_SMULH
^^^^^^^^^^^^^^^^

Multiply two numbers at twice the incoming bit width (unsigned and signed
respectively) and return the high half of the result.

.. code-block:: none

  %3:_(s32) = G_UMULH %0, %1

G_CTLZ, G_CTTZ, G_CTPOP
^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros, trailing zeros, or number of set bits.

.. code-block:: none

  %2:_(s33) = G_CTLZ %1
  %2:_(s33) = G_CTTZ %1
  %2:_(s33) = G_CTPOP %1

G_CTLZ_ZERO_UNDEF, G_CTTZ_ZERO_UNDEF
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros or trailing zeros. If the value is zero then the result is
undefined.

.. code-block:: none

  %2:_(s33) = G_CTLZ_ZERO_UNDEF %1
  %2:_(s33) = G_CTTZ_ZERO_UNDEF %1

Floating Point Operations
-------------------------

G_FCMP
^^^^^^

Perform floating point comparison producing non-zero (true) or zero
(false). It's target specific whether a true value is 1, ~0U, or some other
non-zero value.
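
For example, an ordered greater-than-or-equal comparison might look like the
following (the predicate and types are illustrative):

.. code-block:: none

  %2:_(s1) = G_FCMP floatpred(oge), %0:_(s32), %1:_(s32)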

G_FNEG
^^^^^^

Floating point negation.
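
For example (types are illustrative):

.. code-block:: none

  %1:_(s32) = G_FNEG %0:_(s32)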

G_FPEXT
^^^^^^^

Convert a floating point value to a larger type.
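
For example, extending from single to double precision might look like the
following (types are illustrative):

.. code-block:: none

  %1:_(s64) = G_FPEXT %0:_(s32)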

G_FPTRUNC
^^^^^^^^^

Convert a floating point value to a narrower type.

G_FPTOSI, G_FPTOUI, G_SITOFP, G_UITOFP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Convert between integer and floating point.
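
For example, converting between a 32-bit float and signed integers (the
particular widths are illustrative):

.. code-block:: none

  %1:_(s32) = G_FPTOSI %0:_(s32)
  %3:_(s64) = G_SITOFP %2:_(s32)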

G_FABS
^^^^^^

Take the absolute value of a floating point value.

G_FCOPYSIGN
^^^^^^^^^^^

Copy the value of the first operand, replacing the sign bit with that of the
second operand.

G_FCANONICALIZE
^^^^^^^^^^^^^^^

See :ref:`i_intr_llvm_canonicalize`.

G_FMINNUM
^^^^^^^^^

Perform floating-point minimum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMINNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMAXNUM
^^^^^^^^^

Perform floating-point maximum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMAXNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMINNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point minimum on two values, following the IEEE-754 2008
definition. This differs from FMINNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMAXNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point maximum on two values, following the IEEE-754 2008
definition. This differs from FMAXNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMINIMUM
^^^^^^^^^^

NaN-propagating minimum that also treats -0.0 as less than 0.0. While
FMINNUM_IEEE follows IEEE 754-2008 semantics, FMINIMUM follows the IEEE
754-2018 draft semantics.

G_FMAXIMUM
^^^^^^^^^^

NaN-propagating maximum that also treats -0.0 as less than 0.0. While
FMAXNUM_IEEE follows IEEE 754-2008 semantics, FMAXIMUM follows the IEEE
754-2018 draft semantics.

G_FADD, G_FSUB, G_FMUL, G_FDIV, G_FREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the specified floating point arithmetic.
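
For example (types are illustrative):

.. code-block:: none

  %2:_(s32) = G_FADD %0:_(s32), %1:_(s32)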

G_FMA
^^^^^

Perform a fused multiply add (i.e. without the intermediate rounding step).
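
For example, the following computes %0 * %1 + %2 with a single rounding step
(types are illustrative):

.. code-block:: none

  %3:_(s32) = G_FMA %0:_(s32), %1:_(s32), %2:_(s32)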

G_FMAD
^^^^^^

Perform a non-fused multiply add (i.e. with the intermediate rounding step).

G_FPOW
^^^^^^

Raise the first operand to the power of the second.

G_FEXP, G_FEXP2
^^^^^^^^^^^^^^^

Calculate the base-e or base-2 exponential of a value.

G_FLOG, G_FLOG2, G_FLOG10
^^^^^^^^^^^^^^^^^^^^^^^^^

Calculate the base-e, base-2, or base-10 logarithm of a value, respectively.

G_FCEIL, G_FCOS, G_FSIN, G_FSQRT, G_FFLOOR, G_FRINT, G_FNEARBYINT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These correspond to the standard C functions of the same name.

G_INTRINSIC_TRUNC
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer not larger in magnitude
than the operand.

G_INTRINSIC_ROUND
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer.

G_LROUND, G_LLROUND
^^^^^^^^^^^^^^^^^^^

Returns the source operand rounded to the nearest integer with ties away from
zero.

See the LLVM LangRef entry on '``llvm.lround.*``' for details on behaviour.

.. code-block:: none

  %rounded_32:_(s32) = G_LROUND %round_me:_(s64)
  %rounded_64:_(s64) = G_LLROUND %round_me:_(s64)

Vector Specific Operations
--------------------------

G_CONCAT_VECTORS
^^^^^^^^^^^^^^^^

Concatenate two vectors to form a longer vector.
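
For example (element counts and types are illustrative):

.. code-block:: none

  %2:_(<8 x s16>) = G_CONCAT_VECTORS %0:_(<4 x s16>), %1:_(<4 x s16>)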

G_BUILD_VECTOR, G_BUILD_VECTOR_TRUNC
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Create a vector from multiple scalar registers. No implicit
conversion is performed (i.e. the result element type must be the
same as all source operands).

The _TRUNC version truncates the larger operand types to fit the
destination vector element type.
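
For example (element counts and types are illustrative):

.. code-block:: none

  %4:_(<4 x s32>) = G_BUILD_VECTOR %0:_(s32), %1:_(s32), %2:_(s32), %3:_(s32)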

G_INSERT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^

Insert an element into a vector.
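
For example, inserting %1 at the index held in %2 (the index operand type is
target-dependent; types shown are illustrative):

.. code-block:: none

  %3:_(<4 x s32>) = G_INSERT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s32), %2:_(s64)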

G_EXTRACT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^^

Extract an element from a vector.
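
For example, extracting the element at the index held in %1 (the index operand
type is target-dependent; types shown are illustrative):

.. code-block:: none

  %2:_(s32) = G_EXTRACT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s64)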

G_SHUFFLE_VECTOR
^^^^^^^^^^^^^^^^

Concatenate two vectors and shuffle the elements according to the mask operand.
The mask operand should be an IR Constant which exactly matches the
corresponding mask for the IR shufflevector instruction.
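
For example, selecting the low halves of two vectors (the mask values and types
are illustrative):

.. code-block:: none

  %2:_(<4 x s32>) = G_SHUFFLE_VECTOR %0:_(<4 x s32>), %1:_(<4 x s32>), shufflemask(0, 1, 4, 5)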

Vector Reduction Operations
---------------------------

These operations represent horizontal vector reduction, producing a scalar
result.

G_VECREDUCE_SEQ_FADD, G_VECREDUCE_SEQ_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The SEQ variants perform reductions in sequential order. The first operand is
an initial scalar accumulator value, and the second operand is the vector to
reduce.
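
For example (register names and types are illustrative):

.. code-block:: none

  %res:_(s32) = G_VECREDUCE_SEQ_FADD %start:_(s32), %vec:_(<4 x s32>)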

G_VECREDUCE_FADD, G_VECREDUCE_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These reductions are relaxed variants which may reduce the elements in any
order.

G_VECREDUCE_FMAX, G_VECREDUCE_FMIN
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

FMIN/FMAX nodes can have flags for NaN/NoNaN variants.


Integer/bitwise reductions
^^^^^^^^^^^^^^^^^^^^^^^^^^

* G_VECREDUCE_ADD
* G_VECREDUCE_MUL
* G_VECREDUCE_AND
* G_VECREDUCE_OR
* G_VECREDUCE_XOR
* G_VECREDUCE_SMAX
* G_VECREDUCE_SMIN
* G_VECREDUCE_UMAX
* G_VECREDUCE_UMIN

Integer reductions may have a result type larger than the vector element type.
However, the reduction is performed using the vector element type and the value
in the top bits is unspecified.

Memory Operations
-----------------

G_LOAD, G_SEXTLOAD, G_ZEXTLOAD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic load. Expects a MachineMemOperand in addition to explicit
operands. If the result size is larger than the memory size, the
high bits are undefined, sign-extended, or zero-extended respectively.

Only G_LOAD is valid if the result is a vector type. If the result is larger
than the memory size, the high elements are undefined (i.e. this is not a
per-element, vector anyextload).
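
For example, a plain load and a sign-extending load might look like the
following (the MachineMemOperands after the ``::`` are illustrative):

.. code-block:: none

  %1:_(s32) = G_LOAD %0:_(p0) :: (load (s32))
  %2:_(s32) = G_SEXTLOAD %0:_(p0) :: (load (s16))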

G_INDEXED_LOAD
^^^^^^^^^^^^^^

Generic indexed load. Combines a GEP with a load. $newaddr is set to $base +
$offset. If $am is 0 (post-indexed), then the value is loaded from $base; if
$am is 1 (pre-indexed) then the value is loaded from $newaddr.

G_INDEXED_SEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is sign-extending, as
with G_SEXTLOAD.

G_INDEXED_ZEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is zero-extending, as
with G_ZEXTLOAD.

G_STORE
^^^^^^^

Generic store. Expects a MachineMemOperand in addition to explicit
operands. If the stored value size is greater than the memory size,
the high bits are implicitly truncated. If this is a vector store, the
high elements are discarded (i.e. this does not function as a per-lane
vector, truncating store).
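
For example (the MachineMemOperand after the ``::`` is illustrative):

.. code-block:: none

  G_STORE %1:_(s32), %0:_(p0) :: (store (s32))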

G_INDEXED_STORE
^^^^^^^^^^^^^^^

Combines a store with a GEP. See description of G_INDEXED_LOAD for indexing
behaviour.

G_ATOMIC_CMPXCHG_WITH_SUCCESS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomic cmpxchg with internal success check. Expects a
MachineMemOperand in addition to explicit operands.

G_ATOMIC_CMPXCHG
^^^^^^^^^^^^^^^^

Generic atomic cmpxchg. Expects a MachineMemOperand in addition to explicit
operands.

G_ATOMICRMW_XCHG, G_ATOMICRMW_ADD, G_ATOMICRMW_SUB, G_ATOMICRMW_AND, G_ATOMICRMW_NAND, G_ATOMICRMW_OR, G_ATOMICRMW_XOR, G_ATOMICRMW_MAX, G_ATOMICRMW_MIN, G_ATOMICRMW_UMAX, G_ATOMICRMW_UMIN, G_ATOMICRMW_FADD, G_ATOMICRMW_FSUB
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomicrmw. Expects a MachineMemOperand in addition to explicit
operands.
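
For example, an atomic add that returns the original value (the memory
ordering and MachineMemOperand shown are illustrative):

.. code-block:: none

  %2:_(s32) = G_ATOMICRMW_ADD %0:_(p0), %1:_(s32) :: (load store monotonic (s32))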

G_FENCE
^^^^^^^

.. caution::

  I couldn't find any documentation on this at the time of writing.

G_MEMCPY
^^^^^^^^

Generic memcpy. Expects two MachineMemOperands covering the store and load
respectively, in addition to explicit operands.

G_MEMCPY_INLINE
^^^^^^^^^^^^^^^

Generic inlined memcpy. Like G_MEMCPY, but it is guaranteed that this version
will not be lowered as a call to an external function. Currently the size
operand is required to evaluate as a constant (not an immediate), though that
is expected to change when llvm.memcpy.inline is taught to support dynamic
sizes.

G_MEMMOVE
^^^^^^^^^

Generic memmove. Similar to G_MEMCPY, but the source and destination memory
ranges are allowed to overlap.

G_MEMSET
^^^^^^^^

Generic memset. Expects a MachineMemOperand in addition to explicit operands.

G_BZERO
^^^^^^^

Generic bzero. Expects a MachineMemOperand in addition to explicit operands.

Control Flow
------------

G_PHI
^^^^^

Implement the φ node in the SSA graph representing the function.

.. code-block:: none

  %dst(s8) = G_PHI %src1(s8), %bb.<id1>, %src2(s8), %bb.<id2>

G_BR
^^^^

Unconditional branch.

.. code-block:: none

  G_BR %bb.<id>

G_BRCOND
^^^^^^^^

Conditional branch.

.. code-block:: none

  G_BRCOND %condition, %basicblock.<id>

G_BRINDIRECT
^^^^^^^^^^^^

Indirect branch.

.. code-block:: none

  G_BRINDIRECT %src(p0)

G_BRJT
^^^^^^

Indirect branch to a jump table entry.

.. code-block:: none

  G_BRJT %ptr(p0), %jti, %idx(s64)

G_JUMP_TABLE
^^^^^^^^^^^^

Generates a pointer to the address of the jump table specified by the source
operand. The source operand is a jump table index.
G_JUMP_TABLE can be used in conjunction with G_BRJT to support jump table
codegen with GlobalISel.

.. code-block:: none

  %dst:_(p0) = G_JUMP_TABLE %jump-table.0

The above example generates a pointer to the source jump table index.


G_INTRINSIC, G_INTRINSIC_W_SIDE_EFFECTS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Call an intrinsic.

The _W_SIDE_EFFECTS version is considered to have unknown side-effects and
as such cannot be reordered across other side-effecting instructions.

.. note::

  Unlike SelectionDAG, there is no _VOID variant. Both of these are permitted
  to have zero, one, or multiple results.

Variadic Arguments
------------------

G_VASTART
^^^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

G_VAARG
^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

Other Operations
----------------

G_DYN_STACKALLOC
^^^^^^^^^^^^^^^^

Dynamically realigns the stack pointer to the specified size and alignment.
An alignment value of `0` or `1` means no specific alignment.

.. code-block:: none

  %8:_(p0) = G_DYN_STACKALLOC %7(s64), 32

Optimization Hints
------------------

These instructions do not correspond to any target instructions. They act as
hints for various combines.

G_ASSERT_SEXT, G_ASSERT_ZEXT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This signifies that the contents of a register were previously extended from a
smaller type.

The smaller type is denoted using an immediate operand. For scalars, this is
the width of the entire smaller type. For vectors, this is the width of the
smaller element type.

.. code-block:: none

  %x_was_zexted:_(s32) = G_ASSERT_ZEXT %x(s32), 16
  %y_was_zexted:_(<2 x s32>) = G_ASSERT_ZEXT %y(<2 x s32>), 16

  %z_was_sexted:_(s32) = G_ASSERT_SEXT %z(s32), 8

G_ASSERT_SEXT and G_ASSERT_ZEXT act like copies, albeit with some restrictions.

The source and destination registers must

- Be virtual
- Belong to the same register class
- Belong to the same register bank

It should always be safe to

- Look through the source register
- Replace the destination register with the source register