Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6 |

# 0f45eaf0 | 18-Jun-2022 | luxufan <[email protected]>
[RISCV] Add a scavenge spill slot when using ADDI to compute a scalable stack offset
Computing a scalable offset needs up to two scratch registers. We add scavenge spill slots according to the result of `RISCV::isRVVSpill` and `RVVStackSize`. Since ADDI is not included in `RISCV::isRVVSpill`, PEI doesn't add scavenge spill slots for scratch registers when ADDI is used to get scalable stack offsets.
The ADDI instruction has a destination register which can be used as a scratch register, so one scavenge spill slot is sufficient for computing scalable stack offsets.
Differential Revision: https://reviews.llvm.org/D128188
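
A minimal sketch (an assumed example, not taken from the patch): a frame holding both a fixed-size and a scalable alloca forces the backend to compute a scalable stack offset when the address of the RVV object is taken, which is where the ADDI over a scalable frame index shows up.

  ; Hypothetical input; compiling with llc -mtriple=riscv64 -mattr=+v reaches
  ; the frame-index elimination path discussed above.
  define ptr @scalable_frame_addr() {
  entry:
    %buf = alloca [64 x i8], align 8
    %rvv = alloca <vscale x 4 x i32>, align 16
    ret ptr %rvv
  }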

# d63b6684 | 12-Jun-2022 | Craig Topper <[email protected]>
[RISCV] Move some methods out of RISCVInstrInfo and into RISCV namespace.
These methods don't access any state from RISCVInstrInfo. Make them free functions in the RISCV namespace.
Reviewed By: frasercrmck
Differential Revision: https://reviews.llvm.org/D127583
Revision tags: llvmorg-14.0.5, llvmorg-14.0.4 |

# 7dbf2e7b | 16-May-2022 | Philip Reames <[email protected]>
Teach PeepholeOpt to eliminate redundant copy from constant physreg (e.g. VLENB on RISCV)
The existing redundant copy elimination required a virtual register source, but the same logic works for any physreg where we don't have to worry about clobbers. On RISCV, this helps eliminate redundant CSR reads from VLENB.
Differential Revision: https://reviews.llvm.org/D125564

# af5e09b7 | 13-May-2022 | Philip Reames <[email protected]>
[RISCV] Add llvm.read.register support for vlenb
This patch adds minimal support for lowering a read.register intrinsic with vlenb as the argument. Note that vlenb is an implementation constant, so it is never allocatable.
This was split off from a patch to eventually replace PseudoReadVLENB with a COPY MI, because doing so revealed a couple of optimization opportunities which really seemed to warrant individual patches and tests. To write those patches, I need a way to write the tests involving vlenb, and read.register seemed like the right testing hook.
Differential Revision: https://reviews.llvm.org/D125552
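
A minimal sketch of the testing hook described above (an assumed example rather than a test from the patch): the generic read_register intrinsic with a "vlenb" metadata string.

  ; Read vlenb through llvm.read_register; the register is a constant for a
  ; given implementation and is never allocatable.
  define i64 @read_vlenb() {
    %v = call i64 @llvm.read_register.i64(metadata !0)
    ret i64 %v
  }

  declare i64 @llvm.read_register.i64(metadata)

  !0 = !{!"vlenb"}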
Revision tags: llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3 |

# fbce4a78 | 06-Mar-2022 | Benjamin Kramer <[email protected]>
Drop some more global std::maps. NFCI.
Revision tags: llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1 |

# ffe8720a | 02-Feb-2022 | serge-sans-paille <[email protected]>
Reduce dependencies on llvm/BinaryFormat/Dwarf.h
This header is very large (3M lines once expanded) and was included in locations where DWARF-specific information was not needed.
More specifically, this commit suppresses the dependencies on llvm/BinaryFormat/Dwarf.h in two headers: llvm/IR/IRBuilder.h and llvm/IR/DebugInfoMetadata.h. As these headers (esp. the former) are widely used, this has a decent impact on the number of preprocessed lines generated during compilation of LLVM, as showcased below.
This is achieved by moving some definitions back to the .cpp file, with no performance impact implied [0].
As a consequence of this patch, downstream users may need to manually include some extra files:
llvm/IR/IRBuilder.h no longer includes llvm/BinaryFormat/Dwarf.h
llvm/IR/DebugInfoMetadata.h no longer includes llvm/BinaryFormat/Dwarf.h
In some situations, code may be relying on the fact that llvm/BinaryFormat/Dwarf.h was including llvm/ADT/Triple.h; this hidden dependency now needs to be made explicit.
$ clang++ -E -Iinclude -I../llvm/include ../llvm/lib/Transforms/Scalar/*.cpp -std=c++14 -fno-rtti -fno-exceptions | wc -l
after: 10978519
before: 11245451
Related Discourse thread: https://llvm.discourse.group/t/include-what-you-use-include-cleanup
[0] https://llvm-compile-time-tracker.com/compare.php?from=fa7145dfbf94cb93b1c3e610582c495cb806569b&to=995d3e326ee1d9489145e20762c65465a9caeab4&stat=instructions
Differential Revision: https://reviews.llvm.org/D118781
Revision tags: llvmorg-15-init |

# 8def89b5 | 21-Jan-2022 | wangpc <[email protected]>
[RISCV] Set CostPerUse to 1 iff RVC is enabled
After D86836, we can define multiple cost values for different cost models. So here we set CostPerUse to 1 iff RVC is enabled to avoid potential impact on RA.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D117741
Revision tags: llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |

# 5861cf77 | 10-Dec-2021 | Craig Topper <[email protected]>
[RISCV] Remove FCSR from RISCVRegisterInfo.
We only used this to mark it as a reserved register. But that's not important if we don't do anything else with it.
I think if we were ever to do anything with it, we would need to model it as a super register of FRM and FFLAGS. But it might be easier to reference both FRM and FFLAGS in implicit defs/uses for anything we were to do with "fcsr".
Reviewed By: sepavloff
Differential Revision: https://reviews.llvm.org/D115455
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2 |

# b0c74215 | 04-Aug-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Emit DWARF location expression for RVV stack objects.
VLENB is the length of a vector register in bytes. We use <vscale x 64 bits> to represent one vector register. The DWARF offset is VLENB * scalable_offset / 8.
For the mask vector, it occupies one vector register.
Differential Revision: https://reviews.llvm.org/D107432
Revision tags: llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2 |

# 8790e852 | 26-May-2021 | Fraser Cormack <[email protected]>
[RISCV] Reserve an emergency spill slot for any RVV spills
This patch addresses an issue in which fixed-length (VLS) vector RVV code could fail to reserve an emergency spill slot for its frame index elimination. This is because we were previously only reserving a spill slot when there were `scalable-vector` frame indices being used. However, fixed-length codegen uses regular-type frame indices if it needs to spill.
This patch does the fairly brute-force method of checking ahead of time whether the function contains any RVV spill instructions, in which case it reserves one slot. Note that the second RVV slot is still only reserved for `scalable-vector` frame indices.
This unfortunately causes quite a bit of churn in existing tests, where we chop and change stack offsets for spill slots.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D103269
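
A rough sketch of the fixed-length (VLS) situation (assumed example, types chosen arbitrarily): once the backend is allowed to lower fixed-length vectors through RVV registers, any spill of such a value is an RVV spill even though its frame index is an ordinary fixed-size one, which is why the emergency slot above must be reserved independently of scalable-vector frame indices.

  ; Hypothetical VLS code: fixed-width vectors handled by the V extension.
  define void @vls_copy(ptr %dst, ptr %src) {
    %v = load <16 x i32>, ptr %src, align 4
    store <16 x i32> %v, ptr %dst, align 4
    ret void
  }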
Revision tags: llvmorg-12.0.1-rc1 |

# b41e1306 | 13-May-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Add the DebugLoc parameter to getVLENFactoredAmount().
The MachineBasicBlock::iterator is continuously changing while generating the frame handling instructions. We should use the DebugLoc from the caller, instead of getting it from the changing iterator.
If the prologue instructions are located in a basic block with no other instructions after them, the iterator will be updated to the end of the basic block, and it is invalid to use it to access a DebugLoc. This patch also fixes the crash caused by accessing the DebugLoc through such an iterator.
Differential Revision: https://reviews.llvm.org/D102386

# d8ec2b18 | 10-May-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Fix the calculation of the offset of Zvlsseg spilling.
For Zvlsseg spilling, we need to convert the pseudo instructions into multiple vector load/store instructions with appropriate offsets. For example, for PseudoVSPILL3_M2, we need to convert it to
VS2R %v2, %base
ADDI %base, %base, (vlenb x 2)
VS2R %v4, %base
ADDI %base, %base, (vlenb x 2)
VS2R %v6, %base
We need to keep the size of the offset in the pseudo spilling instructions. In this case, it is (vlenb x 2).
In the original implementation, we used the size of the frame object divided by the number of vectors in the Zvlsseg type. The size of the frame object is not necessarily exactly the same as the size of the spilled data; it may be larger. So we change the offset to (VLENB x LMUL) in this patch. The calculation is more direct and easier to understand.
Differential Revision: https://reviews.llvm.org/D101869

# 3f02d269 | 20-Apr-2021 | Fraser Cormack <[email protected]>
[RISCV] Further fixes for RVV stack offset computation
This patch fixes a case missed out by D100574, in which RVV scalable stack offset computations may require three live registers in the case where the offset's fixed component is 12 bits or larger and has a scalable component.
Instead of adding an additional emergency spill slot, this patch further optimizes the scalable stack offset computation sequences to reduce register usage.
By emitting the sequence to compute the scalable component before the fixed component, we can free up one scratch register to be reallocated by the sequence for the fixed component. Doing this saves one register and thus one additional emergency spill slot.
Compare:
$x5 = LUI 1
$x1 = ADDIW killed $x5, -1896
$x1 = ADD $x2, killed $x1
$x5 = PseudoReadVLENB
$x6 = ADDI $x0, 50
$x5 = MUL killed $x5, killed $x6
$x1 = ADD killed $x1, killed $x5

versus:

$x5 = PseudoReadVLENB
$x1 = ADDI $x0, 50
$x5 = MUL killed $x5, killed $x1
$x1 = LUI 1
$x1 = ADDIW killed $x1, -1896
$x1 = ADD $x2, killed $x1
$x1 = ADD killed $x1, killed $x5
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D100847
Revision tags: llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4 |

# d20a2376 | 19-Mar-2021 | Serge Pavlov <[email protected]>
[RISCV] Introduce floating point control and state registers
New registers FRM, FFLAGS and FCSR were defined. They represent the corresponding system registers. The new registers are necessary to properly order floating-point instructions in non-default modes.
Differential Revision: https://reviews.llvm.org/D99083
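
A hedged example of the kind of code that relies on this ordering (not from the patch; it uses the generic constrained-FP intrinsic): an operation with an explicit non-default rounding mode must stay ordered with respect to changes of FRM and reads of FFLAGS.

  ; Strict FP add with a non-default rounding mode.
  define double @fadd_towardzero(double %a, double %b) strictfp {
    %r = call double @llvm.experimental.constrained.fadd.f64(double %a, double %b, metadata !"round.towardzero", metadata !"fpexcept.strict") strictfp
    ret double %r
  }

  declare double @llvm.experimental.constrained.fadd.f64(double, double, metadata, metadata)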

# ba72bdef | 08-Apr-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Add scalable offset under very large stack size.
If the stack size does not fit in 12 bits, we have to use a scratch register to store it. Before we introduced the scalable stack offset, we could simplify
%0 = ADDI %stack.0, 0
=>
%scratch = ... # sequence of instructions to move the offset into %scratch
%0 = ADD %fp, %scratch

However, if the offset contains a scalable part, we need to take it into account.
%0 = ADDI %stack.0, 0
=>
%scratch = ... # sequence of instructions to move the offset into %scratch
%scratch = ADD %fp, %scratch
%scalable_offset = ... # sequence of instructions for vscaled-offset
%0 = ADD/SUB %scratch, %scalable_offset
Differential Revision: https://reviews.llvm.org/D100035

# 02ffbac8 | 19-Mar-2021 | luxufan <[email protected]>
[RISCV] Remove redundant instruction when eliminating a frame index
The reason for generating the mv a0, a0 instruction is that the stack object offset is larger than an int<12> immediate. To deal with this situation, the eliminateFrameIndex function creates a virtual register, which the register scavenger needs to scavenge. If the machine instruction that contains the stack object has opcode ADDI (the ADDI generated for a FrameIndex node) and its destination register is the same as the register produced by the register scavenger, then the mv a0, a0 is generated. So, to eliminate this instruction, the eliminateFrameIndex function does not create the virtual register when the instruction's opcode is ADDI.
Differential Revision: https://reviews.llvm.org/D92479

# aa8d33a6 | 15-Mar-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Spilling for Zvlsseg registers.
For Zvlsseg, we create several tuple register classes. When spilling for these tuple register classes, we need to iterate NF times to load/store these tuple registers.
Differential Revision: https://reviews.llvm.org/D98629
Revision tags: llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2 |

# 9aa20cae | 19-Feb-2021 | Fraser Cormack <[email protected]>
[RISCV] Improve register allocation around vector masks
With vector mask registers only allocatable to V0 (VMV0Regs) it is relatively simple to generate code which uses multiple masks and naively requires spilling.
This patch aims to improve codegen in such cases by telling LLVM it can use VRRegs to hold masks. This will prevent spilling in many cases by having LLVM copy to an available VR register.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D97055
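
For a concrete feel (an assumed example, not from the patch), code with two compare results live at the same time previously had to funnel every mask operand through V0, forcing extra copies or spills; with VRRegs allowed as a container class, the second mask can simply sit in another vector register.

  ; Two masks (%m1, %m2) are simultaneously live across the first select.
  define <vscale x 4 x i32> @two_masks(<vscale x 4 x i32> %a, <vscale x 4 x i32> %b, <vscale x 4 x i32> %c) {
    %m1 = icmp slt <vscale x 4 x i32> %a, %b
    %m2 = icmp sgt <vscale x 4 x i32> %a, %c
    %s1 = select <vscale x 4 x i1> %m1, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b
    %s2 = select <vscale x 4 x i1> %m2, <vscale x 4 x i32> %s1, <vscale x 4 x i32> %c
    ret <vscale x 4 x i32> %s2
  }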
Revision tags: llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1 |

# 5a31a673 | 08-Jan-2021 | Hsiangkai Wang <[email protected]>
[RISCV] Frame handling for RISC-V V extension.
This patch proposes how to deal with RISC-V vector frame objects. The layout of RISC-V vector frame will look like
|---------------------------------|
| scalar callee-saved registers   |
|---------------------------------|
| scalar local variables          |
|---------------------------------|
| scalar outgoing arguments       |
|---------------------------------|
| RVV local variables &&          |
| RVV outgoing arguments          |
|---------------------------------| <- end of frame (sp)
If there is realignment or variable length array in the stack, we will use frame pointer to access fixed objects and stack pointer to access non-fixed objects.
|---------------------------------| <- frame pointer (fp)
| scalar callee-saved registers   |
|---------------------------------|
| scalar local variables          |
|---------------------------------|
| ///// realignment /////         |
|---------------------------------|
| scalar outgoing arguments       |
|---------------------------------|
| RVV local variables &&          |
| RVV outgoing arguments          |
|---------------------------------| <- end of frame (sp)
If there are both realignment and variable length array in the stack, we will use frame pointer to access fixed objects and base pointer to access non-fixed objects.
|---------------------------------| <- frame pointer (fp)
| scalar callee-saved registers   |
|---------------------------------|
| scalar local variables          |
|---------------------------------|
| ///// realignment /////         |
|---------------------------------| <- base pointer (bp)
| RVV local variables &&          |
| RVV outgoing arguments          |
|---------------------------------|
| /////////////////////////////// |
| variable length array           |
| /////////////////////////////// |
|---------------------------------| <- end of frame (sp)
| scalar outgoing arguments       |
|---------------------------------|
In this version, we do not save the addresses of RVV objects in the stack. We access them directly through the polynomial expression (a x VLENB + b). We do not reserve the frame pointer when there is any RVV object in the stack. So, we also access the scalar frame objects through the polynomial expression (a x VLENB + b) if the access crosses the RVV stack area.
Differential Revision: https://reviews.llvm.org/D94465

# 3183add5 | 21-Dec-2020 | Monk Chiang <[email protected]>
[RISCV] Define the remaining vector fixed-point arithmetic intrinsics.
This patch is based on D93366 and defines the remaining vector fixed-point intrinsics:
1. vaaddu/vaadd/vasubu/vasub
2. vsmul
3. vssrl/vssra
4. vnclipu/vnclip
We worked with @rogfer01 from BSC to come up with this patch.
Authored-by: Roger Ferrer Ibanez <[email protected]>
Co-Authored-by: ShihPo Hung <[email protected]>
Differential Revision: https://reviews.llvm.org/D93508
Revision tags: llvmorg-11.0.1, llvmorg-11.0.1-rc2 |

# ee2cb90e | 17-Dec-2020 | Monk Chiang <[email protected]>
[RISCV] Define vsadd/vsaddu/vssub/vssubu intrinsics.
We worked with @rogfer01 from BSC to come up with this patch.
Authored-by: Roger Ferrer Ibanez <[email protected]>
Co-Authored-by: ShihPo Hung <[email protected]>
Co-Authored-by: Monk Chiang <[email protected]>
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D93366

# 5baef635 | 01-Dec-2020 | Craig Topper <[email protected]>
[RISCV] Initial infrastructure for code generation of the RISC-V V-extension
The companion RFC (http://lists.llvm.org/pipermail/llvm-dev/2020-October/145850.html) gives lots of details on the overall strategy, but we summarize it here:
- LLVM IR involving vector types is going to be selected using pseudo instructions (only MachineInstr). These pseudo instructions contain dummy operands to represent the vector type being operated on and the vector length for the operation.
- These two dummy operands, as set by instruction selection, will be used by the custom inserter to prepend every operation with an appropriate vsetvli instruction that ensures the vector architecture is properly configured for the operation. Not in this patch: later passes will remove the redundant vsetvli instructions.
- Register classes of tuples of vector registers are used to represent vector register groups (LMUL > 1).
- Those pseudos are eventually lowered into the actual instructions when emitting the MCInsts.

About the patch:
Because there is a bit of initial infrastructure required, this is the minimal patch that allows us to select instructions for 3 LLVM IR instructions: load, add and store vectors of integers. LLVM IR operations have "whole-vector" semantics (as in they generate values for all the elements).
Later patches will extend the information represented in TableGen.
Authored-by: Roger Ferrer Ibanez <[email protected]>
Co-Authored-by: Evandro Menezes <[email protected]>
Co-Authored-by: Craig Topper <[email protected]>
Differential Revision: https://reviews.llvm.org/D89449
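
A sketch of the three whole-vector operations this initial patch can select (load, add, and store of integer vectors), written in current IR syntax with assumed types:

  define void @vadd_nxv2i64(ptr %a, ptr %b, ptr %c) {
    %x = load <vscale x 2 x i64>, ptr %a
    %y = load <vscale x 2 x i64>, ptr %b
    %z = add <vscale x 2 x i64> %x, %y      ; whole-vector add
    store <vscale x 2 x i64> %z, ptr %c
    ret void
  }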
Revision tags: llvmorg-11.0.1-rc1, llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3 |

# f7bc7c29 | 03-Jul-2020 | Hsiangkai Wang <[email protected]>
[RISCV] Support Zfh half-precision floating-point extension.
Support "Zfh" extension according to https://github.com/riscv/riscv-isa-manual/blob/zfh/src/zfh.tex
Differential Revision: https://reviews.llvm.org/D90738

# a8dc2110 | 24-Nov-2020 | Luís Marques <[email protected]>
[RISCV] Add GHC calling convention
This is a special calling convention to be used by the GHC compiler.
Patch by Andreas Schwab (schwab)
Differential Revision: https://reviews.llvm.org/D89788

# e4d93802 | 24-Nov-2020 | Luís Marques <[email protected]>
Revert "[RISCV] Add GHC calling convention"
This reverts commit f8317bb256be2cd8ed81ebc567f0fa626b645f63 due to lack of proper attribution.