|
Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2 |
|
| #
a3f50fb0 |
| 23-Dec-2021 |
Simon Pilgrim <[email protected]> |
[X86] isVectorShiftByScalarCheap - vXi8 select(shift(x,splat0),shift(x,splat1)) is better than shift(x,select(splat0,splat1))
Even though we don't have vXi8 vector shifts (apart from XOP), it is still better to prefer shift (or funnel-shift/rotate) by scalar where possible.
https://llvm.godbolt.org/z/6ss6ffTxv
Differential Revision: https://reviews.llvm.org/D116191
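A minimal IR sketch of the two forms being compared (function names, the vector width, and the use of variable splat amounts are illustrative, not taken from the actual test file):

```llvm
; Worse: the shift amount is a select of two splats, so the backend sees a
; general variable vXi8 shift, which x86 lacks (apart from XOP).
define <16 x i8> @shift_of_select(<16 x i8> %x, i8 %a, i8 %b, i1 %c) {
  %ins.a = insertelement <16 x i8> poison, i8 %a, i64 0
  %splat.a = shufflevector <16 x i8> %ins.a, <16 x i8> poison, <16 x i32> zeroinitializer
  %ins.b = insertelement <16 x i8> poison, i8 %b, i64 0
  %splat.b = shufflevector <16 x i8> %ins.b, <16 x i8> poison, <16 x i32> zeroinitializer
  %amt = select i1 %c, <16 x i8> %splat.a, <16 x i8> %splat.b
  %r = shl <16 x i8> %x, %amt
  ret <16 x i8> %r
}

; Better: two shifts, each by a uniform (splat) amount, with the select moved
; to the results; a shift by a splat amount gets the cheaper uniform-shift
; lowering even for vXi8.
define <16 x i8> @select_of_shifts(<16 x i8> %x, i8 %a, i8 %b, i1 %c) {
  %ins.a = insertelement <16 x i8> poison, i8 %a, i64 0
  %splat.a = shufflevector <16 x i8> %ins.a, <16 x i8> poison, <16 x i32> zeroinitializer
  %ins.b = insertelement <16 x i8> poison, i8 %b, i64 0
  %splat.b = shufflevector <16 x i8> %ins.b, <16 x i8> poison, <16 x i32> zeroinitializer
  %sa = shl <16 x i8> %x, %splat.a
  %sb = shl <16 x i8> %x, %splat.b
  %r = select i1 %c, <16 x i8> %sa, <16 x i8> %sb
  ret <16 x i8> %r
}
```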
|
|
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2, llvmorg-13.0.0-rc1, llvmorg-14-init, llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1, llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init, llvmorg-11.1.0-rc2, llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1 |
|
| #
8c4a86f7 |
| 09-Nov-2020 |
Simon Pilgrim <[email protected]> |
[CodeGenPrepare] Remove unused check-prefixes
|
|
Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6, llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3, llvmorg-11.0.0-rc2, llvmorg-11.0.0-rc1, llvmorg-12-init, llvmorg-10.0.1, llvmorg-10.0.1-rc4, llvmorg-10.0.1-rc3, llvmorg-10.0.1-rc2, llvmorg-10.0.1-rc1 |
|
| #
5be37cb1 |
| 15-May-2020 |
Sanjay Patel <[email protected]> |
[x86][CGP] try to hoist funnel shift above select-of-splats
This is basically the same patch as D63233, but converted to funnel shifts rather than regular shifts. I did not see a way to effectively share code for these 2 cases though.
This follows D79718 and D79827 to re-fix PR37426, since that pattern now gets canonicalized to funnel shift intrinsics in IR.
I did draft an alternative patch as an enhancement to "shouldSinkOperands()", but that was awkward because we would have to key the transform from the select and then look at both its users and its operands.
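A rough IR sketch of the hoist, using a v4i32 rotate (a funnel shift with matching operands) and invented constants:

```llvm
declare <4 x i32> @llvm.fshl.v4i32(<4 x i32>, <4 x i32>, <4 x i32>)

; Before: the funnel-shift amount is a select of two splat constants.
define <4 x i32> @rot_of_select(<4 x i32> %x, i1 %c) {
  %amt = select i1 %c, <4 x i32> <i32 3, i32 3, i32 3, i32 3>, <4 x i32> <i32 7, i32 7, i32 7, i32 7>
  %r = call <4 x i32> @llvm.fshl.v4i32(<4 x i32> %x, <4 x i32> %x, <4 x i32> %amt)
  ret <4 x i32> %r
}

; After: the funnel shift is hoisted above the select, so each call has a
; splat amount and can use a shift/rotate-by-scalar lowering.
define <4 x i32> @select_of_rots(<4 x i32> %x, i1 %c) {
  %r3 = call <4 x i32> @llvm.fshl.v4i32(<4 x i32> %x, <4 x i32> %x, <4 x i32> <i32 3, i32 3, i32 3, i32 3>)
  %r7 = call <4 x i32> @llvm.fshl.v4i32(<4 x i32> %x, <4 x i32> %x, <4 x i32> <i32 7, i32 7, i32 7, i32 7>)
  %r = select i1 %c, <4 x i32> %r3, <4 x i32> %r7
  ret <4 x i32> %r
}
```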
|
| #
dfb99e1a |
| 15-May-2020 |
Sanjay Patel <[email protected]> |
[x86][CGP] add more tests for PR37426; NFC
This broke when we started canonicalizing more code to funnel shift. See D79718 and D79827 for related test/transforms.
|
| #
26e742fd |
| 14-May-2020 |
Sanjay Patel <[email protected]> |
[x86][CGP] improve sinking of splatted vector shift amount operand
Expands on the enablement of the shouldSinkOperands() TLI hook in D79718.
The last codegen/IR test diff shows what I suspected could happen - we were sinking all splat shift operands into a loop. But that's not what we want in general; we only want to sink the *shift amount* operand if it is a splat.
Differential Revision: https://reviews.llvm.org/D79827
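A sketch of the distinction the fix is about (loop shape and names are invented): only the splat that feeds the shift *amount* should be sunk into the loop, so instruction selection inside the loop can see the amount is uniform.

```llvm
define void @sink_splat_amount(ptr %p, i32 %amt, i64 %n) {
entry:
  ; Splat of the shift amount, defined outside the loop; shouldSinkOperands()
  ; lets CGP sink a copy of this insert/shuffle pair next to the shl below.
  %ins = insertelement <4 x i32> poison, i32 %amt, i64 0
  %splat = shufflevector <4 x i32> %ins, <4 x i32> poison, <4 x i32> zeroinitializer
  br label %loop

loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %gep = getelementptr <4 x i32>, ptr %p, i64 %i
  %v = load <4 x i32>, ptr %gep, align 16
  %s = shl <4 x i32> %v, %splat
  store <4 x i32> %s, ptr %gep, align 16
  %i.next = add nuw i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  ret void
}
```

A splat feeding the shifted *value* operand instead would not benefit from sinking, which is the over-eager behavior the test diff exposed.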
|
| #
9237d880 |
| 09-May-2020 |
Simon Pilgrim <[email protected]> |
[X86] isVectorShiftByScalarCheap - don't limit fast XOP vector shifts to 128-bit vectors
XOP targets have fast per-element vector shifts and we're better off splitting to 128-bit shifts where necessary (which is what we already do in LowerShift).
|
| #
f8b09f7b |
| 09-May-2020 |
Simon Pilgrim <[email protected]> |
[CodeGenPrepare][X86] Add v16i16, v32i8 and XOP vector shift by scalar amount tests
Helps improve test coverage of the XOP paths in X86TargetLowering::isVectorShiftByScalarCheap (and of the cases where we currently always return false for vXi8 vector shifts).
|
|
Revision tags: llvmorg-10.0.0, llvmorg-10.0.0-rc6, llvmorg-10.0.0-rc5, llvmorg-10.0.0-rc4, llvmorg-10.0.0-rc3, llvmorg-10.0.0-rc2, llvmorg-10.0.0-rc1, llvmorg-11-init, llvmorg-9.0.1, llvmorg-9.0.1-rc3, llvmorg-9.0.1-rc2, llvmorg-9.0.1-rc1, llvmorg-9.0.0, llvmorg-9.0.0-rc6, llvmorg-9.0.0-rc5, llvmorg-9.0.0-rc4, llvmorg-9.0.0-rc3, llvmorg-9.0.0-rc2, llvmorg-9.0.0-rc1, llvmorg-10-init, llvmorg-8.0.1, llvmorg-8.0.1-rc4, llvmorg-8.0.1-rc3 |
|
| #
c8d88ad1 |
| 16-Jun-2019 |
Sanjay Patel <[email protected]> |
[CodeGenPrepare][x86] shift both sides of a vector select when profitable
This is based on the example/discussion in PR37428: https://bugs.llvm.org/show_bug.cgi?id=37428
Per-element (variable) vector shift instructions don't appear on x86 until AVX2, so we may generate several extra instructions within a loop trying to compensate for that. It's difficult to recover from that shift expansion later in the pipeline, so use the existing TLI hook and splat analysis to enable better codegen.
This extends CGP functionality introduced with: rL201655
Differential Revision: https://reviews.llvm.org/D63233
llvm-svn: 363511
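The shape of the rewrite, sketched on v4i32 with made-up constant splats (the same select-of-shifts pattern that the later vXi8 change above extends):

```llvm
; Before CGP: %r = shl <4 x i32> %x, (select %c, splat(2), splat(5)), which
; pre-AVX2 has no per-element shift instruction and gets expanded. After the
; rewrite, each shl has a uniform amount and lowers to a single shift.
define <4 x i32> @hoisted(<4 x i32> %x, i1 %c) {
  %s2 = shl <4 x i32> %x, <i32 2, i32 2, i32 2, i32 2>
  %s5 = shl <4 x i32> %x, <i32 5, i32 5, i32 5, i32 5>
  %r = select i1 %c, <4 x i32> %s2, <4 x i32> %s5
  ret <4 x i32> %r
}
```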
|
| #
7ea378b9 |
| 14-Jun-2019 |
Sanjay Patel <[email protected]> |
[CodeGenPrepare] propagate debuginfo when copying a shuffle
llvm-svn: 363409
|
| #
a1421e83 |
| 12-Jun-2019 |
Sanjay Patel <[email protected]> |
[x86] add tests for vector shifts; NFC
llvm-svn: 363203
|