Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3

# c35ad9ee | 27-Apr-2022 | Mircea Trofin <[email protected]>
[mlgo] Support exposing more features than those supported by models
This allows the compiler to support more features than those supported by a model. The only requirement (development mode only) is that the new features must be appended at the end of the list of features requested from the model. The support is transparent to compiler code: for unsupported features, we provide a valid buffer into which their values can be copied; it is just that this buffer is disconnected from the model, so as far as the model is concerned (AOT or development mode), these features don't exist. The buffers are allocated at setup time, meaning there is no extra allocation at steady state (maintaining the current invariant). These buffers have two roles: first, they keep the compiler code simple; second, they allow logging their values in development mode. The latter allows retraining a model that supports the larger feature set, starting from traces produced with the old model.
For release mode (AOT-ed models), this decouples compiler evolution from model evolution, which we want in scenarios where the toolchain is frequently rebuilt and redeployed: we can first deploy the new features, and continue working with the older model, until a new model is made available, which can then be picked up the next time the compiler is built.
Differential Revision: https://reviews.llvm.org/D124565
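To make the mechanism concrete, here is a minimal sketch (hypothetical names, not the actual MLModelRunner API) of how features unknown to the model can still get a valid, pre-allocated scratch buffer, so the compiler can always write feature values and development-mode logging can still record them:

    #include <cassert>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Hypothetical sketch: indices < NumModelFeatures are wired to the model's
    // own buffers; anything beyond gets a scratch buffer allocated once at
    // setup, so there is no extra allocation at steady state.
    class FeatureBufferSketch {
      size_t NumModelFeatures;
      std::vector<void *> ModelBuffers;            // owned by the model/evaluator
      std::vector<std::vector<char>> ExtraBuffers; // disconnected from the model

    public:
      FeatureBufferSketch(std::vector<void *> ModelBufs,
                          const std::vector<size_t> &ExtraFeatureSizes)
          : NumModelFeatures(ModelBufs.size()), ModelBuffers(std::move(ModelBufs)) {
        for (size_t Size : ExtraFeatureSizes)
          ExtraBuffers.emplace_back(Size); // allocate all extra buffers up front
      }

      void *getTensorUntyped(size_t Index) {
        if (Index < NumModelFeatures)
          return ModelBuffers[Index]; // visible to the model
        assert(Index - NumModelFeatures < ExtraBuffers.size());
        return ExtraBuffers[Index - NumModelFeatures].data(); // valid, but ignored by the model
      }
    };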
Revision tags: llvmorg-14.0.2, llvmorg-14.0.1, llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3

# ed98c1b3 | 09-Mar-2022 | serge-sans-paille <[email protected]>
Cleanup includes: DebugInfo & CodeGen
Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D121332
Revision tags: llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3

# 5f4ae564 | 18-Jan-2022 | Jan Svoboda <[email protected]>
[llvm] Remove uses of `std::vector<bool>`
LLVM Programmer’s Manual strongly discourages the use of `std::vector<bool>` and suggests `llvm::BitVector` as a possible replacement.
This patch does just that for llvm.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D117121
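As a small illustration of the substitution involved (not taken from the patch itself), a use of std::vector<bool> and its llvm::BitVector counterpart might look like this:

    #include "llvm/ADT/BitVector.h"

    // Illustrative only: track visited nodes with llvm::BitVector rather than
    // std::vector<bool>.
    unsigned countAfterMarking(unsigned NumNodes, unsigned NodeToMark) {
      // Before: std::vector<bool> Visited(NumNodes, false);
      llvm::BitVector Visited(NumNodes, false);
      if (NodeToMark < NumNodes)
        Visited.set(NodeToMark);     // set a single bit
      bool Marked = NodeToMark < NumNodes && Visited.test(NodeToMark);
      (void)Marked;                  // test() avoids vector<bool>'s proxy-reference quirks
      return Visited.count();        // number of set bits
    }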
Revision tags: llvmorg-13.0.1-rc2

# 248d55af | 10-Jan-2022 | Mircea Trofin <[email protected]>
[NFC][MLGO] Use LazyCallGraph::Node to track functions.
This avoids the InlineAdvisor carrying the responsibility of deleting Function objects. We use LazyCallGraph::Node objects instead, which remain stable in memory for the duration of the module-wide run of CGSCC passes started under the same ModuleToPostOrderCGSCCPassAdaptor (which is the case here).
Differential Revision: https://reviews.llvm.org/D116964
# a120fdd3 | 04-Jan-2022 | Mircea Trofin <[email protected]>
[NFC][MLGO] Add RTTI support for MLModelRunner and simplify runner setup
# 04f2712e | 09-Dec-2021 | Mircea Trofin <[email protected]>
[NFC][MLGO] Factor ModelUnderTrainingRunner for reuse
This is so we may reuse it; it was already largely non-inliner-specific.
Differential Revision: https://reviews.llvm.org/D115465
# 059e0347 | 07-Dec-2021 | Mircea Trofin <[email protected]>
[NFC][mlgo] Generalize model runner interface
This prepares it for the regalloc work. Part of it is making model evaluation across 'development' and 'release' scenarios more reusable. This patch:
- extends support to tensors of any shape (not just scalars, like we had in the inliner -Oz case). While the tensor shape can be anything, we assume row-major layout and expose the tensor as a buffer.
- exposes the NoInferenceModelRunner, which we use in the 'development' mode to keep the evaluation code path consistent and simplify logging, as we'll want to reuse it in the regalloc case.
Differential Revision: https://reviews.llvm.org/D115306
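A small sketch of the buffer-shaped view described above, under the row-major assumption stated in the message (the names here are illustrative, not the actual TensorSpec API):

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <numeric>
    #include <vector>

    // Illustrative only: a tensor of any shape is exposed as a flat buffer whose
    // element count is the product of its dimensions; {} models a scalar.
    struct ShapeSketch {
      std::vector<int64_t> Dims; // e.g. {} for a scalar, {2, 3} for a 2x3 tensor

      size_t numElements() const {
        return static_cast<size_t>(std::accumulate(
            Dims.begin(), Dims.end(), int64_t{1}, std::multiplies<int64_t>()));
      }
    };

    // Row-major flattening: element (i, j) of a {Rows, Cols} tensor lives at
    // index i * Cols + j in the flat buffer.
    size_t flatIndex2D(size_t I, size_t J, size_t Cols) { return I * Cols + J; }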
Revision tags: llvmorg-13.0.1-rc1, llvmorg-13.0.0, llvmorg-13.0.0-rc4, llvmorg-13.0.0-rc3, llvmorg-13.0.0-rc2

# 1055c5e1 | 23-Aug-2021 | Mircea Trofin <[email protected]>
[MLGO] Make sure inliner logs when deleting callees
When using final reward (which is now the default), we were skipping logging decisions that were leading to callee deletion. This fixes that.
Differential Revision: https://reviews.llvm.org/D108587
# c874dd53 | 05-Aug-2021 | Christopher Di Bella <[email protected]>
[llvm][clang][NFC] updates inline licence info
Some files still contained the old University of Illinois Open Source Licence header. This patch replaces that with the Apache 2 with LLVM Exception licence.
Differential Revision: https://reviews.llvm.org/D107528
# ae1a2a09 | 05-Aug-2021 | Mircea Trofin <[email protected]>
[NFC][MLGO] Make logging more robust
1) add some self-diagnosis (when asserts are enabled) to check that all features have the same number of entries
2) avoid storing pointers to mutable fields, because the proto API contract doesn't actually guarantee those stay fixed even if no further mutation of the object occurs.
Differential Revision: https://reviews.llvm.org/D107594
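A hedged sketch of the first point, using hypothetical names rather than the actual logger internals: with asserts enabled, verify that every logged feature column has the same number of entries before the log is finalized.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Illustrative self-diagnosis: all feature columns must have recorded the
    // same number of entries (one per logged decision).
    void checkFeatureEntryCounts(const std::vector<std::vector<float>> &Columns) {
    #ifndef NDEBUG
      if (!Columns.empty()) {
        const size_t Expected = Columns.front().size();
        for (const auto &Column : Columns)
          assert(Column.size() == Expected && "feature entry counts must all match");
      }
    #endif
      (void)Columns; // no-op in release builds (asserts disabled)
    }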
Revision tags: llvmorg-13.0.0-rc1, llvmorg-14-init

# 55e12f70 | 22-Jul-2021 | Mircea Trofin <[email protected]>
[NFC][MLGO] Just use the underlying protobuf object for logging
Avoid buffering just to copy the buffered data, in 'development mode', when logging. Instead, just populate the underlying protobuf.
Differential Revision: https://reviews.llvm.org/D106592
Revision tags: llvmorg-12.0.1, llvmorg-12.0.1-rc4, llvmorg-12.0.1-rc3, llvmorg-12.0.1-rc2, llvmorg-12.0.1-rc1

# 0d06b14f | 16-Apr-2021 | Mircea Trofin <[email protected]>
[MLGO] Fix use of AM.invalidate post D100519
The ML inline advisors more aggressively invalidate certain analyses after each call site inlining, to more accurately capture the problem state.
Revision tags: llvmorg-12.0.0, llvmorg-12.0.0-rc5, llvmorg-12.0.0-rc4, llvmorg-12.0.0-rc3, llvmorg-12.0.0-rc2, llvmorg-11.1.0, llvmorg-11.1.0-rc3, llvmorg-12.0.0-rc1, llvmorg-13-init

# a3254904 | 23-Jan-2021 | Kazu Hirata <[email protected]>
[Analysis] Use llvm::append_range (NFC)
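For context, llvm::append_range (from llvm/ADT/STLExtras.h) is a small convenience over explicit iterator-pair appends; a minimal illustration, not taken from the patch itself:

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/ADT/SmallVector.h"

    void copyInto(llvm::SmallVectorImpl<int> &Dst,
                  const llvm::SmallVectorImpl<int> &Src) {
      // Before: Dst.append(Src.begin(), Src.end());
      llvm::append_range(Dst, Src);
    }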
Revision tags: llvmorg-11.1.0-rc2

# e8049dc3 | 15-Jan-2021 | Mircea Trofin <[email protected]>
[NewPM][Inliner] Move the 'always inliner' case in the same CGSCC pass as 'regular' inliner
Expanding from D94808: we ensure the same InlineAdvisor is used by both InlinerPass instances. The notion of mandatory inlining is moved into the core InlineAdvisor: advisors have to handle that case anyway, so this change also factors that out a bit better.
Differential Revision: https://reviews.llvm.org/D94825
Revision tags: llvmorg-11.1.0-rc1, llvmorg-11.0.1, llvmorg-11.0.1-rc2, llvmorg-11.0.1-rc1

# 8ab2353a | 19-Nov-2020 | Mircea Trofin <[email protected]>
[NFC][TFUtils] also include output specs lookup logic in loadOutputSpecs
The lookup logic is also reusable.
Also refactored the API to return the loaded vector; this makes it clearer what state it is in when an error occurs (as it won't be returned).
Differential Revision: https://reviews.llvm.org/D91759
# b51e844f | 19-Nov-2020 | Mircea Trofin <[email protected]>
[NFC][TFUtils] Extract out the output spec loader
It's generic for the 'development mode', not specific to the inliner case.
Differential Revision: https://reviews.llvm.org/D91751
# ac2018da | 07-Oct-2020 | Mircea Trofin <[email protected]>
[NFC][MLInliner] Getters should return by reference
Revision tags: llvmorg-11.0.0, llvmorg-11.0.0-rc6

# 36bb1fb1 | 03-Oct-2020 | Mircea Trofin <[email protected]>
[MLInliner] Factor out logging
Factored out the logging facility, to allow its reuse outside the inliner.
Differential Revision: https://reviews.llvm.org/D88770
Revision tags: llvmorg-11.0.0-rc5, llvmorg-11.0.0-rc4, llvmorg-11.0.0-rc3

# 8c63df24 | 24-Aug-2020 | Mircea Trofin <[email protected]>
[MLInliner] Support training that doesn't require partial rewards
If we use training algorithms that don't need partial rewards, we don't need to worry about an ir2native model. In that case, training logs won't contain a 'delta_size' feature either (since that's the partial reward).
Differential Revision: https://reviews.llvm.org/D86481
Revision tags: llvmorg-11.0.0-rc2

# 62fc44ca | 10-Aug-2020 | Mircea Trofin <[email protected]>
[MLInliner] In development mode, obtain the output specs from a file
Different training algorithms may produce models that, besides the main policy output (i.e. inline/don't inline), produce additional outputs that are necessary for the next training stage. To facilitate this, in development mode, we require that the training policy infrastructure produce a description of the outputs that are interesting to it, in the form of a JSON file. We special-case the first entry in the JSON file as the inlining decision - we care about its value, so we can guide inlining during training - but treat the rest as opaque data that we just copy over to the training log.
Differential Revision: https://reviews.llvm.org/D85674
# 211117b6 | 10-Aug-2020 | Mircea Trofin <[email protected]>
[NFC][MLInliner] remove curly braces for a few single-line loops
# d5c81be3 | 10-Aug-2020 | Mircea Trofin <[email protected]>
[NFC][MLInliner] Set up the logger outside the development mode advisor
This allows us to subsequently configure the logger for the case when we use a model evaluator and want to log additional outputs.
Differential Revision: https://reviews.llvm.org/D85577
# 64372d93 | 07-Aug-2020 | Mircea Trofin <[email protected]>
[NFC][MLInliner] Refactor logging implementation
This prepares it for logging externally-specified outputs.
Differential Revision: https://reviews.llvm.org/D85451
# 87fb7aa1 | 06-Aug-2020 | Mircea Trofin <[email protected]>
[llvm][MLInliner] Don't log 'mandatory' events
We don't want mandatory events in the training log. We do want to handle them, to keep the native size accounting accurate, but that's all.
Fixed the code, also expanded the test to capture this.
Differential Revision: https://reviews.llvm.org/D85373
# 65b6dbf9 | 04-Aug-2020 | Mircea Trofin <[email protected]>
[llvm][NFC] Moved implementation of TrainingLogger outside of its decl
Also renamed the printTensor method to print, and added comments.