Revision tags: llvmorg-20.1.0, llvmorg-20.1.0-rc3, llvmorg-20.1.0-rc2, llvmorg-20.1.0-rc1, llvmorg-21-init, llvmorg-19.1.7, llvmorg-19.1.6, llvmorg-19.1.5, llvmorg-19.1.4, llvmorg-19.1.3, llvmorg-19.1.2, llvmorg-19.1.1, llvmorg-19.1.0, llvmorg-19.1.0-rc4, llvmorg-19.1.0-rc3, llvmorg-19.1.0-rc2, llvmorg-19.1.0-rc1, llvmorg-20-init, llvmorg-18.1.8, llvmorg-18.1.7, llvmorg-18.1.6, llvmorg-18.1.5, llvmorg-18.1.4, llvmorg-18.1.3, llvmorg-18.1.2, llvmorg-18.1.1, llvmorg-18.1.0, llvmorg-18.1.0-rc4, llvmorg-18.1.0-rc3, llvmorg-18.1.0-rc2, llvmorg-18.1.0-rc1, llvmorg-19-init, llvmorg-17.0.6, llvmorg-17.0.5, llvmorg-17.0.4, llvmorg-17.0.3, llvmorg-17.0.2, llvmorg-17.0.1, llvmorg-17.0.0, llvmorg-17.0.0-rc4, llvmorg-17.0.0-rc3, llvmorg-17.0.0-rc2, llvmorg-17.0.0-rc1, llvmorg-18-init, llvmorg-16.0.6, llvmorg-16.0.5, llvmorg-16.0.4, llvmorg-16.0.3, llvmorg-16.0.2, llvmorg-16.0.1, llvmorg-16.0.0, llvmorg-16.0.0-rc4, llvmorg-16.0.0-rc3, llvmorg-16.0.0-rc2, llvmorg-16.0.0-rc1, llvmorg-17-init, llvmorg-15.0.7, llvmorg-15.0.6, llvmorg-15.0.5, llvmorg-15.0.4, llvmorg-15.0.3, llvmorg-15.0.2, llvmorg-15.0.1, llvmorg-15.0.0, llvmorg-15.0.0-rc3, llvmorg-15.0.0-rc2, llvmorg-15.0.0-rc1, llvmorg-16-init, llvmorg-14.0.6, llvmorg-14.0.5, llvmorg-14.0.4, llvmorg-14.0.3, llvmorg-14.0.2, llvmorg-14.0.1
# e67cee09 | 29-Mar-2022 | Pavel Labath <[email protected]>
[lldb] Avoid duplicate vdso modules when opening core files
When opening core files (and also in some other situations) we could end up with two vdso modules. This could happen because the vdso module is very special, and over the years, we have accumulated various ways to load it.
In D10800, we added one mechanism for loading it, which took the form of a generic load-from-memory capability. Unfortunately, loading an ELF file from memory is not possible (because the loader never loads the entire file), and our attempts to do so were causing crashes. So, in D34352, we partially reverted D10800 and implemented a custom mechanism specific to the vdso.
Unfortunately, enough of D10800 remained such that, under the right circumstances, it could end up loading a second (non-functional) copy of the vdso module. This happened when the process plugin did not support the extended MemoryRegionInfo query (added in D22219, to work around a different bug), which meant that the loader plugin was not able to recognise that the linux-vdso.so.1 module (this is how the loader calls it) is in fact the same as the [vdso] module (the name used in /proc/$PID/maps) we loaded before. This typically happened in a core file, as they don't store this kind of information.
This patch fixes the issue by completing the revert of D10800 -- the memory loading code is removed completely. It also reduces the scope of the hackaround introduced in D22219 -- it isn't completely sound and is only relevant for fairly old (but still supported) versions of android.
I added the memory loading logic to the wasm dynamic loader, which has since appeared and relies on this feature (it even has a test). As far as I can tell, loading wasm modules from memory is possible and reliable. MachO memory loading is not affected by this patch, as it uses a completely different code path.
Since the scenarios/patches I described came without test cases, I have created two new gdb-client tests cases for them. They're not particularly readable, but right now, this is the best way we can simulate the behavior (bugs) of a particular dynamic linker.
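For illustration, a gdb-client test of this kind pairs the mock server with a responder that mimics the buggy stub. Below is a minimal sketch in the style of the lldb test utilities; the class name, address values, and override point are assumptions, not the actual D122660 test:

```python
from lldbsuite.test.gdbclientutils import MockGDBServerResponder


class NoRegionNameResponder(MockGDBServerResponder):
    """Mimics a stub whose qMemoryRegionInfo replies omit the name
    field, so the loader cannot tell that linux-vdso.so.1 and [vdso]
    are the same module."""

    def respond(self, packet):
        if packet.startswith("qMemoryRegionInfo:"):
            # Report permissions but no "name:..." field.
            return "start:7ffff7ff9000;size:1000;permissions:rx;"
        return MockGDBServerResponder.respond(self, packet)
```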
Differential Revision: https://reviews.llvm.org/D122660
Revision tags: llvmorg-14.0.0, llvmorg-14.0.0-rc4, llvmorg-14.0.0-rc3, llvmorg-14.0.0-rc2, llvmorg-14.0.0-rc1, llvmorg-15-init, llvmorg-13.0.1, llvmorg-13.0.1-rc3, llvmorg-13.0.1-rc2, llvmorg-13.0.1-rc1
# 14086849 | 11-Nov-2021 | Pavel Labath <[email protected]>
[lldb] Introduce PlatformQemuUser
This adds a new platform class, whose job is to enable running (debugging) executables under qemu.
(For general information about qemu, I recommend reading the RFC thread on lldb-dev <https://lists.llvm.org/pipermail/lldb-dev/2021-October/017106.html>.)
This initial patch implements the necessary boilerplate as well as the minimal amount of functionality needed to actually do something useful (which, in this case, means debugging a fully statically linked executable).
The knobs necessary to emulate dynamically linked programs, as well as to control other aspects of qemu operation (the emulated cpu, for instance) will be added in subsequent patches. Same goes for the ability to automatically bind to the executables of the emulated architecture.
Currently only two settings are available:
- architecture: the architecture that we should emulate
- emulator-path: the path to the emulator
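As an illustration only, these two settings could be driven from lldb's Python API roughly as below; the setting names follow the platform.plugin.qemu-user.* pattern, but treat the exact names and values as assumptions to verify against the plugin:

```python
import lldb

debugger = lldb.SBDebugger.Create()
# Select the qemu-user platform, then tell it what to emulate and which
# emulator binary to launch. The values are examples.
debugger.HandleCommand("platform select qemu-user")
debugger.HandleCommand(
    "settings set platform.plugin.qemu-user.architecture aarch64")
debugger.HandleCommand(
    "settings set platform.plugin.qemu-user.emulator-path /usr/bin/qemu-aarch64")
```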
Even though this patch is relatively small, it doesn't lack subtleties that are worth calling out explicitly:
- named sockets: qemu supports tcp and unix socket connections, both of them in the "forward connect" mode (qemu listening, lldb connecting). Forward TCP connections are impossible to realise in a race-free way. This is the reason why I chose unix sockets, as they have larger, more structured names, which can guarantee that there are no collisions between concurrent connection attempts (see the sketch after this list).
- the above means that this code will not work on windows. I don't think that's an issue, since user mode qemu does not support windows anyway.
- Right now, I am leaving the code enabled for windows, but maybe it would be better to disable it (otoh, disabling it means windows developers can't check they don't break it).
- qemu-user also does not support macOS, so one could contemplate disabling it there too. However, macOS does support named sockets, so one can even run the (mock) qemu tests there, and I think it'd be a shame to lose that.
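A small sketch of the collision-free naming idea, using only the Python standard library (the paths are illustrative):

```python
import os
import socket
import tempfile

# A fresh private directory guarantees a unique socket name, so concurrent
# qemu/lldb sessions can never race on the same path.
socket_dir = tempfile.mkdtemp()
socket_path = os.path.join(socket_dir, "qemu.sock")

listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(socket_path)  # the qemu side listens ("forward connect")...
listener.listen(1)

# ...and the lldb side connects to the very same name:
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(socket_path)
```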
Differential Revision: https://reviews.llvm.org/D114509
# 7c8ae65f | 18-Nov-2021 | Pavel Labath <[email protected]>
[lldb/test] Make it possible to run the mock gdb server on a single thread
This is a preparatory commit to enable mocking of qemu startup. That will involve running the mock server in a separate process, so there's no need for multithreading.
Initialization is moved from the start function into the constructor (which can then take an actual socket instead of a class), and the run method is made public.
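A condensed sketch of the resulting shape (the real class in the lldb test suite is much richer; the names and details here are simplified assumptions):

```python
import socket


class MockGDBServer:
    def __init__(self, sock: socket.socket):
        # Initialization now happens in the constructor, with a concrete,
        # already-listening socket, rather than in start() with a socket class.
        self._socket = sock

    def run(self):
        # Public entry point: a caller (e.g. a separate mock-qemu process)
        # can drive the server on the current thread; no helper thread needed.
        conn, _ = self._socket.accept()
        try:
            while conn.recv(4096):
                pass  # a real server would parse and answer gdb packets here
        finally:
            conn.close()
```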
Depends on D114156.
Differential Revision: https://reviews.llvm.org/D114157
# f3b7cc8b | 18-Nov-2021 | Pavel Labath <[email protected]>
[lldb/test] Add ability to terminate connection from a gdb-client handler
We were using the client socket close as a way to terminate the handler thread. But this kind of concurrent access to the same socket is not safe. It also complicates running the handler without a dedicated thread (next patch).
Instead, here I add an explicit way for a packet handler to request termination. Waiting for lldb to terminate the connection would almost be sufficient, but in the pty test we want to keep the pty open so we can examine its state. The ability to disconnect at an arbitrary point may be useful for testing other aspects of lldb functionality as well.
The way this works is that now each packet handler can optionally return a list of responses (instead of just one). One of those responses (it only makes sense for it to be the last one) can be a special RESPONSE_DISCONNECT object, which triggers a disconnection (via a new TerminateConnectionException).
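A hypothetical handler using that mechanism; the import location and the choice of the "k" packet are assumptions, while RESPONSE_DISCONNECT is the sentinel the commit describes:

```python
from lldbsuite.test.gdbclientutils import (MockGDBServerResponder,
                                           RESPONSE_DISCONNECT)


class DisconnectingResponder(MockGDBServerResponder):
    def respond(self, packet):
        if packet == "k":
            # Answer the kill packet, then ask the server to drop the
            # connection; the sentinel only makes sense as the last element.
            return ["X09", RESPONSE_DISCONNECT]
        return MockGDBServerResponder.respond(self, packet)
```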
As the mock server now cleans up the connection whenever it disconnects, the pty test needs to explicitly dup(2) the descriptors in order to inspect the post-disconnect state.
Differential Revision: https://reviews.llvm.org/D114156
# 33c0f93f | 11-Nov-2021 | Pavel Labath <[email protected]>
[lldb/test] Move gdb client utils into the packages tree
This infrastructure has proven its worth, so give it a more prominent place.
My immediate motivation for this is the desire to reuse this infrastructure for qemu platform testing, but I believe this move makes sense independently of that. Moving this code to the packages tree will allow us to add more structure to the gdb client tests -- currently they are all crammed into the same test folder, as that was the only way they could access this code.
I'm splitting the code into two parts while moving it. The first one contains just the generic gdb protocol wrappers, while the other one contains the unit test glue. The reason for that is that for qemu testing, I need to run the gdb code in a separate process, so I will only be using the first part there.
Differential Revision: https://reviews.llvm.org/D113893