|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6 |
|
| #
0f613bfa |
| 05-Jun-2023 |
Mark Rutland <[email protected]> |
locking/atomic: treewide: use raw_atomic*_<op>()
Now that we have raw_atomic*_<op>() definitions, there's no need to use arch_atomic*_<op>() definitions outside of the low-level atomic definitions.
Move treewide users of arch_atomic*_<op>() over to the equivalent raw_atomic*_<op>().
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Kees Cook <[email protected]> Link: https://lore.kernel.org/r/[email protected]
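For illustration, the conversion is a mechanical rename at each call site; a minimal sketch under that assumption (the atomic_t variable and call site here are hypothetical, but raw_atomic_read()/raw_atomic_inc() are the wrappers this series moves users to):

    static atomic_t v = ATOMIC_INIT(0);
    int old;

    /* before: reaching around the instrumented API via the arch_ prefix */
    old = arch_atomic_read(&v);
    arch_atomic_inc(&v);

    /* after: the raw_ wrappers are the sanctioned non-instrumented entry points */
    old = raw_atomic_read(&v);
    raw_atomic_inc(&v);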
|
|
Revision tags: v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6 |
|
| #
8739c681 |
| 26-Jan-2023 |
Peter Zijlstra <[email protected]> |
sched/clock/x86: Mark sched_clock() noinstr
In order to use sched_clock() from noinstr code, mark it and all its implementations noinstr.
The whole pvclock thing (used by KVM/Xen) is a bit of a pain, since it calls out to watchdogs; create a pvclock_clocksource_read_nowd() variant that doesn't do that and can be noinstr.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
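A rough sketch of the split this implies in pvclock, assuming a common helper with a "dowd" flag (the helper's name, its body, and the specific watchdog call shown are illustrative, not the verbatim diff):

    static u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd)
    {
            u64 ret = 0;

            /* ... the existing read/scale loop fills "ret" ... */

            if (dowd) {
                    /* only the instrumentable path may poke the watchdogs */
                    touch_softlockup_watchdog_sync();
            }
            return ret;
    }

    u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
    {
            return __pvclock_clocksource_read(src, true);
    }

    /* noinstr-safe variant: no watchdog call-outs, usable from sched_clock() */
    noinstr u64 pvclock_clocksource_read_nowd(struct pvclock_vcpu_time_info *src)
    {
            return __pvclock_clocksource_read(src, false);
    }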
|
| #
5c9da9fe |
| 26-Jan-2023 |
Uros Bizjak <[email protected]> |
x86/pvclock: Improve atomic update of last_value in pvclock_clocksource_read()
Improve atomic update of last_value in pvclock_clocksource_read:
- Atomic update can be skipped if the "last_value" is already equal to "ret".
- The detection of atomic update failure is not correct. The value, returned by atomic64_cmpxchg should be compared to the old value from the location to be updated. If these two are the same, then atomic update succeeded and "last_value" location is updated to "ret" in an atomic way. Otherwise, the atomic update failed and it should be retried with the value from "last_value" - exactly what atomic64_try_cmpxchg does in a correct and more optimal way.
Signed-off-by: Uros Bizjak <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/r/[email protected]
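The resulting update loop looks roughly like the following (a sketch of the pattern the changelog describes, not the verbatim diff; the surrounding computation of "ret" is elided):

    static atomic64_t last_value = ATOMIC64_INIT(0);

    u64 pvclock_clocksource_read(struct pvclock_vcpu_time_info *src)
    {
            u64 ret, last;

            /* ... compute "ret" from the pvclock fields as before ... */

            /* keep last_value monotonically non-decreasing across CPUs */
            last = atomic64_read(&last_value);
            do {
                    if (ret <= last)
                            return last;    /* an equal or newer value is already published */
            } while (!atomic64_try_cmpxchg(&last_value, &last, ret));
            /* on failure, atomic64_try_cmpxchg() reloads "last" for the retry */

            return ret;
    }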
|
|
Revision tags: v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3, v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5, v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4, v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1, v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4 |
|
| #
d9f6e12f |
| 18-Mar-2021 |
Ingo Molnar <[email protected]> |
x86: Fix various typos in comments
Fix ~144 single-word typos in arch/x86/ code comments.
Doing this in a single commit should reduce the churn.
Signed-off-by: Ingo Molnar <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Bjorn Helgaas <[email protected]> Cc: [email protected]
|
|
Revision tags: v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4, v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7, v5.7-rc7, v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7, v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1 |
|
| #
b95a8a27 |
| 07-Feb-2020 |
Thomas Gleixner <[email protected]> |
x86/vdso: Use generic VDSO clock mode storage
Switch to the generic VDSO clock mode storage.
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Vincenzo Frascino <[email protected]> (VDSO parts) Acked-by: Juergen Gross <[email protected]> (Xen parts) Acked-by: Paolo Bonzini <[email protected]> (KVM parts) Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v5.5, v5.5-rc7, v5.5-rc6, v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2, v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6 |
|
| #
7ac87074 |
| 21-Jun-2019 |
Vincenzo Frascino <[email protected]> |
x86/vdso: Switch to generic vDSO implementation
The x86 vDSO library requires some adaptations to take advantage of the newly introduced generic vDSO library.
Introduce the following changes:
- Modification of vdso.c to be compliant with the common vdso datapage
- Use of lib/vdso for gettimeofday
[ tglx: Massaged changelog and cleaned up the function signature formatting ]
Signed-off-by: Vincenzo Frascino <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: Catalin Marinas <[email protected]> Cc: Will Deacon <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Russell King <[email protected]> Cc: Ralf Baechle <[email protected]> Cc: Paul Burton <[email protected]> Cc: Daniel Lezcano <[email protected]> Cc: Mark Salyzyn <[email protected]> Cc: Peter Collingbourne <[email protected]> Cc: Shuah Khan <[email protected]> Cc: Dmitry Safonov <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Huw Davies <[email protected]> Cc: Shijith Thotton <[email protected]> Cc: Andre Przywara <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
|
|
Revision tags: v5.2-rc5, v5.2-rc4, v5.2-rc3, v5.2-rc2 |
|
| #
fd534e9b |
| 23-May-2019 |
Thomas Gleixner <[email protected]> |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 102
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not write to the free software foundation inc 51 franklin st fifth floor boston ma 02110 1301 usa
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 50 file(s).
Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
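In each of the 50 files, the boilerplate paragraph above collapses to a single identifier line, e.g. for a C source file:

    // SPDX-License-Identifier: GPL-2.0-or-later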
|
|
Revision tags: v5.2-rc1, v5.1, v5.1-rc7, v5.1-rc6, v5.1-rc5, v5.1-rc4, v5.1-rc3, v5.1-rc2, v5.1-rc1, v5.0, v5.0-rc8, v5.0-rc7, v5.0-rc6, v5.0-rc5, v5.0-rc4, v5.0-rc3, v5.0-rc2, v5.0-rc1, v4.20, v4.20-rc7, v4.20-rc6, v4.20-rc5, v4.20-rc4, v4.20-rc3, v4.20-rc2, v4.20-rc1 |
|
| #
57c8a661 |
| 30-Oct-2018 |
Mike Rapoport <[email protected]> |
mm: remove include/linux/bootmem.h
Move remaining definitions and declarations from include/linux/bootmem.h into include/linux/memblock.h and remove the redundant header.
The includes were replaced with the semantic patch below, followed by semi-automated removal of duplicated '#include <linux/memblock.h>' lines.
@@
@@
- #include <linux/bootmem.h>
+ #include <linux/memblock.h>
[[email protected]: dma-direct: fix up for the removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: powerpc: fix up for removal of linux/bootmem.h] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Chris Zankel <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Greentime Hu <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Guan Xuetao <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Jonas Bonn <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Ley Foon Tan <[email protected]> Cc: Mark Salter <[email protected]> Cc: Martin Schwidefsky <[email protected]> Cc: Matt Turner <[email protected]> Cc: Michael Ellerman <[email protected]> Cc: Michal Simek <[email protected]> Cc: Palmer Dabbelt <[email protected]> Cc: Paul Burton <[email protected]> Cc: Richard Kuo <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Rich Felker <[email protected]> Cc: Russell King <[email protected]> Cc: Serge Semin <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Yoshinori Sato <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
|
|
Revision tags: v4.19, v4.19-rc8, v4.19-rc7, v4.19-rc6, v4.19-rc5, v4.19-rc4, v4.19-rc3, v4.19-rc2, v4.19-rc1, v4.18, v4.18-rc8, v4.18-rc7, v4.18-rc6, v4.18-rc5, v4.18-rc4, v4.18-rc3, v4.18-rc2, v4.18-rc1, v4.17, v4.17-rc7, v4.17-rc6, v4.17-rc5, v4.17-rc4, v4.17-rc3 |
|
| #
e27c4929 |
| 27-Apr-2018 |
Arnd Bergmann <[email protected]> |
x86: Convert x86_platform_ops to timespec64
The x86 platform operations are fairly isolated, so it's easy to change them from using timespec to timespec64. It has been checked that all the users and callers are safe, and there is only one critical function that is broken beyond 2106:
pvclock_read_wallclock() uses a 32-bit number of seconds since the epoch to communicate the boot time between host and guest in a virtual environment. This will work until 2106, but fixing this is outside the scope of this change; add a comment at least.
Signed-off-by: Arnd Bergmann <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Boris Ostrovsky <[email protected]> Acked-by: Radim Krčmář <[email protected]> Acked-by: Jan Kiszka <[email protected]> Cc: Juergen Gross <[email protected]> Cc: [email protected] Cc: Borislav Petkov <[email protected]> Cc: [email protected] Cc: [email protected] Cc: "Rafael J. Wysocki" <[email protected]> Cc: [email protected] Cc: John Stultz <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Andy Shevchenko <[email protected]> Cc: Joao Martins <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
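A sketch of what the converted wallclock hooks look like, assuming the field names in x86_platform_ops (other members elided; the exact prototypes should be checked against the tree):

    struct x86_platform_ops {
            /* wallclock accessors now traffic in 64-bit seconds */
            void (*get_wallclock)(struct timespec64 *ts);
            int (*set_wallclock)(const struct timespec64 *ts);
            /* ... remaining callbacks unchanged ... */
    };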
|
|
Revision tags: v4.17-rc2, v4.17-rc1, v4.16, v4.16-rc7, v4.16-rc6, v4.16-rc5, v4.16-rc4, v4.16-rc3, v4.16-rc2, v4.16-rc1, v4.15, v4.15-rc9, v4.15-rc8, v4.15-rc7, v4.15-rc6, v4.15-rc5, v4.15-rc4, v4.15-rc3, v4.15-rc2, v4.15-rc1, v4.14 |
|
| #
9f08890a |
| 08-Nov-2017 |
Joao Martins <[email protected]> |
x86/pvclock: add setter for pvclock_pvti_cpu0_va
Right now there is only a pvclock_pvti_cpu0_va() which is defined on kvmclock since:
commit dac16fba6fc5 ("x86/vdso: Get pvclock data from the vvar VMA instead of the fixmap")
The only user of this interface so far is kvm. This commit adds a setter function for the pvti page and moves pvclock_pvti_cpu0_va to pvclock, which is a more generic place to have it; and would allow other PV clocksources to use it, such as Xen.
While moving pvclock_pvti_cpu0_va into pvclock, rename also this function to pvclock_get_pvti_cpu0_va (including its call sites) to be symmetric with the setter (pvclock_set_pvti_cpu0_va).
Signed-off-by: Joao Martins <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Acked-by: Paolo Bonzini <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Signed-off-by: Boris Ostrovsky <[email protected]>
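The resulting pair is small; a sketch under that assumption (modulo any sanity checks the real setter carries):

    static struct pvclock_vsyscall_time_info *pvti_cpu0_va __read_mostly;

    void pvclock_set_pvti_cpu0_va(struct pvclock_vsyscall_time_info *pvti)
    {
            pvti_cpu0_va = pvti;
    }

    struct pvclock_vsyscall_time_info *pvclock_get_pvti_cpu0_va(void)
    {
            return pvti_cpu0_va;
    }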
|
|
Revision tags: v4.14-rc8, v4.14-rc7, v4.14-rc6, v4.14-rc5, v4.14-rc4, v4.14-rc3, v4.14-rc2, v4.14-rc1, v4.13, v4.13-rc7, v4.13-rc6, v4.13-rc5, v4.13-rc4, v4.13-rc3, v4.13-rc2, v4.13-rc1, v4.12, v4.12-rc7, v4.12-rc6, v4.12-rc5, v4.12-rc4, v4.12-rc3, v4.12-rc2, v4.12-rc1, v4.11, v4.11-rc8, v4.11-rc7, v4.11-rc6, v4.11-rc5, v4.11-rc4, v4.11-rc3, v4.11-rc2, v4.11-rc1, v4.10, v4.10-rc8 |
|
| #
38b8d208 |
| 08-Feb-2017 |
Ingo Molnar <[email protected]> |
sched/headers: Prepare for new header dependencies before moving code to <linux/sched/nmi.h>
We are going to move softlockup APIs out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files.
<linux/nmi.h> already includes <linux/sched.h>.
Include the <linux/nmi.h> header in the files that are going to need it.
Acked-by: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
|
|
Revision tags: v4.10-rc7, v4.10-rc6, v4.10-rc5, v4.10-rc4, v4.10-rc3, v4.10-rc2, v4.10-rc1 |
|
| #
a5a1d1c2 |
| 21-Dec-2016 |
Thomas Gleixner <[email protected]> |
clocksource: Use a plain u64 instead of cycle_t
There is no point in having an extra type for extra confusion. u64 is unambiguous.
Conversion was done with the following coccinelle script:
@rem@
@@
-typedef u64 cycle_t;

@fix@
typedef cycle_t;
@@
-cycle_t
+u64
Signed-off-by: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: John Stultz <[email protected]>
|
|
Revision tags: v4.9, v4.9-rc8, v4.9-rc7, v4.9-rc6, v4.9-rc5, v4.9-rc4, v4.9-rc3, v4.9-rc2, v4.9-rc1, v4.8, v4.8-rc8, v4.8-rc7, v4.8-rc6, v4.8-rc5 |
|
| #
108b249c |
| 01-Sep-2016 |
Paolo Bonzini <[email protected]> |
KVM: x86: introduce get_kvmclock_ns
Introduce a function that reads the exact nanoseconds value that is provided to the guest in kvmclock. This crystallizes the notion of kvmclock as a thin veneer over a stable TSC, that the guest will (hopefully) convert with NTP. In other words, kvmclock is *not* a paravirtualized host-to-guest NTP.
Drop the get_kernel_ns() function, which was used both to get the base value of the master clock and to get the current value of kvmclock. The former use is replaced by ktime_get_boot_ns(); the latter is the purpose of get_kvmclock_ns().
This also allows KVM to provide a Hyper-V time reference counter that is synchronized with the time that is computed from the TSC page.
Reviewed-by: Roman Kagan <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Revision tags: v4.8-rc4, v4.8-rc3, v4.8-rc2, v4.8-rc1, v4.7, v4.7-rc7, v4.7-rc6, v4.7-rc5, v4.7-rc4, v4.7-rc3 |
|
| #
3aed64f6 |
| 09-Jun-2016 |
Paolo Bonzini <[email protected]> |
pvclock: introduce seqcount-like API
The version field in struct pvclock_vcpu_time_info basically implements a seqcount. Wrap it with the usual read_begin and read_retry functions, and use these APIs instead of peppering the code with smp_rmb()s. While at it, change it to the more pedantically correct virt_rmb().
With this change, __pvclock_read_cycles can be simplified noticeably.
Signed-off-by: Paolo Bonzini <[email protected]>
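The wrappers follow the usual seqcount shape; a simplified sketch of the helpers and a typical reader (close to, but not necessarily identical with, the upstream inlines):

    static __always_inline
    unsigned pvclock_read_begin(const struct pvclock_vcpu_time_info *src)
    {
            unsigned version = src->version & ~1;
            /* make sure the version is read before the time fields */
            virt_rmb();
            return version;
    }

    static __always_inline
    bool pvclock_read_retry(const struct pvclock_vcpu_time_info *src, unsigned version)
    {
            /* make sure the time fields are read before re-checking the version */
            virt_rmb();
            return unlikely(version != src->version);
    }

    /* typical reader (fragment) */
    unsigned version;
    do {
            version = pvclock_read_begin(src);
            /* ... read tsc_timestamp, system_time, mul/shift factors, flags ... */
    } while (pvclock_read_retry(src, version));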
|
|
Revision tags: v4.7-rc2, v4.7-rc1 |
|
| #
ed911b43 |
| 28-May-2016 |
Minfei Huang <[email protected]> |
pvclock: Get rid of __pvclock_read_cycles in function pvclock_read_flags
There is a generic function __pvclock_read_cycles() used to get both flags and cycles. For pvclock_read_flags() it is useless to compute the cycles value, so make the function more efficient by reading the flags directly.
Signed-off-by: Minfei Huang <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
|
| #
749d088b |
| 27-May-2016 |
Minfei Huang <[email protected]> |
pvclock: Add CPU barriers to get correct version value
Protocol for the "version" fields is: hypervisor raises it (making it uneven) before it starts updating the fields and raises it again (making it even) when it is done. Thus the guest can make sure the time values it got are consistent by checking the version before and after reading them.
Add CPU barriers after getting the version value, just like function vread_pvclock() does, because all of the callees in this function are inline.
Fixes: 502dfeff239e8313bfbe906ca0a1a6827ac8481b Cc: [email protected] Signed-off-by: Minfei Huang <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
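Concretely, the reader side of the version handshake looks roughly like this (a sketch of the protocol described above, using generic smp_rmb() barriers rather than the exact ones in the patch):

    u8 pvclock_read_flags(struct pvclock_vcpu_time_info *src)
    {
            unsigned version;
            u8 flags;

            do {
                    version = src->version;
                    /* the flags read below must not be hoisted above the version read */
                    smp_rmb();
                    flags = src->flags;
                    /* the flags read must complete before the version is checked again */
                    smp_rmb();
            } while ((src->version & 1) || version != src->version);

            return flags;
    }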
|
|
Revision tags: v4.6, v4.6-rc7, v4.6-rc6, v4.6-rc5, v4.6-rc4, v4.6-rc3, v4.6-rc2, v4.6-rc1, v4.5, v4.5-rc7, v4.5-rc6, v4.5-rc5, v4.5-rc4, v4.5-rc3, v4.5-rc2, v4.5-rc1, v4.4, v4.4-rc8, v4.4-rc7, v4.4-rc6, v4.4-rc5 |
|
| #
cc1e24fd |
| 11-Dec-2015 |
Andy Lutomirski <[email protected]> |
x86/vdso: Remove pvclock fixmap machinery
Signed-off-by: Andy Lutomirski <[email protected]> Reviewed-by: Paolo Bonzini <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/4933029991103ae44672c82b97a20035f5c1fe4f.1449702533.git.luto@kernel.org Signed-off-by: Ingo Molnar <[email protected]>
|
|
Revision tags: v4.4-rc4, v4.4-rc3, v4.4-rc2, v4.4-rc1, v4.3, v4.3-rc7, v4.3-rc6, v4.3-rc5, v4.3-rc4, v4.3-rc3, v4.3-rc2, v4.3-rc1, v4.2, v4.2-rc8, v4.2-rc7, v4.2-rc6, v4.2-rc5, v4.2-rc4, v4.2-rc3, v4.2-rc2, v4.2-rc1, v4.1, v4.1-rc8, v4.1-rc7, v4.1-rc6, v4.1-rc5, v4.1-rc4, v4.1-rc3, v4.1-rc2, v4.1-rc1 |
|
| #
73459e2a |
| 23-Apr-2015 |
Paolo Bonzini <[email protected]> |
x86: pvclock: Really remove the sched notifier for cross-cpu migrations
This reverts commits 0a4e6be9ca17c54817cf814b4b5aa60478c6df27 and 80f7fdb1c7f0f9266421f823964fd1962681f6ce.
The task migration notifier was originally introduced in order to support the pvclock vsyscall with non-synchronized TSC, but KVM only supports it with synchronized TSC. Hence, on KVM the race condition is only needed due to a bad implementation on the host side, and even then it's so rare that it's mostly theoretical.
As far as KVM is concerned it's possible to fix the host, avoiding the additional complexity in the vDSO and the (re)introduction of the task migration notifier.
Xen, on the other hand, hasn't yet implemented vsyscall support at all, so we do not care about its plans for non-synchronized TSC.
Reported-by: Peter Zijlstra <[email protected]> Suggested-by: Marcelo Tosatti <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Revision tags: v4.0, v4.0-rc7, v4.0-rc6 |
|
| #
0a4e6be9 |
| 23-Mar-2015 |
Marcelo Tosatti <[email protected]> |
x86: kvm: Revert "remove sched notifier for cross-cpu migrations"
The following point:
2. per-CPU pvclock time info is updated if the underlying CPU changes.
Is not true anymore since "KVM: x86: update pvclock area conditionally, on cpu migration".
Add task migration notification back.
Problem noticed by Andy Lutomirski.
Signed-off-by: Marcelo Tosatti <[email protected]> CC: [email protected] # 3.11+
|
|
Revision tags: v4.0-rc5, v4.0-rc4, v4.0-rc3, v4.0-rc2, v4.0-rc1, v3.19, v3.19-rc7, v3.19-rc6, v3.19-rc5, v3.19-rc4, v3.19-rc3, v3.19-rc2, v3.19-rc1, v3.18, v3.18-rc7, v3.18-rc6, v3.18-rc5, v3.18-rc4, v3.18-rc3, v3.18-rc2, v3.18-rc1, v3.17, v3.17-rc7, v3.17-rc6, v3.17-rc5, v3.17-rc4, v3.17-rc3, v3.17-rc2, v3.17-rc1, v3.16, v3.16-rc7, v3.16-rc6, v3.16-rc5, v3.16-rc4, v3.16-rc3, v3.16-rc2, v3.16-rc1, v3.15, v3.15-rc8, v3.15-rc7, v3.15-rc6, v3.15-rc5, v3.15-rc4, v3.15-rc3, v3.15-rc2, v3.15-rc1, v3.14, v3.14-rc8, v3.14-rc7, v3.14-rc6, v3.14-rc5, v3.14-rc4, v3.14-rc3, v3.14-rc2, v3.14-rc1, v3.13, v3.13-rc8, v3.13-rc7, v3.13-rc6, v3.13-rc5, v3.13-rc4, v3.13-rc3, v3.13-rc2, v3.13-rc1, v3.12, v3.12-rc7, v3.12-rc6, v3.12-rc5 |
|
| #
8b414521 |
| 12-Oct-2013 |
Marcelo Tosatti <[email protected]> |
hung_task: add method to reset detector
On certain occasions a hung task detector positive can be a false positive: continuation from a paused VM, for example.
Add a method to reset detection, similar as is done with other kernel watchdogs.
Acked-by: Don Zickus <[email protected]> Acked-by: Paolo Bonzini <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]> Signed-off-by: Gleb Natapov <[email protected]>
|
| #
d63285e9 |
| 12-Oct-2013 |
Marcelo Tosatti <[email protected]> |
pvclock: detect watchdog reset at pvclock read
Implement reset of kernel watchdogs at pvclock read time. This avoids adding special code to every watchdog.
This is possible for watchdogs which measure time based on sched_clock() or ktime_get() variants.
Suggested by Don Zickus.
Acked-by: Don Zickus <[email protected]> Acked-by: Paolo Bonzini <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]> Signed-off-by: Gleb Natapov <[email protected]>
|
|
Revision tags: v3.12-rc4, v3.12-rc3, v3.12-rc2, v3.12-rc1, v3.11, v3.11-rc7, v3.11-rc6, v3.11-rc5, v3.11-rc4, v3.11-rc3, v3.11-rc2, v3.11-rc1 |
|
| #
e04c5d76 |
| 11-Jul-2013 |
Marcelo Tosatti <[email protected]> |
remove sched notifier for cross-cpu migrations
Linux as a guest on KVM hypervisor, the only user of the pvclock vsyscall interface, does not require notification on task migration because:
1. cpu ID number maps 1:1 to per-CPU pvclock time info.
2. per-CPU pvclock time info is updated if the underlying CPU changes.
3. that version is increased whenever underlying CPU changes.
Which is sufficient to guarantee nanoseconds counter is calculated properly.
Signed-off-by: Marcelo Tosatti <[email protected]> Acked-by: Peter Zijlstra <[email protected]> Signed-off-by: Gleb Natapov <[email protected]>
|
|
Revision tags: v3.10, v3.10-rc7, v3.10-rc6, v3.10-rc5, v3.10-rc4, v3.10-rc3, v3.10-rc2, v3.10-rc1, v3.9, v3.9-rc8, v3.9-rc7, v3.9-rc6, v3.9-rc5, v3.9-rc4, v3.9-rc3, v3.9-rc2, v3.9-rc1 |
|
| #
3d2a80a2 |
| 27-Feb-2013 |
Peter Hurley <[email protected]> |
x86/kvm: Fix pvclock vsyscall fixmap
The physical memory fixmapped for the pvclock clock_gettime vsyscall was allocated, and thus is not a kernel symbol. __pa() is the proper method to use in this case.
Fixes the crash below when booting a next-20130204+ smp guest on a 3.8-rc5+ KVM host.
[ 0.666410] udevd[97]: starting version 175
[ 0.674043] udevd[97]: udevd:[97]: segfault at ffffffffff5fd020 ip 00007fff069e277f sp 00007fff068c9ef8 error d
Acked-by: Marcelo Tosatti <[email protected]> Signed-off-by: Peter Hurley <[email protected]> Signed-off-by: Gleb Natapov <[email protected]>
|
|
Revision tags: v3.8, v3.8-rc7, v3.8-rc6, v3.8-rc5, v3.8-rc4, v3.8-rc3, v3.8-rc2, v3.8-rc1, v3.7, v3.7-rc8 |
|
| #
71056ae2 |
| 28-Nov-2012 |
Marcelo Tosatti <[email protected]> |
x86: pvclock: generic pvclock vsyscall initialization
Originally from Jeremy Fitzhardinge.
Introduce generic, non hypervisor specific, pvclock initialization routines.
Signed-off-by: Marcelo Tosat
x86: pvclock: generic pvclock vsyscall initialization
Originally from Jeremy Fitzhardinge.
Introduce generic, non hypervisor specific, pvclock initialization routines.
Signed-off-by: Marcelo Tosatti <[email protected]>
|
| #
2697902b |
| 28-Nov-2012 |
Marcelo Tosatti <[email protected]> |
x86: pvclock: introduce helper to read flags
Acked-by: Glauber Costa <[email protected]> Signed-off-by: Marcelo Tosatti <[email protected]>
|