Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5

# cb380909 | 27-Feb-2025 | Keith Busch <[email protected]>

vhost: return task creation error instead of NULL

Lets callers distinguish why the vhost task creation failed. No one currently cares why it failed, so no real runtime change from this patch, but that will not be the case for long.

Signed-off-by: Keith Busch <[email protected]>
Message-ID: <[email protected]>
Reviewed-by: Mike Christie <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
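
For reference, the ERR_PTR()/IS_ERR() convention this change adopts looks like the sketch below. This is a hand-written illustration of the pattern, not the vhost code itself; struct example_task and example_task_create() are hypothetical names.

    #include <linux/err.h>
    #include <linux/slab.h>

    struct example_task {
            int unused;
    };

    /* Creation path: propagate the real errno instead of returning NULL. */
    static struct example_task *example_task_create(void)
    {
            struct example_task *tsk = kzalloc(sizeof(*tsk), GFP_KERNEL);

            if (!tsk)
                    return ERR_PTR(-ENOMEM);        /* was: return NULL */
            return tsk;
    }

    /* Callers can now tell -ENOMEM apart from other failures. */
    static int example_caller(void)
    {
            struct example_task *tsk = example_task_create();

            if (IS_ERR(tsk))
                    return PTR_ERR(tsk);
            return 0;
    }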

Revision tags: v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2, v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7

# 7ad47239 | 29-Apr-2024 | Michael S. Tsirkin <[email protected]>

vhost: move smp_rmb() into vhost_get_avail_idx()

All callers of vhost_get_avail_idx() use smp_rmb() to order the available ring entry read and avail_idx read.

Make vhost_get_avail_idx() call smp_rmb() itself whenever the avail_idx is accessed. This way, the callers don't need to worry about the memory barrier. As a side benefit, we also validate the index on all paths now, which will hopefully help prevent/catch future bugs earlier.

Note that the current code is inconsistent in how errors are handled: they are treated as an empty ring in some places but as a non-empty ring in others. This patch doesn't attempt to change the existing behaviour.

No functional change intended.

Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Gavin Shan <[email protected]>
Acked-by: Will Deacon <[email protected]>
Message-Id: <[email protected]>
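
The barrier this folds into the accessor is the standard load/load ordering between a producer-published index and the ring entries it covers. A minimal sketch of the idea, using generic names rather than the actual vhost internals:

    /* Consumer side: order the avail_idx load before any later loads of
     * the ring slots it covers. Pairs with the producer's write barrier. */
    static u16 example_read_avail_idx(const u16 *shared_avail_idx)
    {
            u16 idx = READ_ONCE(*shared_avail_idx);

            smp_rmb();
            return idx;
    }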

Revision tags: v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1

# db5247d9 | 16-Mar-2024 | Mike Christie <[email protected]>

vhost_task: Handle SIGKILL by flushing work and exiting

Instead of lingering until the device is closed, this has us handle SIGKILL by:

1. marking the worker as killed so we no longer try to use it with new virtqueues and new flush operations.
2. setting the virtqueue-to-worker mapping so no new works are queued.
3. running all the existing works.

Suggested-by: Edward Adam Davis <[email protected]>
Reported-and-tested-by: [email protected]
Message-Id: <[email protected]>
Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
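
The sequence above is a "mark dead, fence new submitters, drain" shutdown pattern. A rough sketch under assumed names (struct example_worker and its killed flag are illustrative, not the literal vhost structures):

    #include <linux/list.h>
    #include <linux/mutex.h>

    struct example_work {
            struct list_head node;
            void (*fn)(struct example_work *work);
    };

    struct example_worker {
            struct mutex mutex;
            struct list_head work_list;
            bool killed;
    };

    static void example_worker_kill(struct example_worker *worker)
    {
            struct example_work *work, *tmp;

            mutex_lock(&worker->mutex);
            worker->killed = true;          /* 1. refuse new vqs and flushes */
            mutex_unlock(&worker->mutex);

            /* 2. vq-to-worker mappings are updated elsewhere so nothing
             *    new lands on work_list after this point. */

            /* 3. run what was already queued, then let the task exit. */
            list_for_each_entry_safe(work, tmp, &worker->work_list, node) {
                    list_del_init(&work->node);
                    work->fn(work);
            }
    }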

# ba704ff4 | 16-Mar-2024 | Mike Christie <[email protected]>

vhost: Release worker mutex during flushes

In the next patches, where the worker can be killed while in use, we need to be able to take the worker mutex and kill queued works for new IO and flushes, and set some new flags to prevent new __vhost_vq_attach_worker calls from swapping in/out killed workers.

If we are holding the worker mutex during a flush and the flush's work is still in the queue, the worker code that will handle the SIGKILL cleanup won't be able to take the mutex and perform its cleanup. So this patch has us drop the worker mutex while waiting for the flush to complete.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
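
Dropping the mutex here is the classic "don't hold a lock across a wait that another lock user must satisfy" move. Schematically, with assumed names (example_queue_work() and struct example_flush_work are stand-ins):

    #include <linux/completion.h>

    struct example_flush_work {
            struct example_work work;       /* its fn runs complete(done) */
            struct completion *done;
    };

    static void example_flush(struct example_worker *worker,
                              struct example_flush_work *flush)
    {
            DECLARE_COMPLETION_ONSTACK(done);

            flush->done = &done;

            mutex_lock(&worker->mutex);
            example_queue_work(worker, &flush->work);
            mutex_unlock(&worker->mutex);   /* drop before waiting, so the
                                               SIGKILL cleanup can take it */
            wait_for_completion(&done);
    }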

# 34cf9ba5 | 16-Mar-2024 | Mike Christie <[email protected]>

vhost: Use virtqueue mutex for swapping worker

__vhost_vq_attach_worker uses the vhost_dev mutex to serialize the swapping of a virtqueue's worker. This was done for simplicity because we are already holding that mutex.

In the next patches, where the worker can be killed while in use, we need finer-grained locking because some drivers will hold the vhost_dev mutex while flushing. However, in the SIGKILL handler in the next patches, we will need to be able to swap workers (set the current one to NULL), kill queued works and stop new flushes while flushes are in progress.

To prepare us, this has us use the virtqueue mutex for swapping workers instead of the vhost_dev one.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# d9e59eec | 16-Mar-2024 | Mike Christie <[email protected]>

vhost: Remove vhost_vq_flush

vhost_vq_flush is no longer used, so remove it.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 76f40853 | 11-Mar-2024 | Xianting Tian <[email protected]>

vhost: correct misleading printing information

It is the guest that moved the avail idx, not the used idx, so fix the misleading message printed when '(vq->avail_idx - last_avail_idx) > vq->num'.

Signed-off-by: Xianting Tian <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# df9ace76 | 28-Mar-2024 | Gavin Shan <[email protected]>

vhost: Add smp_rmb() in vhost_enable_notify()

An smp_rmb() has been missed in vhost_enable_notify(), inspired by Will. Otherwise, it is not guaranteed that the available ring entries pushed by the guest are observed by vhost in time, leading to stale available ring entries being fetched by vhost in vhost_get_vq_desc(), as reported by Yihuang Yu on NVIDIA's Grace Hopper (ARM64) platform.

/home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host -cpu host \
  -smp maxcpus=1,cpus=1,sockets=1,clusters=1,cores=1,threads=1 \
  -m 4096M,slots=16,maxmem=64G \
  -object memory-backend-ram,id=mem0,size=4096M \
  : \
  -netdev tap,id=vnet0,vhost=true \
  -device virtio-net-pci,bus=pcie.8,netdev=vnet0,mac=52:54:00:f1:26:b0
  :

guest# netperf -H 10.26.1.81 -l 60 -C -c -t UDP_STREAM
virtio_net virtio0: output.0:id 100 is not a head!

Add the missed smp_rmb() in vhost_enable_notify(). When it returns true, it means there are still pending tx buffers. Since the caller might read the indices, it can still bypass the smp_rmb() in vhost_get_vq_desc(). Note that it was safe until vq->avail_idx handling was changed by commit d3bb267bbdcb ("vhost: cache avail index in vhost_enable_notify()").

Fixes: d3bb267bbdcb ("vhost: cache avail index in vhost_enable_notify()")
Cc: <[email protected]> # v5.18+
Reported-by: Yihuang Yu <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Signed-off-by: Gavin Shan <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Stefano Garzarella <[email protected]>
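
The pairing being restored is the usual publish/consume discipline between guest and host. In outline, with generic names (virtio code uses the virt_wmb()/virt_rmb() variants of these barriers):

    /* Guest (producer): write the ring entry, then publish the index. */
    static void example_publish(u16 *ring, u16 *avail_idx, u16 idx,
                                u16 head, u16 size)
    {
            ring[idx % size] = head;
            smp_wmb();              /* entry visible before the index */
            WRITE_ONCE(*avail_idx, idx + 1);
    }

    /* Host (consumer): read the index, then the entries it covers. */
    static bool example_ring_empty(const u16 *avail_idx, u16 last_avail)
    {
            u16 idx = READ_ONCE(*avail_idx);

            smp_rmb();              /* pairs with the producer's smp_wmb() */
            return idx == last_avail;
    }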

# 22e1992c | 28-Mar-2024 | Gavin Shan <[email protected]>

vhost: Add smp_rmb() in vhost_vq_avail_empty()

An smp_rmb() has been missed in vhost_vq_avail_empty(), spotted by Will. Otherwise, it is not guaranteed that the available ring entries pushed by the guest are observed by vhost in time, leading to stale available ring entries being fetched by vhost in vhost_get_vq_desc(), as reported by Yihuang Yu on NVIDIA's Grace Hopper (ARM64) platform.

/home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host -cpu host \
  -smp maxcpus=1,cpus=1,sockets=1,clusters=1,cores=1,threads=1 \
  -m 4096M,slots=16,maxmem=64G \
  -object memory-backend-ram,id=mem0,size=4096M \
  : \
  -netdev tap,id=vnet0,vhost=true \
  -device virtio-net-pci,bus=pcie.8,netdev=vnet0,mac=52:54:00:f1:26:b0
  :

guest# netperf -H 10.26.1.81 -l 60 -C -c -t UDP_STREAM
virtio_net virtio0: output.0:id 100 is not a head!

Add the missed smp_rmb() in vhost_vq_avail_empty(). When tx_can_batch() returns true, it means there are still pending tx buffers. Since it might read the indices, it can still bypass the smp_rmb() in vhost_get_vq_desc(). Note that it was safe until vq->avail_idx handling was changed by commit 275bf960ac69 ("vhost: better detection of available buffers").

Fixes: 275bf960ac69 ("vhost: better detection of available buffers")
Cc: <[email protected]> # v4.11+
Reported-by: Yihuang Yu <[email protected]>
Suggested-by: Will Deacon <[email protected]>
Signed-off-by: Gavin Shan <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Stefano Garzarella <[email protected]>

Revision tags: v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3

# 3652117f | 22-Nov-2023 | Christian Brauner <[email protected]>

eventfd: simplify eventfd_signal()

Ever since the eventfd type was introduced back in 2007 in commit e1ad7468c77d ("signal/timer/event: eventfd core") the eventfd_signal() function only ever passed 1 as a value for @n. There's no point in keeping that additional argument.

Link: https://lore.kernel.org/r/[email protected]
Acked-by: Xu Yilun <[email protected]>
Acked-by: Andrew Donnellan <[email protected]> # ocxl
Acked-by: Eric Farman <[email protected]> # s390
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: Jens Axboe <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
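
At a call site the change is just dropping the always-1 argument; the eventfd counter is still incremented by one internally:

    /* before */
    eventfd_signal(ctx, 1);

    /* after */
    eventfd_signal(ctx);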

Revision tags: v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6, v6.6-rc5, v6.6-rc4

# ca50ec37 | 27-Sep-2023 | Eric Auger <[email protected]>

vhost: Allow null msg.size on VHOST_IOTLB_INVALIDATE

Commit e2ae38cf3d91 ("vhost: fix hung thread due to erroneous iotlb entries") forbade vhost iotlb msgs with null size, to prevent entries with size = start = 0 and last = ULONG_MAX from ending up in the iotlb.

Then commit 95932ab2ea07 ("vhost: allow batching hint without size") only applied the check for the VHOST_IOTLB_UPDATE and VHOST_IOTLB_INVALIDATE message types, to fix a regression observed with the batching hint.

Still, that check introduced a regression for some users attempting to invalidate the whole ULONG_MAX range by setting the size to 0. This is the case with the qemu/smmuv3/vhost integration, which does not work anymore. It looks safe to partially revert the original commit and allow VHOST_IOTLB_INVALIDATE messages with null size. vhost_iotlb_del_range() will compute a correct end iova. Same for vhost_vdpa_iotlb_unmap().

Signed-off-by: Eric Auger <[email protected]>
Fixes: e2ae38cf3d91 ("vhost: fix hung thread due to erroneous iotlb entries")
Cc: [email protected] # v5.17+
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
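
The partial revert amounts to restricting the zero-size rejection to updates. The condition below is reconstructed from the description above, not copied from the patch:

    /* Only VHOST_IOTLB_UPDATE must reject a zero-sized range;
     * VHOST_IOTLB_INVALIDATE with size == 0 again means "invalidate
     * everything from iova onward". */
    if (msg->type == VHOST_IOTLB_UPDATE && msg->size == 0)
            return -EINVAL;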

Revision tags: v6.6-rc3, v6.6-rc2, v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1

# 228a27cf | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: Allow worker switching while work is queueing

This patch drops the requirement that we can only switch workers if work has not been queued, by using RCU for the vq based queueing paths and a mutex for the device wide flush.

We can also use this to support SIGKILL properly in the future, where we should exit almost immediately after getting that signal. With this patch, when get_signal returns true, we can set the vq->worker to NULL and do a synchronize_rcu to prevent new work from being queued to the vhost_task that has been killed.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
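
The RCU usage described here is the standard publish/retire pattern. A condensed sketch with approximate names (example_worker_queue() is a stand-in for the real queueing helper):

    /* Queueing path: dereference the vq's worker under RCU. */
    static bool example_vq_work_queue(struct vhost_virtqueue *vq,
                                      struct vhost_work *work)
    {
            struct vhost_worker *worker;
            bool queued = false;

            rcu_read_lock();
            worker = rcu_dereference(vq->worker);
            if (worker) {
                    example_worker_queue(worker, work);
                    queued = true;
            }
            rcu_read_unlock();
            return queued;
    }

    /* Swap/kill path: unpublish, then wait out in-flight queuers. */
    static void example_vq_detach_worker(struct vhost_virtqueue *vq)
    {
            rcu_assign_pointer(vq->worker, NULL);
            synchronize_rcu();
    }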

# c1ecd8e9 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: allow userspace to create workers

For vhost-scsi with 3 vqs or more and a workload that tries to use them in parallel like:

fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
    --ioengine=libaio --iodepth=128 --numjobs=3

the single vhost worker thread will become a bottleneck and we are stuck at around 500K IOPS no matter how many jobs, virtqueues, and CPUs are used.

To better utilize virtqueues and available CPUs, this patch allows userspace to create workers and bind them to vqs. You can have N workers per dev and also share N workers with M vqs on that dev.

This patch adds the interface related code and the next patch will hook vhost-scsi into it. The patches do not try to hook net and vsock into the interface because:

1. Multiple workers don't seem to help vsock. The problem is that with only 2 virtqueues we never fully use the existing worker when doing bidirectional tests. This seems to match vhost-scsi, where we don't see the worker as a bottleneck until 3 virtqueues are used.

2. net already has a way to use multiple workers.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
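
From userspace, the interface added by this series is used along these lines. The ioctl and struct names below should be checked against the final uapi vhost header; treat this as a hedged sketch rather than a verbatim example:

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    static int attach_new_worker(int vhost_fd, unsigned int vq_index)
    {
            struct vhost_worker_state w = {};
            struct vhost_vring_worker vw = {};

            if (ioctl(vhost_fd, VHOST_NEW_WORKER, &w) < 0)
                    return -1;              /* kernel fills in w.worker_id */

            vw.index = vq_index;
            vw.worker_id = w.worker_id;
            return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vw);
    }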

# 1cdaafa1 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: replace single worker pointer with xarray

The next patch allows userspace to create multiple workers per device, so this patch replaces the vhost_worker pointer with an xarray so we can store multiple workers and look them up.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
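
The xarray usage follows the usual xa_alloc()/xa_load() pattern, roughly (helper names are illustrative):

    #include <linux/xarray.h>

    /* Store a worker under a small allocated id. */
    static int example_worker_store(struct xarray *workers,
                                    struct vhost_worker *worker, u32 *id)
    {
            return xa_alloc(workers, id, worker, xa_limit_32b, GFP_KERNEL);
    }

    /* Look a worker back up by id. */
    static struct vhost_worker *example_worker_find(struct xarray *workers,
                                                    u32 id)
    {
            return xa_load(workers, id);
    }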

# cef25866 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: add helper to parse userspace vring state/file

The next patches add new vhost worker ioctls which will need to get a vhost_virtqueue from a userspace struct which specifies the vq's index. This moves that code from vhost_vring_ioctl into a helper so it can be shared.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 27eca189 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: remove vhost_work_queue

vhost_work_queue is no longer used. Each driver is using the poll or vq based queueing, so remove vhost_work_queue.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 493b94bf | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: convert poll work to be vq based

This has the drivers pass in their poll-to-vq mapping and then converts the core poll code to use the vq based helpers. In the next patches we will allow vqs to be handled by different workers, so to allow drivers to execute operations like queue, stop, flush, etc. on specific polls/vqs we need to know the mappings.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# a6fc0473 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: take worker or vq for flushing

This patch has the core work flush function take a worker. When we support multiple workers we can then flush each worker during device removal, stoppage, etc. It also adds a helper to flush specific virtqueues, so vhost-scsi can flush IO vqs from its ctl vq.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 0921dddc | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: take worker or vq instead of dev for queueing

This patch has the core work queueing function take a worker, for when we support multiple workers. It also adds a helper that takes a vq during queueing so modules can control which vq/worker to queue work on.

This temporarily leaves vhost_work_queue in place. It will be removed when the drivers are converted in the next patches.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 9784df15 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost, vhost_net: add helper to check if vq has work

In the next patches each vq might have different workers so one could have work but others do not. For net, we only want to check specific vqs, so this adds a helper to check if a vq has work pending and converts vhost-net to use it.

Signed-off-by: Mike Christie <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 737bdb64 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: add vhost_worker pointer to vhost_virtqueue

This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so in later patches in this set we can queue/flush specific vqs and their workers.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# c011bb66 | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: dynamically allocate vhost_worker

This patchset allows us to allocate multiple workers, so this has us move from the vhost_worker that's embedded in the vhost_dev to dynamically allocating it.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

# 3e11c6eb | 26-Jun-2023 | Mike Christie <[email protected]>

vhost: create worker at end of vhost_dev_set_owner

vsock can start queueing work after VHOST_VSOCK_SET_GUEST_CID, so after we have called vhost_worker_create it can be calling vhost_work_queue and trying to access the vhost worker/task. If vhost_dev_alloc_iovecs fails, then vhost_worker_free could free the worker/task from under vsock.

This moves vhost_worker_create to the end of vhost_dev_set_owner, where we know we can no longer fail in that path. If it fails after VHOST_SET_OWNER and userspace closes the device, then the normal vsock release handling will do the right thing.

Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>

Revision tags: v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1

# 55d8122f | 24-Apr-2023 | Shannon Nelson <[email protected]>

vhost: support PACKED when setting-getting vring_base

Use the right structs for PACKED or split vqs when setting and getting the vring base.

Fixes: 4c8cf31885f6 ("vhost: introduce vDPA-based backend")
Signed-off-by: Shannon Nelson <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Jason Wang <[email protected]>
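
For the packed layout, the series packs both indices and their wrap counters into the 32-bit vhost_vring_state num field. Roughly the following encoding, reconstructed from the description (verify the exact bit layout against the merged code):

    /* Sketch: 15 bits of index plus 1 wrap-counter bit per direction. */
    static u32 example_encode_packed_base(u16 last_avail_idx, bool avail_wrap,
                                          u16 last_used_idx, bool used_wrap)
    {
            return (last_avail_idx & 0x7fff) |
                   ((u32)avail_wrap << 15) |
                   ((u32)(last_used_idx & 0x7fff) << 16) |
                   ((u32)used_wrap << 31);
    }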

# 4b13cbef | 07-Jun-2023 | Mike Christie <[email protected]>

vhost: Fix worker hangs due to missed wake up calls

We can race where we have added work to the work_list, but vhost_task_fn has passed that check but not yet set us into TASK_INTERRUPTIBLE. wake_up_process will see us in TASK_RUNNING and just return.

This bug was introduced in commit f9010dbdce91 ("fork, vhost: Use CLONE_THREAD to fix freezer/ps regression") when I moved the setting of TASK_INTERRUPTIBLE to simplify the code and avoid get_signal from logging warnings about being in the wrong state. This moves the setting of TASK_INTERRUPTIBLE back to before we test if we need to stop the task, to avoid a possible race there as well. We then have vhost_worker set TASK_RUNNING if it finds work, similar to before.

Fixes: f9010dbdce91 ("fork, vhost: Use CLONE_THREAD to fix freezer/ps regression")
Signed-off-by: Mike Christie <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
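
The ordering being restored is the canonical sleep/wake pattern: publish the "about to sleep" state before re-checking the condition, so a concurrent wake_up_process() cannot be lost. In outline (the example_* helpers are stand-ins):

    for (;;) {
            set_current_state(TASK_INTERRUPTIBLE);

            if (example_should_stop(worker)) {
                    __set_current_state(TASK_RUNNING);
                    break;
            }

            if (example_has_work(worker)) {
                    __set_current_state(TASK_RUNNING);
                    example_run_work(worker);
                    continue;
            }

            /* A queue+wake between the check above and here now sees
             * TASK_INTERRUPTIBLE and will wake us. */
            schedule();
    }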