|
Revision tags: v6.15, v6.15-rc7, v6.15-rc6, v6.15-rc5, v6.15-rc4, v6.15-rc3, v6.15-rc2, v6.15-rc1, v6.14, v6.14-rc7, v6.14-rc6, v6.14-rc5, v6.14-rc4 |
|
| #
aed06d36 |
| 22-Feb-2025 |
Dr. David Alan Gilbert <[email protected]> |
ceph: Remove osd_client deadcode
osd_req_op_extent_osd_data_pagelist() was added in 2013 as part of commit a4ce40a9a7c1 ("libceph: combine initializing and setting osd data") but never used.
The last use of osd_req_op_cls_request_data_pagelist() was removed in 2017's commit ecd4a68a26a2 ("rbd: switch rbd_obj_method_sync() to ceph_osdc_call()")
Remove them.
Signed-off-by: Dr. David Alan Gilbert <[email protected]>
Reviewed-by: Viacheslav Dubeyko <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v6.14-rc3, v6.14-rc2, v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4, v6.13-rc3, v6.13-rc2, v6.13-rc1, v6.12, v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4, v6.12-rc3, v6.12-rc2 |
|
| #
32844fd7 |
| 06-Oct-2024 |
Dr. David Alan Gilbert <[email protected]> |
libceph: Remove unused ceph_osdc_watch_check
ceph_osdc_watch_check() has been unused since it was added in commit b07d3c4bd727 ("libceph: support for checking on status of watch")
Remove it.
Signed-off-by: Dr. David Alan Gilbert <[email protected]>
Reviewed-by: Ilya Dryomov <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v6.12-rc1, v6.11, v6.11-rc7, v6.11-rc6, v6.11-rc5, v6.11-rc4 |
|
| #
9a948c0c |
| 14-Aug-2024 |
Yue Haibing <[email protected]> |
ceph: Remove unused declarations
These functions are never implemented or used.
Signed-off-by: Yue Haibing <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v6.11-rc3, v6.11-rc2, v6.11-rc1, v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5, v6.10-rc4, v6.10-rc3, v6.10-rc2, v6.10-rc1, v6.9, v6.9-rc7, v6.9-rc6, v6.9-rc5, v6.9-rc4, v6.9-rc3, v6.9-rc2, v6.9-rc1, v6.8, v6.8-rc7, v6.8-rc6, v6.8-rc5, v6.8-rc4, v6.8-rc3, v6.8-rc2, v6.8-rc1, v6.7, v6.7-rc8, v6.7-rc7, v6.7-rc6, v6.7-rc5, v6.7-rc4, v6.7-rc3, v6.7-rc2, v6.7-rc1, v6.6, v6.6-rc7, v6.6-rc6 |
|
| #
cd7d469c |
| 13-Oct-2023 |
Xiubo Li <[email protected]> |
libceph: fail sparse-read if the data length doesn't match
If this ever happens, it means there are bugs.
Signed-off-by: Xiubo Li <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
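For illustration only, a minimal sketch of the kind of sanity check being described, assuming a hypothetical helper that compares the byte count implied by the decoded extent map with the data length declared in the reply (not the exact mainline code):

    /* Hypothetical helper: reject a sparse-read reply whose declared data
     * length does not match what its extent map describes. */
    static int sparse_read_check_data_len(u64 extent_bytes, u64 declared_len)
    {
            if (extent_bytes != declared_len) {
                    pr_warn("sparse-read: extents cover %llu bytes, reply declares %llu\n",
                            extent_bytes, declared_len);
                    return -EREMOTEIO;      /* treat the reply as corrupt */
            }
            return 0;
    }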
|
| #
aaefabc4 |
| 07-Nov-2023 |
Xiubo Li <[email protected]> |
ceph: try to allocate a smaller extent map for sparse read
In the fscrypt case, for a smaller read length we can predict the max count of extents in the extent map, and for small read length use cases this could save some memory.
[ idryomov: squash into a single patch to avoid build break, drop redundant variable in ceph_alloc_sparse_ext_map() ]
Signed-off-by: Xiubo Li <[email protected]>
Reviewed-by: Ilya Dryomov <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
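A rough sketch of the prediction being described, under the assumption that an encrypted (fscrypt) object returns at most one extent per fscrypt block touched by the read; the constant and helper names here are illustrative only:

    #define EXAMPLE_FSCRYPT_BLOCK_SIZE      4096    /* fscrypt data-unit size */

    /* Upper bound on the number of extents a short sparse read can return. */
    static inline u32 predicted_extent_count(u64 read_len)
    {
            u64 cnt = DIV_ROUND_UP(read_len, EXAMPLE_FSCRYPT_BLOCK_SIZE);

            return cnt ? cnt : 1;   /* always leave room for one extent */
    }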
|
|
Revision tags: v6.6-rc5, v6.6-rc4, v6.6-rc3, v6.6-rc2 |
|
| #
5234193e |
| 15-Sep-2023 |
Kees Cook <[email protected]> |
ceph: Annotate struct ceph_osd_request with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by attribute. Flexible array members annotated with __counted_by can have their accesses bounds-checked at run time via CONFIG_UBSAN_BOUNDS (for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family functions).
As found with Coccinelle[1], add __counted_by for struct ceph_osd_request.
[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci
Cc: Ilya Dryomov <[email protected]>
Cc: Xiubo Li <[email protected]>
Cc: Jeff Layton <[email protected]>
Cc: [email protected]
Reviewed-by: "Gustavo A. R. Silva" <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
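A trimmed sketch of what the annotation looks like; the real struct lives in include/linux/ceph/osd_client.h and has many more members, so treat the field selection here as illustrative:

    struct ceph_osd_request_example {
            unsigned int            r_num_ops;      /* element count for r_ops[] */
            /* ... other members elided ... */
            struct ceph_osd_req_op  r_ops[] __counted_by(r_num_ops);
    };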
|
|
Revision tags: v6.6-rc1, v6.5, v6.5-rc7, v6.5-rc6, v6.5-rc5, v6.5-rc4, v6.5-rc3, v6.5-rc2, v6.5-rc1, v6.4, v6.4-rc7, v6.4-rc6, v6.4-rc5, v6.4-rc4, v6.4-rc3, v6.4-rc2, v6.4-rc1, v6.3, v6.3-rc7, v6.3-rc6, v6.3-rc5, v6.3-rc4, v6.3-rc3, v6.3-rc2, v6.3-rc1, v6.2, v6.2-rc8, v6.2-rc7, v6.2-rc6, v6.2-rc5, v6.2-rc4, v6.2-rc3, v6.2-rc2, v6.2-rc1, v6.1, v6.1-rc8, v6.1-rc7, v6.1-rc6, v6.1-rc5, v6.1-rc4, v6.1-rc3, v6.1-rc2, v6.1-rc1, v6.0, v6.0-rc7, v6.0-rc6, v6.0-rc5, v6.0-rc4, v6.0-rc3 |
|
| #
69dd3b39 |
| 25-Aug-2022 |
Jeff Layton <[email protected]> |
libceph: add CEPH_OSD_OP_ASSERT_VER support
...and record the user_version in the reply in a new field in ceph_osd_request, so we can populate the assert_ver appropriately. Shuffle the fields a bit too so that the new field fits in an existing hole on x86_64.
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Reviewed-and-tested-by: Luís Henriques <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v6.0-rc2, v6.0-rc1, v5.19, v5.19-rc8, v5.19-rc7, v5.19-rc6, v5.19-rc5 |
|
| #
dee0c5f8 |
| 01-Jul-2022 |
Jeff Layton <[email protected]> |
libceph: add new iov_iter-based ceph_msg_data_type and ceph_osd_data_type
Add an iov_iter to the unions in ceph_msg_data and ceph_msg_data_cursor. Instead of requiring a list of pages or bvecs, we can just use an iov_iter directly, and avoid extra allocations.
We assume that the pages represented by the iter are pinned such that they shouldn't incur page faults, which is the case for the iov_iters created by netfs.
While working on this, Al Viro informed me that he was going to change iov_iter_get_pages to auto-advance the iterator as that pattern is more or less required for ITER_PIPE anyway. We emulate that here for now by advancing in the _next op and tracking that amount in the "lastlen" field.
In the event that _next is called twice without an intervening _advance, we revert the iov_iter by the remaining lastlen before calling iov_iter_get_pages.
Cc: Al Viro <[email protected]>
Cc: David Howells <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Reviewed-and-tested-by: Luís Henriques <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
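A condensed sketch of the "lastlen" bookkeeping described above; the structure and function names are made up for illustration, not the real ceph_msg_data_cursor fields:

    struct iter_cursor {
            struct iov_iter iter;
            size_t          lastlen;        /* bytes speculatively advanced by _next */
    };

    /* _next: hand out the next page worth of data without consuming it. */
    static struct page *cursor_next(struct iter_cursor *c, size_t *off, size_t *len)
    {
            struct page *page;
            ssize_t got;

            if (c->lastlen)                 /* _next called twice, no _advance */
                    iov_iter_revert(&c->iter, c->lastlen);

            got = iov_iter_get_pages2(&c->iter, &page, PAGE_SIZE, 1, off);
            if (got <= 0)
                    return NULL;

            c->lastlen = got;               /* get_pages2 auto-advanced by got */
            *len = got;
            return page;
    }

    /* _advance: the caller actually consumed 'bytes' of the last chunk. */
    static void cursor_advance(struct iter_cursor *c, size_t bytes)
    {
            if (bytes < c->lastlen)
                    iov_iter_revert(&c->iter, c->lastlen - bytes);
            c->lastlen = 0;
    }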
|
|
Revision tags: v5.19-rc4, v5.19-rc3, v5.19-rc2, v5.19-rc1, v5.18, v5.18-rc7, v5.18-rc6, v5.18-rc5, v5.18-rc4, v5.18-rc3, v5.18-rc2, v5.18-rc1, v5.17, v5.17-rc8, v5.17-rc7, v5.17-rc6, v5.17-rc5, v5.17-rc4 |
|
| #
f628d799 |
| 11-Feb-2022 |
Jeff Layton <[email protected]> |
libceph: add sparse read support to OSD client
Have get_reply check for the presence of sparse read ops in the request and set the sparse_read boolean in the msg. That will cue the messenger layer to use the sparse read codepath instead of the normal data receive.
Add a new sparse_read operation for the OSD client, driven by its own state machine. The messenger will repeatedly call the sparse_read operation, and it will pass back the necessary info to set up to read the next extent of data, while zero-filling the sparse regions.
The state machine will stop at the end of the last extent, and will attach the extent map buffer to the ceph_osd_req_op so that the caller can use it.
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Reviewed-and-tested-by: Luís Henriques <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
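The states the sketch below walks through mirror the description above; take the names as an approximation of the mainline enum rather than a verbatim copy:

    enum sparse_read_state {
            SPARSE_READ_HDR,        /* waiting for the sparse-read op header */
            SPARSE_READ_EXTENTS,    /* receiving the extent (off, len) array */
            SPARSE_READ_DATA_LEN,   /* length of the packed data that follows */
            SPARSE_READ_DATA,       /* reading extent data, zero-filling holes */
    };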
|
| #
a679e50f |
| 16-Mar-2022 |
Jeff Layton <[email protected]> |
libceph: define struct ceph_sparse_extent and add some helpers
When the OSD sends back a sparse read reply, it contains an array of these structures. Define the structure and add a couple of helpers for dealing with them.
Also add a place in struct ceph_osd_req_op to store the extent buffer, and code to free it if it's populated when the req is torn down.
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Reviewed-and-tested-by: Luís Henriques <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
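A sketch of the structure and a helper in the spirit of the ones added here; the initial array size and helper signature are assumptions, so check include/linux/ceph/osd_client.h for the real definitions:

    struct ceph_sparse_extent {
            u64     off;    /* offset of the extent within the object */
            u64     len;    /* length of the extent in bytes */
    } __packed;

    #define SPARSE_EXT_INITIAL      16      /* assumed first-allocation size */

    static int alloc_sparse_ext_map(struct ceph_sparse_extent **map)
    {
            *map = kcalloc(SPARSE_EXT_INITIAL, sizeof(**map), GFP_NOFS);
            return *map ? 0 : -ENOMEM;
    }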
|
| #
08b8a044 |
| 14-Mar-2022 |
Jeff Layton <[email protected]> |
libceph: add spinlock around osd->o_requests
In a later patch, we're going to need to search for a request in the rbtree, but taking the o_mutex is inconvenient as we already hold the con mutex at the point where we need it.
Add a new spinlock that we take when inserting and erasing entries from the o_requests tree. Search of the rbtree can be done with either the mutex or the spinlock, but insertion and removal requires both.
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Reviewed-and-tested-by: Luís Henriques <[email protected]>
Reviewed-by: Milind Changire <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
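A sketch of the resulting locking rule, with hypothetical helper and field names; only o_requests and the new spinlock correspond directly to the text above:

    /* Insertion and removal take both the per-OSD mutex and the new spinlock;
     * lookups may run under either one. */
    static void osd_insert_request(struct ceph_osd *osd, struct ceph_osd_request *req)
    {
            /* caller already holds the per-OSD mutex */
            spin_lock(&osd->o_requests_lock);
            example_rb_insert(&osd->o_requests, &req->r_node);  /* hypothetical rbtree helper */
            spin_unlock(&osd->o_requests_lock);
    }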
|
| #
a8af0d68 |
| 30-Jun-2022 |
Jeff Layton <[email protected]> |
libceph: clean up ceph_osdc_start_request prototype
This function always returns 0, and ignores the nofail boolean. Drop the nofail argument, make the function void return and fix up the callers.
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Ilya Dryomov <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
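In prototype terms the cleanup amounts to roughly the following (sketch, not a verbatim diff):

    /* before: the return value was always 0 and nofail was ignored */
    int  ceph_osdc_start_request(struct ceph_osd_client *osdc,
                                 struct ceph_osd_request *req, bool nofail);

    /* after */
    void ceph_osdc_start_request(struct ceph_osd_client *osdc,
                                 struct ceph_osd_request *req);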
|
| #
75dbb685 |
| 14-May-2022 |
Ilya Dryomov <[email protected]> |
libceph: fix potential use-after-free on linger ping and resends
request_reinit() is not only ugly as the comment rightfully suggests, but also unsafe. Even though it is called with osdc->lock held for write in all cases, resetting the OSD request refcount can still race with handle_reply() and result in use-after-free. Taking linger ping as an example:
    handle_timeout thread                  handle_reply thread

                                           down_read(&osdc->lock)
                                           req = lookup_request(...)
                                           ...
                                           finish_request(req)  # unregisters
                                           up_read(&osdc->lock)
                                           __complete_request(req)
                                             linger_ping_cb(req)

              # req->r_kref == 2 because handle_reply still holds its ref

    down_write(&osdc->lock)
    send_linger_ping(lreq)
      req = lreq->ping_req  # same req
      # cancel_linger_request is NOT
      # called - handle_reply already
      # unregistered
      request_reinit(req)
        WARN_ON(req->r_kref != 1)  # fires
        request_init(req)
          kref_init(req->r_kref)

              # req->r_kref == 1 after kref_init

                                           ceph_osdc_put_request(req)
                                             kref_put(req->r_kref)

              # req->r_kref == 0 after kref_put, req is freed

      <further req initialization/use> !!!
This happens because send_linger_ping() always (re)uses the same OSD request for watch ping requests, relying on cancel_linger_request() to unregister it from the OSD client and rip its messages out from the messenger. send_linger() does the same for watch/notify registration and watch reconnect requests. Unfortunately cancel_request() doesn't guarantee that after it returns the OSD client would be completely done with the OSD request -- a ref could still be held and the callback (if specified) could still be invoked too.
The original motivation for request_reinit() was inability to deal with allocation failures in send_linger() and send_linger_ping(). Switching to using osdc->req_mempool (currently only used by CephFS) respects that and allows us to get rid of request_reinit().
Cc: [email protected]
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Xiubo Li <[email protected]>
Acked-by: Jeff Layton <[email protected]>
|
|
Revision tags: v5.17-rc3, v5.17-rc2, v5.17-rc1, v5.16, v5.16-rc8, v5.16-rc7, v5.16-rc6, v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1 |
|
| #
aca39d9e |
| 04-Nov-2021 |
Luís Henriques <[email protected]> |
libceph, ceph: move ceph_osdc_copy_from() into cephfs code
This patch moves the ceph_osdc_copy_from() function out of libceph code into cephfs. There are no other users of this function, and a later patch needs access to internal ceph_osd_request struct members.
Signed-off-by: Luís Henriques <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v5.15, v5.15-rc7, v5.15-rc6, v5.15-rc5, v5.15-rc4, v5.15-rc3, v5.15-rc2, v5.15-rc1, v5.14, v5.14-rc7, v5.14-rc6, v5.14-rc5, v5.14-rc4, v5.14-rc3, v5.14-rc2, v5.14-rc1, v5.13, v5.13-rc7, v5.13-rc6, v5.13-rc5, v5.13-rc4, v5.13-rc3, v5.13-rc2, v5.13-rc1, v5.12, v5.12-rc8, v5.12-rc7, v5.12-rc6, v5.12-rc5, v5.12-rc4, v5.12-rc3, v5.12-rc2, v5.12-rc1, v5.12-rc1-dontuse, v5.11, v5.11-rc7, v5.11-rc6, v5.11-rc5, v5.11-rc4, v5.11-rc3, v5.11-rc2, v5.11-rc1, v5.10, v5.10-rc7, v5.10-rc6, v5.10-rc5, v5.10-rc4, v5.10-rc3, v5.10-rc2, v5.10-rc1, v5.9, v5.9-rc8, v5.9-rc7, v5.9-rc6, v5.9-rc5, v5.9-rc4, v5.9-rc3, v5.9-rc2, v5.9-rc1, v5.8, v5.8-rc7, v5.8-rc6, v5.8-rc5, v5.8-rc4 |
|
| #
042f6498 |
| 01-Jul-2020 |
Jeff Layton <[email protected]> |
libceph: just have osd_req_op_init() return a pointer
The caller can just ignore the return. No need for this wrapper that just casts the other function to void.
[ idryomov: argument alignment ]
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Ilya Dryomov <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
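Roughly, the prototype goes from a void wrapper to returning the op it initialized (sketch; the argument list is reproduced from memory):

    /* before: wrapper that discarded the pointer */
    void osd_req_op_init(struct ceph_osd_request *osd_req, unsigned int which,
                         u16 opcode, u32 flags);

    /* after: callers that don't need the op simply ignore the return value */
    struct ceph_osd_req_op *osd_req_op_init(struct ceph_osd_request *osd_req,
                                            unsigned int which, u16 opcode,
                                            u32 flags);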
|
|
Revision tags: v5.8-rc3, v5.8-rc2, v5.8-rc1, v5.7 |
|
| #
d3798acc |
| 29-May-2020 |
Ilya Dryomov <[email protected]> |
libceph: support for alloc hint flags
Allow indicating future I/O pattern via flags. This is supported since Kraken (and bluestore persists flags together with expected_object_size and expected_write_size).
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Jason Dillaman <[email protected]>
|
|
Revision tags: v5.7-rc7 |
|
| #
117d96a0 |
| 23-May-2020 |
Ilya Dryomov <[email protected]> |
libceph: support for balanced and localized reads
OSD-side issues with reads from replica have been resolved in Octopus. Reading from replica should be safe wrt. unstable or uncommitted state now, so add support for balanced and localized reads.
There are two cases when a read from replica can't be served:
- OSD may silently drop the request, expecting the client to notice that the acting set has changed and resend via the usual means (handled with t->used_replica)
- OSD may return EAGAIN, expecting the client to resend to the primary, ignoring replica read flags (see handle_reply())
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
|
|
Revision tags: v5.7-rc6, v5.7-rc5, v5.7-rc4, v5.7-rc3, v5.7-rc2, v5.7-rc1, v5.6, v5.6-rc7 |
|
| #
97e27aaa |
| 20-Mar-2020 |
Xiubo Li <[email protected]> |
ceph: add read/write latency metric support
Calculate the latency for OSD read requests. Add a new r_end_stamp field to struct ceph_osd_request that will hold the time that the reply was received. Use that to calculate the RTT for each call, and divide the sum of those by the number of calls to get the average RTT.
Keep a tally of RTT for OSD writes and number of calls to track average latency of OSD writes.
URL: https://tracker.ceph.com/issues/43215
Signed-off-by: Xiubo Li <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
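A small sketch of the averaging described above, with illustrative structure and field names (the real counters live in the ceph client metrics code):

    struct osd_latency_metric {
            u64     total_ops;      /* completed OSD reads or writes */
            ktime_t latency_sum;    /* sum of per-request RTTs */
    };

    static void osd_update_latency(struct osd_latency_metric *m,
                                   ktime_t start_stamp, ktime_t end_stamp)
    {
            m->latency_sum = ktime_add(m->latency_sum,
                                       ktime_sub(end_stamp, start_stamp));
            m->total_ops++;
            /* average RTT in ns = ktime_to_ns(latency_sum) / total_ops */
    }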
|
|
Revision tags: v5.6-rc6, v5.6-rc5, v5.6-rc4, v5.6-rc3, v5.6-rc2, v5.6-rc1 |
|
| #
5107d7d5 |
| 29-Jan-2020 |
Xiubo Li <[email protected]> |
ceph: move ceph_osdc_{read,write}pages to ceph.ko
Since these helpers are only used by ceph.ko, move them there and rename them with _sync_ qualifiers.
Signed-off-by: Xiubo Li <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v5.5, v5.5-rc7, v5.5-rc6 |
|
| #
78beb0ff |
| 08-Jan-2020 |
Luis Henriques <[email protected]> |
ceph: use copy-from2 op in copy_file_range
Instead of using the copy-from operation, switch copy_file_range to the new copy-from2 operation, which allows sending the truncate_seq and truncate_size parameters.
If an OSD does not support the copy-from2 operation it will return -EOPNOTSUPP. In that case, the kernel client will stop trying to do remote object copies for this fs client and will always use the generic VFS copy_file_range.
Signed-off-by: Luis Henriques <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v5.5-rc5, v5.5-rc4, v5.5-rc3, v5.5-rc2, v5.5-rc1, v5.4, v5.4-rc8, v5.4-rc7, v5.4-rc6, v5.4-rc5, v5.4-rc4, v5.4-rc3, v5.4-rc2, v5.4-rc1, v5.3, v5.3-rc8, v5.3-rc7, v5.3-rc6, v5.3-rc5, v5.3-rc4, v5.3-rc3, v5.3-rc2 |
|
| #
2cef0ba8 |
| 25-Jul-2019 |
Yan, Zheng <[email protected]> |
libceph: add function that clears osd client's abort_err
Signed-off-by: "Yan, Zheng" <[email protected]> Reviewed-by: Jeff Layton <[email protected]> Signed-off-by: Ilya Dryomov <[email protected]>
|
| #
120a75ea |
| 25-Jul-2019 |
Yan, Zheng <[email protected]> |
libceph: add function that reset client's entity addr
This function also re-opens connections to the OSDs/monitors and re-sends in-flight OSD requests after the connections are re-opened.
Signed-off-by: "Yan, Zheng" <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Ilya Dryomov <[email protected]>
|
|
Revision tags: v5.3-rc1, v5.2, v5.2-rc7, v5.2-rc6, v5.2-rc5 |
|
| #
4cf3e6df |
| 14-Jun-2019 |
Ilya Dryomov <[email protected]> |
libceph: export osd_req_op_data() macro
We already have one exported wrapper around it for extent.osd_data and rbd_object_map_update_finish() needs another one for cls.request_data.
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Dongsheng Yang <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
|
| #
68ada915 |
| 14-Jun-2019 |
Ilya Dryomov <[email protected]> |
libceph: change ceph_osdc_call() to take page vector for response
This will be used for loading object map. rbd_obj_read_sync() isn't suitable because object map must be accessed through class methods.
Signed-off-by: Ilya Dryomov <[email protected]>
Reviewed-by: Dongsheng Yang <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
|
| #
94e85771 |
| 08-Jul-2019 |
Ilya Dryomov <[email protected]> |
libceph: rename r_unsafe_item to r_private_item
This list item remained from when we had safe and unsafe replies (commit vs ack). It has since become a private list item for use by clients.
Signed-off-by: Ilya Dryomov <[email protected]>
|