Not many are aware, but the io_uring submission queue has two levels.
The first level usually appears as sq_array and stores indexes into the
actual SQ. To my knowledge, no one has ever seriously used it, nor does
liburing expose it to users. Add IORING_SETUP_NO_SQARRAY; when set, we
don't bother creating and using the sq_array, and the SQ head/tail will
point directly into the SQ. That improves the memory footprint, in terms
of both allocations and cache usage, and should also make io_get_sqe()
less branchy in the end.
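For illustration, a minimal sketch of the two lookup schemes; the struct
and helper names below are simplified stand-ins, not the kernel's real
io_get_sqe():

    struct sq_ring {
        struct io_uring_sqe *sqes;  /* the actual SQ entries */
        unsigned *sq_array;         /* first level: indexes into sqes[] */
        unsigned sq_mask;           /* ring size - 1 */
    };

    /* Classic two-level scheme: one extra dependent load via sq_array. */
    static struct io_uring_sqe *get_sqe_indexed(struct sq_ring *r,
                                                unsigned head)
    {
        unsigned idx = r->sq_array[head & r->sq_mask];

        return &r->sqes[idx];
    }

    /* IORING_SETUP_NO_SQARRAY: the SQ head indexes the SQ directly. */
    static struct io_uring_sqe *get_sqe_direct(struct sq_ring *r,
                                               unsigned head)
    {
        return &r->sqes[head & r->sq_mask];
    }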
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0ffa3268a5ef61d326201ff43a233315c96312e0.1692916914.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Queue heads and tails are cache line aligned. That makes the SQ and CQ
take 4 cache lines, or 5 if we include the rest of struct io_rings (e.g.
sq_flags is frequently accessed).
Since modern io_uring is mostly single threaded, it doesn't make much
sense to spread them out like that; it wastes space and puts additional
pressure on caches. Put them all into a single cache line.
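A hedged sketch of the difference, heavily simplified from struct
io_rings (the real struct carries more fields):

    /* Before: head and tail each forced onto their own cache line. */
    struct io_uring_q_spread {
        u32 head ____cacheline_aligned_in_smp;
        u32 tail ____cacheline_aligned_in_smp;
    };

    /* After: packed, so the SQ and CQ heads and tails can all sit
     * together in one hot cache line. */
    struct io_uring_q_packed {
        u32 head;
        u32 tail;
    };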
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9c8deddf9a7ed32069235a530d1e117fb460bc4c.1692916914.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
io_do_iopoll() and io_submit_flush_completions() are pretty similar:
both fill CQEs and then free a list of requests. Don't duplicate that;
make iopoll use __io_submit_flush_completions(), which also helps with
inlining and other optimisations.
For that, we need to first find all completed iopoll requests, splice
them from the iopoll list, and then pass the list down. This adds one
extra list traversal, which should be fine as the requests will stay hot
in cache.
CQ locking is already conditional; introduce ->lockless_cq and skip
locking for IOPOLL, as it's protected by ->uring_lock.
We also add a wakeup optimisation for IOPOLL to __io_cq_unlock_post(),
so it works just like io_cqring_ev_posted_iopoll().
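A rough sketch of the splice step, assuming plain list_head lists rather
than the kernel's actual io_uring list machinery:

    #include <linux/list.h>

    struct iopoll_req {
        struct list_head node;
        bool completed;
    };

    /* Move the leading run of completed requests onto 'done'; that
     * batch can then be fed to __io_submit_flush_completions()-style
     * code that fills CQEs and frees the whole list in one go. */
    static void iopoll_splice_completed(struct list_head *iopoll_list,
                                        struct list_head *done)
    {
        struct iopoll_req *req, *tmp;

        list_for_each_entry_safe(req, tmp, iopoll_list, node) {
            if (!req->completed)
                break;  /* list is ordered, stop at first pending */
            list_move_tail(&req->node, done);
        }
    }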
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3840473f5e8a960de35b77292026691880f6bdbc.1692916914.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
If the cached cqe check passes in io_get_cqe*(), it already means that
the cqe we return is valid and non-NULL; however, the compiler is unable
to optimise away NULL checks like the one in io_fill_cqe_req().
Do a bit of trickery: return a success/fail boolean from io_get_cqe*()
and store the cqe in the cqe parameter. That makes the compiler do the
right thing, erasing the check together with the introduced indirection.
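A minimal sketch of the transformation; treat the exact field names as
an approximation of the kernel's helpers:

    /* Before: callers get a pointer back and re-check it for NULL, a
     * check the compiler often can't prove away. */
    struct io_uring_cqe *io_get_cqe_old(struct io_ring_ctx *ctx);

    /* After: the boolean carries validity, so a caller writing
     * 'if (io_get_cqe(ctx, &cqe))' lets the compiler erase any further
     * NULL check on 'cqe'. */
    static inline bool io_get_cqe_new(struct io_ring_ctx *ctx,
                                      struct io_uring_cqe **ret)
    {
        if (ctx->cqe_cached < ctx->cqe_sentinel) {
            *ret = ctx->cqe_cached++;
            return true;
        }
        return false;
    }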
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/322ea4d3377d3d4efd8ae90ab8ed28a99f518210.1692916914.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
While looking at io_fill_cqe_req()'s asm I stumbled on our trace points
turning into the chunk below:
    trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
                            req->cqe.res, req->cqe.flags,
                            req->extra1, req->extra2);

io_uring/io_uring.c:898: trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
    movq   232(%rbx), %rdi  # req_44(D)->big_cqe.extra2, _5
    movq   224(%rbx), %rdx  # req_44(D)->big_cqe.extra1, _6
    movl   84(%rbx), %r9d   # req_44(D)->cqe.D.81184.flags, _7
    movl   80(%rbx), %r8d   # req_44(D)->cqe.res, _8
    movq   72(%rbx), %rcx   # req_44(D)->cqe.user_data, _9
    movq   88(%rbx), %rsi   # req_44(D)->ctx, _10
./arch/x86/include/asm/jump_label.h:27: asm_volatile_goto("1:"
    1:jmp .L1772    # objtool NOPs this #
    ...
It does a jump_label for the actual tracing, but those 6 moves will stay
there in the hottest io_uring path. As an optimisation, add a
trace_io_uring_complete_enabled() check, which also uses jump_labels; it
tricks the compiler into behaving. That removes the junk without
changing anything else in the hot path.
Note: apparently, it's not only me noticing this, and other people are
working around it as well. We should remove the check when it's solved
generically, or when tracing is reworked.
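In code, the guard wraps the same call shown above;
trace_io_uring_complete_enabled() is generated by the tracepoint
machinery and is itself backed by a jump_label:

    if (trace_io_uring_complete_enabled())
        trace_io_uring_complete(req->ctx, req, req->cqe.user_data,
                                req->cqe.res, req->cqe.flags,
                                req->extra1, req->extra2);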
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/555d8312644b3776f4be7e23f9b92943875c4bc7.1692916914.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
If we set up the ring with SQPOLL, then that polling thread has its
own io-wq setup. This means that if the application uses
IORING_REGISTER_IOWQ_AFF to set the io-wq affinity, we should not be
setting it for the invoking task, but rather the sqpoll task.
Add an sqpoll helper that parks the thread and updates the affinity,
and use that one if we're using SQPOLL.
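A sketch close to the patch's helper; treat the exact body as an
approximation:

    __cold int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx,
                                         cpumask_var_t mask)
    {
        struct io_sq_data *sqd = ctx->sq_data;
        int ret = -EINVAL;

        if (sqd) {
            io_sq_thread_park(sqd);
            /* the io-wq belongs to the sqpoll task, not the caller */
            ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
            io_sq_thread_unpark(sqd);
        }

        return ret;
    }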
Fixes: fe76421d1d ("io_uring: allow user configurable IO thread CPU affinity")
Cc: stable@vger.kernel.org # 5.10+
Link: https://github.com/axboe/liburing/discussions/884
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Now that all callers of io_aux_cqe() set allow_overflow to false, remove
the parameter and disallow overflowing auxiliary multishot CQEs.
When the CQ is full, the callers, and multishot requests in general, are
expected to complete the request. That prevents indefinite background
growth of the overflow list and lets userspace handle the backlog at its
own pace.
Resubmitting a request should also be faster than accounting a bunch of
overflows, so it should be better for performance when it happens, but a
well-behaved userspace should be trying to avoid overflows in any case.
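An illustrative, hedged sketch of the caller-side contract; the names
below are stand-ins, not the exact kernel helpers:

    if (!post_aux_cqe(ctx, user_data, res, cflags)) {
        /* CQ ring is full and overflow is no longer allowed: end the
         * multishot request; userspace reaps the CQ and resubmits. */
        return STOP_MULTISHOT;
    }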
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/bb20d14d708ea174721e58bb53786b0521e4dd6d.1691757663.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
All we really care about is finding a free worker. If said worker is
already running, it's either starting new work already or it's just
finishing up existing work. For the latter, we'll be finding this work
item next anyway, and for the former, if the worker does go to sleep,
it'll create a new worker anyway as we have pending items.
This reduces try_to_wake_up() overhead considerably:
23.16% -10.46% [kernel.kallsyms] [k] try_to_wake_up
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
When we check if we have work to run, we grab the acct lock, check,
drop it, and then return the result. If we do have work to run, then
running the work will again grab acct->lock and get the work item.
This causes us to grab acct->lock more frequently than we need to.
If we have work to do, have io_acct_run_queue() return with the acct
lock still acquired. io_worker_handle_work() is then always invoked
with the acct lock already held.
In a simple test case that stats files (IORING_OP_STATX always hits
io-wq), we see a nice reduction in locking overhead with this change:
19.32% -12.55% [kernel.kallsyms] [k] __cmpwait_case_32
20.90% -12.07% [kernel.kallsyms] [k] queued_spin_lock_slowpath
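A hedged sketch of the new calling convention, simplified from io-wq:

    /* Returns true with acct->lock still held when there is work to
     * run; io_worker_handle_work() then starts with the lock taken. */
    static bool io_acct_run_queue(struct io_wq_acct *acct)
    {
        raw_spin_lock(&acct->lock);
        if (!wq_list_empty(&acct->work_list))
            return true;
        raw_spin_unlock(&acct->lock);
        return false;
    }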
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The worker free list is RCU protected, and checks for workers going away
when iterating it. There's no need to hold the wq->lock around the
lookup.
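A hedged sketch of the lockless lookup; io-wq's free list is an
hlist_nulls, details simplified here:

    struct io_worker *worker;
    struct hlist_nulls_node *n;

    rcu_read_lock();
    hlist_nulls_for_each_entry_rcu(worker, n, &wq->free_list, nulls_node) {
        if (io_worker_get(worker)) {    /* skips workers going away */
            wake_up_process(worker->task);
            io_worker_release(worker);
            break;
        }
    }
    rcu_read_unlock();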
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We never use io_move_task_work_from_local() before it's defined in the
file anyway, so kill the forward declaration.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
The caller holds a reference to the ring itself, so by definition
the ring cannot go away. There's no need to play games with tryget
for the reference, as we don't need an extra reference at all.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We return 0 for success, or -error when there's an error. Move the 'ret'
variable into the loop where we are actually using it, to make it
clearer that we don't carry this variable forward for return outside of
the loop.
While at it, also move the need_resched() break condition out of the
while check itself, keeping it with the signal pending check.
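A hedged sketch of the resulting control flow; the two helpers are
illustrative, not the real io_uring functions:

    while (1) {
        /* 'ret' lives only inside the loop; only errors escape it */
        int ret = poll_one_batch();

        if (ret < 0)
            return ret;
        if (enough_completions())
            break;
        if (task_sigpending(current))
            return -EINTR;
        if (need_resched())
            break;      /* kept alongside the signal check */
    }
    return 0;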
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add IORING_ASYNC_CANCEL_OP flag for cancelation, which allows the
application to target cancelation based on the opcode of the original
request.
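A hedged userspace sketch (liburing-style); where the opcode travels in
the SQE is an assumption here, so check it against the uapi headers:

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

    /* cancel every pending IORING_OP_READ, regardless of user_data */
    io_uring_prep_cancel(sqe, NULL, IORING_ASYNC_CANCEL_OP |
                                    IORING_ASYNC_CANCEL_ALL);
    sqe->len = IORING_OP_READ;  /* assumed: opcode carried in sqe->len */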
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Add a flag to explicitly match on user_data in the request for
cancelation purposes. This is the default behavior if none of the
other match flags are set, but if we ALSO want to match on user_data,
then this flag can be set.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We always need to check/update the cancel sequence if
IORING_ASYNC_CANCEL_ALL is set. Also kill the redundant check for
IORING_ASYNC_CANCEL_ANY at the end, if we get here we know it's
not set as we would've matched it higher up.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
We have different match code in a variety of spots. Start the cleanup of
this by abstracting out a helper that can be used to check if a given
request matches the cancelation criteria outlined in io_cancel_data.
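A simplified, hedged sketch of such a matcher; the real helper also
folds in the cancel-sequence handling from the earlier patch:

    static bool io_cancel_matches(struct io_kiocb *req,
                                  struct io_cancel_data *cd)
    {
        if (req->ctx != cd->ctx)
            return false;
        if (cd->flags & IORING_ASYNC_CANCEL_ANY)
            return true;
        if ((cd->flags & IORING_ASYNC_CANCEL_FD) && req->file != cd->file)
            return false;
        if ((cd->flags & IORING_ASYNC_CANCEL_OP) && req->opcode != cd->opcode)
            return false;
        /* user_data is matched by default, or when explicitly asked */
        if ((cd->flags & IORING_ASYNC_CANCEL_USERDATA) ||
            !(cd->flags & (IORING_ASYNC_CANCEL_FD | IORING_ASYNC_CANCEL_OP)))
            return req->cqe.user_data == cd->data;
        return true;
    }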
Signed-off-by: Jens Axboe <axboe@kernel.dk>
In preparation for using a generic handler to match requests for
cancelation purposes, ensure that ctx is set in io_cancel_data. The
timeout handlers don't check for this as it'll always match, but we'll
need it set going forward.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
This isn't strictly necessary for this callsite, as it uses its
internal lookup for this cancelation purpose. But let's be consistent
with how it's used in general and set ctx as well.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pull xtensa fixes from Max Filippov:
- fix interaction between unaligned exception handler and load/store
exception handler
- fix parsing ISS network interface specification string
- add comment about etherdev freeing to ISS network driver
* tag 'xtensa-20230716' of https://github.com/jcmvbkbc/linux-xtensa:
xtensa: fix unaligned and load/store configuration interaction
xtensa: ISS: fix call to split_if_spec
xtensa: ISS: add comment about etherdev freeing
Pull perf fix from Borislav Petkov:
- Fix a lockdep warning when the event given is the first one: no
  event group exists yet, but the code still iterates over event
  siblings
* tag 'perf_urgent_for_v6.5_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/x86: Fix lockdep warning in for_each_sibling_event() on SPR
Pull objtool fixes from Borislav Petkov:
- Mark copy_iovec_from_user() __noclone in order to prevent gcc from
  doing an inter-procedural optimization and confusing objtool
- Initialize struct elf fully to avoid build failures
* tag 'objtool_urgent_for_v6.5_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
iov_iter: Mark copy_iovec_from_user() noclone
objtool: initialize all of struct elf
Pull scheduler fixes from Borislav Petkov:
- Remove a cgroup from under a polling process properly
- Fix the idle sibling selection
* tag 'sched_urgent_for_v6.5_rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/psi: use kernfs polling functions for PSI trigger polling
sched/fair: Use recent_used_cpu to test p->cpus_ptr
Pull pin control fixes from Linus Walleij:
"I'm mostly on vacation but what would vacation be without a few
critical fixes so people can use their gaming laptops when hiding away
from the sun (or rain)?
- Fix a really annoying interrupt storm in the AMD driver affecting
Asus TUF gaming notebooks
- Fix device tree parsing in the Renesas driver"
* tag 'pinctrl-v6.5-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: amd: Unify debounce handling into amd_pinconf_set()
pinctrl: amd: Drop pull up select configuration
pinctrl: amd: Use amd_pinconf_set() for all config options
pinctrl: amd: Only use special debounce behavior for GPIO 0
pinctrl: renesas: rzg2l: Handle non-unique subnode names
pinctrl: renesas: rzv2m: Handle non-unique subnode names
Pull smb client fixes from Steve French:
- Two reconnect fixes: an important fix to address an inFlight count
  leak (which can leak credits), and a fix to better handle a deleted
  share
- DFS fix
- SMB1 cleanup fix
- deferred close fix
* tag '6.5-rc1-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6:
cifs: fix mid leak during reconnection after timeout threshold
cifs: is_network_name_deleted should return a bool
smb: client: fix missed ses refcounting
smb: client: Fix -Wstringop-overflow issues
cifs: if deferred close is disabled then close files immediately
Pull powerpc fixes from Michael Ellerman:
- Fix Speculation_Store_Bypass reporting in /proc/self/status on
Power10
- Fix HPT with 4K pages since recent changes by implementing pmd_same()
- Fix 64-bit native_hpte_remove() to be irq-safe
Thanks to Aneesh Kumar K.V, Nageswara R Sastry, and Russell Currey.
* tag 'powerpc-6.5-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/mm/book3s64/hash/4k: Add pmd_same callback for 4K page size
powerpc/64e: Fix objtool warnings in exceptions-64e.S
powerpc/security: Fix Speculation_Store_Bypass reporting on Power10
powerpc/64s: Fix native_hpte_remove() to be irq-safe