printk: ringbuffer: Fix data block max size check

Currently data_check_size() limits data blocks to a maximum size of
the full buffer minus an ID (long integer):

    max_size <= DATA_SIZE(data_ring) - sizeof(long)

However, this is not an appropriate limit due to the nature of
wrapping data blocks. For example, if a data block is larger than
half the buffer:

    size = (DATA_SIZE(data_ring) / 2) + 8

and begins exactly in the middle of the buffer, then:

    - the data block will wrap
    - the ID will be stored at exactly half of the buffer
    - the record data begins at the beginning of the buffer
    - the record data ends 8 bytes _past_ exactly half of the buffer

The record overwrites itself, i.e. needs more space than the full
buffer!
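
The arithmetic can be checked in isolation. The following is a minimal
userspace sketch (the helper name, the 64-byte ring size and the offsets
are only illustrative; it models the lpos calculation of get_next_lpos()
rather than using the kernel code). A wrapping data block must skip the
unused tail of the buffer and store its data at the start of the next
wrap, so the space it consumes is the skipped tail plus the full block
size:

    #include <stdio.h>

    #define DATA_SIZE 64UL  /* illustrative data ring size (power of 2) */

    /*
     * Simplified model of the lpos arithmetic: if a block starting at
     * @lpos would cross a wrap boundary, its data is stored at the
     * beginning of the next wrap, so the reserved span grows by the
     * skipped tail of the previous wrap.
     */
    static unsigned long next_lpos_model(unsigned long lpos, unsigned long size)
    {
            if (lpos / DATA_SIZE == (lpos + size) / DATA_SIZE)
                    return lpos + size;     /* fits without wrapping */
            return ((lpos + size) / DATA_SIZE) * DATA_SIZE + size;  /* wraps */
    }

    int main(void)
    {
            unsigned long begin = DATA_SIZE / 2;            /* begins mid-buffer */
            unsigned long size = DATA_SIZE / 2 + 8;         /* just over half */
            unsigned long next = next_lpos_model(begin, size);

            /* Prints "used 72 of 64": more than the whole ring. */
            printf("used %lu of %lu\n", next - begin, DATA_SIZE);
            return 0;
    }

With these numbers the block consumes 72 bytes of a 64-byte ring, which
is exactly the self-overwrite described above.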

Luckily printk() is not vulnerable to this problem because
truncate_msg() limits printk-messages to 1/4 of the ringbuffer.
Indeed, by adjusting the printk_ringbuffer KUnit test, which does not
use printk() and its truncate_msg() check, it is easy to see that the
ringbuffer becomes corrupted for records larger than half the buffer
size.
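
For reference, the kind of oversized reservation such a test can make
looks roughly like the sketch below (the ringbuffer name, the 2^10-byte
data ring and the record size are invented here; this is not the actual
KUnit test code). With the old check a reservation larger than half of
the data ring could be granted and, once the data block wrapped, corrupt
the ringbuffer; with the fix, prb_reserve() simply fails:

    #include <linux/string.h>

    #include "printk_ringbuffer.h"

    /* 2^5 descriptors, 2^5 byte average text: a 2^10 byte data ring. */
    DEFINE_PRINTKRB(test_rb, 5, 5);

    /* Try to reserve a record larger than half of the data ring. */
    static void reserve_oversized_record(void)
    {
            struct prb_reserved_entry e;
            struct printk_record r;
            unsigned int len = (1 << 10) / 2 + 8;   /* just over half */

            prb_rec_init_wr(&r, len);

            /* The fixed data_check_size() rejects this reservation. */
            if (!prb_reserve(&e, &test_rb, &r))
                    return;

            memset(&r.text_buf[0], 'A', len);
            r.info->text_len = len;
            prb_commit(&e);
    }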

The corruption occurs because data_push_tail() expects it will never
be requested to push the tail beyond the head.

Avoid this problem by adjusting data_check_size() to limit record
sizes to half the buffer size. Also add WARN_ON_ONCE() before
relevant data_push_tail() calls to validate that there are no such
illegal requests. WARN_ON_ONCE() is used, rather than just adding
extra checks to data_push_tail(), because it is considered a bug to
attempt such illegal actions.
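
For a sense of the resulting limit: on a 64-bit kernel the data block
header (the ID) is 8 bytes, so for a hypothetical 4 KiB data ring the
numbers work out as follows (to_blk_size() pads the requested size by
the header and aligns it to sizeof(long)):

    DATA_SIZE(data_ring)    = 4096
    allowed block size      = to_blk_size(size) <= 4096 / 2 = 2048
    to_blk_size(size)       = ALIGN(size + 8, 8)       (on 64-bit)
    largest record payload  = 2048 - 8 = 2040 bytes

That is, the largest reservable record is now slightly less than half
of the data ring rather than slightly less than the full ring.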

Link: https://lore.kernel.org/lkml/aMLrGCQSyC8odlFZ@pathway.suse.cz
Signed-off-by: John Ogness <john.ogness@linutronix.de>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>

--- a/kernel/printk/printk_ringbuffer.c
+++ b/kernel/printk/printk_ringbuffer.c
@@ -393,25 +393,21 @@ static unsigned int to_blk_size(unsigned int size)
  * Sanity checker for reserve size. The ringbuffer code assumes that a data
  * block does not exceed the maximum possible size that could fit within the
  * ringbuffer. This function provides that basic size check so that the
- * assumption is safe.
+ * assumption is safe. In particular, it guarantees that data_push_tail() will
+ * never attempt to push the tail beyond the head.
  */
 static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
 {
-	struct prb_data_block *db = NULL;
-
+	/* Data-less blocks take no space. */
 	if (size == 0)
 		return true;
 
 	/*
-	 * Ensure the alignment padded size could possibly fit in the data
-	 * array. The largest possible data block must still leave room for
-	 * at least the ID of the next block.
+	 * If data blocks were allowed to be larger than half the data ring
+	 * size, a wrapping data block could require more space than the full
+	 * ringbuffer.
 	 */
-	size = to_blk_size(size);
-	if (size > DATA_SIZE(data_ring) - sizeof(db->id))
-		return false;
-
-	return true;
+	return to_blk_size(size) <= DATA_SIZE(data_ring) / 2;
 }
 
 /* Query the state of a descriptor. */
@@ -1051,8 +1047,17 @@ static char *data_alloc(struct printk_ringbuffer *rb, unsigned int size,
 	do {
 		next_lpos = get_next_lpos(data_ring, begin_lpos, size);
 
-		if (!data_push_tail(rb, next_lpos - DATA_SIZE(data_ring))) {
-			/* Failed to allocate, specify a data-less block. */
+		/*
+		 * data_check_size() prevents data block allocation that could
+		 * cause illegal ringbuffer states. But double check that the
+		 * used space will not be bigger than the ring buffer. Wrapped
+		 * messages need to reserve more space, see get_next_lpos().
+		 *
+		 * Specify a data-less block when the check or the allocation
+		 * fails.
+		 */
+		if (WARN_ON_ONCE(next_lpos - begin_lpos > DATA_SIZE(data_ring)) ||
+		    !data_push_tail(rb, next_lpos - DATA_SIZE(data_ring))) {
 			blk_lpos->begin = FAILED_LPOS;
 			blk_lpos->next = FAILED_LPOS;
 			return NULL;
@@ -1140,8 +1145,18 @@ static char *data_realloc(struct printk_ringbuffer *rb, unsigned int size,
 		return &blk->data[0];
 	}
 
-	if (!data_push_tail(rb, next_lpos - DATA_SIZE(data_ring)))
+	/*
+	 * data_check_size() prevents data block reallocation that could
+	 * cause illegal ringbuffer states. But double check that the
+	 * new used space will not be bigger than the ring buffer. Wrapped
+	 * messages need to reserve more space, see get_next_lpos().
+	 *
+	 * Specify failure when the check or the allocation fails.
+	 */
+	if (WARN_ON_ONCE(next_lpos - blk_lpos->begin > DATA_SIZE(data_ring)) ||
+	    !data_push_tail(rb, next_lpos - DATA_SIZE(data_ring))) {
 		return NULL;
+	}
 
 	/* The memory barrier involvement is the same as data_alloc:A. */
 	if (!atomic_long_try_cmpxchg(&data_ring->head_lpos, &head_lpos,