Merge tag 'docs-7.1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/docs/linux

Pull documentation fixes from Jonathan Corbet:
 "This is Willy Tarreau's new document clarifying the definition and
  handling of security-related bugs, which we're trying to get out there
  quickly on the theory that some of the bug reporters might actually
  read and pay attention to it"

* tag 'docs-7.1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/docs/linux:
  docs: threat-model: don't limit root capabilities to CAP_SYS_ADMIN
  docs: security-bugs: add a link to the threat-model documentation
  Documentation: security-bugs: clarify requirements for AI-assisted reports
  Documentation: security-bugs: explain what is and is not a security bug
  Documentation: security-bugs: do not systematically Cc the security team
Linus Torvalds
2026-05-15 12:24:09 -07:00
3 changed files with 340 additions and 2 deletions


@@ -86,6 +86,7 @@ regressions and security problems.
debugging/index
handling-regressions
security-bugs
threat-model
cve
embargoed-hardware-issues


@@ -66,6 +66,42 @@ In addition, the following information are highly desirable:
the issue appear. It is useful to share them, as they can be helpful to
keep end users protected during the time it takes them to apply the fix.
What qualifies as a security bug
--------------------------------
It is important that most bugs are handled publicly so as to involve the widest
possible audience and find the best solution. By nature, bugs that are handled
in closed discussions between a small set of participants are less likely to
produce the best possible fix (e.g., risk of missing valid use cases, limited
testing abilities).
It turns out that the majority of the bugs reported via the security team are
just regular bugs that have been improperly qualified as security bugs due to
a lack of awareness of the Linux kernel's threat model, as described in
Documentation/process/threat-model.rst, and ought to have been sent through
the normal channels described in Documentation/admin-guide/reporting-issues.rst
instead.
The security list exists for urgent bugs that grant an attacker a capability
they are not supposed to have on a correctly configured production system, and
can be easily exploited, representing an imminent threat to many users. Before
reporting, consider whether the issue actually crosses a trust boundary on such
a system.
**If you resorted to AI assistance to identify a bug, you must treat it as
public**. While you may have valid reasons to believe it is not, the security
team's experience shows that bugs discovered this way systematically surface
simultaneously across multiple researchers, often on the same day. In this
case, do not publicly share a reproducer, as this could cause unintended harm;
just mention that one is available and maintainers might ask for it privately
if they need it.
If you are unsure whether an issue qualifies, err on the side of reporting
privately: the security team would rather triage a borderline report than miss
a real vulnerability. Reporting ordinary bugs to the security list, however,
does not make them move faster and consumes triage capacity that other reports
need.
Identifying contacts
--------------------
@@ -74,7 +110,7 @@ affected subsystem's maintainers and Cc: the Linux kernel security team. Do
not send it to a public list at this stage, unless you have good reasons to
consider the issue as being public or trivial to discover (e.g. result of a
widely available automated vulnerability scanning tool that can be repeated by
anyone, or use of AI-based tools).
If you're sending a report for issues affecting multiple parts in the kernel,
even if they're fairly similar issues, please send individual messages (think
@@ -131,6 +167,64 @@ the Linux kernel security team only. Your message will be triaged, and you
will receive instructions about whom to contact, if needed. Your message may
equally be forwarded as-is to the relevant maintainers.
Responsible use of AI to find bugs
----------------------------------
A significant fraction of bug reports submitted to the security team are
actually the result of code reviews assisted by AI tools. While this can be an
efficient means to find bugs in rarely explored areas, it causes an overload on
maintainers, who are sometimes forced to ignore such reports due to their poor
quality or accuracy. As such, reporters must be particularly cautious about a
number of points which tend to make these reports needlessly difficult to
handle:
* **Length**: AI-generated reports tend to be excessively long, containing
multiple sections and excessive detail. This makes it difficult to spot
important information such as affected files, versions, and impact. Please
ensure that a clear summary of the problem and all critical details are
presented first. Do not require triage engineers to scan multiple pages of
text. Configure your tools to produce concise, human-style reports.
* **Formatting**: Most AI-generated reports are littered with Markdown tags.
These decorations complicate the search for important information and do
not survive the quoting processes involved in forwarding or replying.
Please **always convert your report to plain text** without any formatting
decorations before sending it.
* **Impact Evaluation**: Many AI-generated reports lack an understanding
of the kernel's threat model (see Documentation/process/threat-model.rst)
and go to great lengths inventing theoretical consequences. This adds
noise and complicates triage. Please stick to verifiable facts (e.g.,
"this bug permits any user to gain CAP_NET_ADMIN") without enumerating
speculative implications. Have your tool read this documentation as
part of the evaluation process.
* **Reproducer**: AI-based tools are often capable of generating reproducers.
Please always ensure your tool provides one and **test it thoroughly**. If
the reproducer does not work, or if the tool cannot produce one, the
validity of the report should be seriously questioned. Note that since the
report will be posted to a public list, the reproducer should only be
shared upon maintainers' request.
* **Propose a Fix**: Many AI tools are actually better at writing code than
evaluating it. Please ask your tool to propose a fix and **test it** before
reporting the problem. If the fix cannot be tested because it relies on
rare hardware or almost extinct network protocols, the issue is likely not
a security bug. In any case, if a fix is proposed, it must adhere to
Documentation/process/submitting-patches.rst and include a 'Fixes:' tag
designating the commit that introduced the bug.
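The ``Fixes:`` tag format mentioned above can be produced directly by git. A minimal sketch in a throwaway repository (the commit subject is a made-up placeholder; when preparing a real report, run the ``git log`` command against the actual offending commit):

```shell
# Demo in a scratch repo: produce a kernel-style "Fixes:" tag
# (12-character abbreviated hash plus the quoted subject line).
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "subsys: introduce a bug"
# Prints e.g.: Fixes: 1a2b3c4d5e6f ("subsys: introduce a bug")
git log -1 --abbrev=12 --format='Fixes: %h ("%s")'
```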
Failure to consider these points exposes your report to the risk of being
ignored.
Use common sense when evaluating the report. If the affected file has not been
touched for more than one year and is maintained by a single individual, it is
likely that usage has declined and exposed users are virtually non-existent
(e.g., drivers for very old hardware, obsolete filesystems). In such cases,
there is no need to consume a maintainer's time with an unimportant report. If
the issue is clearly trivial and publicly discoverable, you should report it
directly to the public mailing lists.
Sending the report
------------------
@@ -148,7 +242,15 @@ run additional tests. Reports where the reporter does not respond promptly
or cannot effectively discuss their findings may be abandoned if the
communication does not quickly improve.
The report must be sent to maintainers. If there are two or fewer
recipients in your message, you must also always Cc: the Linux kernel
security team who will ensure the message is delivered to the proper
people, and will be able to assist small maintainer teams with processes
they may not be familiar with. For larger teams, Cc: the Linux kernel
security team for your first few reports or when seeking specific help,
such as when resending a message which got no response within a week.
Once you have become comfortable with the process for a few reports, it is
no longer necessary to Cc: the security list when sending to large teams.
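For a small maintainer team, the resulting recipient list might look like the following sketch (the maintainer address and subject line are hypothetical placeholders; only the security list address is real):

```
To: maintainer@example.org
Cc: security@kernel.org
Subject: subsys: use-after-free reachable by unprivileged users
```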
The Linux kernel security team can be contacted by email at
<security@kernel.org>. This is a private list of security officers
who will help verify the bug report and assist developers working on a fix.


@@ -0,0 +1,235 @@
The Linux Kernel threat model
=============================
There are a lot of assumptions regarding what the kernel does and does not
protect against. These assumptions tend to cause confusion for bug reports
(:doc:`security-related ones <security-bugs>` vs :doc:`non-security ones
<../admin-guide/reporting-issues>`), and can complicate security enforcement
when the responsibility for some boundaries is not clear between the kernel,
distros, administrators and users.
This document tries to clarify the responsibilities of the kernel in this
domain.
The kernel's responsibilities
-----------------------------
The kernel abstracts access to local hardware resources and to remote systems
in a way that allows multiple local users to get a fair share of the available
resources granted to them, and, when the underlying hardware permits, to assign
a level of confidentiality to their communications and to the data they are
processing or storing.
The kernel assumes that the underlying hardware behaves according to its
specifications. This includes the integrity of the CPU's instruction set, the
transparency of the branch prediction unit and the cache units, the consistency
of the Memory Management Unit (MMU), the isolation of DMA-capable peripherals
(e.g., via IOMMU), state transitions in controllers, ranges of values read from
registers, the respect of documented hardware limitations, etc.
When hardware fails to maintain its specified isolation (e.g., CPU bugs,
side-channels, hardware response to unexpected inputs), the kernel will usually
attempt to implement reasonable mitigations. These are best-effort measures
intended to reduce the attack surface or elevate the cost of an attack within
the limits of the hardware's facilities; they do not constitute a
kernel-provided safety guarantee.
Users always perform their activities under the authority of an administrator
who is able to grant or deny various types of permissions that may affect how
users benefit from available resources, or the level of confidentiality of
their activities. Administrators may also delegate all or part of their own
permissions to some users, notably (but not exclusively) via capabilities. All
of this is performed via configuration (sysctl, file-system permissions, etc.).
The Linux Kernel applies a certain collection of default settings that match
its threat model. Distros have their own threat model and will come with their
own configuration presets, which the administrator may have to adjust to better
suit their expectations (relax or restrict).
By default, the Linux Kernel guarantees the following protections when running
on common processors featuring privilege levels and memory management units:
* **User-based isolation**: an unprivileged user may restrict access to their
own data from other unprivileged users running on the same system. This
includes:
* stored data, via file system permissions
* in-memory data (pages are not accessible by default to other users)
* process activity (ptrace is not permitted to other users)
* inter-process communication (other users may not observe data exchanged via
UNIX domain sockets or other IPC mechanisms).
* network communications within the same or with other systems
* **Capability-based protection**:
* users not having elevated capabilities (including but not limited to
``CAP_SYS_ADMIN``) may not alter the kernel's configuration, memory, or
state, change other users' view of the file system layout, grant any user
capabilities they do not have, nor affect the system's availability
(shutdown, reboot, panic, hang, or making the system unresponsive via
unbounded resource exhaustion).
* users not having the ``CAP_NET_ADMIN`` capability may not alter the network
configuration, intercept nor spoof network communications from other users
nor systems.
* users not having ``CAP_SYS_PTRACE`` may not observe other users' processes
activities.
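The stored-data point of the user-based isolation guarantee above can be illustrated with ordinary file permissions. A minimal sketch (assumes GNU coreutils for ``stat -c``):

```shell
# A user restricting access to their own data: after chmod 600,
# only the owning user (and privileged users) can read the file.
f=$(mktemp)
printf 'private data\n' > "$f"
chmod 600 "$f"
stat -c '%a' "$f"   # prints 600
```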
When ``CONFIG_USER_NS`` is set, the kernel also permits unprivileged users to
create their own user namespace in which they have all capabilities, but with a
number of restrictions (they may not perform actions that affect the initial
user namespace, such as changing the time, loading modules, or mounting block
devices). Please refer to ``user_namespaces(7)`` for more details; the
possibilities of user namespaces are not covered in this document.
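Where ``CONFIG_USER_NS`` is enabled and the util-linux ``unshare(1)`` tool is available, the effect can be observed directly. A sketch (this may fail on kernels or distros that restrict unprivileged user namespaces):

```shell
# Create a new user namespace mapping the current user to uid 0
# inside it; outside the namespace, nothing has changed.
unshare --user --map-root-user id -u   # prints 0
```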
The kernel also offers many troubleshooting and debugging facilities, which
can constitute attack vectors when placed in the wrong hands. While some of
them are designed to be accessible to regular local users at low risk (e.g.
kernel logs via ``/proc/kmsg``), others would expose enough information to
represent a risk in most environments, so the decision to expose them rests
with the administrator (perf events, traces), and still others are not
designed to be accessed by non-privileged users at all (e.g. debugfs). Access
to these facilities by a user who has been explicitly granted permission by an
administrator does not constitute a security breach.
Bugs that make it possible to violate the principles above constitute security
breaches. However, bugs that permit one violation only after another one has
already been achieved are merely weaknesses. The kernel applies a number of
self-protection measures whose purpose is to avoid crossing a security
boundary when certain classes of bugs are present, but a failure of these
extra protections does not constitute a vulnerability on its own.
What does not constitute a security bug
---------------------------------------
In the Linux kernel's threat model, the following classes of problems are
**NOT** considered Linux kernel security bugs. When it is believed that the
kernel could do better, they should still be reported so that they can be
reviewed and fixed where reasonably possible, but they will be handled like
any regular bug:
* **Configuration**:
* outdated kernels and particularly end-of-life branches are out of the scope
of the kernel's threat model: administrators are responsible for keeping
their system up to date. For a bug to qualify as a security bug, it must be
demonstrated that it affects actively maintained versions.
* build-level: changes to the kernel configuration that are explicitly
documented as lowering the security level (e.g. ``CONFIG_NOMMU``), or
targeted at developers only.
* OS-level: changes to command line parameters, sysctls, filesystem
permissions, user capabilities, or exposure of privileged interfaces that
explicitly increase exposure, either by offering non-default access to
unprivileged users or by reducing the kernel's ability to enforce some
protections or mitigations. Example: write access to procfs or debugfs.
* issues triggered only when using features intended for development or
debugging (e.g., LOCKDEP, KASAN, FAULT_INJECTION): these features are known
to introduce overhead and potential instability and are not intended for
production use.
* issues affecting drivers exposed under CONFIG_STAGING, as well as features
marked EXPERIMENTAL in the configuration.
* loading of explicitly insecure/broken/staging modules and, more generally,
any use of a subsystem marked as experimental or not intended for production
use.
* running out-of-tree modules or unofficial kernel forks; these should be
reported to the relevant vendor.
* **Excess of initial privileges**:
* actions performed by a user already possessing the privileges required to
perform that action or modify that state (e.g. ``CAP_SYS_ADMIN``,
``CAP_NET_ADMIN``, ``CAP_SYS_RAWIO``, ``CAP_SYS_MODULE`` with no further
boundary being crossed).
* actions performed in a user namespace that do not bypass the restrictions
imposed on the initial user (e.g. ptrace usage, signal delivery, resource
usage, access to FS/device/sysctl/memory, network binding, system/network
configuration, etc.).
* anything performed by the root user in the initial namespace (e.g. kernel
oops when writing to a privileged device).
* **Out of production use**:
This covers theoretical/probabilistic attacks that rely on laboratory
conditions with zero system noise, or those requiring an unrealistic number
of attempts (e.g., billions of trials) that would be detected by standard
system monitoring long before success, such as:
* prediction of random numbers that only works in a totally silent
environment (such as IP ID, TCP ports or sequence numbers that can only be
guessed in a lab).
* activity observation and information leaks based on probabilistic
approaches that are prone to measurement noise and not realistically
reproducible on a production system.
* issues that can only be triggered by heavy attacks (e.g. brute force) whose
impact on the system makes it unlikely or impossible for them to remain
undetected before they succeed (e.g. consuming all memory before succeeding).
* problems seen only under development simulators, emulators, or combinations
that do not exist on real systems at the time of reporting (issues involving
tens of millions of threads, tens of thousands of CPUs, unrealistic CPU
frequencies, RAM sizes, disk capacities, or network speeds).
* issues whose reproduction requires hardware modification or emulation,
including fake USB devices that impersonate a different device.
* issues that can only be triggered at a cost that is orders of magnitude
higher than the expected benefit (e.g. a fully functional keyboard emulator
used only to retrieve 7 uninitialized bytes in a structure, or a brute-force
method involving millions of connection attempts to guess a port number).
* **Hardening failures**:
* ability to bypass some of the kernel's hardening measures with no
demonstrable exploit path (e.g. ASLR bypass, events timing or probing with
no demonstrable consequence). These are just weaknesses, not
vulnerabilities.
* missing argument checks and failure to report certain errors with no
immediate consequence.
* **Random information leaks**:
This concerns leaks of small pieces of data that merely happen to be present
and cannot be chosen by the attacker, or that are subject to access
restrictions:
* structure padding reported by syscalls or other interfaces.
* identifiers, partial data, non-terminated strings reported in error
messages.
* leaks of kernel memory addresses/pointers, which do not constitute an
immediately exploitable vector and are not security bugs, though they must
still be reported and fixed.
* **Crafted file system images**:
* bugs triggered by mounting a corrupted or maliciously crafted file system
image are generally not security bugs, as the kernel assumes the underlying
storage media is under the administrator's control, unless the filesystem
driver is specifically documented as being hardened against untrusted media.
* issues that are resolved, mitigated, or detected by running a filesystem
consistency check (fsck) on the image prior to mounting.
* **Physical access**:
Issues that require physical access to the machine, hardware modification, or
the use of specialized hardware (e.g., logic analyzers, DMA-attack tools over
PCI-E/Thunderbolt) are out of scope unless the system is explicitly
configured with technologies meant to defend against such attacks
(e.g. IOMMU).
* **Functional and performance regressions**:
Any issue that can be mitigated by setting proper permissions and limits
doesn't qualify as a security bug.
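The fsck-before-mount practice mentioned under crafted file system images can be sketched as follows, assuming e2fsprogs is installed (the image is a plain file, so no privileges are needed):

```shell
# Build a small ext4 image in a regular file, then verify it with
# fsck before it would ever be handed to the kernel's mount path.
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -q -F "$img"
fsck.ext4 -n "$img"   # exits 0 for a clean image
```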