Merge branch 'mauro' into docs-mw

Mauro says:

This series replaces get_abi.pl with a Python version.

I originally started this due to some issues I noticed when searching for
ABI symbols. While I could have just fixed the existing script, I noticed
that it had not received much maintenance care over the years, probably
because it is easier to find Python programmers these days.

Also, the code is complex, does not use modules or classes, and relies on
lots of global variables.

So, I decided to rewrite it in Python. I started with a manual conversion
of each function. Then, to avoid future maintainership issues, I opted to
divide the main code into three classes, each in a separate file.

Just like the original RFC, I opted to convert the Sphinx kernel-abi module
in three phases:

- call get_abi.py as an executable;
- import AbiParser in a minimal integration scenario;
- clean up the code to avoid needing to parse line numbers from the text.

This way, if something goes wrong, it will be easier to revert any
offending patches. It also provides a better rationale for what each
logical change is doing.

The initial patches in this series do some preparation work and
fix some ABI entries that lack the ":" delimiter.
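The ":" fixes matter because the ABI documentation format keys on "Field: value" lines; a minimal sketch (hypothetical code, not the actual get_abi.py parser) shows how an entry such as "KernelVersion v2.6.22" silently drops out when the delimiter is missing:

```python
import re

# Field names (What, Date, KernelVersion, Contact, Description) are the real
# ones used in Documentation/ABI files; the parser itself is illustrative.
FIELD_RE = re.compile(r"^(What|Date|KernelVersion|Contact|Description):\s*(.*)$")

def parse_fields(text):
    """Collect 'Field: value' pairs from one ABI entry. Lines lacking the
    ':' delimiter never match and are skipped -- exactly the bug class the
    early patches in this series fix."""
    fields = {}
    for line in text.splitlines():
        m = FIELD_RE.match(line)
        if m:
            fields[m.group(1)] = m.group(2).strip()
    return fields

entry = """What: /sys/class/rfkill/rfkill[0-9]+/claim
Date: 09-Jul-2007
KernelVersion v2.6.22
Contact: linux-wireless@vger.kernel.org
"""
fields = parse_fields(entry)  # note: no "KernelVersion" key, delimiter missing
```

With the delimiter restored ("KernelVersion: v2.6.22"), the same parser picks the field up, which is why the series fixes the data files before swapping out the script.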
Merged by Jonathan Corbet, 2025-02-10 11:28:12 -07:00.
29 changed files with 1829 additions and 1321 deletions.


@@ -4,7 +4,7 @@ For details to this subsystem look at Documentation/driver-api/rfkill.rst.
What: /sys/class/rfkill/rfkill[0-9]+/claim
Date: 09-Jul-2007
KernelVersion v2.6.22
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: This file was deprecated because there no longer was a way to
claim just control over a single rfkill instance.


@@ -16,7 +16,7 @@ Description: The rfkill class subsystem folder.
What: /sys/class/rfkill/rfkill[0-9]+/name
Date: 09-Jul-2007
KernelVersion v2.6.22
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Name assigned by driver to this key (interface or driver name).
Values: arbitrary string.
@@ -24,7 +24,7 @@ Values: arbitrary string.
What: /sys/class/rfkill/rfkill[0-9]+/type
Date: 09-Jul-2007
KernelVersion v2.6.22
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Driver type string ("wlan", "bluetooth", etc).
Values: See include/linux/rfkill.h.
@@ -32,7 +32,7 @@ Values: See include/linux/rfkill.h.
What: /sys/class/rfkill/rfkill[0-9]+/persistent
Date: 09-Jul-2007
KernelVersion v2.6.22
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Whether the soft blocked state is initialised from non-volatile
storage at startup.
@@ -44,7 +44,7 @@ Values: A numeric value:
What: /sys/class/rfkill/rfkill[0-9]+/state
Date: 09-Jul-2007
KernelVersion v2.6.22
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Current state of the transmitter.
This file was scheduled to be removed in 2014, but due to its
@@ -67,7 +67,7 @@ Values: A numeric value.
What: /sys/class/rfkill/rfkill[0-9]+/hard
Date: 12-March-2010
KernelVersion v2.6.34
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current hardblock state. This file is read only.
Values: A numeric value.
@@ -81,7 +81,7 @@ Values: A numeric value.
What: /sys/class/rfkill/rfkill[0-9]+/soft
Date: 12-March-2010
KernelVersion v2.6.34
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current softblock state. This file is read and write.
Values: A numeric value.


@@ -246,14 +246,14 @@ Description: Controls whether PRS disable is turned on for the workqueue.
capability.
What: /sys/bus/dsa/devices/wq<m>.<n>/occupancy
Date May 25, 2021
Date: May 25, 2021
KernelVersion: 5.14.0
Contact: dmaengine@vger.kernel.org
Description: Show the current number of entries in this WQ if WQ Occupancy
Support bit WQ capabilities is 1.
What: /sys/bus/dsa/devices/wq<m>.<n>/enqcmds_retries
Date Oct 29, 2021
Date: Oct 29, 2021
KernelVersion: 5.17.0
Contact: dmaengine@vger.kernel.org
Description: Indicate the number of retires for an enqcmds submission on a sharedwq.


@@ -1,241 +1,241 @@
What: /sys/bus/coresight/devices/<cti-name>/enable
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable/Disable the CTI hardware.
What: /sys/bus/coresight/devices/<cti-name>/powered
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Indicate if the CTI hardware is powered.
What: /sys/bus/coresight/devices/<cti-name>/ctmid
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Display the associated CTM ID
What: /sys/bus/coresight/devices/<cti-name>/nr_trigger_cons
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Number of devices connected to triggers on this CTI
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/name
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Name of connected device <N>
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/in_signals
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Input trigger signals from connected device <N>
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/in_types
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Functional types for the input trigger signals
from connected device <N>
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/out_signals
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Output trigger signals to connected device <N>
What: /sys/bus/coresight/devices/<cti-name>/triggers<N>/out_types
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Functional types for the output trigger signals
to connected device <N>
What: /sys/bus/coresight/devices/<cti-name>/regs/inout_sel
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Select the index for inen and outen registers.
What: /sys/bus/coresight/devices/<cti-name>/regs/inen
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write the CTIINEN register selected by inout_sel.
What: /sys/bus/coresight/devices/<cti-name>/regs/outen
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write the CTIOUTEN register selected by inout_sel.
What: /sys/bus/coresight/devices/<cti-name>/regs/gate
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write CTIGATE register.
What: /sys/bus/coresight/devices/<cti-name>/regs/asicctl
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Read or write ASICCTL register.
What: /sys/bus/coresight/devices/<cti-name>/regs/intack
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Write the INTACK register.
What: /sys/bus/coresight/devices/<cti-name>/regs/appset
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Set CTIAPPSET register to activate channel. Read back to
determine current value of register.
What: /sys/bus/coresight/devices/<cti-name>/regs/appclear
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Write APPCLEAR register to deactivate channel.
What: /sys/bus/coresight/devices/<cti-name>/regs/apppulse
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Write APPPULSE to pulse a channel active for one clock
cycle.
What: /sys/bus/coresight/devices/<cti-name>/regs/chinstatus
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Read current status of channel inputs.
What: /sys/bus/coresight/devices/<cti-name>/regs/choutstatus
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) read current status of channel outputs.
What: /sys/bus/coresight/devices/<cti-name>/regs/triginstatus
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) read current status of input trigger signals
What: /sys/bus/coresight/devices/<cti-name>/regs/trigoutstatus
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) read current status of output trigger signals.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigin_attach
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Attach a CTI input trigger to a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigin_detach
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Detach a CTI input trigger from a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_attach
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Attach a CTI output trigger to a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_detach
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Detach a CTI output trigger from a CTM channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_gate_enable
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable CTIGATE for single channel (Write) or list enabled
channels through the gate (R).
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_gate_disable
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Disable CTIGATE for single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_set
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Activate a single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_clear
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Deactivate a single channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_pulse
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Pulse a single channel - activate for a single clock cycle.
What: /sys/bus/coresight/devices/<cti-name>/channels/trigout_filtered
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) List of output triggers filtered across all connections.
What: /sys/bus/coresight/devices/<cti-name>/channels/trig_filter_enable
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Enable or disable trigger output signal filtering.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_inuse
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) show channels with at least one attached trigger signal.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_free
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) show channels with no attached trigger signals.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_sel
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (RW) Write channel number to select a channel to view, read to
see selected channel number.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_in
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Read to see input triggers connected to selected view
channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_out
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Read) Read to see output triggers connected to selected view
channel.
What: /sys/bus/coresight/devices/<cti-name>/channels/chan_xtrigs_reset
Date: March 2020
KernelVersion 5.7
KernelVersion: 5.7
Contact: Mike Leach or Mathieu Poirier
Description: (Write) Clear all channel / trigger programming.


@@ -1,6 +1,6 @@
What: /sys/bus/coresight/devices/<tpdm-name>/integration_test
Date: January 2023
KernelVersion 6.2
KernelVersion: 6.2
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(Write) Run integration test for tpdm. Integration test
@@ -14,7 +14,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/reset_dataset
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(Write) Reset the dataset of the tpdm.
@@ -24,7 +24,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_trig_type
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the trigger type of the DSB for tpdm.
@@ -35,7 +35,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_trig_ts
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the trigger timestamp of the DSB for tpdm.
@@ -46,7 +46,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_mode
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the programming mode of the DSB for tpdm.
@@ -60,7 +60,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_edge/ctrl_idx
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the index number of the edge detection for the DSB
@@ -69,7 +69,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_edge/ctrl_val
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
Write a data to control the edge detection corresponding to
@@ -85,7 +85,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_edge/ctrl_mask
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
Write a data to mask the edge detection corresponding to the index
@@ -97,21 +97,21 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_edge/edcr[0:15]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
Read a set of the edge control value of the DSB in TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_edge/edcmr[0:7]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
Read a set of the edge control mask of the DSB in TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_trig_patt/xpr[0:7]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the value of the trigger pattern for the DSB
@@ -119,7 +119,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_trig_patt/xpmr[0:7]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the mask of the trigger pattern for the DSB
@@ -127,21 +127,21 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/tpr[0:7]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the value of the pattern for the DSB subunit TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/tpmr[0:7]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the mask of the pattern for the DSB subunit TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/enable_ts
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(Write) Set the pattern timestamp of DSB tpdm. Read
@@ -153,7 +153,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/set_type
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(Write) Set the pattern type of DSB tpdm. Read
@@ -165,7 +165,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_msr/msr[0:31]
Date: March 2023
KernelVersion 6.7
KernelVersion: 6.7
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the MSR(mux select register) for the DSB subunit
@@ -173,7 +173,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_mode
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description: (Write) Set the data collection mode of CMB tpdm. Continuous
change creates CMB data set elements on every CMBCLK edge.
@@ -187,7 +187,7 @@ Description: (Write) Set the data collection mode of CMB tpdm. Continuous
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_trig_patt/xpr[0:1]
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the value of the trigger pattern for the CMB
@@ -195,7 +195,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_trig_patt/xpmr[0:1]
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the mask of the trigger pattern for the CMB
@@ -203,21 +203,21 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/tpr[0:1]
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the value of the pattern for the CMB subunit TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/dsb_patt/tpmr[0:1]
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the mask of the pattern for the CMB subunit TPDM.
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_patt/enable_ts
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(Write) Set the pattern timestamp of CMB tpdm. Read
@@ -229,7 +229,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_trig_ts
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the trigger timestamp of the CMB for tpdm.
@@ -240,7 +240,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_ts_all
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Read or write the status of timestamp upon all interface.
@@ -252,7 +252,7 @@ Description:
What: /sys/bus/coresight/devices/<tpdm-name>/cmb_msr/msr[0:31]
Date: January 2024
KernelVersion 6.9
KernelVersion: 6.9
Contact: Jinlong Mao (QUIC) <quic_jinlmao@quicinc.com>, Tao Zhang (QUIC) <quic_taozha@quicinc.com>
Description:
(RW) Set/Get the MSR(mux select register) for the CMB subunit


@@ -347,7 +347,7 @@ Description: Used to control configure extension list:
- [c] means add/del cold file extension
What: /sys/fs/f2fs/<disk>/unusable
Date April 2019
Date: April 2019
Contact: "Daniel Rosenberg" <drosen@google.com>
Description: If checkpoint=disable, it displays the number of blocks that
are unusable.
@@ -355,7 +355,7 @@ Description: If checkpoint=disable, it displays the number of blocks that
would be unusable if checkpoint=disable were to be set.
What: /sys/fs/f2fs/<disk>/encoding
Date July 2019
Date: July 2019
Contact: "Daniel Rosenberg" <drosen@google.com>
Description: Displays name and version of the encoding set for the filesystem.
If no encoding is set, displays (none)


@@ -131,7 +131,7 @@ Description:
CAUTION: Using it will cause your machine's real-time (CMOS)
clock to be set to a random invalid time after a resume.
What; /sys/power/pm_trace_dev_match
What: /sys/power/pm_trace_dev_match
Date: October 2010
Contact: James Hogan <jhogan@kernel.org>
Description:


@@ -0,0 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
Obsolete ABI Files
==================
.. kernel-abi:: obsolete
:no-symbols:


@@ -1,3 +1,5 @@
.. SPDX-License-Identifier: GPL-2.0
ABI obsolete symbols
====================
@@ -7,5 +9,5 @@ marked to be removed at some later point in time.
The description of the interface will document the reason why it is
obsolete and when it can be expected to be removed.
.. kernel-abi:: ABI/obsolete
:rst:
.. kernel-abi:: obsolete
:no-files:


@@ -0,0 +1,6 @@
.. SPDX-License-Identifier: GPL-2.0
ABI README
==========
.. kernel-abi:: README


@@ -0,0 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
Removed ABI Files
=================
.. kernel-abi:: removed
:no-symbols:


@@ -1,5 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
ABI removed symbols
===================
.. kernel-abi:: ABI/removed
:rst:
.. kernel-abi:: removed
:no-files:


@@ -0,0 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
Stable ABI Files
================
.. kernel-abi:: stable
:no-symbols:


@@ -1,3 +1,5 @@
.. SPDX-License-Identifier: GPL-2.0
ABI stable symbols
==================
@@ -10,5 +12,5 @@ for at least 2 years.
Most interfaces (like syscalls) are expected to never change and always
be available.
.. kernel-abi:: ABI/stable
:rst:
.. kernel-abi:: stable
:no-files:


@@ -0,0 +1,7 @@
.. SPDX-License-Identifier: GPL-2.0
Testing ABI Files
=================
.. kernel-abi:: testing
:no-symbols:


@@ -1,3 +1,5 @@
.. SPDX-License-Identifier: GPL-2.0
ABI testing symbols
===================
@@ -16,5 +18,5 @@ Programs that use these interfaces are strongly encouraged to add their
name to the description of these interfaces, so that the kernel
developers can easily notify them if any changes occur.
.. kernel-abi:: ABI/testing
:rst:
.. kernel-abi:: testing
:no-files:


@@ -1,7 +1,12 @@
.. SPDX-License-Identifier: GPL-2.0
=====================
Linux ABI description
=====================
ABI symbols
-----------
.. toctree::
:maxdepth: 2
@@ -9,3 +14,15 @@ Linux ABI description
abi-testing
abi-obsolete
abi-removed
ABI files
---------
.. toctree::
:maxdepth: 2
abi-readme-file
abi-stable-files
abi-testing-files
abi-obsolete-files
abi-removed-files


@@ -11,6 +11,8 @@ from sphinx.errors import NoUri
import re
from itertools import chain
from kernel_abi import get_kernel_abi
#
# Python 2 lacks re.ASCII...
#
@@ -48,6 +50,8 @@ RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
# an optional extension
#
RE_doc = re.compile(r'(\bDocumentation/)?((\.\./)*[\w\-/]+)\.(rst|txt)')
RE_abi_file = re.compile(r'(\bDocumentation/ABI/[\w\-/]+)')
RE_abi_symbol = re.compile(r'(\b/(sys|config|proc)/[\w\-/]+)')
RE_namespace = re.compile(r'^\s*..\s*c:namespace::\s*(\S+)\s*$')
@@ -84,10 +88,14 @@ def markup_refs(docname, app, node):
# Associate each regex with the function that will markup its matches
#
markup_func_sphinx2 = {RE_doc: markup_doc_ref,
RE_abi_file: markup_abi_ref,
RE_abi_symbol: markup_abi_ref,
RE_function: markup_c_ref,
RE_generic_type: markup_c_ref}
markup_func_sphinx3 = {RE_doc: markup_doc_ref,
RE_abi_file: markup_abi_ref,
RE_abi_symbol: markup_abi_ref,
RE_function: markup_func_ref_sphinx3,
RE_struct: markup_c_ref,
RE_union: markup_c_ref,
@@ -270,6 +278,45 @@ def markup_doc_ref(docname, app, match):
else:
return nodes.Text(match.group(0))
#
# Try to replace a documentation reference of the form Documentation/ABI/...
# with a cross reference to that page
#
def markup_abi_ref(docname, app, match):
stddom = app.env.domains['std']
#
# Go through the dance of getting an xref out of the std domain
#
kernel_abi = get_kernel_abi()
fname = match.group(1)
target = kernel_abi.xref(fname)
# Kernel ABI doesn't describe such file or symbol
if not target:
return nodes.Text(match.group(0))
pxref = addnodes.pending_xref('', refdomain = 'std', reftype = 'ref',
reftarget = target, modname = None,
classname = None, refexplicit = False)
#
# XXX The Latex builder will throw NoUri exceptions here,
# work around that by ignoring them.
#
try:
xref = stddom.resolve_xref(app.env, docname, app.builder, 'ref',
target, pxref, None)
except NoUri:
xref = None
#
# Return the xref if we got it; otherwise just return the plain text.
#
if xref:
return xref
else:
return nodes.Text(match.group(0))
def get_c_namespace(app, docname):
source = app.env.doc2path(docname)
with open(source) as f:


@@ -14,7 +14,7 @@ u"""
:license: GPL Version 2, June 1991 see Linux/COPYING for details.
The ``kernel-abi`` (:py:class:`KernelCmd`) directive calls the
scripts/get_abi.pl script to parse the Kernel ABI files.
scripts/get_abi.py script to parse the Kernel ABI files.
Overview of directive's argument and options.
@@ -32,107 +32,137 @@ u"""
"""
import codecs
import os
import subprocess
import sys
import re
import kernellog
import sys
from docutils import nodes, statemachine
from docutils.statemachine import ViewList
from docutils.parsers.rst import directives, Directive
from docutils.utils.error_reporting import ErrorString
from sphinx.util.docutils import switch_source_input
from sphinx.util import logging
__version__ = '1.0'
srctree = os.path.abspath(os.environ["srctree"])
sys.path.insert(0, os.path.join(srctree, "scripts/lib/abi"))
from abi_parser import AbiParser
__version__ = "1.0"
logger = logging.getLogger('kernel_abi')
path = os.path.join(srctree, "Documentation/ABI")
_kernel_abi = None
def get_kernel_abi():
u"""
Initialize kernel_abi global var, if not initialized yet.
This is needed to avoid warnings during Sphinx module initialization.
"""
global _kernel_abi
if not _kernel_abi:
# Parse ABI symbols only once
_kernel_abi = AbiParser(path, logger=logger)
_kernel_abi.parse_abi()
_kernel_abi.check_issues()
return _kernel_abi
def setup(app):
app.add_directive("kernel-abi", KernelCmd)
return dict(
version = __version__
, parallel_read_safe = True
, parallel_write_safe = True
)
return {
"version": __version__,
"parallel_read_safe": True,
"parallel_write_safe": True
}
class KernelCmd(Directive):
u"""KernelABI (``kernel-abi``) directive"""
required_arguments = 1
optional_arguments = 2
optional_arguments = 3
has_content = False
final_argument_whitespace = True
parser = None
option_spec = {
"debug" : directives.flag,
"rst" : directives.unchanged
"debug": directives.flag,
"no-symbols": directives.flag,
"no-files": directives.flag,
}
def run(self):
kernel_abi = get_kernel_abi()
doc = self.state.document
if not doc.settings.file_insertion_enabled:
raise self.warning("docutils: file insertion disabled")
srctree = os.path.abspath(os.environ["srctree"])
args = [
os.path.join(srctree, 'scripts/get_abi.pl'),
'rest',
'--enable-lineno',
'--dir', os.path.join(srctree, 'Documentation', self.arguments[0]),
]
if 'rst' in self.options:
args.append('--rst-source')
lines = subprocess.check_output(args, cwd=os.path.dirname(doc.current_source)).decode('utf-8')
nodeList = self.nestedParse(lines, self.arguments[0])
return nodeList
def nestedParse(self, lines, fname):
env = self.state.document.settings.env
content = ViewList()
node = nodes.section()
if "debug" in self.options:
code_block = "\n\n.. code-block:: rst\n :linenos:\n"
for l in lines.split("\n"):
code_block += "\n " + l
lines = code_block + "\n\n"
abi_type = self.arguments[0]
line_regex = re.compile(r"^\.\. LINENO (\S+)\#([0-9]+)$")
ln = 0
if "no-symbols" in self.options:
show_symbols = False
else:
show_symbols = True
if "no-files" in self.options:
show_file = False
else:
show_file = True
tab_width = self.options.get('tab-width',
self.state.document.settings.tab_width)
old_f = None
n = 0
f = fname
for line in lines.split("\n"):
n = n + 1
match = line_regex.search(line)
if match:
new_f = match.group(1)
# Sphinx parser is lazy: it stops parsing contents in the
# middle, if it is too big. So, handle it per input file
if new_f != f and content:
self.do_parse(content, node)
content = ViewList()
# Add the file to Sphinx build dependencies
env.note_dependency(os.path.abspath(f))
f = new_f
# sphinx counts lines from 0
ln = int(match.group(2)) - 1
n_sym = 0
for msg, f, ln in kernel_abi.doc(show_file=show_file,
show_symbols=show_symbols,
filter_path=abi_type):
n_sym += 1
msg_list = statemachine.string2lines(msg, tab_width,
convert_whitespace=True)
if "debug" in self.options:
lines = [
"", "", ".. code-block:: rst",
" :linenos:", ""
]
for m in msg_list:
lines.append(" " + m)
else:
content.append(line, f, ln)
lines = msg_list
kernellog.info(self.state.document.settings.env.app, "%s: parsed %i lines" % (fname, n))
for line in lines:
# sphinx counts lines from 0
content.append(line, f, ln - 1)
n += 1
if content:
self.do_parse(content, node)
if f != old_f:
# Add the file to Sphinx build dependencies
env.note_dependency(os.path.abspath(f))
old_f = f
# Sphinx doesn't like to parse big messages. So, let's
# add content symbol by symbol
if content:
self.do_parse(content, node)
content = ViewList()
if show_symbols and not show_file:
logger.verbose("%s ABI: %i symbols (%i ReST lines)" % (abi_type, n_sym, n))
elif not show_symbols and show_file:
logger.verbose("%s ABI: %i files (%i ReST lines)" % (abi_type, n_sym, n))
else:
logger.verbose("%s ABI: %i data (%i ReST lines)" % (abi_type, n_sym, n))
return node.children
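The per-file flushing in the directive above (the old comment notes that the Sphinx parser "stops parsing contents in the middle, if it is too big", so content is parsed in chunks rather than as one huge document) can be sketched independently of docutils. This is a hypothetical illustration of the chunking pattern only; the names are not from the kernel tree:

```python
from itertools import groupby

def chunk_by_file(entries):
    """Group (text, fname, lineno) tuples into per-file chunks, mirroring
    how KernelCmd.run() flushes its accumulated ViewList whenever the
    source file changes instead of feeding Sphinx one big blob.
    Entries for the same file are assumed to arrive contiguously."""
    for fname, group in groupby(entries, key=lambda e: e[1]):
        yield fname, list(group)

entries = [
    ("line a", "stable/sysfs-bus", 1),
    ("line b", "stable/sysfs-bus", 2),
    ("line c", "testing/sysfs-fs", 1),
]
chunks = list(chunk_by_file(entries))  # two chunks, one per source file
```

Each chunk would then be nested-parsed on its own, and its file noted as a Sphinx build dependency, as `do_parse()` and `env.note_dependency()` do in the directive.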


@@ -39,7 +39,7 @@ from docutils.statemachine import ViewList
from docutils.parsers.rst import directives, Directive
import sphinx
from sphinx.util.docutils import switch_source_input
import kernellog
from sphinx.util import logging
__version__ = '1.0'
@@ -56,6 +56,7 @@ class KernelDocDirective(Directive):
'functions': directives.unchanged,
}
has_content = False
logger = logging.getLogger('kerneldoc')
def run(self):
env = self.state.document.settings.env
@@ -109,8 +110,7 @@ class KernelDocDirective(Directive):
cmd += [filename]
try:
kernellog.verbose(env.app,
'calling kernel-doc \'%s\'' % (" ".join(cmd)))
self.logger.verbose("calling kernel-doc '%s'" % (" ".join(cmd)))
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
@@ -120,8 +120,8 @@ class KernelDocDirective(Directive):
if p.returncode != 0:
sys.stderr.write(err)
kernellog.warn(env.app,
'kernel-doc \'%s\' failed with return code %d' % (" ".join(cmd), p.returncode))
self.logger.warning("kernel-doc '%s' failed with return code %d"
% (" ".join(cmd), p.returncode))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
elif env.config.kerneldoc_verbosity > 0:
sys.stderr.write(err)
@@ -148,8 +148,8 @@ class KernelDocDirective(Directive):
return node.children
except Exception as e: # pylint: disable=W0703
kernellog.warn(env.app, 'kernel-doc \'%s\' processing failed with: %s' %
(" ".join(cmd), str(e)))
self.logger.warning("kernel-doc '%s' processing failed with: %s" %
(" ".join(cmd), str(e)))
return [nodes.error(None, nodes.paragraph(text = "kernel-doc missing"))]
def do_parse(self, result, node):

View File

@@ -1,22 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
#
# Sphinx has deprecated its older logging interface, but the replacement
# only goes back to 1.6. So here's a wrapper layer to keep around for
# as long as we support 1.4.
#
# We don't support 1.4 anymore, but we'll keep the wrappers around until
# we change all the code to not use them anymore :)
#
import sphinx
from sphinx.util import logging
logger = logging.getLogger('kerneldoc')
def warn(app, message):
logger.warning(message)
def verbose(app, message):
logger.verbose(message)
def info(app, message):
logger.info(message)
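The deleted wrapper above only forwarded to a named logger; the series replaces it with direct `sphinx.util.logging` calls. Outside Sphinx, the same `logger.verbose()` pattern can be mimicked with stdlib logging plus a custom level — a sketch of the idea only, not the Sphinx API:

```python
import logging

# Sphinx's VERBOSE sits between DEBUG (10) and INFO (20); 15 mirrors that.
VERBOSE = 15
logging.addLevelName(VERBOSE, "VERBOSE")

def verbose(self, message, *args, **kwargs):
    """Log a message at the custom VERBOSE level."""
    if self.isEnabledFor(VERBOSE):
        self._log(VERBOSE, message, args, **kwargs)

# Attach the helper so logger.verbose(...) works like sphinx's logger
logging.Logger.verbose = verbose

logger = logging.getLogger("kerneldoc")
logger.setLevel(VERBOSE)
```

With this in place, call sites like `logger.verbose("calling kernel-doc ...")` work unchanged whether the logger comes from Sphinx or from this shim.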

View File

@@ -59,12 +59,14 @@ from docutils.parsers.rst import directives
from docutils.parsers.rst.directives import images
import sphinx
from sphinx.util.nodes import clean_astext
import kernellog
from sphinx.util import logging
Figure = images.Figure
__version__ = '1.0.0'
logger = logging.getLogger('kfigure')
# simple helper
# -------------
@@ -170,7 +172,7 @@ def setupTools(app):
"""
global dot_cmd, dot_Tpdf, convert_cmd, rsvg_convert_cmd # pylint: disable=W0603
global inkscape_cmd, inkscape_ver_one # pylint: disable=W0603
kernellog.verbose(app, "kfigure: check installed tools ...")
logger.verbose("kfigure: check installed tools ...")
dot_cmd = which('dot')
convert_cmd = which('convert')
@@ -178,7 +180,7 @@ def setupTools(app):
inkscape_cmd = which('inkscape')
if dot_cmd:
kernellog.verbose(app, "use dot(1) from: " + dot_cmd)
logger.verbose("use dot(1) from: " + dot_cmd)
try:
dot_Thelp_list = subprocess.check_output([dot_cmd, '-Thelp'],
@@ -190,10 +192,11 @@ def setupTools(app):
dot_Tpdf_ptn = b'pdf'
dot_Tpdf = re.search(dot_Tpdf_ptn, dot_Thelp_list)
else:
kernellog.warn(app, "dot(1) not found, for better output quality install "
"graphviz from https://www.graphviz.org")
logger.warning(
"dot(1) not found, for better output quality install graphviz from https://www.graphviz.org"
)
if inkscape_cmd:
kernellog.verbose(app, "use inkscape(1) from: " + inkscape_cmd)
logger.verbose("use inkscape(1) from: " + inkscape_cmd)
inkscape_ver = subprocess.check_output([inkscape_cmd, '--version'],
stderr=subprocess.DEVNULL)
ver_one_ptn = b'Inkscape 1'
@@ -204,26 +207,27 @@ def setupTools(app):
else:
if convert_cmd:
kernellog.verbose(app, "use convert(1) from: " + convert_cmd)
logger.verbose("use convert(1) from: " + convert_cmd)
else:
kernellog.verbose(app,
logger.verbose(
"Neither inkscape(1) nor convert(1) found.\n"
"For SVG to PDF conversion, "
"install either Inkscape (https://inkscape.org/) (preferred) or\n"
"ImageMagick (https://www.imagemagick.org)")
"For SVG to PDF conversion, install either Inkscape (https://inkscape.org/) (preferred) or\n"
"ImageMagick (https://www.imagemagick.org)"
)
if rsvg_convert_cmd:
kernellog.verbose(app, "use rsvg-convert(1) from: " + rsvg_convert_cmd)
kernellog.verbose(app, "use 'dot -Tsvg' and rsvg-convert(1) for DOT -> PDF conversion")
logger.verbose("use rsvg-convert(1) from: " + rsvg_convert_cmd)
logger.verbose("use 'dot -Tsvg' and rsvg-convert(1) for DOT -> PDF conversion")
dot_Tpdf = False
else:
kernellog.verbose(app,
logger.verbose(
"rsvg-convert(1) not found.\n"
" SVG rendering of convert(1) is done by ImageMagick-native renderer.")
" SVG rendering of convert(1) is done by ImageMagick-native renderer."
)
if dot_Tpdf:
kernellog.verbose(app, "use 'dot -Tpdf' for DOT -> PDF conversion")
logger.verbose("use 'dot -Tpdf' for DOT -> PDF conversion")
else:
kernellog.verbose(app, "use 'dot -Tsvg' and convert(1) for DOT -> PDF conversion")
logger.verbose("use 'dot -Tsvg' and convert(1) for DOT -> PDF conversion")
# integrate conversion tools
@@ -257,13 +261,12 @@ def convert_image(img_node, translator, src_fname=None):
# in kernel builds, use 'make SPHINXOPTS=-v' to see verbose messages
kernellog.verbose(app, 'assert best format for: ' + img_node['uri'])
logger.verbose('assert best format for: ' + img_node['uri'])
if in_ext == '.dot':
if not dot_cmd:
kernellog.verbose(app,
"dot from graphviz not available / include DOT raw.")
logger.verbose("dot from graphviz not available / include DOT raw.")
img_node.replace_self(file2literal(src_fname))
elif translator.builder.format == 'latex':
@@ -290,10 +293,11 @@ def convert_image(img_node, translator, src_fname=None):
if translator.builder.format == 'latex':
if not inkscape_cmd and convert_cmd is None:
kernellog.warn(app,
"no SVG to PDF conversion available / include SVG raw."
"\nIncluding large raw SVGs can cause xelatex error."
"\nInstall Inkscape (preferred) or ImageMagick.")
logger.warning(
"no SVG to PDF conversion available / include SVG raw.\n"
"Including large raw SVGs can cause xelatex error.\n"
"Install Inkscape (preferred) or ImageMagick."
)
img_node.replace_self(file2literal(src_fname))
else:
dst_fname = path.join(translator.builder.outdir, fname + '.pdf')
@@ -306,15 +310,14 @@ def convert_image(img_node, translator, src_fname=None):
_name = dst_fname[len(str(translator.builder.outdir)) + 1:]
if isNewer(dst_fname, src_fname):
kernellog.verbose(app,
"convert: {out}/%s already exists and is newer" % _name)
logger.verbose("convert: {out}/%s already exists and is newer" % _name)
else:
ok = False
mkdir(path.dirname(dst_fname))
if in_ext == '.dot':
kernellog.verbose(app, 'convert DOT to: {out}/' + _name)
logger.verbose('convert DOT to: {out}/' + _name)
if translator.builder.format == 'latex' and not dot_Tpdf:
svg_fname = path.join(translator.builder.outdir, fname + '.svg')
ok1 = dot2format(app, src_fname, svg_fname)
@@ -325,7 +328,7 @@ def convert_image(img_node, translator, src_fname=None):
ok = dot2format(app, src_fname, dst_fname)
elif in_ext == '.svg':
kernellog.verbose(app, 'convert SVG to: {out}/' + _name)
logger.verbose('convert SVG to: {out}/' + _name)
ok = svg2pdf(app, src_fname, dst_fname)
if not ok:
@@ -354,7 +357,7 @@ def dot2format(app, dot_fname, out_fname):
with open(out_fname, "w") as out:
exit_code = subprocess.call(cmd, stdout = out)
if exit_code != 0:
kernellog.warn(app,
logger.warning(
"Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
return bool(exit_code == 0)
@@ -388,13 +391,14 @@ def svg2pdf(app, svg_fname, pdf_fname):
pass
if exit_code != 0:
kernellog.warn(app, "Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
logger.warning("Error #%d when calling: %s" %
(exit_code, " ".join(cmd)))
if warning_msg:
kernellog.warn(app, "Warning msg from %s: %s"
% (cmd_name, str(warning_msg, 'utf-8')))
logger.warning( "Warning msg from %s: %s" %
(cmd_name, str(warning_msg, 'utf-8')))
elif warning_msg:
kernellog.verbose(app, "Warning msg from %s (likely harmless):\n%s"
% (cmd_name, str(warning_msg, 'utf-8')))
logger.verbose("Warning msg from %s (likely harmless):\n%s" %
(cmd_name, str(warning_msg, 'utf-8')))
return bool(exit_code == 0)
@@ -418,7 +422,8 @@ def svg2pdf_by_rsvg(app, svg_fname, pdf_fname):
# use stdout and stderr from parent
exit_code = subprocess.call(cmd)
if exit_code != 0:
kernellog.warn(app, "Error #%d when calling: %s" % (exit_code, " ".join(cmd)))
logger.warning("Error #%d when calling: %s" %
(exit_code, " ".join(cmd)))
ok = bool(exit_code == 0)
return ok
@@ -513,15 +518,15 @@ def visit_kernel_render(self, node):
app = self.builder.app
srclang = node.get('srclang')
kernellog.verbose(app, 'visit kernel-render node lang: "%s"' % (srclang))
logger.verbose('visit kernel-render node lang: "%s"' % srclang)
tmp_ext = RENDER_MARKUP_EXT.get(srclang, None)
if tmp_ext is None:
kernellog.warn(app, 'kernel-render: "%s" unknown / include raw.' % (srclang))
logger.warning('kernel-render: "%s" unknown / include raw.' % srclang)
return
if not dot_cmd and tmp_ext == '.dot':
kernellog.verbose(app, "dot from graphviz not available / include raw.")
logger.verbose("dot from graphviz not available / include raw.")
return
literal_block = node[0]

View File

@@ -92,7 +92,7 @@ while (<IN>) {
next if ($f =~ m,^Next/,);
# Makefiles and scripts contain nasty expressions to parse docs
next if ($f =~ m/Makefile/ || $f =~ m/\.sh$/);
next if ($f =~ m/Makefile/ || $f =~ m/\.(sh|py|pl|~|rej|org|orig)$/);
# It doesn't make sense to parse hidden files
next if ($f =~ m#/\.#);

File diff suppressed because it is too large

scripts/get_abi.py Executable file
View File

@@ -0,0 +1,214 @@
#!/usr/bin/env python3
# pylint: disable=R0903
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
# SPDX-License-Identifier: GPL-2.0
"""
Parse ABI documentation and produce results from it.
"""
import argparse
import logging
import os
import sys
# Import Python modules
LIB_DIR = "lib/abi"
SRC_DIR = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(SRC_DIR, LIB_DIR))
from abi_parser import AbiParser # pylint: disable=C0413
from abi_regex import AbiRegex # pylint: disable=C0413
from helpers import ABI_DIR, DEBUG_HELP # pylint: disable=C0413
from system_symbols import SystemSymbols # pylint: disable=C0413
# Command line classes
REST_DESC = """
Produce output in ReST format.
The output is divided into two sections:
- Symbols: show all parsed symbols in alphabetic order;
- Files: cross reference the content of each file with the symbols on it.
"""
class AbiRest:
"""Initialize an argparse subparser for rest output"""
def __init__(self, subparsers):
"""Initialize argparse subparsers"""
parser = subparsers.add_parser("rest",
formatter_class=argparse.RawTextHelpFormatter,
description=REST_DESC)
parser.add_argument("--enable-lineno", action="store_true",
help="enable lineno")
parser.add_argument("--raw", action="store_true",
help="output text as contained in the ABI files. "
"It not used, output will contain dynamically"
" generated cross references when possible.")
parser.add_argument("--no-file", action="store_true",
help="Don't the files section")
parser.add_argument("--show-hints", help="Show-hints")
parser.set_defaults(func=self.run)
def run(self, args):
"""Run subparser"""
parser = AbiParser(args.dir, debug=args.debug)
parser.parse_abi()
parser.check_issues()
for t in parser.doc(args.raw, not args.no_file):
if args.enable_lineno:
print (f".. LINENO {t[1]}#{t[2]}\n\n")
print(t[0])
class AbiValidate:
"""Initialize an argparse subparser for ABI validation"""
def __init__(self, subparsers):
"""Initialize argparse subparsers"""
parser = subparsers.add_parser("validate",
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="list events")
parser.set_defaults(func=self.run)
def run(self, args):
"""Run subparser"""
parser = AbiParser(args.dir, debug=args.debug)
parser.parse_abi()
parser.check_issues()
class AbiSearch:
"""Initialize an argparse subparser for ABI search"""
def __init__(self, subparsers):
"""Initialize argparse subparsers"""
parser = subparsers.add_parser("search",
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="Search ABI using a regular expression")
parser.add_argument("expression",
help="Case-insensitive search pattern for the ABI symbol")
parser.set_defaults(func=self.run)
def run(self, args):
"""Run subparser"""
parser = AbiParser(args.dir, debug=args.debug)
parser.parse_abi()
parser.search_symbols(args.expression)
UNDEFINED_DESC = """
Check undefined ABIs on the local machine.
Read sysfs devnodes and check whether the devnodes there are defined inside
the ABI documentation.
The search logic tries to minimize the number of regular expressions to
search per symbol.
By default, it runs on a single CPU, as Python support for CPU threads
is still experimental, and multi-process runs in Python are very slow.
In experimental tests, if the number of ABI symbols to search per devnode
stays within a limit of ~150 regular expressions, using a single CPU
is a lot faster than using multiple processes. However, if the number of
regular expressions to check is on the order of ~30000, using multiple
CPUs speeds up the check.
"""
class AbiUndefined:
"""
Initialize an argparse subparser for the logic that checks for
undefined ABI symbols on the current machine's sysfs
"""
def __init__(self, subparsers):
"""Initialize argparse subparsers"""
parser = subparsers.add_parser("undefined",
formatter_class=argparse.RawTextHelpFormatter,
description=UNDEFINED_DESC)
parser.add_argument("-S", "--sysfs-dir", default="/sys",
help="directory where sysfs is mounted")
parser.add_argument("-s", "--search-string",
help="search string regular expression to limit symbol search")
parser.add_argument("-H", "--show-hints", action="store_true",
help="Hints about definitions for missing ABI symbols.")
parser.add_argument("-j", "--jobs", "--max-workers", type=int, default=1,
help="If bigger than one, enables multiprocessing.")
parser.add_argument("-c", "--max-chunk-size", type=int, default=50,
help="Maximum number of chunk size")
parser.add_argument("-f", "--found", action="store_true",
help="Also show found items. "
"Helpful to debug the parser."),
parser.add_argument("-d", "--dry-run", action="store_true",
help="Don't actually search for undefined. "
"Helpful to debug the parser."),
parser.set_defaults(func=self.run)
def run(self, args):
"""Run subparser"""
abi = AbiRegex(args.dir, debug=args.debug,
search_string=args.search_string)
abi_symbols = SystemSymbols(abi=abi, hints=args.show_hints,
sysfs=args.sysfs_dir)
abi_symbols.check_undefined_symbols(dry_run=args.dry_run,
found=args.found,
max_workers=args.jobs,
chunk_size=args.max_chunk_size)
def main():
"""Main program"""
parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument("-d", "--debug", type=int, default=0, help="debug level")
parser.add_argument("-D", "--dir", default=ABI_DIR, help=DEBUG_HELP)
subparsers = parser.add_subparsers()
AbiRest(subparsers)
AbiValidate(subparsers)
AbiSearch(subparsers)
AbiUndefined(subparsers)
args = parser.parse_args()
if args.debug:
level = logging.DEBUG
else:
level = logging.INFO
logging.basicConfig(level=level, format="[%(levelname)s] %(message)s")
if "func" in args:
args.func(args)
else:
sys.exit(f"Please specify a valid command for {sys.argv[0]}")
# Call main method
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,628 @@
#!/usr/bin/env python3
# pylint: disable=R0902,R0903,R0911,R0912,R0913,R0914,R0915,R0917,C0302
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
# SPDX-License-Identifier: GPL-2.0
"""
Parse ABI documentation and produce results from it.
"""
from argparse import Namespace
import logging
import os
import re
from pprint import pformat
from random import randrange, seed
# Import Python modules
from helpers import AbiDebug, ABI_DIR
class AbiParser:
"""Main class to parse ABI files"""
TAGS = r"(what|where|date|kernelversion|contact|description|users)"
XREF = r"(?:^|\s|\()(\/(?:sys|config|proc|dev|kvd)\/[^,.:;\)\s]+)(?:[,.:;\)\s]|\Z)"
def __init__(self, directory, logger=None,
enable_lineno=False, show_warnings=True, debug=0):
"""Stores arguments for the class and initialize class vars"""
self.directory = directory
self.enable_lineno = enable_lineno
self.show_warnings = show_warnings
self.debug = debug
if not logger:
self.log = logging.getLogger("get_abi")
else:
self.log = logger
self.data = {}
self.what_symbols = {}
self.file_refs = {}
self.what_refs = {}
# Ignore files that contain such suffixes
self.ignore_suffixes = (".rej", ".org", ".orig", ".bak", "~")
# Regular expressions used on parser
self.re_abi_dir = re.compile(r"(.*)" + ABI_DIR)
self.re_tag = re.compile(r"(\S+)(:\s*)(.*)", re.I)
self.re_valid = re.compile(self.TAGS)
self.re_start_spc = re.compile(r"(\s*)(\S.*)")
self.re_whitespace = re.compile(r"^\s+")
# Regular used on print
self.re_what = re.compile(r"(\/?(?:[\w\-]+\/?){1,2})")
self.re_escape = re.compile(r"([\.\x01-\x08\x0e-\x1f\x21-\x2f\x3a-\x40\x7b-\xff])")
self.re_unprintable = re.compile(r"([\x00-\x2f\x3a-\x40\x5b-\x60\x7b-\xff]+)")
self.re_title_mark = re.compile(r"\n[\-\*\=\^\~]+\n")
self.re_doc = re.compile(r"Documentation/(?!devicetree)(\S+)\.rst")
self.re_abi = re.compile(r"(Documentation/ABI/)([\w\/\-]+)")
self.re_xref_node = re.compile(self.XREF)
def warn(self, fdata, msg, extra=None):
"""Displays a parse error if warning is enabled"""
if not self.show_warnings:
return
msg = f"{fdata.fname}:{fdata.ln}: {msg}"
if extra:
msg += "\n\t\t" + extra
self.log.warning(msg)
def add_symbol(self, what, fname, ln=None, xref=None):
"""Create a reference table describing where each 'what' is located"""
if what not in self.what_symbols:
self.what_symbols[what] = {"file": {}}
if fname not in self.what_symbols[what]["file"]:
self.what_symbols[what]["file"][fname] = []
if ln and ln not in self.what_symbols[what]["file"][fname]:
self.what_symbols[what]["file"][fname].append(ln)
if xref:
self.what_symbols[what]["xref"] = xref
def _parse_line(self, fdata, line):
"""Parse a single line of an ABI file"""
new_what = False
new_tag = False
content = None
match = self.re_tag.match(line)
if match:
new = match.group(1).lower()
sep = match.group(2)
content = match.group(3)
match = self.re_valid.search(new)
if match:
new_tag = match.group(1)
else:
if fdata.tag == "description":
# New "tag" is actually part of description.
# Don't consider it a tag
new_tag = False
elif fdata.tag != "":
self.warn(fdata, f"tag '{fdata.tag}' is invalid", line)
if new_tag:
# "where" is Invalid, but was a common mistake. Warn if found
if new_tag == "where":
self.warn(fdata, "tag 'Where' is invalid. Should be 'What:' instead")
new_tag = "what"
if new_tag == "what":
fdata.space = None
if content not in self.what_symbols:
self.add_symbol(what=content, fname=fdata.fname, ln=fdata.ln)
if fdata.tag == "what":
fdata.what.append(content.strip("\n"))
else:
if fdata.key:
if "description" not in self.data.get(fdata.key, {}):
self.warn(fdata, f"{fdata.key} doesn't have a description")
for w in fdata.what:
self.add_symbol(what=w, fname=fdata.fname,
ln=fdata.what_ln, xref=fdata.key)
fdata.label = content
new_what = True
key = "abi_" + content.lower()
fdata.key = self.re_unprintable.sub("_", key).strip("_")
# Avoid duplicated keys but using a defined seed, to make
# the namespace identical if there aren't changes at the
# ABI symbols
seed(42)
while fdata.key in self.data:
char = randrange(0, 51) + ord("A")
if char > ord("Z"):
char += ord("a") - ord("Z") - 1
fdata.key += chr(char)
if fdata.key and fdata.key not in self.data:
self.data[fdata.key] = {
"what": [content],
"file": [fdata.file_ref],
"path": fdata.ftype,
"line_no": fdata.ln,
}
fdata.what = self.data[fdata.key]["what"]
self.what_refs[content] = fdata.key
fdata.tag = new_tag
fdata.what_ln = fdata.ln
if fdata.nametag["what"]:
t = (content, fdata.key)
if t not in fdata.nametag["symbols"]:
fdata.nametag["symbols"].append(t)
return
if fdata.tag and new_tag:
fdata.tag = new_tag
if new_what:
fdata.label = ""
if "description" in self.data[fdata.key]:
self.data[fdata.key]["description"] += "\n\n"
if fdata.file_ref not in self.data[fdata.key]["file"]:
self.data[fdata.key]["file"].append(fdata.file_ref)
if self.debug == AbiDebug.WHAT_PARSING:
self.log.debug("what: %s", fdata.what)
if not fdata.what:
self.warn(fdata, "'What:' should come first:", line)
return
if new_tag == "description":
fdata.space = None
if content:
sep = sep.replace(":", " ")
c = " " * len(new_tag) + sep + content
c = c.expandtabs()
match = self.re_start_spc.match(c)
if match:
# Preserve initial spaces for the first line
fdata.space = match.group(1)
content = match.group(2) + "\n"
self.data[fdata.key][fdata.tag] = content
return
# Store any contents before tags at the database
if not fdata.tag and "what" in fdata.nametag:
fdata.nametag["description"] += line
return
if fdata.tag == "description":
content = line.expandtabs()
if self.re_whitespace.sub("", content) == "":
self.data[fdata.key][fdata.tag] += "\n"
return
if fdata.space is None:
match = self.re_start_spc.match(content)
if match:
# Preserve initial spaces for the first line
fdata.space = match.group(1)
content = match.group(2) + "\n"
else:
if content.startswith(fdata.space):
content = content[len(fdata.space):]
else:
fdata.space = ""
if fdata.tag == "what":
w = content.strip("\n")
if w:
self.data[fdata.key][fdata.tag].append(w)
else:
self.data[fdata.key][fdata.tag] += content
return
content = line.strip()
if fdata.tag:
if fdata.tag == "what":
w = content.strip("\n")
if w:
self.data[fdata.key][fdata.tag].append(w)
else:
self.data[fdata.key][fdata.tag] += "\n" + content.rstrip("\n")
return
# Everything else is error
if content:
self.warn(fdata, "Unexpected content", line)
def parse_readme(self, nametag, fname):
"""Parse ABI README file"""
nametag["what"] = ["ABI file contents"]
nametag["path"] = "README"
with open(fname, "r", encoding="utf8", errors="backslashreplace") as fp:
for line in fp:
match = self.re_tag.match(line)
if match:
new = match.group(1).lower()
match = self.re_valid.search(new)
if match:
nametag["description"] += "\n:" + line
continue
nametag["description"] += line
def parse_file(self, fname, path, basename):
"""Parse a single file"""
ref = f"abi_file_{path}_{basename}"
ref = self.re_unprintable.sub("_", ref).strip("_")
# Store per-file state into a namespace variable. This will be used
# by the per-line parser state machine and by the warning function.
fdata = Namespace()
fdata.fname = fname
fdata.name = basename
pos = fname.find(ABI_DIR)
if pos > 0:
f = fname[pos:]
else:
f = fname
fdata.file_ref = (f, ref)
self.file_refs[f] = ref
fdata.ln = 0
fdata.what_ln = 0
fdata.tag = ""
fdata.label = ""
fdata.what = []
fdata.key = None
fdata.xrefs = None
fdata.space = None
fdata.ftype = path.split("/")[0]
fdata.nametag = {}
fdata.nametag["what"] = [f"ABI file {path}/{basename}"]
fdata.nametag["type"] = "File"
fdata.nametag["path"] = fdata.ftype
fdata.nametag["file"] = [fdata.file_ref]
fdata.nametag["line_no"] = 1
fdata.nametag["description"] = ""
fdata.nametag["symbols"] = []
self.data[ref] = fdata.nametag
if self.debug & AbiDebug.WHAT_OPEN:
self.log.debug("Opening file %s", fname)
if basename == "README":
self.parse_readme(fdata.nametag, fname)
return
with open(fname, "r", encoding="utf8", errors="backslashreplace") as fp:
for line in fp:
fdata.ln += 1
self._parse_line(fdata, line)
if "description" in fdata.nametag:
fdata.nametag["description"] = fdata.nametag["description"].lstrip("\n")
if fdata.key:
if "description" not in self.data.get(fdata.key, {}):
self.warn(fdata, f"{fdata.key} doesn't have a description")
for w in fdata.what:
self.add_symbol(what=w, fname=fname, xref=fdata.key)
def _parse_abi(self, root=None):
"""Internal function to parse documentation ABI recursively"""
if not root:
root = self.directory
with os.scandir(root) as obj:
for entry in obj:
name = os.path.join(root, entry.name)
if entry.is_dir():
self._parse_abi(name)
continue
if not entry.is_file():
continue
basename = os.path.basename(name)
if basename.startswith("."):
continue
if basename.endswith(self.ignore_suffixes):
continue
path = self.re_abi_dir.sub("", os.path.dirname(name))
self.parse_file(name, path, basename)
def parse_abi(self, root=None):
"""Parse documentation ABI"""
self._parse_abi(root)
if self.debug & AbiDebug.DUMP_ABI_STRUCTS:
self.log.debug(pformat(self.data))
def desc_txt(self, desc):
"""Print description as found inside ABI files"""
desc = desc.strip(" \t\n")
return desc + "\n\n"
def xref(self, fname):
"""
Converts a Documentation/ABI + basename into a ReST cross-reference
"""
return self.file_refs.get(fname)
def desc_rst(self, desc):
"""Enrich ReST output by creating cross-references"""
# Remove title markups from the description
# Having titles inside ABI files will only work if extra
# care is taken to strictly follow the same hierarchical
# level order for each markup.
desc = self.re_title_mark.sub("\n\n", "\n" + desc)
desc = desc.rstrip(" \t\n").lstrip("\n")
# Python's regex performance for non-compiled expressions is a lot
# worse than Perl's, as Perl automatically caches them at their
# first usage. Here, we'll need to do the same, as otherwise the
# performance penalty would be high
new_desc = ""
for d in desc.split("\n"):
if d == "":
new_desc += "\n"
continue
# Use cross-references for doc files where needed
d = self.re_doc.sub(r":doc:`/\1`", d)
# Use cross-references for ABI generated docs where needed
matches = self.re_abi.findall(d)
for m in matches:
abi = m[0] + m[1]
xref = self.file_refs.get(abi)
if not xref:
# This may happen if the ABI file is in a separate directory,
# e.g. when parsing ABI/testing while the symbol is at ABI/stable.
# The proper solution is to move this part of the code
# to be inside sphinx/kernel_abi.py
self.log.info("Didn't find ABI reference for '%s'", abi)
else:
new = self.re_escape.sub(r"\\\1", m[1])
d = re.sub(fr"\b{abi}\b", f":ref:`{new} <{xref}>`", d)
# Seek for cross reference symbols like /sys/...
# Need to be careful to avoid doing it on a code block
if d[0] not in [" ", "\t"]:
matches = self.re_xref_node.findall(d)
for m in matches:
# Finding ABI here is more complex due to wildcards
xref = self.what_refs.get(m)
if xref:
new = self.re_escape.sub(r"\\\1", m)
d = re.sub(fr"\b{m}\b", f":ref:`{new} <{xref}>`", d)
new_desc += d + "\n"
return new_desc + "\n\n"
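The cross-referencing pass above keeps its regular expressions precompiled for speed. The `Documentation/*.rst` rewrite can be shown in isolation, using the same pattern as `self.re_doc` (devicetree pages excluded); the `add_doc_xrefs()` wrapper is a hypothetical name:

```python
import re

# Same pattern as AbiParser.re_doc: turn Documentation/<page>.rst
# references into Sphinx :doc: cross-references, skipping devicetree.
re_doc = re.compile(r"Documentation/(?!devicetree)(\S+)\.rst")

def add_doc_xrefs(line):
    """Rewrite Documentation/*.rst references as :doc: cross-references."""
    return re_doc.sub(r":doc:`/\1`", line)

out = add_doc_xrefs("See Documentation/driver-api/rfkill.rst for details.")
keep = add_doc_xrefs("See Documentation/devicetree/usage-model.rst instead.")
```

The negative lookahead leaves devicetree bindings untouched, since those pages are not part of the Sphinx docs build.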
def doc(self, output_in_txt=False, show_symbols=True, show_file=True,
filter_path=None):
"""Print ABI at stdout"""
part = None
for key, v in sorted(self.data.items(),
key=lambda x: (x[1].get("type", ""),
x[1].get("what"))):
wtype = v.get("type", "Symbol")
file_ref = v.get("file")
names = v.get("what", [""])
if wtype == "File":
if not show_file:
continue
else:
if not show_symbols:
continue
if filter_path:
if v.get("path") != filter_path:
continue
msg = ""
if wtype != "File":
cur_part = names[0]
if cur_part.find("/") >= 0:
match = self.re_what.match(cur_part)
if match:
symbol = match.group(1).rstrip("/")
cur_part = "Symbols under " + symbol
if cur_part and cur_part != part:
part = cur_part
msg += part + "\n"+ "-" * len(part) +"\n\n"
msg += f".. _{key}:\n\n"
max_len = 0
for i in range(0, len(names)): # pylint: disable=C0200
names[i] = "**" + self.re_escape.sub(r"\\\1", names[i]) + "**"
max_len = max(max_len, len(names[i]))
msg += "+-" + "-" * max_len + "-+\n"
for name in names:
msg += f"| {name}" + " " * (max_len - len(name)) + " |\n"
msg += "+-" + "-" * max_len + "-+\n"
msg += "\n"
for ref in file_ref:
if wtype == "File":
msg += f".. _{ref[1]}:\n\n"
else:
base = os.path.basename(ref[0])
msg += f"Defined on file :ref:`{base} <{ref[1]}>`\n\n"
if wtype == "File":
msg += names[0] +"\n" + "-" * len(names[0]) +"\n\n"
desc = v.get("description")
if not desc and wtype != "File":
msg += f"DESCRIPTION MISSING for {names[0]}\n\n"
if desc:
if output_in_txt:
msg += self.desc_txt(desc)
else:
msg += self.desc_rst(desc)
symbols = v.get("symbols")
if symbols:
msg += "Has the following ABI:\n\n"
for w, label in symbols:
# Escape special chars from content
content = self.re_escape.sub(r"\\\1", w)
msg += f"- :ref:`{content} <{label}>`\n\n"
users = v.get("users")
if users and users.strip(" \t\n"):
users = users.strip("\n").replace('\n', '\n\t')
msg += f"Users:\n\t{users}\n\n"
ln = v.get("line_no", 1)
yield (msg, file_ref[0][0], ln)
def check_issues(self):
"""Warn about duplicated ABI entries"""
for what, v in self.what_symbols.items():
files = v.get("file")
if not files:
# Should never happen if the parser works properly
self.log.warning("%s doesn't have a file associated", what)
continue
if len(files) == 1:
continue
f = []
for fname, lines in sorted(files.items()):
if not lines:
f.append(f"{fname}")
elif len(lines) == 1:
f.append(f"{fname}:{lines[0]}")
else:
m = fname + "lines "
m += ", ".join(str(x) for x in lines)
f.append(m)
self.log.warning("%s is defined %d times: %s", what, len(f), "; ".join(f))
def search_symbols(self, expr):
""" Searches for ABI symbols """
regex = re.compile(expr, re.I)
found_keys = 0
for t in sorted(self.data.items(), key=lambda x: x[0]):
v = t[1]
wtype = v.get("type", "")
if wtype == "File":
continue
for what in v.get("what", [""]):
if regex.search(what):
found_keys += 1
kernelversion = v.get("kernelversion", "").strip(" \t\n")
date = v.get("date", "").strip(" \t\n")
contact = v.get("contact", "").strip(" \t\n")
users = v.get("users", "").strip(" \t\n")
desc = v.get("description", "").strip(" \t\n")
files = []
for f in v.get("file", ()):
files.append(f[0])
what = str(found_keys) + ". " + what
title_tag = "-" * len(what)
print(f"\n{what}\n{title_tag}\n")
if kernelversion:
print(f"Kernel version:\t\t{kernelversion}")
if date:
print(f"Date:\t\t\t{date}")
if contact:
print(f"Contact:\t\t{contact}")
if users:
print(f"Users:\t\t\t{users}")
print("Defined on file(s):\t" + ", ".join(files))
if desc:
desc = desc.strip("\n")
print(f"\n{desc}\n")
if not found_keys:
print(f"Regular expression /{expr}/ not found.")

View File

@@ -0,0 +1,234 @@
#!/usr/bin/env python3
# xxpylint: disable=R0903
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
# SPDX-License-Identifier: GPL-2.0
"""
Convert ABI what into regular expressions
"""
import re
import sys
from pprint import pformat
from abi_parser import AbiParser
from helpers import AbiDebug
class AbiRegex(AbiParser):
"""Extends AbiParser to search ABI nodes with regular expressions"""
# Escape only ASCII visible characters
escape_symbols = r"([\x21-\x29\x2b-\x2d\x3a-\x40\x5c\x60\x7b-\x7e])"
leave_others = "others"
# Tuples with regular expressions to be compiled and replacement data
re_whats = [
# Drop escape characters that might exist
(re.compile("\\\\"), ""),
# Temporarily escape dot characters
(re.compile(r"\."), "\xf6"),
# Temporarily change [0-9]+ type of patterns
(re.compile(r"\[0\-9\]\+"), "\xff"),
# Temporarily change [\d+-\d+] type of patterns
(re.compile(r"\[0\-\d+\]"), "\xff"),
(re.compile(r"\[0:\d+\]"), "\xff"),
(re.compile(r"\[(\d+)\]"), "\xf4\\\\d+\xf5"),
# Temporarily change [0-9] type of patterns
(re.compile(r"\[(\d)\-(\d)\]"), "\xf4\1-\2\xf5"),
# Handle multiple option patterns
(re.compile(r"[\{\<\[]([\w_]+)(?:[,|]+([\w_]+)){1,}[\}\>\]]"), r"(\1|\2)"),
# Handle wildcards
(re.compile(r"([^\/])\*"), "\\1\\\\w\xf7"),
(re.compile(r"/\*/"), "/.*/"),
(re.compile(r"/\xf6\xf6\xf6"), "/.*"),
(re.compile(r"\<[^\>]+\>"), "\\\\w\xf7"),
(re.compile(r"\{[^\}]+\}"), "\\\\w\xf7"),
(re.compile(r"\[[^\]]+\]"), "\\\\w\xf7"),
(re.compile(r"XX+"), "\\\\w\xf7"),
(re.compile(r"([^A-Z])[XYZ]([^A-Z])"), "\\1\\\\w\xf7\\2"),
(re.compile(r"([^A-Z])[XYZ]$"), "\\1\\\\w\xf7"),
(re.compile(r"_[AB]_"), "_\\\\w\xf7_"),
# Recover [0-9] type of patterns
(re.compile(r"\xf4"), "["),
(re.compile(r"\xf5"), "]"),
# Remove duplicated spaces
(re.compile(r"\s+"), r" "),
# Special case: drop comparison as in:
# What: foo = <something>
# (this happens on a few IIO definitions)
(re.compile(r"\s*\=.*$"), ""),
# Escape all other symbols
(re.compile(escape_symbols), r"\\\1"),
(re.compile(r"\\\\"), r"\\"),
(re.compile(r"\\([\[\]\(\)\|])"), r"\1"),
(re.compile(r"(\d+)\\(-\d+)"), r"\1\2"),
(re.compile(r"\xff"), r"\\d+"),
# Special case: IIO ABI which has a parenthesis.
(re.compile(r"sqrt(.*)"), r"sqrt(.*)"),
# Simplify regexes with multiple .*
(re.compile(r"(?:\.\*){2,}"), ""),
# Recover dot characters
(re.compile(r"\xf6"), "\\."),
# Recover plus characters
(re.compile(r"\xf7"), "+"),
]
re_has_num = re.compile(r"\\d")
# Symbol names (after escape_symbols processing) that are considered a devnode basename
re_symbol_name = re.compile(r"(\w|\\[\.\-\:])+$")
# List of popular group names to be skipped to minimize regex group size
# Use AbiDebug.SUBGROUP_SIZE to detect those
skip_names = set(["devices", "hwmon"])
def regex_append(self, what, new):
"""
Get a search group for a subset of regular expressions.

As ABI may have thousands of symbols, using a for loop to search all
regular expressions is at least O(n^2). When there are wildcards,
the complexity increases substantially, eventually becoming exponential.

To avoid spending too much time on them, split the regular expressions
into groups. The smaller the group, the better, as it means that
searches will be confined to a small number of regular expressions.

The conversion to a regex subset is tricky, as we need something
that can easily be obtained both from the sysfs symbol and from the
regular expression. So, we need to discard nodes that have wildcards.

If a subgroup can't be obtained, place the regular expression inside
a special group (self.leave_others).
"""
search_group = None
for search_group in reversed(new.split("/")):
if not search_group or search_group in self.skip_names:
continue
if self.re_symbol_name.match(search_group):
break
if not search_group:
search_group = self.leave_others
if self.debug & AbiDebug.SUBGROUP_MAP:
self.log.debug("%s: mapped as %s", what, search_group)
try:
if search_group not in self.regex_group:
self.regex_group[search_group] = []
self.regex_group[search_group].append(re.compile(new))
if self.search_string:
if what.find(self.search_string) >= 0:
print(f"What: {what}")
except re.PatternError:
self.log.warning("Ignoring '%s' as it produced an invalid regex:\n"
" '%s'", what, new)
def get_regexes(self, what):
"""
Given an ABI devnode, return a list of all regular expressions that
may match it, based on the sub-groups created by regex_append()
"""
re_list = []
paths = what.split("/")
paths.reverse()
paths.append(self.leave_others)
for search_group in paths:
if search_group in self.regex_group:
re_list += self.regex_group[search_group]
return re_list
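The grouping strategy above can be sketched in isolation. This is a simplified model (not the parser's actual code): regexes are bucketed under their last literal path component, so a sysfs lookup only scans one small bucket instead of every compiled expression.

```python
import re

# Bucket each regex under its last path component (a stand-in for the
# re_symbol_name check used by regex_append()).
groups = {}
for pat in (r"/sys/class/rfkill/rfkill\d+/name",
            r"/sys/class/rfkill/rfkill\d+/state",
            r"/sys/block/\w+/size"):
    key = pat.split("/")[-1]
    groups.setdefault(key, []).append(re.compile(pat))

# A lookup for a devnode only tries the bucket matching its basename.
devnode = "/sys/class/rfkill/rfkill0/name"
candidates = groups.get(devnode.split("/")[-1], [])
assert len(candidates) == 1
assert any(r.fullmatch(devnode) for r in candidates)
```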
def __init__(self, *args, **kwargs):
"""
Override init method to get verbose argument
"""
self.regex_group = None
self.search_string = None
self.re_string = None
if "search_string" in kwargs:
self.search_string = kwargs.get("search_string")
del kwargs["search_string"]
if self.search_string:
try:
self.re_string = re.compile(self.search_string)
except re.PatternError as e:
msg = f"{self.search_string} is not a valid regular expression"
raise ValueError(msg) from e
super().__init__(*args, **kwargs)
def parse_abi(self, *args, **kwargs):
super().parse_abi(*args, **kwargs)
self.regex_group = {}
print("Converting ABI What fields into regexes...", file=sys.stderr)
for _, v in sorted(self.data.items()):
if v.get("type") == "File":
continue
v["regex"] = []
for what in v.get("what", []):
if not what.startswith("/sys"):
continue
new = what
for r, s in self.re_whats:
try:
new = r.sub(s, new)
except re.PatternError as e:
# Help debugging troubles with new regexes
raise re.PatternError(f"{e}\nwhile re.sub('{r.pattern}', {s}, str)") from e
v["regex"].append(new)
if self.debug & AbiDebug.REGEX:
self.log.debug("%-90s <== %s", new, what)
# Store regex into a subgroup to speedup searches
self.regex_append(what, new)
if self.debug & AbiDebug.SUBGROUP_DICT:
self.log.debug("%s", pformat(self.regex_group))
if self.debug & AbiDebug.SUBGROUP_SIZE:
biggest_keys = sorted(self.regex_group.keys(),
key=lambda k: len(self.regex_group[k]),
reverse=True)
print("Top regex subgroups:", file=sys.stderr)
for k in biggest_keys[:10]:
print(f"{k} has {len(self.regex_group[k])} elements", file=sys.stderr)
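The substitution chain in re_whats relies on parking volatile patterns on otherwise-unused byte values, escaping the remaining text, and then expanding the placeholders into their final regex form. A minimal sketch of that technique (my simplification, not the kernel's full table):

```python
import re

def what_to_regex(what: str) -> str:
    """Toy version of the What -> regex conversion."""
    new = re.sub(r"\[0-9\]\+", "\xff", what)  # park "[0-9]+" runs
    new = re.escape(new)                      # escape everything else
    return new.replace("\xff", r"\d+")        # recover the placeholders

pattern = what_to_regex("/sys/class/rfkill/rfkill[0-9]+/name")
assert re.fullmatch(pattern, "/sys/class/rfkill/rfkill0/name")
assert not re.fullmatch(pattern, "/sys/class/rfkill/rfkillX/name")
```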

View File

@@ -0,0 +1,38 @@
#!/usr/bin/env python3
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
# pylint: disable=R0903
# SPDX-License-Identifier: GPL-2.0
"""
Helper classes for ABI parser
"""
ABI_DIR = "Documentation/ABI/"
class AbiDebug:
"""Debug levels"""
WHAT_PARSING = 1
WHAT_OPEN = 2
DUMP_ABI_STRUCTS = 4
UNDEFINED = 8
REGEX = 16
SUBGROUP_MAP = 32
SUBGROUP_DICT = 64
SUBGROUP_SIZE = 128
GRAPH = 256
DEBUG_HELP = """
1 - enable debug parsing logic
2 - enable debug messages on file open
4 - enable debug for ABI parse data
8 - enable extra debug information to identify troubles
with ABI symbols found at the local machine that
weren't found on ABI documentation (used only for
undefined subcommand)
16  - enable debug for what to regex conversion
32  - enable debug for symbol regex subgroup mapping
64  - enable debug dump of the regex subgroup dictionary
128 - enable debug for regex subgroup sizes
256 - enable debug for the sysfs graph tree
"""
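The levels above are powers of two and combine as a bitmask, so several debug facilities can be enabled at once. A self-contained illustration (constants copied locally from the class above):

```python
# AbiDebug values redefined locally so the sketch is self-contained.
REGEX, SUBGROUP_MAP, GRAPH = 16, 32, 256

debug = REGEX | SUBGROUP_MAP  # i.e. passing "48" as the debug level
assert debug & REGEX
assert debug & SUBGROUP_MAP
assert not debug & GRAPH
```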

View File

@@ -0,0 +1,378 @@
#!/usr/bin/env python3
# pylint: disable=R0902,R0912,R0914,R0915,R1702
# Copyright(c) 2025: Mauro Carvalho Chehab <mchehab@kernel.org>.
# SPDX-License-Identifier: GPL-2.0
"""
Parse ABI documentation and produce results from it.
"""
import os
import re
import sys
from concurrent import futures
from datetime import datetime
from random import shuffle
from helpers import AbiDebug
class SystemSymbols:
"""Walk sysfs and check devnodes against documented ABI symbols"""
def graph_add_file(self, path, link=None):
"""
add a file path to the sysfs graph stored at self.root
"""
if path in self.files:
return
name = ""
ref = self.root
for edge in path.split("/"):
name += edge + "/"
if edge not in ref:
ref[edge] = {"__name": [name.rstrip("/")]}
ref = ref[edge]
if link and link not in ref["__name"]:
ref["__name"].append(link.rstrip("/"))
self.files.add(path)
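The nested-dict layout built by graph_add_file() can be illustrated standalone: each path component becomes a dict key, and "__name" records the full path (symlink aliases would be appended to the same list).

```python
# Standalone sketch of the data structure graph_add_file() builds.
root = {}
for path in ("/sys/class/net", "/sys/class/rfkill"):
    name, ref = "", root
    for edge in path.split("/"):
        name += edge + "/"
        if edge not in ref:
            ref[edge] = {"__name": [name.rstrip("/")]}
        ref = ref[edge]

# Both paths share the "/sys/class" prefix in the tree.
assert root[""]["sys"]["class"]["net"]["__name"] == ["/sys/class/net"]
assert set(root[""]["sys"]["class"]) == {"__name", "net", "rfkill"}
```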
def print_graph(self, root_prefix="", root=None, level=0):
"""Prints a reference tree graph using UTF-8 characters"""
if root is None:
root = self.root
level = 0
# Prevent endless traversal
if level > 5:
return
if level > 0:
prefix = "├──"
last_prefix = "└──"
else:
prefix = ""
last_prefix = ""
items = list(root.items())
names = root.get("__name", [])
for k, edge in items:
if k == "__name":
continue
if not k:
k = "/"
if len(names) > 1:
k += " links: " + ",".join(names[1:])
if edge == items[-1][1]:
print(root_prefix + last_prefix + k)
p = root_prefix
if level > 0:
p += " "
self.print_graph(p, edge, level + 1)
else:
print(root_prefix + prefix + k)
p = root_prefix + "│  "
self.print_graph(p, edge, level + 1)
def _walk(self, root):
"""
Walk through sysfs to get all devnodes that aren't ignored.

By default, uses /sys as the sysfs mount point. If another
directory is used, it is replaced with /sys in the stored paths.
"""
with os.scandir(root) as obj:
for entry in obj:
path = os.path.join(root, entry.name)
if self.sysfs:
p = path.replace(self.sysfs, "/sys", 1)
else:
p = path
if self.re_ignore.search(p):
continue
# Handle link first to avoid directory recursion
if entry.is_symlink():
real = os.path.realpath(path)
if not self.sysfs:
self.aliases[path] = real
else:
real = real.replace(self.sysfs, "/sys", 1)
# Add absfile location to graph if it doesn't exist
if not self.re_ignore.search(real):
# Add link to the graph
self.graph_add_file(real, p)
elif entry.is_file():
self.graph_add_file(p)
elif entry.is_dir():
self._walk(path)
def __init__(self, abi, sysfs="/sys", hints=False):
"""
Initialize internal variables and get a list of all files inside
sysfs that can currently be parsed.
Please notice that there are several entries on sysfs that aren't
documented as ABI. Ignore those.
The real paths will be stored under self.files. Aliases will be
stored separately, in self.aliases.
"""
self.abi = abi
self.log = abi.log
if sysfs != "/sys":
self.sysfs = sysfs.rstrip("/")
else:
self.sysfs = None
self.hints = hints
self.root = {}
self.aliases = {}
self.files = set()
dont_walk = [
# Those require root access and aren't documented at ABI
f"^{sysfs}/kernel/debug",
f"^{sysfs}/kernel/tracing",
f"^{sysfs}/fs/pstore",
f"^{sysfs}/fs/bpf",
f"^{sysfs}/fs/fuse",
# This is not documented at ABI
f"^{sysfs}/module",
f"^{sysfs}/fs/cgroup", # this is big and has zero docs under ABI
f"^{sysfs}/firmware", # documented elsewhere: ACPI, DT bindings
"sections|notes", # aren't actually part of ABI
# kernel-parameters.txt - not easy to parse
"parameters",
]
self.re_ignore = re.compile("|".join(dont_walk))
print(f"Reading {sysfs} directory contents...", file=sys.stderr)
self._walk(sysfs)
def check_file(self, refs, found):
"""Check missing ABI symbols for a given sysfs file"""
res_list = []
try:
for names in refs:
fname = names[0]
res = {
"found": False,
"fname": fname,
"msg": "",
}
res_list.append(res)
re_what = self.abi.get_regexes(fname)
if not re_what:
self.abi.log.warning("missing rules for %s", fname)
continue
for name in names:
for r in re_what:
if self.abi.debug & AbiDebug.UNDEFINED:
self.log.debug("check if %s matches '%s'", name, r.pattern)
if r.match(name):
res["found"] = True
if found:
res["msg"] += f" {fname}: regex:\n\t"
continue
if self.hints and not res["found"]:
res["msg"] += f" {fname} not found. Tested regexes:\n"
for r in re_what:
res["msg"] += " " + r.pattern + "\n"
except KeyboardInterrupt:
pass
return res_list
def _ref_interactor(self, root):
"""Recursive generator to iterate over the sysfs graph tree"""
for k, v in root.items():
if isinstance(v, dict):
yield from self._ref_interactor(v)
if root == self.root or k == "__name":
continue
if self.abi.re_string:
fname = v["__name"][0]
if self.abi.re_string.search(fname):
yield v
else:
yield v
def get_fileref(self, all_refs, chunk_size):
"""Generator that groups refs into fixed-size chunks"""
n = 0
refs = []
for ref in all_refs:
refs.append(ref)
n += 1
if n >= chunk_size:
yield refs
n = 0
refs = []
yield refs
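The chunking generator above behaves like this simplified version. Note the unconditional trailing yield: the final batch may be empty, which is why the caller guards submissions with `if refs:`.

```python
def chunks(items, size):
    # Group items into lists of at most `size` elements; a final,
    # possibly empty, remainder batch is always yielded.
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) >= size:
            yield batch
            batch = []
    yield batch

assert list(chunks(range(5), 2)) == [[0, 1], [2, 3], [4]]
assert list(chunks(range(4), 2)) == [[0, 1], [2, 3], []]
```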
def check_undefined_symbols(self, max_workers=None, chunk_size=50,
found=None, dry_run=None):
"""Search ABI for sysfs symbols missing documentation"""
self.abi.parse_abi()
if self.abi.debug & AbiDebug.GRAPH:
self.print_graph()
all_refs = []
for ref in self._ref_interactor(self.root):
all_refs.append(ref["__name"])
if dry_run:
print("Would check", file=sys.stderr)
for ref in all_refs:
print(", ".join(ref))
return
print("Starting to search symbols (it may take several minutes):",
file=sys.stderr)
start = datetime.now()
old_elapsed = None
# Python threads don't run in parallel for CPU-bound work due to
# the global interpreter lock (GIL). While Python 3.13 finally made
# the GIL optional, there are still issues related to it. Also, we
# want to keep backward compatibility with older versions of Python.
#
# So, use multiprocessing instead. However, Python is very slow at
# passing data from/to multiple processes. It may also consume lots
# of memory if the shared data is not small. So, we need to group
# the workload into chunks that are big enough to generate
# performance gains, while not being so big that they would cause
# out-of-memory conditions.
num_refs = len(all_refs)
print(f"Number of references to parse: {num_refs}", file=sys.stderr)
if not max_workers:
max_workers = os.cpu_count()
elif max_workers > os.cpu_count():
max_workers = os.cpu_count()
max_workers = max(max_workers, 1)
max_chunk_size = (num_refs + max_workers - 1) // max_workers
chunk_size = min(chunk_size, max_chunk_size)
chunk_size = max(1, chunk_size)
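The sizing logic just above can be condensed into a hypothetical helper (the function name is mine): workers are clamped to the CPU count, and the chunk size is shrunk so that every worker receives at least one chunk.

```python
import os

def plan(num_refs, chunk_size=50, max_workers=None):
    # Clamp workers to [1, cpu_count] and chunks to [1, ceil(refs/workers)].
    cpus = os.cpu_count() or 1
    workers = max(1, min(max_workers or cpus, cpus))
    max_chunk = (num_refs + workers - 1) // workers
    return workers, max(1, min(chunk_size, max_chunk))

assert plan(10, chunk_size=50, max_workers=1) == (1, 10)
assert plan(0, max_workers=1) == (1, 1)   # degenerate case stays valid
```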
if max_workers > 1:
executor = futures.ProcessPoolExecutor
# Place references in a random order. This may help improving
# performance, by mixing complex/simple expressions when creating
# chunks
shuffle(all_refs)
else:
# Python has a high overhead when creating processes. When there's
# just one worker, it is faster not to create a new process.
# Yet, the user still deserves a progress report. So, use
# Python's threads, which run within a single process, using
# an internal scheduler to switch between tasks. There are no
# performance gains for non-IO tasks, but the work can still be
# interrupted from time to time to display progress.
executor = futures.ThreadPoolExecutor
not_found = []
f_list = []
with executor(max_workers=max_workers) as exe:
for refs in self.get_fileref(all_refs, chunk_size):
if refs:
try:
f_list.append(exe.submit(self.check_file, refs, found))
except KeyboardInterrupt:
return
total = len(f_list)
if not total:
if self.abi.re_string:
print(f"No ABI symbol matches {self.abi.search_string}")
else:
self.abi.log.warning("No ABI symbols found")
return
print(f"{len(f_list):6d} jobs queued on {max_workers} workers",
file=sys.stderr)
while f_list:
try:
t = futures.wait(f_list, timeout=1,
return_when=futures.FIRST_COMPLETED)
done = t[0]
for fut in done:
res_list = fut.result()
for res in res_list:
if not res["found"]:
not_found.append(res["fname"])
if res["msg"]:
print(res["msg"])
f_list.remove(fut)
except KeyboardInterrupt:
return
except RuntimeError as e:
self.abi.log.warning("Future: %s", e)
break
if sys.stderr.isatty():
elapsed = str(datetime.now() - start).split(".", maxsplit=1)[0]
if len(f_list) < total:
elapsed += f" ({total - len(f_list)}/{total} jobs completed). "
if elapsed != old_elapsed:
print(elapsed + "\r", end="", flush=True,
file=sys.stderr)
old_elapsed = elapsed
elapsed = str(datetime.now() - start).split(".", maxsplit=1)[0]
print(elapsed, file=sys.stderr)
for f in sorted(not_found):
print(f"{f} not found.")