Merge tag 'net-6.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from bluetooth and netfilter.

  Current release - regressions:

   - dsa: sja1105: fix reception from VLAN-unaware bridges

   - Revert "net: stmmac: set PP_FLAG_DMA_SYNC_DEV only if XDP is
     enabled"

   - eth: fec: don't save PTP state if PTP is unsupported

  Current release - new code bugs:

   - smc: fix lack of icsk_syn_mss with IPPROTO_SMC, prevent null-deref

   - eth: airoha: update Tx CPU DMA ring idx at the end of xmit loop

   - phy: aquantia: AQR115c fix up PMA capabilities

  Previous releases - regressions:

   - tcp: 3 fixes for retrans_stamp and undo logic

  Previous releases - always broken:

   - net: do not delay dst_entries_add() in dst_release()

   - netfilter: restrict xtables extensions to the families they are
     safe for; syzbot found a way to combine ebtables with extensions
     that are never used by userspace tools

   - sctp: ensure sk_state is set to CLOSED if hashing fails in
     sctp_listen_start

   - mptcp: handle DSS corruption consistently, and prevent corruption
     due to large-pmtu xmit"

* tag 'net-6.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (87 commits)
  MAINTAINERS: Add headers and mailing list to UDP section
  MAINTAINERS: consistently exclude wireless files from NETWORKING [GENERAL]
  slip: make slhc_remember() more robust against malicious packets
  net/smc: fix lacks of icsk_syn_mss with IPPROTO_SMC
  ppp: fix ppp_async_encode() illegal access
  docs: netdev: document guidance on cleanup patches
  phonet: Handle error of rtnl_register_module().
  mpls: Handle error of rtnl_register_module().
  mctp: Handle error of rtnl_register_module().
  bridge: Handle error of rtnl_register_module().
  vxlan: Handle error of rtnl_register_module().
  rtnetlink: Add bulk registration helpers for rtnetlink message handlers.
  net: do not delay dst_entries_add() in dst_release()
  mptcp: pm: do not remove closing subflows
  mptcp: fallback when MPTCP opts are dropped after 1st data
  tcp: fix mptcp DSS corruption due to large pmtu xmit
  mptcp: handle consistently DSS corruption
  net: netconsole: fix wrong warning
  net: dsa: refuse cross-chip mirroring operations
  net: fec: don't save PTP state if PTP is unsupported
  ...
Linus Torvalds
2024-10-10 12:36:35 -07:00
115 changed files with 1360 additions and 509 deletions

View File

@@ -9,7 +9,7 @@ segments between trusted peers. It adds a new TCP header option with
a Message Authentication Code (MAC). MACs are produced from the content
of a TCP segment using a hashing function with a password known to both peers.
The intent of TCP-AO is to deprecate TCP-MD5 providing better security,
key rotation and support for variety of hashing algorithms.
key rotation and support for a variety of hashing algorithms.
1. Introduction
===============
@@ -164,9 +164,9 @@ A: It should not, no action needs to be performed [7.5.2.e]::
is not available, no action is required (RNextKeyID of a received
segment needs to match the MKTs SendID).
Q: How current_key is set and when does it change? It is a user-triggered
change, or is it by a request from the remote peer? Is it set by the user
explicitly, or by a matching rule?
Q: How is current_key set, and when does it change? Is it a user-triggered
change, or is it triggered by a request from the remote peer? Is it set by the
user explicitly, or by a matching rule?
A: current_key is set by RNextKeyID [6.1]::
@@ -233,8 +233,8 @@ always have one current_key [3.3]::
Q: Can a non-TCP-AO connection become a TCP-AO-enabled one?
A: No: for already established non-TCP-AO connection it would be impossible
to switch using TCP-AO as the traffic key generation requires the initial
A: No: for an already established non-TCP-AO connection it would be impossible
to switch to using TCP-AO, as the traffic key generation requires the initial
sequence numbers. Paraphrasing, starting using TCP-AO would require
re-establishing the TCP connection.
@@ -292,7 +292,7 @@ no transparency is really needed and modern BGP daemons already have
Linux provides a set of ``setsockopt()s`` and ``getsockopt()s`` that let
userspace manage TCP-AO on a per-socket basis. In order to add/delete MKTs
``TCP_AO_ADD_KEY`` and ``TCP_AO_DEL_KEY`` TCP socket options must be used
``TCP_AO_ADD_KEY`` and ``TCP_AO_DEL_KEY`` TCP socket options must be used.
It is not allowed to add a key on an established non-TCP-AO connection
as well as to remove the last key from TCP-AO connection.
@@ -361,7 +361,7 @@ not implemented.
4. ``setsockopt()`` vs ``accept()`` race
========================================
In contrast with TCP-MD5 established connection which has just one key,
In contrast with an established TCP-MD5 connection which has just one key,
TCP-AO connections may have many keys, which means that accepted connections
on a listen socket may have any amount of keys as well. As copying all those
keys on a first properly signed SYN would make the request socket bigger, that
@@ -374,7 +374,7 @@ keys from sockets that were already established, but not yet ``accept()``'ed,
hanging in the accept queue.
The reverse is valid as well: if userspace adds a new key for a peer on
a listener socket, the established sockets in accept queue won't
a listener socket, the established sockets in the accept queue won't
have the new keys.
At this moment, the resolution for the two races:
@@ -382,7 +382,7 @@ At this moment, the resolution for the two races:
and ``setsockopt(TCP_AO_DEL_KEY)`` vs ``accept()`` is delegated to userspace.
This means that it's expected that userspace would check the MKTs on the socket
that was returned by ``accept()`` to verify that any key rotation that
happened on listen socket is reflected on the newly established connection.
happened on the listen socket is reflected on the newly established connection.
This is a similar "do-nothing" approach to TCP-MD5 from the kernel side and
may be changed later by introducing new flags to ``tcp_ao_add``
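
To make the ``TCP_AO_ADD_KEY`` flow above concrete, a minimal userspace sketch follows. It is not part of this merge: the ``struct tcp_ao_add`` field names are recalled from include/uapi/linux/tcp.h and the "hmac(sha1)" algorithm string is assumed to be accepted, so verify both against the headers and documentation you actually build against.

/* Sketch: install one MKT on a socket before the connection is established.
 * Field names (addr, alg_name, prefix, sndid, rcvid, keylen, key) are
 * assumptions taken from the uapi header; adjust as needed.
 */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/tcp.h>		/* struct tcp_ao_add, TCP_AO_ADD_KEY */

static int add_ao_key(int sk, const struct sockaddr_in *peer,
		      const char *secret, uint8_t sndid, uint8_t rcvid)
{
	struct tcp_ao_add key = {};

	memcpy(&key.addr, peer, sizeof(*peer));	/* peer this MKT matches */
	strcpy(key.alg_name, "hmac(sha1)");	/* assumed crypto API name */
	key.prefix = 32;			/* full IPv4 address match */
	key.sndid = sndid;
	key.rcvid = rcvid;
	key.keylen = strlen(secret);		/* must fit TCP_AO_MAXKEYLEN */
	memcpy(key.key, secret, key.keylen);

	/* Per the text above, this must be done before the connection is
	 * established; ``TCP_AO_DEL_KEY`` removes an MKT by the same tuple.
	 */
	return setsockopt(sk, IPPROTO_TCP, TCP_AO_ADD_KEY, &key, sizeof(key));
}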

View File

@@ -355,6 +355,8 @@ just do it. As a result, a sequence of smaller series gets merged quicker and
with better review coverage. Re-posting large series also increases the mailing
list traffic.
.. _rcs:
Local variable ordering ("reverse xmas tree", "RCS")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -391,6 +393,21 @@ APIs and helpers, especially scoped iterators. However, direct use of
``__free()`` within networking core and drivers is discouraged.
Similar guidance applies to declaring variables mid-function.
Clean-up patches
~~~~~~~~~~~~~~~~
Netdev discourages patches which perform simple clean-ups, which are not in
the context of other work. For example:
* Addressing ``checkpatch.pl`` warnings
* Addressing :ref:`Local variable ordering<rcs>` issues
* Conversions to device-managed APIs (``devm_`` helpers)
This is because it is felt that the churn that such changes produce comes
at a greater cost than the value of such clean-ups.
Conversely, spelling and grammar fixes are not discouraged.
Resending after review
~~~~~~~~~~~~~~~~~~~~~~

View File

@@ -10270,7 +10270,7 @@ F: Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml
F: drivers/bus/hisi_lpc.c
HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3)
M: Yisen Zhuang <yisen.zhuang@huawei.com>
M: Jian Shen <shenjian15@huawei.com>
M: Salil Mehta <salil.mehta@huawei.com>
M: Jijie Shao <shaojijie@huawei.com>
L: netdev@vger.kernel.org
@@ -10279,7 +10279,7 @@ W: http://www.hisilicon.com
F: drivers/net/ethernet/hisilicon/hns3/
HISILICON NETWORK SUBSYSTEM DRIVER
M: Yisen Zhuang <yisen.zhuang@huawei.com>
M: Jian Shen <shenjian15@huawei.com>
M: Salil Mehta <salil.mehta@huawei.com>
L: netdev@vger.kernel.org
S: Maintained
@@ -16201,8 +16201,19 @@ F: lib/random32.c
F: net/
F: tools/net/
F: tools/testing/selftests/net/
X: Documentation/networking/mac80211-injection.rst
X: Documentation/networking/mac80211_hwsim/
X: Documentation/networking/regulatory.rst
X: include/net/cfg80211.h
X: include/net/ieee80211_radiotap.h
X: include/net/iw_handler.h
X: include/net/mac80211.h
X: include/net/wext.h
X: net/9p/
X: net/bluetooth/
X: net/mac80211/
X: net/rfkill/
X: net/wireless/
NETWORKING [IPSEC]
M: Steffen Klassert <steffen.klassert@secunet.com>
@@ -24177,8 +24188,12 @@ F: drivers/usb/host/xhci*
USER DATAGRAM PROTOCOL (UDP)
M: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
L: netdev@vger.kernel.org
S: Maintained
F: include/linux/udp.h
F: include/net/udp.h
F: include/trace/events/udp.h
F: include/uapi/linux/udp.h
F: net/ipv4/udp.c
F: net/ipv6/udp.c

View File

@@ -4038,16 +4038,29 @@ static void btusb_disconnect(struct usb_interface *intf)
static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
{
struct btusb_data *data = usb_get_intfdata(intf);
int err;
BT_DBG("intf %p", intf);
/* Don't suspend if there are connections */
if (hci_conn_count(data->hdev))
/* Don't auto-suspend if there are connections; external suspend calls
* shall never fail.
*/
if (PMSG_IS_AUTO(message) && hci_conn_count(data->hdev))
return -EBUSY;
if (data->suspend_count++)
return 0;
/* Notify Host stack to suspend; this has to be done before stopping
* the traffic since the hci_suspend_dev itself may generate some
* traffic.
*/
err = hci_suspend_dev(data->hdev);
if (err) {
data->suspend_count--;
return err;
}
spin_lock_irq(&data->txlock);
if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {
set_bit(BTUSB_SUSPENDING, &data->flags);
@@ -4055,6 +4068,7 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
} else {
spin_unlock_irq(&data->txlock);
data->suspend_count--;
hci_resume_dev(data->hdev);
return -EBUSY;
}
@@ -4175,6 +4189,8 @@ static int btusb_resume(struct usb_interface *intf)
spin_unlock_irq(&data->txlock);
schedule_work(&data->work);
hci_resume_dev(data->hdev);
return 0;
failed:

View File

@@ -27,6 +27,7 @@
#include <linux/phylink.h>
#include <linux/etherdevice.h>
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
#include <net/dsa.h>
#include "b53_regs.h"
@@ -224,6 +225,9 @@ static const struct b53_mib_desc b53_mibs_58xx[] = {
#define B53_MIBS_58XX_SIZE ARRAY_SIZE(b53_mibs_58xx)
#define B53_MAX_MTU_25 (1536 - ETH_HLEN - VLAN_HLEN - ETH_FCS_LEN)
#define B53_MAX_MTU (9720 - ETH_HLEN - VLAN_HLEN - ETH_FCS_LEN)
static int b53_do_vlan_op(struct b53_device *dev, u8 op)
{
unsigned int i;
@@ -2254,20 +2258,25 @@ static int b53_change_mtu(struct dsa_switch *ds, int port, int mtu)
bool allow_10_100;
if (is5325(dev) || is5365(dev))
return -EOPNOTSUPP;
return 0;
if (!dsa_is_cpu_port(ds, port))
return 0;
enable_jumbo = (mtu >= JMS_MIN_SIZE);
allow_10_100 = (dev->chip_id == BCM583XX_DEVICE_ID);
enable_jumbo = (mtu > ETH_DATA_LEN);
allow_10_100 = !is63xx(dev);
return b53_set_jumbo(dev, enable_jumbo, allow_10_100);
}
static int b53_get_max_mtu(struct dsa_switch *ds, int port)
{
return JMS_MAX_SIZE;
struct b53_device *dev = ds->priv;
if (is5325(dev) || is5365(dev))
return B53_MAX_MTU_25;
return B53_MAX_MTU;
}
static const struct phylink_mac_ops b53_phylink_mac_ops = {

View File

@@ -6,6 +6,7 @@
#include <linux/module.h>
#include <linux/gpio/consumer.h>
#include <linux/regmap.h>
#include <linux/iopoll.h>
#include <linux/mutex.h>
#include <linux/mii.h>
#include <linux/of.h>
@@ -839,6 +840,8 @@ static void lan9303_handle_reset(struct lan9303 *chip)
if (!chip->reset_gpio)
return;
gpiod_set_value_cansleep(chip->reset_gpio, 1);
if (chip->reset_duration != 0)
msleep(chip->reset_duration);
@@ -864,8 +867,34 @@ static int lan9303_disable_processing(struct lan9303 *chip)
static int lan9303_check_device(struct lan9303 *chip)
{
int ret;
int err;
u32 reg;
/* In I2C-managed configurations this polling loop will clash with
* switch's reading of EEPROM right after reset and this behaviour is
* not configurable. While lan9303_read() already has quite long retry
* timeout, seems not all cases are being detected as arbitration error.
*
* According to datasheet, EEPROM loader has 30ms timeout (in case of
* missing EEPROM).
*
* Loading of the largest supported EEPROM is expected to take at least
* 5.9s.
*/
err = read_poll_timeout(lan9303_read, ret,
!ret && reg & LAN9303_HW_CFG_READY,
20000, 6000000, false,
chip->regmap, LAN9303_HW_CFG, &reg);
if (ret) {
dev_err(chip->dev, "failed to read HW_CFG reg: %pe\n",
ERR_PTR(ret));
return ret;
}
if (err) {
dev_err(chip->dev, "HW_CFG not ready: 0x%08x\n", reg);
return err;
}
ret = lan9303_read(chip->regmap, LAN9303_CHIP_REV, &reg);
if (ret) {
dev_err(chip->dev, "failed to read chip revision register: %d\n",

View File

@@ -3158,7 +3158,6 @@ static int sja1105_setup(struct dsa_switch *ds)
* TPID is ETH_P_SJA1105, and the VLAN ID is the port pvid.
*/
ds->vlan_filtering_is_global = true;
ds->untag_bridge_pvid = true;
ds->fdb_isolation = true;
ds->max_num_bridges = DSA_TAG_8021Q_MAX_NUM_BRIDGES;

View File

@@ -318,11 +318,11 @@ static int adin1110_read_fifo(struct adin1110_port_priv *port_priv)
* from the ADIN1110 frame header.
*/
if (frame_size < ADIN1110_FRAME_HEADER_LEN + ADIN1110_FEC_LEN)
return ret;
return -EINVAL;
round_len = adin1110_round_len(frame_size);
if (round_len < 0)
return ret;
return -EINVAL;
frame_size_no_fcs = frame_size - ADIN1110_FRAME_HEADER_LEN - ADIN1110_FEC_LEN;
memset(priv->data, 0, ADIN1110_RD_HEADER_LEN);

View File

@@ -105,10 +105,6 @@ static struct net_device * __init mvme147lance_probe(void)
macaddr[3] = address&0xff;
eth_hw_addr_set(dev, macaddr);
printk("%s: MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n",
dev->name, dev->base_addr, MVME147_LANCE_IRQ,
dev->dev_addr);
lp = netdev_priv(dev);
lp->ram = __get_dma_pages(GFP_ATOMIC, 3); /* 32K */
if (!lp->ram) {
@@ -138,6 +134,9 @@ static struct net_device * __init mvme147lance_probe(void)
return ERR_PTR(err);
}
netdev_info(dev, "MVME147 at 0x%08lx, irq %d, Hardware Address %pM\n",
dev->base_addr, MVME147_LANCE_IRQ, dev->dev_addr);
return dev;
}

View File

@@ -1906,7 +1906,12 @@ static int ftgmac100_probe(struct platform_device *pdev)
goto err_phy_connect;
}
phydev = fixed_phy_register(PHY_POLL, &ncsi_phy_status, NULL);
phydev = fixed_phy_register(PHY_POLL, &ncsi_phy_status, np);
if (IS_ERR(phydev)) {
dev_err(&pdev->dev, "failed to register fixed PHY device\n");
err = PTR_ERR(phydev);
goto err_phy_connect;
}
err = phy_connect_direct(netdev, phydev, ftgmac100_adjust_link,
PHY_INTERFACE_MODE_MII);
if (err) {

View File

@@ -1077,7 +1077,8 @@ fec_restart(struct net_device *ndev)
u32 rcntl = OPT_FRAME_SIZE | 0x04;
u32 ecntl = FEC_ECR_ETHEREN;
fec_ptp_save_state(fep);
if (fep->bufdesc_ex)
fec_ptp_save_state(fep);
/* Whack a reset. We should wait for this.
* For i.MX6SX SOC, enet use AXI bus, we use disable MAC
@@ -1340,7 +1341,8 @@ fec_stop(struct net_device *ndev)
netdev_err(ndev, "Graceful transmit stop did not complete!\n");
}
fec_ptp_save_state(fep);
if (fep->bufdesc_ex)
fec_ptp_save_state(fep);
/* Whack a reset. We should wait for this.
* For i.MX6SX SOC, enet use AXI bus, we use disable MAC

View File

@@ -578,7 +578,7 @@ static int mal_probe(struct platform_device *ofdev)
printk(KERN_ERR "%pOF: Support for 405EZ not enabled!\n",
ofdev->dev.of_node);
err = -ENODEV;
goto fail;
goto fail_unmap;
#endif
}
@@ -742,6 +742,8 @@ static void mal_remove(struct platform_device *ofdev)
free_netdev(mal->dummy_dev);
dcr_unmap(mal->dcr_host, 0x100);
dma_free_coherent(&ofdev->dev,
sizeof(struct mal_descriptor) *
(NUM_TX_BUFF * mal->num_tx_chans +

View File

@@ -2472,9 +2472,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
/* if we are going to send_subcrq_direct this then we need to
* update the checksum before copying the data into ltb. Essentially
* these packets force disable CSO so that we can guarantee that
* FW does not need header info and we can send direct.
* FW does not need header info and we can send direct. Also, vnic
* server must be able to xmit standard packets without header data
*/
if (!skb_is_gso(skb) && !ind_bufp->index && !netdev_xmit_more()) {
if (*hdrs == 0 && !skb_is_gso(skb) &&
!ind_bufp->index && !netdev_xmit_more()) {
use_scrq_send_direct = true;
if (skb->ip_summed == CHECKSUM_PARTIAL &&
skb_checksum_help(skb))

View File

@@ -108,8 +108,8 @@ struct e1000_hw;
#define E1000_DEV_ID_PCH_RPL_I219_V22 0x0DC8
#define E1000_DEV_ID_PCH_MTP_I219_LM18 0x550A
#define E1000_DEV_ID_PCH_MTP_I219_V18 0x550B
#define E1000_DEV_ID_PCH_MTP_I219_LM19 0x550C
#define E1000_DEV_ID_PCH_MTP_I219_V19 0x550D
#define E1000_DEV_ID_PCH_ADP_I219_LM19 0x550C
#define E1000_DEV_ID_PCH_ADP_I219_V19 0x550D
#define E1000_DEV_ID_PCH_LNP_I219_LM20 0x550E
#define E1000_DEV_ID_PCH_LNP_I219_V20 0x550F
#define E1000_DEV_ID_PCH_LNP_I219_LM21 0x5510

View File

@@ -7899,10 +7899,10 @@ static const struct pci_device_id e1000_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_adp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_adp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_adp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM19), board_pch_adp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V19), board_pch_adp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_mtp },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_mtp },

View File

@@ -1734,6 +1734,7 @@ struct i40e_mac_filter *i40e_add_mac_filter(struct i40e_vsi *vsi,
struct hlist_node *h;
int bkt;
lockdep_assert_held(&vsi->mac_filter_hash_lock);
if (vsi->info.pvid)
return i40e_add_filter(vsi, macaddr,
le16_to_cpu(vsi->info.pvid));

View File

@@ -2213,8 +2213,10 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
vfres->vsi_res[0].qset_handle
= le16_to_cpu(vsi->info.qs_handle[0]);
if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) {
spin_lock_bh(&vsi->mac_filter_hash_lock);
i40e_del_mac_filter(vsi, vf->default_lan_addr.addr);
eth_zero_addr(vf->default_lan_addr.addr);
spin_unlock_bh(&vsi->mac_filter_hash_lock);
}
ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
vf->default_lan_addr.addr);

View File

@@ -31,7 +31,7 @@ static const struct ice_tunnel_type_scan tnls[] = {
* Verifies various attributes of the package file, including length, format
* version, and the requirement of at least one segment.
*/
static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
static enum ice_ddp_state ice_verify_pkg(const struct ice_pkg_hdr *pkg, u32 len)
{
u32 seg_count;
u32 i;
@@ -57,13 +57,13 @@ static enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
/* all segments must fit within length */
for (i = 0; i < seg_count; i++) {
u32 off = le32_to_cpu(pkg->seg_offset[i]);
struct ice_generic_seg_hdr *seg;
const struct ice_generic_seg_hdr *seg;
/* segment header must fit */
if (len < off + sizeof(*seg))
return ICE_DDP_PKG_INVALID_FILE;
seg = (struct ice_generic_seg_hdr *)((u8 *)pkg + off);
seg = (void *)pkg + off;
/* segment body must fit */
if (len < off + le32_to_cpu(seg->seg_size))
@@ -119,13 +119,13 @@ static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver)
*
* This helper function validates a buffer's header.
*/
static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf)
static const struct ice_buf_hdr *ice_pkg_val_buf(const struct ice_buf *buf)
{
struct ice_buf_hdr *hdr;
const struct ice_buf_hdr *hdr;
u16 section_count;
u16 data_end;
hdr = (struct ice_buf_hdr *)buf->buf;
hdr = (const struct ice_buf_hdr *)buf->buf;
/* verify data */
section_count = le16_to_cpu(hdr->section_count);
if (section_count < ICE_MIN_S_COUNT || section_count > ICE_MAX_S_COUNT)
@@ -165,8 +165,8 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
* unexpected value has been detected (for example an invalid section count or
* an invalid buffer end value).
*/
static struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg,
struct ice_pkg_enum *state)
static const struct ice_buf_hdr *ice_pkg_enum_buf(struct ice_seg *ice_seg,
struct ice_pkg_enum *state)
{
if (ice_seg) {
state->buf_table = ice_find_buf_table(ice_seg);
@@ -1800,9 +1800,9 @@ int ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
* success it returns a pointer to the segment header, otherwise it will
* return NULL.
*/
static struct ice_generic_seg_hdr *
static const struct ice_generic_seg_hdr *
ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
struct ice_pkg_hdr *pkg_hdr)
const struct ice_pkg_hdr *pkg_hdr)
{
u32 i;
@@ -1813,11 +1813,9 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
/* Search all package segments for the requested segment type */
for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
struct ice_generic_seg_hdr *seg;
const struct ice_generic_seg_hdr *seg;
seg = (struct ice_generic_seg_hdr
*)((u8 *)pkg_hdr +
le32_to_cpu(pkg_hdr->seg_offset[i]));
seg = (void *)pkg_hdr + le32_to_cpu(pkg_hdr->seg_offset[i]);
if (le32_to_cpu(seg->seg_type) == seg_type)
return seg;
@@ -2354,12 +2352,12 @@ ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
*
* Return: zero when update was successful, negative values otherwise.
*/
int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len)
{
u8 *current_topo, *new_topo = NULL;
struct ice_run_time_cfg_seg *seg;
struct ice_buf_hdr *section;
struct ice_pkg_hdr *pkg_hdr;
u8 *new_topo = NULL, *topo __free(kfree) = NULL;
const struct ice_run_time_cfg_seg *seg;
const struct ice_buf_hdr *section;
const struct ice_pkg_hdr *pkg_hdr;
enum ice_ddp_state state;
u16 offset, size = 0;
u32 reg = 0;
@@ -2375,15 +2373,13 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
return -EOPNOTSUPP;
}
current_topo = kzalloc(ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
if (!current_topo)
topo = kzalloc(ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
if (!topo)
return -ENOMEM;
/* Get the current Tx topology */
status = ice_get_set_tx_topo(hw, current_topo, ICE_AQ_MAX_BUF_LEN, NULL,
&flags, false);
kfree(current_topo);
/* Get the current Tx topology flags */
status = ice_get_set_tx_topo(hw, topo, ICE_AQ_MAX_BUF_LEN, NULL, &flags,
false);
if (status) {
ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n");
@@ -2419,7 +2415,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
goto update_topo;
}
pkg_hdr = (struct ice_pkg_hdr *)buf;
pkg_hdr = (const struct ice_pkg_hdr *)buf;
state = ice_verify_pkg(pkg_hdr, len);
if (state) {
ice_debug(hw, ICE_DBG_INIT, "Failed to verify pkg (err: %d)\n",
@@ -2428,7 +2424,7 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
}
/* Find runtime configuration segment */
seg = (struct ice_run_time_cfg_seg *)
seg = (const struct ice_run_time_cfg_seg *)
ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE_RUN_TIME_CFG, pkg_hdr);
if (!seg) {
ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment is missing\n");
@@ -2461,8 +2457,10 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
return -EIO;
}
/* Get the new topology buffer */
new_topo = ((u8 *)section) + offset;
/* Get the new topology buffer, reuse current topo copy mem */
static_assert(ICE_PKG_BUF_SIZE == ICE_AQ_MAX_BUF_LEN);
new_topo = topo;
memcpy(new_topo, (u8 *)section + offset, size);
update_topo:
/* Acquire global lock to make sure that set topology issued

View File

@@ -438,7 +438,7 @@ struct ice_pkg_enum {
u32 buf_idx;
u32 type;
struct ice_buf_hdr *buf;
const struct ice_buf_hdr *buf;
u32 sect_idx;
void *sect;
u32 sect_type;
@@ -467,6 +467,6 @@ ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
void *ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
u32 sect_type);
int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len);
int ice_cfg_tx_topo(struct ice_hw *hw, const void *buf, u32 len);
#endif

View File

@@ -656,6 +656,8 @@ ice_dpll_output_state_set(const struct dpll_pin *pin, void *pin_priv,
struct ice_dpll_pin *p = pin_priv;
struct ice_dpll *d = dpll_priv;
if (state == DPLL_PIN_STATE_SELECTABLE)
return -EINVAL;
if (!enable && p->state[d->dpll_idx] == DPLL_PIN_STATE_DISCONNECTED)
return 0;
@@ -1843,6 +1845,8 @@ ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
struct dpll_pin *parent;
int ret, i;
if (WARN_ON((!vsi || !vsi->netdev)))
return -EINVAL;
ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF,
pf->dplls.clock_id);
if (ret)
@@ -1858,8 +1862,6 @@ ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
if (ret)
goto unregister_pins;
}
if (WARN_ON((!vsi || !vsi->netdev)))
return -EINVAL;
dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin);
return 0;

View File

@@ -582,10 +582,13 @@ ice_eswitch_br_switchdev_event(struct notifier_block *nb,
return NOTIFY_DONE;
}
static void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge)
void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge)
{
struct ice_esw_br_fdb_entry *entry, *tmp;
if (!bridge)
return;
list_for_each_entry_safe(entry, tmp, &bridge->fdb_list, list)
ice_eswitch_br_fdb_entry_notify_and_cleanup(bridge, entry);
}

View File

@@ -117,5 +117,6 @@ void
ice_eswitch_br_offloads_deinit(struct ice_pf *pf);
int
ice_eswitch_br_offloads_init(struct ice_pf *pf);
void ice_eswitch_br_fdb_flush(struct ice_esw_br *bridge);
#endif /* _ICE_ESWITCH_BR_H_ */

View File

@@ -87,7 +87,8 @@ ice_indr_setup_tc_cb(struct net_device *netdev, struct Qdisc *sch,
bool netif_is_ice(const struct net_device *dev)
{
return dev && (dev->netdev_ops == &ice_netdev_ops);
return dev && (dev->netdev_ops == &ice_netdev_ops ||
dev->netdev_ops == &ice_netdev_safe_mode_ops);
}
/**
@@ -520,25 +521,6 @@ static void ice_pf_dis_all_vsi(struct ice_pf *pf, bool locked)
pf->vf_agg_node[node].num_vsis = 0;
}
/**
* ice_clear_sw_switch_recipes - clear switch recipes
* @pf: board private structure
*
* Mark switch recipes as not created in sw structures. There are cases where
* rules (especially advanced rules) need to be restored, either re-read from
* hardware or added again. For example after the reset. 'recp_created' flag
* prevents from doing that and need to be cleared upfront.
*/
static void ice_clear_sw_switch_recipes(struct ice_pf *pf)
{
struct ice_sw_recipe *recp;
u8 i;
recp = pf->hw.switch_info->recp_list;
for (i = 0; i < ICE_MAX_NUM_RECIPES; i++)
recp[i].recp_created = false;
}
/**
* ice_prepare_for_reset - prep for reset
* @pf: board private structure
@@ -575,8 +557,9 @@ ice_prepare_for_reset(struct ice_pf *pf, enum ice_reset_req reset_type)
mutex_unlock(&pf->vfs.table_lock);
if (ice_is_eswitch_mode_switchdev(pf)) {
if (reset_type != ICE_RESET_PFR)
ice_clear_sw_switch_recipes(pf);
rtnl_lock();
ice_eswitch_br_fdb_flush(pf->eswitch.br_offloads->bridge);
rtnl_unlock();
}
/* release ADQ specific HW and SW resources */
@@ -4536,16 +4519,10 @@ ice_init_tx_topology(struct ice_hw *hw, const struct firmware *firmware)
u8 num_tx_sched_layers = hw->num_tx_sched_layers;
struct ice_pf *pf = hw->back;
struct device *dev;
u8 *buf_copy;
int err;
dev = ice_pf_to_dev(pf);
/* ice_cfg_tx_topo buf argument is not a constant,
* so we have to make a copy
*/
buf_copy = kmemdup(firmware->data, firmware->size, GFP_KERNEL);
err = ice_cfg_tx_topo(hw, buf_copy, firmware->size);
err = ice_cfg_tx_topo(hw, firmware->data, firmware->size);
if (!err) {
if (hw->num_tx_sched_layers > num_tx_sched_layers)
dev_info(dev, "Tx scheduling layers switching feature disabled\n");
@@ -4773,14 +4750,12 @@ int ice_init_dev(struct ice_pf *pf)
ice_init_feature_support(pf);
err = ice_init_ddp_config(hw, pf);
if (err)
return err;
/* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be
* set in pf->state, which will cause ice_is_safe_mode to return
* true
*/
if (ice_is_safe_mode(pf)) {
if (err || ice_is_safe_mode(pf)) {
/* we already got function/device capabilities but these don't
* reflect what the driver needs to do in safe mode. Instead of
* adding conditional logic everywhere to ignore these

View File

@@ -1096,8 +1096,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
return -ENOENT;
vsi = ice_get_vf_vsi(vf);
if (!vsi)
if (!vsi) {
ice_put_vf(vf);
return -ENOENT;
}
prev_msix = vf->num_msix;
prev_queues = vf->num_vf_qs;
@@ -1119,7 +1121,10 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
if (vf->first_vector_idx < 0)
goto unroll;
if (ice_vf_reconfig_vsi(vf) || ice_vf_init_host_cfg(vf, vsi)) {
vsi->req_txq = queues;
vsi->req_rxq = queues;
if (ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT)) {
/* Try to rebuild with previous values */
needs_rebuild = true;
goto unroll;
@@ -1142,12 +1147,16 @@ int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
vf->num_msix = prev_msix;
vf->num_vf_qs = prev_queues;
vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
if (vf->first_vector_idx < 0)
if (vf->first_vector_idx < 0) {
ice_put_vf(vf);
return -EINVAL;
}
if (needs_rebuild) {
ice_vf_reconfig_vsi(vf);
ice_vf_init_host_cfg(vf, vsi);
vsi->req_txq = prev_queues;
vsi->req_rxq = prev_queues;
ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
}
ice_ena_vf_mappings(vf);

View File

@@ -6322,8 +6322,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
if (!itr->vsi_list_info ||
!test_bit(vsi_handle, itr->vsi_list_info->vsi_map))
continue;
/* Clearing it so that the logic can add it back */
clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
f_entry.fltr_info.vsi_handle = vsi_handle;
f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
/* update the src in case it is VSI num */

View File

@@ -819,6 +819,17 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
rule_info.sw_act.flag |= ICE_FLTR_TX;
rule_info.sw_act.src = vsi->idx;
rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
/* This is a specific case. The destination VSI index is
* overwritten by the source VSI index. This type of filter
* should allow the packet to go to the LAN, not to the
* VSI passed here. It should set LAN_EN bit only. However,
* the VSI must be a valid one. Setting source VSI index
* here is safe. Even if the result from switch is set LAN_EN
* and LB_EN (which normally will pass the packet to this VSI)
* packet won't be seen on the VSI, because local loopback is
* turned off.
*/
rule_info.sw_act.vsi_handle = vsi->idx;
} else {
/* VF to VF */
rule_info.sw_act.flag |= ICE_FLTR_TX;

View File

@@ -256,7 +256,7 @@ static void ice_vf_pre_vsi_rebuild(struct ice_vf *vf)
*
* It brings the VSI down and then reconfigures it with the hardware.
*/
int ice_vf_reconfig_vsi(struct ice_vf *vf)
static int ice_vf_reconfig_vsi(struct ice_vf *vf)
{
struct ice_vsi *vsi = ice_get_vf_vsi(vf);
struct ice_pf *pf = vf->pf;
@@ -335,6 +335,13 @@ static int ice_vf_rebuild_host_vlan_cfg(struct ice_vf *vf, struct ice_vsi *vsi)
err = vlan_ops->add_vlan(vsi, &vf->port_vlan_info);
} else {
/* clear possible previous port vlan config */
err = ice_vsi_clear_port_vlan(vsi);
if (err) {
dev_err(dev, "failed to clear port VLAN via VSI parameters for VF %u, error %d\n",
vf->vf_id, err);
return err;
}
err = ice_vsi_add_vlan_zero(vsi);
}

View File

@@ -23,7 +23,6 @@
#warning "Only include ice_vf_lib_private.h in CONFIG_PCI_IOV virtualization files"
#endif
int ice_vf_reconfig_vsi(struct ice_vf *vf);
void ice_initialize_vf_entry(struct ice_vf *vf);
void ice_dis_vf_qs(struct ice_vf *vf);
int ice_check_vf_init(struct ice_vf *vf);

View File

@@ -787,3 +787,60 @@ int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi)
kfree(ctxt);
return err;
}
int ice_vsi_clear_port_vlan(struct ice_vsi *vsi)
{
struct ice_hw *hw = &vsi->back->hw;
struct ice_vsi_ctx *ctxt;
int err;
ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
if (!ctxt)
return -ENOMEM;
ctxt->info = vsi->info;
ctxt->info.port_based_outer_vlan = 0;
ctxt->info.port_based_inner_vlan = 0;
ctxt->info.inner_vlan_flags =
FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_TX_MODE_M,
ICE_AQ_VSI_INNER_VLAN_TX_MODE_ALL);
if (ice_is_dvm_ena(hw)) {
ctxt->info.inner_vlan_flags |=
FIELD_PREP(ICE_AQ_VSI_INNER_VLAN_EMODE_M,
ICE_AQ_VSI_INNER_VLAN_EMODE_NOTHING);
ctxt->info.outer_vlan_flags =
FIELD_PREP(ICE_AQ_VSI_OUTER_VLAN_TX_MODE_M,
ICE_AQ_VSI_OUTER_VLAN_TX_MODE_ALL);
ctxt->info.outer_vlan_flags |=
FIELD_PREP(ICE_AQ_VSI_OUTER_TAG_TYPE_M,
ICE_AQ_VSI_OUTER_TAG_VLAN_8100);
ctxt->info.outer_vlan_flags |=
ICE_AQ_VSI_OUTER_VLAN_EMODE_NOTHING <<
ICE_AQ_VSI_OUTER_VLAN_EMODE_S;
}
ctxt->info.sw_flags2 &= ~ICE_AQ_VSI_SW_FLAG_RX_VLAN_PRUNE_ENA;
ctxt->info.valid_sections =
cpu_to_le16(ICE_AQ_VSI_PROP_OUTER_TAG_VALID |
ICE_AQ_VSI_PROP_VLAN_VALID |
ICE_AQ_VSI_PROP_SW_VALID);
err = ice_update_vsi(hw, vsi->idx, ctxt, NULL);
if (err) {
dev_err(ice_pf_to_dev(vsi->back), "update VSI for clearing port based VLAN failed, err %d aq_err %s\n",
err, ice_aq_str(hw->adminq.sq_last_status));
} else {
vsi->info.port_based_outer_vlan =
ctxt->info.port_based_outer_vlan;
vsi->info.port_based_inner_vlan =
ctxt->info.port_based_inner_vlan;
vsi->info.outer_vlan_flags = ctxt->info.outer_vlan_flags;
vsi->info.inner_vlan_flags = ctxt->info.inner_vlan_flags;
vsi->info.sw_flags2 = ctxt->info.sw_flags2;
}
kfree(ctxt);
return err;
}

View File

@@ -36,5 +36,6 @@ int ice_vsi_ena_outer_insertion(struct ice_vsi *vsi, u16 tpid);
int ice_vsi_dis_outer_insertion(struct ice_vsi *vsi);
int ice_vsi_set_outer_port_vlan(struct ice_vsi *vsi, struct ice_vlan *vlan);
int ice_vsi_clear_outer_port_vlan(struct ice_vsi *vsi);
int ice_vsi_clear_port_vlan(struct ice_vsi *vsi);
#endif /* _ICE_VSI_VLAN_LIB_H_ */

View File

@@ -99,6 +99,7 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
intr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;
intr->dyn_ctl_intena_msk_m = VF_INT_DYN_CTLN_INTENA_MSK_M;
intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;
intr->dyn_ctl_intrvl_s = VF_INT_DYN_CTLN_INTERVAL_S;
intr->dyn_ctl_wb_on_itr_m = VF_INT_DYN_CTLN_WB_ON_ITR_M;
spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,

View File

@@ -666,7 +666,7 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
if (ctlq_msg->data_len) {
payload = ctlq_msg->ctx.indirect.payload->va;
payload_size = ctlq_msg->ctx.indirect.payload->size;
payload_size = ctlq_msg->data_len;
}
xn->reply_sz = payload_size;
@@ -1295,10 +1295,6 @@ int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
err = reply_sz;
goto free_vport_params;
}
if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN) {
err = -EIO;
goto free_vport_params;
}
return 0;
@@ -2602,9 +2598,6 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
if (reply_sz < 0)
return reply_sz;
if (reply_sz < IDPF_CTLQ_MAX_BUF_LEN)
return -EIO;
ptypes_recvd += le16_to_cpu(ptype_info->num_ptypes);
if (ptypes_recvd > max_ptype)
return -EINVAL;
@@ -3088,9 +3081,9 @@ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
return;
idpf_vc_xn_shutdown(adapter->vcxn_mngr);
idpf_deinit_task(adapter);
idpf_intr_rel(adapter);
idpf_vc_xn_shutdown(adapter->vcxn_mngr);
cancel_delayed_work_sync(&adapter->serv_task);
cancel_delayed_work_sync(&adapter->mbx_task);

View File

@@ -9651,6 +9651,10 @@ static void igb_io_resume(struct pci_dev *pdev)
struct igb_adapter *adapter = netdev_priv(netdev);
if (netif_running(netdev)) {
if (!test_bit(__IGB_DOWN, &adapter->state)) {
dev_dbg(&pdev->dev, "Resuming from non-fatal error, do nothing.\n");
return;
}
if (igb_up(adapter)) {
dev_err(&pdev->dev, "igb_up failed after reset\n");
return;

View File

@@ -2471,10 +2471,6 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
e->dma_addr = addr;
e->dma_len = len;
airoha_qdma_rmw(qdma, REG_TX_CPU_IDX(qid),
TX_RING_CPU_IDX_MASK,
FIELD_PREP(TX_RING_CPU_IDX_MASK, index));
data = skb_frag_address(frag);
len = skb_frag_size(frag);
}
@@ -2483,6 +2479,11 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
q->queued += i;
skb_tx_timestamp(skb);
if (!netdev_xmit_more())
airoha_qdma_rmw(qdma, REG_TX_CPU_IDX(qid),
TX_RING_CPU_IDX_MASK,
FIELD_PREP(TX_RING_CPU_IDX_MASK, q->head));
if (q->ndesc - q->queued < q->free_thr)
netif_tx_stop_queue(txq);

View File

@@ -1260,7 +1260,8 @@ static int efx_poll(struct napi_struct *napi, int budget)
spent = efx_process_channel(channel, budget);
xdp_do_flush();
if (budget)
xdp_do_flush();
if (spent < budget) {
if (efx_channel_has_rx_queue(channel) &&

View File

@@ -1285,7 +1285,8 @@ static int efx_poll(struct napi_struct *napi, int budget)
spent = efx_process_channel(channel, budget);
xdp_do_flush();
if (budget)
xdp_do_flush();
if (spent < budget) {
if (efx_channel_has_rx_queue(channel) &&

View File

@@ -2035,7 +2035,7 @@ static int __alloc_dma_rx_desc_resources(struct stmmac_priv *priv,
rx_q->queue_index = queue;
rx_q->priv_data = priv;
pp_params.flags = PP_FLAG_DMA_MAP | (xdp_prog ? PP_FLAG_DMA_SYNC_DEV : 0);
pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
pp_params.pool_size = dma_conf->dma_rx_size;
num_pages = DIV_ROUND_UP(dma_conf->dma_buf_sz, PAGE_SIZE);
pp_params.order = ilog2(num_pages);

View File

@@ -2744,10 +2744,9 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
return 0;
/* alloc netdev */
port->ndev = devm_alloc_etherdev_mqs(common->dev,
sizeof(struct am65_cpsw_ndev_priv),
AM65_CPSW_MAX_QUEUES,
AM65_CPSW_MAX_QUEUES);
port->ndev = alloc_etherdev_mqs(sizeof(struct am65_cpsw_ndev_priv),
AM65_CPSW_MAX_QUEUES,
AM65_CPSW_MAX_QUEUES);
if (!port->ndev) {
dev_err(dev, "error allocating slave net_device %u\n",
port->port_id);
@@ -2868,8 +2867,12 @@ static void am65_cpsw_nuss_cleanup_ndev(struct am65_cpsw_common *common)
for (i = 0; i < common->port_num; i++) {
port = &common->ports[i];
if (port->ndev && port->ndev->reg_state == NETREG_REGISTERED)
if (!port->ndev)
continue;
if (port->ndev->reg_state == NETREG_REGISTERED)
unregister_netdev(port->ndev);
free_netdev(port->ndev);
port->ndev = NULL;
}
}
@@ -3613,16 +3616,17 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
ret = am65_cpsw_nuss_init_ndevs(common);
if (ret)
goto err_free_phylink;
goto err_ndevs_clear;
ret = am65_cpsw_nuss_register_ndevs(common);
if (ret)
goto err_free_phylink;
goto err_ndevs_clear;
pm_runtime_put(dev);
return 0;
err_free_phylink:
err_ndevs_clear:
am65_cpsw_nuss_cleanup_ndev(common);
am65_cpsw_nuss_phylink_cleanup(common);
am65_cpts_release(common->cpts);
err_of_clear:
@@ -3652,13 +3656,13 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
return;
}
am65_cpsw_unregister_devlink(common);
am65_cpsw_unregister_notifiers(common);
/* must unregister ndevs here because DD release_driver routine calls
* dma_deconfigure(dev) before devres_release_all(dev)
*/
am65_cpsw_nuss_cleanup_ndev(common);
am65_cpsw_unregister_devlink(common);
am65_cpsw_nuss_phylink_cleanup(common);
am65_cpts_release(common->cpts);
am65_cpsw_disable_serdes_phy(common);

View File

@@ -735,6 +735,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
u8 fid_c1;
tbl = prueth->vlan_tbl;
spin_lock(&prueth->vtbl_lock);
fid_c1 = tbl[vid].fid_c1;
/* FID_C1: bit0..2 port membership mask,
@@ -750,6 +751,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
}
tbl[vid].fid_c1 = fid_c1;
spin_unlock(&prueth->vtbl_lock);
}
EXPORT_SYMBOL_GPL(icssg_vtbl_modify);

View File

@@ -1442,6 +1442,7 @@ static int prueth_probe(struct platform_device *pdev)
icss_iep_init_fw(prueth->iep1);
}
spin_lock_init(&prueth->vtbl_lock);
/* setup netdev interfaces */
if (eth0_node) {
ret = prueth_netdev_init(prueth, eth0_node);

View File

@@ -296,6 +296,8 @@ struct prueth {
bool is_switchmode_supported;
unsigned char switch_id[MAX_PHYS_ITEM_ID_LEN];
int default_vlan;
/** @vtbl_lock: Lock for vtbl in shared memory */
spinlock_t vtbl_lock;
};
struct emac_tx_ts_response {

View File

@@ -1161,8 +1161,14 @@ static void send_ext_msg_udp(struct netconsole_target *nt, const char *msg,
this_chunk = min(userdata_len - sent_userdata,
MAX_PRINT_CHUNK - preceding_bytes);
if (WARN_ON_ONCE(this_chunk <= 0))
if (WARN_ON_ONCE(this_chunk < 0))
/* this_chunk could be zero if all the previous
* message used all the buffer. This is not a
* problem, userdata will be sent in the next
* iteration
*/
return;
memcpy(buf + this_header + this_offset,
userdata + sent_userdata,
this_chunk);

View File

@@ -537,12 +537,6 @@ static int aqcs109_config_init(struct phy_device *phydev)
if (!ret)
aqr107_chip_info(phydev);
/* AQCS109 belongs to a chip family partially supporting 10G and 5G.
* PMA speed ability bits are the same for all members of the family,
* AQCS109 however supports speeds up to 2.5G only.
*/
phy_set_max_speed(phydev, SPEED_2500);
return aqr107_set_downshift(phydev, MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT);
}
@@ -731,6 +725,31 @@ static int aqr113c_fill_interface_modes(struct phy_device *phydev)
return aqr107_fill_interface_modes(phydev);
}
static int aqr115c_get_features(struct phy_device *phydev)
{
unsigned long *supported = phydev->supported;
/* PHY supports speeds up to 2.5G with autoneg. PMA capabilities
* are not useful.
*/
linkmode_or(supported, supported, phy_gbit_features);
linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, supported);
return 0;
}
static int aqr111_get_features(struct phy_device *phydev)
{
/* PHY supports speeds up to 5G with autoneg. PMA capabilities
* are not useful.
*/
aqr115c_get_features(phydev);
linkmode_set_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
phydev->supported);
return 0;
}
static int aqr113c_config_init(struct phy_device *phydev)
{
int ret;
@@ -767,15 +786,6 @@ static int aqr107_probe(struct phy_device *phydev)
return aqr_hwmon_probe(phydev);
}
static int aqr111_config_init(struct phy_device *phydev)
{
/* AQR111 reports supporting speed up to 10G,
* however only speeds up to 5G are supported.
*/
phy_set_max_speed(phydev, SPEED_5000);
return aqr107_config_init(phydev);
}
static struct phy_driver aqr_driver[] = {
{
@@ -853,6 +863,7 @@ static struct phy_driver aqr_driver[] = {
.get_sset_count = aqr107_get_sset_count,
.get_strings = aqr107_get_strings,
.get_stats = aqr107_get_stats,
.get_features = aqr115c_get_features,
.link_change_notify = aqr107_link_change_notify,
.led_brightness_set = aqr_phy_led_brightness_set,
.led_hw_is_supported = aqr_phy_led_hw_is_supported,
@@ -865,7 +876,7 @@ static struct phy_driver aqr_driver[] = {
.name = "Aquantia AQR111",
.probe = aqr107_probe,
.get_rate_matching = aqr107_get_rate_matching,
.config_init = aqr111_config_init,
.config_init = aqr107_config_init,
.config_aneg = aqr_config_aneg,
.config_intr = aqr_config_intr,
.handle_interrupt = aqr_handle_interrupt,
@@ -877,6 +888,7 @@ static struct phy_driver aqr_driver[] = {
.get_sset_count = aqr107_get_sset_count,
.get_strings = aqr107_get_strings,
.get_stats = aqr107_get_stats,
.get_features = aqr111_get_features,
.link_change_notify = aqr107_link_change_notify,
.led_brightness_set = aqr_phy_led_brightness_set,
.led_hw_is_supported = aqr_phy_led_hw_is_supported,
@@ -889,7 +901,7 @@ static struct phy_driver aqr_driver[] = {
.name = "Aquantia AQR111B0",
.probe = aqr107_probe,
.get_rate_matching = aqr107_get_rate_matching,
.config_init = aqr111_config_init,
.config_init = aqr107_config_init,
.config_aneg = aqr_config_aneg,
.config_intr = aqr_config_intr,
.handle_interrupt = aqr_handle_interrupt,
@@ -901,6 +913,7 @@ static struct phy_driver aqr_driver[] = {
.get_sset_count = aqr107_get_sset_count,
.get_strings = aqr107_get_strings,
.get_stats = aqr107_get_stats,
.get_features = aqr111_get_features,
.link_change_notify = aqr107_link_change_notify,
.led_brightness_set = aqr_phy_led_brightness_set,
.led_hw_is_supported = aqr_phy_led_hw_is_supported,
@@ -1010,7 +1023,7 @@ static struct phy_driver aqr_driver[] = {
.name = "Aquantia AQR114C",
.probe = aqr107_probe,
.get_rate_matching = aqr107_get_rate_matching,
.config_init = aqr111_config_init,
.config_init = aqr107_config_init,
.config_aneg = aqr_config_aneg,
.config_intr = aqr_config_intr,
.handle_interrupt = aqr_handle_interrupt,
@@ -1022,6 +1035,7 @@ static struct phy_driver aqr_driver[] = {
.get_sset_count = aqr107_get_sset_count,
.get_strings = aqr107_get_strings,
.get_stats = aqr107_get_stats,
.get_features = aqr111_get_features,
.link_change_notify = aqr107_link_change_notify,
.led_brightness_set = aqr_phy_led_brightness_set,
.led_hw_is_supported = aqr_phy_led_hw_is_supported,
@@ -1046,6 +1060,7 @@ static struct phy_driver aqr_driver[] = {
.get_sset_count = aqr107_get_sset_count,
.get_strings = aqr107_get_strings,
.get_stats = aqr107_get_stats,
.get_features = aqr115c_get_features,
.link_change_notify = aqr107_link_change_notify,
.led_brightness_set = aqr_phy_led_brightness_set,
.led_hw_is_supported = aqr_phy_led_hw_is_supported,

View File

@@ -132,7 +132,7 @@ static int bcm84881_aneg_done(struct phy_device *phydev)
bmsr = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_C22 + MII_BMSR);
if (bmsr < 0)
return val;
return bmsr;
return !!(val & MDIO_AN_STAT1_COMPLETE) &&
!!(bmsr & BMSR_ANEGCOMPLETE);
@@ -158,7 +158,7 @@ static int bcm84881_read_status(struct phy_device *phydev)
bmsr = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_C22 + MII_BMSR);
if (bmsr < 0)
return val;
return bmsr;
phydev->autoneg_complete = !!(val & MDIO_AN_STAT1_COMPLETE) &&
!!(bmsr & BMSR_ANEGCOMPLETE);

View File

@@ -645,7 +645,6 @@ static int dp83869_configure_fiber(struct phy_device *phydev,
phydev->supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, phydev->supported);
linkmode_set_bit(ADVERTISED_FIBRE, phydev->advertising);
if (dp83869->mode == DP83869_RGMII_1000_BASE) {
linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseX_Full_BIT,

View File

@@ -3326,10 +3326,11 @@ static __maybe_unused int phy_led_hw_is_supported(struct led_classdev *led_cdev,
static void phy_leds_unregister(struct phy_device *phydev)
{
struct phy_led *phyled;
struct phy_led *phyled, *tmp;
list_for_each_entry(phyled, &phydev->leds, list) {
list_for_each_entry_safe(phyled, tmp, &phydev->leds, list) {
led_classdev_unregister(&phyled->led_cdev);
list_del(&phyled->list);
}
}

View File

@@ -1081,6 +1081,16 @@ static int rtl8221b_vn_cg_c45_match_phy_device(struct phy_device *phydev)
return rtlgen_is_c45_match(phydev, RTL_8221B_VN_CG, true);
}
static int rtl8251b_c22_match_phy_device(struct phy_device *phydev)
{
return rtlgen_is_c45_match(phydev, RTL_8251B, false);
}
static int rtl8251b_c45_match_phy_device(struct phy_device *phydev)
{
return rtlgen_is_c45_match(phydev, RTL_8251B, true);
}
static int rtlgen_resume(struct phy_device *phydev)
{
int ret = genphy_resume(phydev);
@@ -1418,7 +1428,7 @@ static struct phy_driver realtek_drvs[] = {
.suspend = genphy_c45_pma_suspend,
.resume = rtlgen_c45_resume,
}, {
PHY_ID_MATCH_EXACT(0x001cc862),
.match_phy_device = rtl8251b_c45_match_phy_device,
.name = "RTL8251B 5Gbps PHY",
.get_features = rtl822x_get_features,
.config_aneg = rtl822x_config_aneg,
@@ -1427,6 +1437,18 @@ static struct phy_driver realtek_drvs[] = {
.resume = rtlgen_resume,
.read_page = rtl821x_read_page,
.write_page = rtl821x_write_page,
}, {
.match_phy_device = rtl8251b_c22_match_phy_device,
.name = "RTL8126A-internal 5Gbps PHY",
.get_features = rtl822x_get_features,
.config_aneg = rtl822x_config_aneg,
.read_status = rtl822x_read_status,
.suspend = genphy_suspend,
.resume = rtlgen_resume,
.read_page = rtl821x_read_page,
.write_page = rtl821x_write_page,
.read_mmd = rtl822x_read_mmd,
.write_mmd = rtl822x_write_mmd,
}, {
PHY_ID_MATCH_EXACT(0x001ccad0),
.name = "RTL8224 2.5Gbps PHY",

View File

@@ -542,7 +542,7 @@ ppp_async_encode(struct asyncppp *ap)
* and 7 (code-reject) must be sent as though no options
* had been negotiated.
*/
islcp = proto == PPP_LCP && 1 <= data[2] && data[2] <= 7;
islcp = proto == PPP_LCP && count >= 3 && 1 <= data[2] && data[2] <= 7;
if (i == 0) {
if (islcp)

View File

@@ -785,6 +785,17 @@ static int pse_ethtool_c33_set_config(struct pse_control *psec,
*/
switch (config->c33_admin_control) {
case ETHTOOL_C33_PSE_ADMIN_STATE_ENABLED:
/* We could have mismatch between admin_state_enabled and
* state reported by regulator_is_enabled. This can occur when
* the PI is forcibly turn off by the controller. Call
* regulator_disable on that case to fix the counters state.
*/
if (psec->pcdev->pi[psec->id].admin_state_enabled &&
!regulator_is_enabled(psec->ps)) {
err = regulator_disable(psec->ps);
if (err)
break;
}
if (!psec->pcdev->pi[psec->id].admin_state_enabled)
err = regulator_enable(psec->ps);
break;

View File

@@ -643,46 +643,57 @@ slhc_uncompress(struct slcompress *comp, unsigned char *icp, int isize)
int
slhc_remember(struct slcompress *comp, unsigned char *icp, int isize)
{
struct cstate *cs;
unsigned ihl;
const struct tcphdr *th;
unsigned char index;
struct iphdr *iph;
struct cstate *cs;
unsigned int ihl;
if(isize < 20) {
/* The packet is shorter than a legal IP header */
/* The packet is shorter than a legal IP header.
* Also make sure isize is positive.
*/
if (isize < (int)sizeof(struct iphdr)) {
runt:
comp->sls_i_runt++;
return slhc_toss( comp );
return slhc_toss(comp);
}
iph = (struct iphdr *)icp;
/* Peek at the IP header's IHL field to find its length */
ihl = icp[0] & 0xf;
if(ihl < 20 / 4){
/* The IP header length field is too small */
comp->sls_i_runt++;
return slhc_toss( comp );
}
index = icp[9];
icp[9] = IPPROTO_TCP;
ihl = iph->ihl;
/* The IP header length field is too small,
* or packet is shorter than the IP header followed
* by minimal tcp header.
*/
if (ihl < 5 || isize < ihl * 4 + sizeof(struct tcphdr))
goto runt;
index = iph->protocol;
iph->protocol = IPPROTO_TCP;
if (ip_fast_csum(icp, ihl)) {
/* Bad IP header checksum; discard */
comp->sls_i_badcheck++;
return slhc_toss( comp );
return slhc_toss(comp);
}
if(index > comp->rslot_limit) {
if (index > comp->rslot_limit) {
comp->sls_i_error++;
return slhc_toss(comp);
}
th = (struct tcphdr *)(icp + ihl * 4);
if (th->doff < sizeof(struct tcphdr) / 4)
goto runt;
if (isize < ihl * 4 + th->doff * 4)
goto runt;
/* Update local state */
cs = &comp->rstate[comp->recv_current = index];
comp->flags &=~ SLF_TOSS;
memcpy(&cs->cs_ip,icp,20);
memcpy(&cs->cs_tcp,icp + ihl*4,20);
memcpy(&cs->cs_ip, iph, sizeof(*iph));
memcpy(&cs->cs_tcp, th, sizeof(*th));
if (ihl > 5)
memcpy(cs->cs_ipopt, icp + sizeof(struct iphdr), (ihl - 5) * 4);
if (cs->cs_tcp.doff > 5)
memcpy(cs->cs_tcpopt, icp + ihl*4 + sizeof(struct tcphdr), (cs->cs_tcp.doff - 5) * 4);
cs->cs_hsize = ihl*2 + cs->cs_tcp.doff*2;
memcpy(cs->cs_ipopt, &iph[1], (ihl - 5) * 4);
if (th->doff > 5)
memcpy(cs->cs_tcpopt, &th[1], (th->doff - 5) * 4);
cs->cs_hsize = ihl*2 + th->doff*2;
cs->initialized = true;
/* Put headers back on packet
* Neither header checksum is recalculated

View File

@@ -4913,9 +4913,13 @@ static int __init vxlan_init_module(void)
if (rc)
goto out4;
vxlan_vnifilter_init();
rc = vxlan_vnifilter_init();
if (rc)
goto out5;
return 0;
out5:
rtnl_link_unregister(&vxlan_link_ops);
out4:
unregister_switchdev_notifier(&vxlan_switchdev_notifier_block);
out3:

View File

@@ -202,7 +202,7 @@ int vxlan_vni_in_use(struct net *src_net, struct vxlan_dev *vxlan,
int vxlan_vnigroup_init(struct vxlan_dev *vxlan);
void vxlan_vnigroup_uninit(struct vxlan_dev *vxlan);
void vxlan_vnifilter_init(void);
int vxlan_vnifilter_init(void);
void vxlan_vnifilter_uninit(void);
void vxlan_vnifilter_count(struct vxlan_dev *vxlan, __be32 vni,
struct vxlan_vni_node *vninode,

View File

@@ -992,19 +992,18 @@ static int vxlan_vnifilter_process(struct sk_buff *skb, struct nlmsghdr *nlh,
return err;
}
void vxlan_vnifilter_init(void)
static const struct rtnl_msg_handler vxlan_vnifilter_rtnl_msg_handlers[] = {
{THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL, vxlan_vnifilter_dump, 0},
{THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL, vxlan_vnifilter_process, NULL, 0},
{THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL, vxlan_vnifilter_process, NULL, 0},
};
int vxlan_vnifilter_init(void)
{
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETTUNNEL, NULL,
vxlan_vnifilter_dump, 0);
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWTUNNEL,
vxlan_vnifilter_process, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELTUNNEL,
vxlan_vnifilter_process, NULL, 0);
return rtnl_register_many(vxlan_vnifilter_rtnl_msg_handlers);
}
void vxlan_vnifilter_uninit(void)
{
rtnl_unregister(PF_BRIDGE, RTM_GETTUNNEL);
rtnl_unregister(PF_BRIDGE, RTM_NEWTUNNEL);
rtnl_unregister(PF_BRIDGE, RTM_DELTUNNEL);
rtnl_unregister_many(vxlan_vnifilter_rtnl_msg_handlers);
}

View File

@@ -295,7 +295,7 @@ void mctp_neigh_remove_dev(struct mctp_dev *mdev);
int mctp_routes_init(void);
void mctp_routes_exit(void);
void mctp_device_init(void);
int mctp_device_init(void);
void mctp_device_exit(void);
#endif /* __NET_MCTP_H */

View File

@@ -29,6 +29,15 @@ static inline enum rtnl_kinds rtnl_msgtype_kind(int msgtype)
return msgtype & RTNL_KIND_MASK;
}
struct rtnl_msg_handler {
struct module *owner;
int protocol;
int msgtype;
rtnl_doit_func doit;
rtnl_dumpit_func dumpit;
int flags;
};
void rtnl_register(int protocol, int msgtype,
rtnl_doit_func, rtnl_dumpit_func, unsigned int flags);
int rtnl_register_module(struct module *owner, int protocol, int msgtype,
@@ -36,6 +45,14 @@ int rtnl_register_module(struct module *owner, int protocol, int msgtype,
int rtnl_unregister(int protocol, int msgtype);
void rtnl_unregister_all(int protocol);
int __rtnl_register_many(const struct rtnl_msg_handler *handlers, int n);
void __rtnl_unregister_many(const struct rtnl_msg_handler *handlers, int n);
#define rtnl_register_many(handlers) \
__rtnl_register_many(handlers, ARRAY_SIZE(handlers))
#define rtnl_unregister_many(handlers) \
__rtnl_unregister_many(handlers, ARRAY_SIZE(handlers))
static inline int rtnl_msg_family(const struct nlmsghdr *nlh)
{
if (nlmsg_len(nlh) >= sizeof(struct rtgenmsg))
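
For reference, the intended use of the new bulk helpers looks roughly like the sketch below; the foo_* names are hypothetical, and the pattern simply mirrors the vxlan_vnifilter and br_vlan conversions elsewhere in this merge.

/* Sketch only: a module registering rtnetlink handlers in bulk.
 * Assumes <linux/module.h> and <net/rtnetlink.h>; foo_process()/foo_dump()
 * are placeholders with the rtnl_doit_func/rtnl_dumpit_func signatures.
 */
static int foo_process(struct sk_buff *skb, struct nlmsghdr *nlh,
		       struct netlink_ext_ack *extack);
static int foo_dump(struct sk_buff *skb, struct netlink_callback *cb);

static const struct rtnl_msg_handler foo_rtnl_msg_handlers[] = {
	{THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN, foo_process, NULL, 0},
	{THIS_MODULE, PF_BRIDGE, RTM_DELVLAN, foo_process, NULL, 0},
	{THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL, foo_dump, 0},
};

static int __init foo_init(void)
{
	/* On failure __rtnl_register_many() unregisters the entries it had
	 * already registered, so the caller only propagates the error.
	 */
	return rtnl_register_many(foo_rtnl_msg_handlers);
}

static void __exit foo_exit(void)
{
	rtnl_unregister_many(foo_rtnl_msg_handlers);
}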

View File

@@ -848,7 +848,6 @@ static inline void qdisc_calculate_pkt_len(struct sk_buff *skb,
static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
struct sk_buff **to_free)
{
qdisc_calculate_pkt_len(skb, sch);
return sch->enqueue(skb, sch, to_free);
}

View File

@@ -894,6 +894,8 @@ static inline void sk_add_bind_node(struct sock *sk,
hlist_for_each_entry_safe(__sk, tmp, list, sk_node)
#define sk_for_each_bound(__sk, list) \
hlist_for_each_entry(__sk, list, sk_bind_node)
#define sk_for_each_bound_safe(__sk, tmp, list) \
hlist_for_each_entry_safe(__sk, tmp, list, sk_bind_node)
/**
* sk_for_each_entry_offset_rcu - iterate over a list at a given struct offset

View File

@@ -289,6 +289,9 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data)
kfree(conn_handle);
if (!hci_conn_valid(hdev, conn))
return -ECANCELED;
bt_dev_dbg(hdev, "hcon %p", conn);
configure_datapath_sync(hdev, &conn->codec);

View File

@@ -865,9 +865,7 @@ static int rfcomm_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned lon
if (err == -ENOIOCTLCMD) {
#ifdef CONFIG_BT_RFCOMM_TTY
lock_sock(sk);
err = rfcomm_dev_ioctl(sk, cmd, (void __user *) arg);
release_sock(sk);
#else
err = -EOPNOTSUPP;
#endif

View File

@@ -33,6 +33,7 @@
#include <net/ip.h>
#include <net/ipv6.h>
#include <net/addrconf.h>
#include <net/dst_metadata.h>
#include <net/route.h>
#include <net/netfilter/br_netfilter.h>
#include <net/netns/generic.h>
@@ -879,6 +880,10 @@ static int br_nf_dev_queue_xmit(struct net *net, struct sock *sk, struct sk_buff
return br_dev_queue_push_xmit(net, sk, skb);
}
/* Fragmentation on metadata/template dst is not supported */
if (unlikely(!skb_valid_dst(skb)))
goto drop;
/* This is wrong! We should preserve the original fragment
* boundaries by preserving frag_list rather than refragmenting.
*/

View File

@@ -1920,7 +1920,10 @@ int __init br_netlink_init(void)
{
int err;
br_vlan_rtnl_init();
err = br_vlan_rtnl_init();
if (err)
goto out;
rtnl_af_register(&br_af_ops);
err = rtnl_link_register(&br_link_ops);
@@ -1931,6 +1934,7 @@ int __init br_netlink_init(void)
out_af:
rtnl_af_unregister(&br_af_ops);
out:
return err;
}

View File

@@ -1571,7 +1571,7 @@ void br_vlan_get_stats(const struct net_bridge_vlan *v,
void br_vlan_port_event(struct net_bridge_port *p, unsigned long event);
int br_vlan_bridge_event(struct net_device *dev, unsigned long event,
void *ptr);
void br_vlan_rtnl_init(void);
int br_vlan_rtnl_init(void);
void br_vlan_rtnl_uninit(void);
void br_vlan_notify(const struct net_bridge *br,
const struct net_bridge_port *p,
@@ -1802,8 +1802,9 @@ static inline int br_vlan_bridge_event(struct net_device *dev,
return 0;
}
static inline void br_vlan_rtnl_init(void)
static inline int br_vlan_rtnl_init(void)
{
return 0;
}
static inline void br_vlan_rtnl_uninit(void)

View File

@@ -2296,19 +2296,18 @@ static int br_vlan_rtm_process(struct sk_buff *skb, struct nlmsghdr *nlh,
return err;
}
void br_vlan_rtnl_init(void)
static const struct rtnl_msg_handler br_vlan_rtnl_msg_handlers[] = {
{THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN, br_vlan_rtm_process, NULL, 0},
{THIS_MODULE, PF_BRIDGE, RTM_DELVLAN, br_vlan_rtm_process, NULL, 0},
{THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL, br_vlan_rtm_dump, 0},
};
int br_vlan_rtnl_init(void)
{
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETVLAN, NULL,
br_vlan_rtm_dump, 0);
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_NEWVLAN,
br_vlan_rtm_process, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_DELVLAN,
br_vlan_rtm_process, NULL, 0);
return rtnl_register_many(br_vlan_rtnl_msg_handlers);
}
void br_vlan_rtnl_uninit(void)
{
rtnl_unregister(PF_BRIDGE, RTM_GETVLAN);
rtnl_unregister(PF_BRIDGE, RTM_NEWVLAN);
rtnl_unregister(PF_BRIDGE, RTM_DELVLAN);
rtnl_unregister_many(br_vlan_rtnl_msg_handlers);
}


@@ -109,9 +109,6 @@ static void dst_destroy(struct dst_entry *dst)
child = xdst->child;
}
#endif
if (!(dst->flags & DST_NOCOUNT))
dst_entries_add(dst->ops, -1);
if (dst->ops->destroy)
dst->ops->destroy(dst);
netdev_put(dst->dev, &dst->dev_tracker);
@@ -159,17 +156,27 @@ void dst_dev_put(struct dst_entry *dst)
}
EXPORT_SYMBOL(dst_dev_put);
static void dst_count_dec(struct dst_entry *dst)
{
if (!(dst->flags & DST_NOCOUNT))
dst_entries_add(dst->ops, -1);
}
void dst_release(struct dst_entry *dst)
{
if (dst && rcuref_put(&dst->__rcuref))
if (dst && rcuref_put(&dst->__rcuref)) {
dst_count_dec(dst);
call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
}
}
EXPORT_SYMBOL(dst_release);
void dst_release_immediate(struct dst_entry *dst)
{
if (dst && rcuref_put(&dst->__rcuref))
if (dst && rcuref_put(&dst->__rcuref)) {
dst_count_dec(dst);
dst_destroy(dst);
}
}
EXPORT_SYMBOL(dst_release_immediate);


@@ -384,6 +384,35 @@ void rtnl_unregister_all(int protocol)
}
EXPORT_SYMBOL_GPL(rtnl_unregister_all);
int __rtnl_register_many(const struct rtnl_msg_handler *handlers, int n)
{
const struct rtnl_msg_handler *handler;
int i, err;
for (i = 0, handler = handlers; i < n; i++, handler++) {
err = rtnl_register_internal(handler->owner, handler->protocol,
handler->msgtype, handler->doit,
handler->dumpit, handler->flags);
if (err) {
__rtnl_unregister_many(handlers, i);
break;
}
}
return err;
}
EXPORT_SYMBOL_GPL(__rtnl_register_many);
void __rtnl_unregister_many(const struct rtnl_msg_handler *handlers, int n)
{
const struct rtnl_msg_handler *handler;
int i;
for (i = n - 1, handler = handlers + n - 1; i >= 0; i--, handler--)
rtnl_unregister(handler->protocol, handler->msgtype);
}
EXPORT_SYMBOL_GPL(__rtnl_unregister_many);
static LIST_HEAD(link_ops);
static const struct rtnl_link_ops *rtnl_link_ops_get(const char *kind)


@@ -1392,6 +1392,14 @@ dsa_user_add_cls_matchall_mirred(struct net_device *dev,
if (!dsa_user_dev_check(act->dev))
return -EOPNOTSUPP;
to_dp = dsa_user_to_port(act->dev);
if (dp->ds != to_dp->ds) {
NL_SET_ERR_MSG_MOD(extack,
"Cross-chip mirroring not implemented");
return -EOPNOTSUPP;
}
mall_tc_entry = kzalloc(sizeof(*mall_tc_entry), GFP_KERNEL);
if (!mall_tc_entry)
return -ENOMEM;
@@ -1399,9 +1407,6 @@ dsa_user_add_cls_matchall_mirred(struct net_device *dev,
mall_tc_entry->cookie = cls->cookie;
mall_tc_entry->type = DSA_PORT_MALL_MIRROR;
mirror = &mall_tc_entry->mirror;
to_dp = dsa_user_to_port(act->dev);
mirror->to_local_port = to_dp->index;
mirror->ingress = ingress;


@@ -65,6 +65,7 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
.flowi4_scope = RT_SCOPE_UNIVERSE,
.flowi4_iif = LOOPBACK_IFINDEX,
.flowi4_uid = sock_net_uid(nft_net(pkt), NULL),
.flowi4_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
};
const struct net_device *oif;
const struct net_device *found;
@@ -83,9 +84,6 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
else
oif = NULL;
if (priv->flags & NFTA_FIB_F_IIF)
fl4.flowi4_l3mdev = l3mdev_master_ifindex_rcu(oif);
if (nft_hook(pkt) == NF_INET_PRE_ROUTING &&
nft_fib_is_loopback(pkt->skb, nft_in(pkt))) {
nft_fib_store_result(dest, priv, nft_in(pkt));


@@ -2473,8 +2473,22 @@ static bool tcp_skb_spurious_retrans(const struct tcp_sock *tp,
*/
static inline bool tcp_packet_delayed(const struct tcp_sock *tp)
{
return tp->retrans_stamp &&
tcp_tsopt_ecr_before(tp, tp->retrans_stamp);
const struct sock *sk = (const struct sock *)tp;
if (tp->retrans_stamp &&
tcp_tsopt_ecr_before(tp, tp->retrans_stamp))
return true; /* got echoed TS before first retransmission */
/* Check if nothing was retransmitted (retrans_stamp==0), which may
* happen in fast recovery due to TSQ. But we ignore zero retrans_stamp
* in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear
* retrans_stamp even if we had retransmitted the SYN.
*/
if (!tp->retrans_stamp && /* no record of a retransmit/SYN? */
sk->sk_state != TCP_SYN_SENT) /* not the FLAG_SYN_ACKED case? */
return true; /* nothing was retransmitted */
return false;
}
/* Undo procedures. */
@@ -2508,6 +2522,16 @@ static bool tcp_any_retrans_done(const struct sock *sk)
return false;
}
/* If loss recovery is finished and there are no retransmits out in the
* network, then we clear retrans_stamp so that upon the next loss recovery
* retransmits_timed_out() and timestamp-undo are using the correct value.
*/
static void tcp_retrans_stamp_cleanup(struct sock *sk)
{
if (!tcp_any_retrans_done(sk))
tcp_sk(sk)->retrans_stamp = 0;
}
static void DBGUNDO(struct sock *sk, const char *msg)
{
#if FASTRETRANS_DEBUG > 1
@@ -2875,6 +2899,9 @@ void tcp_enter_recovery(struct sock *sk, bool ece_ack)
struct tcp_sock *tp = tcp_sk(sk);
int mib_idx;
/* Start the clock with our fast retransmit, for undo and ETIMEDOUT. */
tcp_retrans_stamp_cleanup(sk);
if (tcp_is_reno(tp))
mib_idx = LINUX_MIB_TCPRENORECOVERY;
else
@@ -6657,10 +6684,17 @@ static void tcp_rcv_synrecv_state_fastopen(struct sock *sk)
if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss && !tp->packets_out)
tcp_try_undo_recovery(sk);
/* Reset rtx states to prevent spurious retransmits_timed_out() */
tcp_update_rto_time(tp);
tp->retrans_stamp = 0;
inet_csk(sk)->icsk_retransmits = 0;
/* In tcp_fastopen_synack_timer() on the first SYNACK RTO we set
* retrans_stamp but don't enter CA_Loss, so in case that happened we
* need to zero retrans_stamp here to prevent spurious
* retransmits_timed_out(). However, if the ACK of our SYNACK caused us
* to enter CA_Recovery then we need to leave retrans_stamp as it was
* set entering CA_Recovery, for correct retransmits_timed_out() and
* undo behavior.
*/
tcp_retrans_stamp_cleanup(sk);
/* Once we leave TCP_SYN_RECV or TCP_FIN_WAIT_1,
* we no longer need req so release it.


@@ -2342,10 +2342,7 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
if (len <= skb->len)
break;
if (unlikely(TCP_SKB_CB(skb)->eor) ||
tcp_has_tx_tstamp(skb) ||
!skb_pure_zcopy_same(skb, next) ||
skb_frags_readable(skb) != skb_frags_readable(next))
if (tcp_has_tx_tstamp(skb) || !tcp_skb_can_collapse(skb, next))
return false;
len -= skb->len;


@@ -41,8 +41,6 @@ static int nft_fib6_flowi_init(struct flowi6 *fl6, const struct nft_fib *priv,
if (ipv6_addr_type(&fl6->daddr) & IPV6_ADDR_LINKLOCAL) {
lookup_flags |= RT6_LOOKUP_F_IFACE;
fl6->flowi6_oif = get_ifindex(dev ? dev : pkt->skb->dev);
} else if (priv->flags & NFTA_FIB_F_IIF) {
fl6->flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
}
if (ipv6_addr_type(&fl6->saddr) & IPV6_ADDR_UNICAST)
@@ -75,6 +73,8 @@ static u32 __nft_fib6_eval_type(const struct nft_fib *priv,
else if (priv->flags & NFTA_FIB_F_OIF)
dev = nft_out(pkt);
fl6.flowi6_l3mdev = l3mdev_master_ifindex_rcu(dev);
nft_fib6_flowi_init(&fl6, priv, pkt, dev, iph);
if (dev && nf_ipv6_chk_addr(nft_net(pkt), &fl6.daddr, dev, true))
@@ -165,6 +165,7 @@ void nft_fib6_eval(const struct nft_expr *expr, struct nft_regs *regs,
.flowi6_iif = LOOPBACK_IFINDEX,
.flowi6_proto = pkt->tprot,
.flowi6_uid = sock_net_uid(nft_net(pkt), NULL),
.flowi6_l3mdev = l3mdev_master_ifindex_rcu(nft_in(pkt)),
};
struct rt6_info *rt;
int lookup_flags;


@@ -756,10 +756,14 @@ static __init int mctp_init(void)
if (rc)
goto err_unreg_routes;
mctp_device_init();
rc = mctp_device_init();
if (rc)
goto err_unreg_neigh;
return 0;
err_unreg_neigh:
mctp_neigh_exit();
err_unreg_routes:
mctp_routes_exit();
err_unreg_proto:


@@ -524,25 +524,31 @@ static struct notifier_block mctp_dev_nb = {
.priority = ADDRCONF_NOTIFY_PRIORITY,
};
void __init mctp_device_init(void)
{
register_netdevice_notifier(&mctp_dev_nb);
static const struct rtnl_msg_handler mctp_device_rtnl_msg_handlers[] = {
{THIS_MODULE, PF_MCTP, RTM_NEWADDR, mctp_rtm_newaddr, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_DELADDR, mctp_rtm_deladdr, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_GETADDR, NULL, mctp_dump_addrinfo, 0},
};
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETADDR,
NULL, mctp_dump_addrinfo, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWADDR,
mctp_rtm_newaddr, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELADDR,
mctp_rtm_deladdr, NULL, 0);
int __init mctp_device_init(void)
{
int err;
register_netdevice_notifier(&mctp_dev_nb);
rtnl_af_register(&mctp_af_ops);
err = rtnl_register_many(mctp_device_rtnl_msg_handlers);
if (err) {
rtnl_af_unregister(&mctp_af_ops);
unregister_netdevice_notifier(&mctp_dev_nb);
}
return err;
}
void __exit mctp_device_exit(void)
{
rtnl_unregister_many(mctp_device_rtnl_msg_handlers);
rtnl_af_unregister(&mctp_af_ops);
rtnl_unregister(PF_MCTP, RTM_DELADDR);
rtnl_unregister(PF_MCTP, RTM_NEWADDR);
rtnl_unregister(PF_MCTP, RTM_GETADDR);
unregister_netdevice_notifier(&mctp_dev_nb);
}


@@ -322,22 +322,29 @@ static struct pernet_operations mctp_net_ops = {
.exit = mctp_neigh_net_exit,
};
static const struct rtnl_msg_handler mctp_neigh_rtnl_msg_handlers[] = {
{THIS_MODULE, PF_MCTP, RTM_NEWNEIGH, mctp_rtm_newneigh, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_DELNEIGH, mctp_rtm_delneigh, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_GETNEIGH, NULL, mctp_rtm_getneigh, 0},
};
int __init mctp_neigh_init(void)
{
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWNEIGH,
mctp_rtm_newneigh, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELNEIGH,
mctp_rtm_delneigh, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETNEIGH,
NULL, mctp_rtm_getneigh, 0);
int err;
return register_pernet_subsys(&mctp_net_ops);
err = register_pernet_subsys(&mctp_net_ops);
if (err)
return err;
err = rtnl_register_many(mctp_neigh_rtnl_msg_handlers);
if (err)
unregister_pernet_subsys(&mctp_net_ops);
return err;
}
void __exit mctp_neigh_exit(void)
void mctp_neigh_exit(void)
{
rtnl_unregister_many(mctp_neigh_rtnl_msg_handlers);
unregister_pernet_subsys(&mctp_net_ops);
rtnl_unregister(PF_MCTP, RTM_GETNEIGH);
rtnl_unregister(PF_MCTP, RTM_DELNEIGH);
rtnl_unregister(PF_MCTP, RTM_NEWNEIGH);
}


@@ -1474,26 +1474,39 @@ static struct pernet_operations mctp_net_ops = {
.exit = mctp_routes_net_exit,
};
static const struct rtnl_msg_handler mctp_route_rtnl_msg_handlers[] = {
{THIS_MODULE, PF_MCTP, RTM_NEWROUTE, mctp_newroute, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_DELROUTE, mctp_delroute, NULL, 0},
{THIS_MODULE, PF_MCTP, RTM_GETROUTE, NULL, mctp_dump_rtinfo, 0},
};
int __init mctp_routes_init(void)
{
int err;
dev_add_pack(&mctp_packet_type);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_GETROUTE,
NULL, mctp_dump_rtinfo, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_NEWROUTE,
mctp_newroute, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MCTP, RTM_DELROUTE,
mctp_delroute, NULL, 0);
err = register_pernet_subsys(&mctp_net_ops);
if (err)
goto err_pernet;
return register_pernet_subsys(&mctp_net_ops);
err = rtnl_register_many(mctp_route_rtnl_msg_handlers);
if (err)
goto err_rtnl;
return 0;
err_rtnl:
unregister_pernet_subsys(&mctp_net_ops);
err_pernet:
dev_remove_pack(&mctp_packet_type);
return err;
}
void mctp_routes_exit(void)
{
rtnl_unregister_many(mctp_route_rtnl_msg_handlers);
unregister_pernet_subsys(&mctp_net_ops);
rtnl_unregister(PF_MCTP, RTM_DELROUTE);
rtnl_unregister(PF_MCTP, RTM_NEWROUTE);
rtnl_unregister(PF_MCTP, RTM_GETROUTE);
dev_remove_pack(&mctp_packet_type);
}


@@ -2728,6 +2728,15 @@ static struct rtnl_af_ops mpls_af_ops __read_mostly = {
.get_stats_af_size = mpls_get_stats_af_size,
};
static const struct rtnl_msg_handler mpls_rtnl_msg_handlers[] __initdata_or_module = {
{THIS_MODULE, PF_MPLS, RTM_NEWROUTE, mpls_rtm_newroute, NULL, 0},
{THIS_MODULE, PF_MPLS, RTM_DELROUTE, mpls_rtm_delroute, NULL, 0},
{THIS_MODULE, PF_MPLS, RTM_GETROUTE, mpls_getroute, mpls_dump_routes, 0},
{THIS_MODULE, PF_MPLS, RTM_GETNETCONF,
mpls_netconf_get_devconf, mpls_netconf_dump_devconf,
RTNL_FLAG_DUMP_UNLOCKED},
};
static int __init mpls_init(void)
{
int err;
@@ -2746,24 +2755,25 @@ static int __init mpls_init(void)
rtnl_af_register(&mpls_af_ops);
rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_NEWROUTE,
mpls_rtm_newroute, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_DELROUTE,
mpls_rtm_delroute, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_GETROUTE,
mpls_getroute, mpls_dump_routes, 0);
rtnl_register_module(THIS_MODULE, PF_MPLS, RTM_GETNETCONF,
mpls_netconf_get_devconf,
mpls_netconf_dump_devconf,
RTNL_FLAG_DUMP_UNLOCKED);
err = ipgre_tunnel_encap_add_mpls_ops();
err = rtnl_register_many(mpls_rtnl_msg_handlers);
if (err)
goto out_unregister_rtnl_af;
err = ipgre_tunnel_encap_add_mpls_ops();
if (err) {
pr_err("Can't add mpls over gre tunnel ops\n");
goto out_unregister_rtnl;
}
err = 0;
out:
return err;
out_unregister_rtnl:
rtnl_unregister_many(mpls_rtnl_msg_handlers);
out_unregister_rtnl_af:
rtnl_af_unregister(&mpls_af_ops);
dev_remove_pack(&mpls_packet_type);
out_unregister_pernet:
unregister_pernet_subsys(&mpls_net_ops);
goto out;


@@ -32,6 +32,8 @@ static const struct snmp_mib mptcp_snmp_list[] = {
SNMP_MIB_ITEM("MPJoinSynTxBindErr", MPTCP_MIB_JOINSYNTXBINDERR),
SNMP_MIB_ITEM("MPJoinSynTxConnectErr", MPTCP_MIB_JOINSYNTXCONNECTERR),
SNMP_MIB_ITEM("DSSNotMatching", MPTCP_MIB_DSSNOMATCH),
SNMP_MIB_ITEM("DSSCorruptionFallback", MPTCP_MIB_DSSCORRUPTIONFALLBACK),
SNMP_MIB_ITEM("DSSCorruptionReset", MPTCP_MIB_DSSCORRUPTIONRESET),
SNMP_MIB_ITEM("InfiniteMapTx", MPTCP_MIB_INFINITEMAPTX),
SNMP_MIB_ITEM("InfiniteMapRx", MPTCP_MIB_INFINITEMAPRX),
SNMP_MIB_ITEM("DSSNoMatchTCP", MPTCP_MIB_DSSTCPMISMATCH),


@@ -27,6 +27,8 @@ enum linux_mptcp_mib_field {
MPTCP_MIB_JOINSYNTXBINDERR, /* Not able to bind() the address when sending a SYN + MP_JOIN */
MPTCP_MIB_JOINSYNTXCONNECTERR, /* Not able to connect() when sending a SYN + MP_JOIN */
MPTCP_MIB_DSSNOMATCH, /* Received a new mapping that did not match the previous one */
MPTCP_MIB_DSSCORRUPTIONFALLBACK,/* DSS corruption detected, fallback */
MPTCP_MIB_DSSCORRUPTIONRESET, /* DSS corruption detected, MPJ subflow reset */
MPTCP_MIB_INFINITEMAPTX, /* Sent an infinite mapping */
MPTCP_MIB_INFINITEMAPRX, /* Received an infinite mapping */
MPTCP_MIB_DSSTCPMISMATCH, /* DSS-mapping did not map with TCP's sequence numbers */


@@ -860,7 +860,8 @@ static void mptcp_pm_nl_rm_addr_or_subflow(struct mptcp_sock *msk,
int how = RCV_SHUTDOWN | SEND_SHUTDOWN;
u8 id = subflow_get_local_id(subflow);
if (inet_sk_state_load(ssk) == TCP_CLOSE)
if ((1 << inet_sk_state_load(ssk)) &
(TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | TCPF_CLOSING | TCPF_CLOSE))
continue;
if (rm_type == MPTCP_MIB_RMADDR && remote_id != rm_id)
continue;


@@ -620,6 +620,18 @@ static bool mptcp_check_data_fin(struct sock *sk)
return ret;
}
static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
{
if (READ_ONCE(msk->allow_infinite_fallback)) {
MPTCP_INC_STATS(sock_net(ssk),
MPTCP_MIB_DSSCORRUPTIONFALLBACK);
mptcp_do_fallback(ssk);
} else {
MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
mptcp_subflow_reset(ssk);
}
}
static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
struct sock *ssk,
unsigned int *bytes)
@@ -692,10 +704,16 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
moved += len;
seq += len;
if (WARN_ON_ONCE(map_remaining < len))
break;
if (unlikely(map_remaining < len)) {
DEBUG_NET_WARN_ON_ONCE(1);
mptcp_dss_corruption(msk, ssk);
}
} else {
WARN_ON_ONCE(!fin);
if (unlikely(!fin)) {
DEBUG_NET_WARN_ON_ONCE(1);
mptcp_dss_corruption(msk, ssk);
}
sk_eat_skb(ssk, skb);
done = true;
}


@@ -975,8 +975,10 @@ static bool skb_is_fully_mapped(struct sock *ssk, struct sk_buff *skb)
unsigned int skb_consumed;
skb_consumed = tcp_sk(ssk)->copied_seq - TCP_SKB_CB(skb)->seq;
if (WARN_ON_ONCE(skb_consumed >= skb->len))
if (unlikely(skb_consumed >= skb->len)) {
DEBUG_NET_WARN_ON_ONCE(1);
return true;
}
return skb->len - skb_consumed <= subflow->map_data_len -
mptcp_subflow_get_map_offset(subflow);
@@ -1280,7 +1282,7 @@ static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
else if (READ_ONCE(msk->csum_enabled))
return !subflow->valid_csum_seen;
else
return !subflow->fully_established;
return READ_ONCE(msk->allow_infinite_fallback);
}
static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)


@@ -63,24 +63,37 @@ static int checksum_tg_check(const struct xt_tgchk_param *par)
return 0;
}
static struct xt_target checksum_tg_reg __read_mostly = {
.name = "CHECKSUM",
.family = NFPROTO_UNSPEC,
.target = checksum_tg,
.targetsize = sizeof(struct xt_CHECKSUM_info),
.table = "mangle",
.checkentry = checksum_tg_check,
.me = THIS_MODULE,
static struct xt_target checksum_tg_reg[] __read_mostly = {
{
.name = "CHECKSUM",
.family = NFPROTO_IPV4,
.target = checksum_tg,
.targetsize = sizeof(struct xt_CHECKSUM_info),
.table = "mangle",
.checkentry = checksum_tg_check,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "CHECKSUM",
.family = NFPROTO_IPV6,
.target = checksum_tg,
.targetsize = sizeof(struct xt_CHECKSUM_info),
.table = "mangle",
.checkentry = checksum_tg_check,
.me = THIS_MODULE,
},
#endif
};
static int __init checksum_tg_init(void)
{
return xt_register_target(&checksum_tg_reg);
return xt_register_targets(checksum_tg_reg, ARRAY_SIZE(checksum_tg_reg));
}
static void __exit checksum_tg_exit(void)
{
xt_unregister_target(&checksum_tg_reg);
xt_unregister_targets(checksum_tg_reg, ARRAY_SIZE(checksum_tg_reg));
}
module_init(checksum_tg_init);
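The xtables hunks from here on repeat one conversion: the single NFPROTO_UNSPEC registration becomes an array with an explicit NFPROTO_IPV4 entry and, only when CONFIG_IP6_NF_IPTABLES is enabled, a matching NFPROTO_IPV6 twin, registered through the xt_register_targets()/xt_register_matches() batch helpers. A stripped-down sketch of that shape, with an invented EXAMPLE target standing in for the real ones:

/* Sketch of the recurring pattern only; "EXAMPLE" and example_tg() are made up. */
static struct xt_target example_tg_reg[] __read_mostly = {
    {
        .name   = "EXAMPLE",
        .family = NFPROTO_IPV4,
        .target = example_tg,
        .me     = THIS_MODULE,
    },
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
    {
        .name   = "EXAMPLE",
        .family = NFPROTO_IPV6, /* compiled in only with ip6tables support */
        .target = example_tg,
        .me     = THIS_MODULE,
    },
#endif
};

static int __init example_tg_init(void)
{
    return xt_register_targets(example_tg_reg, ARRAY_SIZE(example_tg_reg));
}

static void __exit example_tg_exit(void)
{
    xt_unregister_targets(example_tg_reg, ARRAY_SIZE(example_tg_reg));
}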


@@ -38,9 +38,9 @@ static struct xt_target classify_tg_reg[] __read_mostly = {
{
.name = "CLASSIFY",
.revision = 0,
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.hooks = (1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_FORWARD) |
(1 << NF_INET_POST_ROUTING),
(1 << NF_INET_POST_ROUTING),
.target = classify_tg,
.targetsize = sizeof(struct xt_classify_target_info),
.me = THIS_MODULE,
@@ -54,6 +54,18 @@ static struct xt_target classify_tg_reg[] __read_mostly = {
.targetsize = sizeof(struct xt_classify_target_info),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "CLASSIFY",
.revision = 0,
.family = NFPROTO_IPV6,
.hooks = (1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_FORWARD) |
(1 << NF_INET_POST_ROUTING),
.target = classify_tg,
.targetsize = sizeof(struct xt_classify_target_info),
.me = THIS_MODULE,
},
#endif
};
static int __init classify_tg_init(void)


@@ -114,25 +114,39 @@ static void connsecmark_tg_destroy(const struct xt_tgdtor_param *par)
nf_ct_netns_put(par->net, par->family);
}
static struct xt_target connsecmark_tg_reg __read_mostly = {
.name = "CONNSECMARK",
.revision = 0,
.family = NFPROTO_UNSPEC,
.checkentry = connsecmark_tg_check,
.destroy = connsecmark_tg_destroy,
.target = connsecmark_tg,
.targetsize = sizeof(struct xt_connsecmark_target_info),
.me = THIS_MODULE,
static struct xt_target connsecmark_tg_reg[] __read_mostly = {
{
.name = "CONNSECMARK",
.revision = 0,
.family = NFPROTO_IPV4,
.checkentry = connsecmark_tg_check,
.destroy = connsecmark_tg_destroy,
.target = connsecmark_tg,
.targetsize = sizeof(struct xt_connsecmark_target_info),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "CONNSECMARK",
.revision = 0,
.family = NFPROTO_IPV6,
.checkentry = connsecmark_tg_check,
.destroy = connsecmark_tg_destroy,
.target = connsecmark_tg,
.targetsize = sizeof(struct xt_connsecmark_target_info),
.me = THIS_MODULE,
},
#endif
};
static int __init connsecmark_tg_init(void)
{
return xt_register_target(&connsecmark_tg_reg);
return xt_register_targets(connsecmark_tg_reg, ARRAY_SIZE(connsecmark_tg_reg));
}
static void __exit connsecmark_tg_exit(void)
{
xt_unregister_target(&connsecmark_tg_reg);
xt_unregister_targets(connsecmark_tg_reg, ARRAY_SIZE(connsecmark_tg_reg));
}
module_init(connsecmark_tg_init);


@@ -313,44 +313,6 @@ static void xt_ct_tg_destroy_v1(const struct xt_tgdtor_param *par)
xt_ct_tg_destroy(par, par->targinfo);
}
static struct xt_target xt_ct_tg_reg[] __read_mostly = {
{
.name = "CT",
.family = NFPROTO_UNSPEC,
.targetsize = sizeof(struct xt_ct_target_info),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v0,
.destroy = xt_ct_tg_destroy_v0,
.target = xt_ct_target_v0,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_UNSPEC,
.revision = 1,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v1,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_UNSPEC,
.revision = 2,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v2,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
};
static unsigned int
notrack_tg(struct sk_buff *skb, const struct xt_action_param *par)
{
@@ -363,35 +325,105 @@ notrack_tg(struct sk_buff *skb, const struct xt_action_param *par)
return XT_CONTINUE;
}
static struct xt_target notrack_tg_reg __read_mostly = {
.name = "NOTRACK",
.revision = 0,
.family = NFPROTO_UNSPEC,
.target = notrack_tg,
.table = "raw",
.me = THIS_MODULE,
static struct xt_target xt_ct_tg_reg[] __read_mostly = {
{
.name = "NOTRACK",
.revision = 0,
.family = NFPROTO_IPV4,
.target = notrack_tg,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV4,
.targetsize = sizeof(struct xt_ct_target_info),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v0,
.destroy = xt_ct_tg_destroy_v0,
.target = xt_ct_target_v0,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV4,
.revision = 1,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v1,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV4,
.revision = 2,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v2,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "NOTRACK",
.revision = 0,
.family = NFPROTO_IPV6,
.target = notrack_tg,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV6,
.targetsize = sizeof(struct xt_ct_target_info),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v0,
.destroy = xt_ct_tg_destroy_v0,
.target = xt_ct_target_v0,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV6,
.revision = 1,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v1,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
{
.name = "CT",
.family = NFPROTO_IPV6,
.revision = 2,
.targetsize = sizeof(struct xt_ct_target_info_v1),
.usersize = offsetof(struct xt_ct_target_info, ct),
.checkentry = xt_ct_tg_check_v2,
.destroy = xt_ct_tg_destroy_v1,
.target = xt_ct_target_v1,
.table = "raw",
.me = THIS_MODULE,
},
#endif
};
static int __init xt_ct_tg_init(void)
{
int ret;
ret = xt_register_target(&notrack_tg_reg);
if (ret < 0)
return ret;
ret = xt_register_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
if (ret < 0) {
xt_unregister_target(&notrack_tg_reg);
return ret;
}
return 0;
return xt_register_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
}
static void __exit xt_ct_tg_exit(void)
{
xt_unregister_targets(xt_ct_tg_reg, ARRAY_SIZE(xt_ct_tg_reg));
xt_unregister_target(&notrack_tg_reg);
}
module_init(xt_ct_tg_init);


@@ -458,28 +458,49 @@ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
static struct xt_target idletimer_tg[] __read_mostly = {
{
.name = "IDLETIMER",
.family = NFPROTO_UNSPEC,
.target = idletimer_tg_target,
.targetsize = sizeof(struct idletimer_tg_info),
.usersize = offsetof(struct idletimer_tg_info, timer),
.checkentry = idletimer_tg_checkentry,
.destroy = idletimer_tg_destroy,
.me = THIS_MODULE,
.name = "IDLETIMER",
.family = NFPROTO_IPV4,
.target = idletimer_tg_target,
.targetsize = sizeof(struct idletimer_tg_info),
.usersize = offsetof(struct idletimer_tg_info, timer),
.checkentry = idletimer_tg_checkentry,
.destroy = idletimer_tg_destroy,
.me = THIS_MODULE,
},
{
.name = "IDLETIMER",
.family = NFPROTO_UNSPEC,
.revision = 1,
.target = idletimer_tg_target_v1,
.targetsize = sizeof(struct idletimer_tg_info_v1),
.usersize = offsetof(struct idletimer_tg_info_v1, timer),
.checkentry = idletimer_tg_checkentry_v1,
.destroy = idletimer_tg_destroy_v1,
.me = THIS_MODULE,
.name = "IDLETIMER",
.family = NFPROTO_IPV4,
.revision = 1,
.target = idletimer_tg_target_v1,
.targetsize = sizeof(struct idletimer_tg_info_v1),
.usersize = offsetof(struct idletimer_tg_info_v1, timer),
.checkentry = idletimer_tg_checkentry_v1,
.destroy = idletimer_tg_destroy_v1,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "IDLETIMER",
.family = NFPROTO_IPV6,
.target = idletimer_tg_target,
.targetsize = sizeof(struct idletimer_tg_info),
.usersize = offsetof(struct idletimer_tg_info, timer),
.checkentry = idletimer_tg_checkentry,
.destroy = idletimer_tg_destroy,
.me = THIS_MODULE,
},
{
.name = "IDLETIMER",
.family = NFPROTO_IPV6,
.revision = 1,
.target = idletimer_tg_target_v1,
.targetsize = sizeof(struct idletimer_tg_info_v1),
.usersize = offsetof(struct idletimer_tg_info_v1, timer),
.checkentry = idletimer_tg_checkentry_v1,
.destroy = idletimer_tg_destroy_v1,
.me = THIS_MODULE,
},
#endif
};
static struct class *idletimer_tg_class;


@@ -175,26 +175,41 @@ static void led_tg_destroy(const struct xt_tgdtor_param *par)
kfree(ledinternal);
}
static struct xt_target led_tg_reg __read_mostly = {
.name = "LED",
.revision = 0,
.family = NFPROTO_UNSPEC,
.target = led_tg,
.targetsize = sizeof(struct xt_led_info),
.usersize = offsetof(struct xt_led_info, internal_data),
.checkentry = led_tg_check,
.destroy = led_tg_destroy,
.me = THIS_MODULE,
static struct xt_target led_tg_reg[] __read_mostly = {
{
.name = "LED",
.revision = 0,
.family = NFPROTO_IPV4,
.target = led_tg,
.targetsize = sizeof(struct xt_led_info),
.usersize = offsetof(struct xt_led_info, internal_data),
.checkentry = led_tg_check,
.destroy = led_tg_destroy,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "LED",
.revision = 0,
.family = NFPROTO_IPV6,
.target = led_tg,
.targetsize = sizeof(struct xt_led_info),
.usersize = offsetof(struct xt_led_info, internal_data),
.checkentry = led_tg_check,
.destroy = led_tg_destroy,
.me = THIS_MODULE,
},
#endif
};
static int __init led_tg_init(void)
{
return xt_register_target(&led_tg_reg);
return xt_register_targets(led_tg_reg, ARRAY_SIZE(led_tg_reg));
}
static void __exit led_tg_exit(void)
{
xt_unregister_target(&led_tg_reg);
xt_unregister_targets(led_tg_reg, ARRAY_SIZE(led_tg_reg));
}
module_init(led_tg_init);


@@ -64,25 +64,39 @@ static void nflog_tg_destroy(const struct xt_tgdtor_param *par)
nf_logger_put(par->family, NF_LOG_TYPE_ULOG);
}
static struct xt_target nflog_tg_reg __read_mostly = {
.name = "NFLOG",
.revision = 0,
.family = NFPROTO_UNSPEC,
.checkentry = nflog_tg_check,
.destroy = nflog_tg_destroy,
.target = nflog_tg,
.targetsize = sizeof(struct xt_nflog_info),
.me = THIS_MODULE,
static struct xt_target nflog_tg_reg[] __read_mostly = {
{
.name = "NFLOG",
.revision = 0,
.family = NFPROTO_IPV4,
.checkentry = nflog_tg_check,
.destroy = nflog_tg_destroy,
.target = nflog_tg,
.targetsize = sizeof(struct xt_nflog_info),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "NFLOG",
.revision = 0,
.family = NFPROTO_IPV6,
.checkentry = nflog_tg_check,
.destroy = nflog_tg_destroy,
.target = nflog_tg,
.targetsize = sizeof(struct xt_nflog_info),
.me = THIS_MODULE,
},
#endif
};
static int __init nflog_tg_init(void)
{
return xt_register_target(&nflog_tg_reg);
return xt_register_targets(nflog_tg_reg, ARRAY_SIZE(nflog_tg_reg));
}
static void __exit nflog_tg_exit(void)
{
xt_unregister_target(&nflog_tg_reg);
xt_unregister_targets(nflog_tg_reg, ARRAY_SIZE(nflog_tg_reg));
}
module_init(nflog_tg_init);


@@ -179,16 +179,31 @@ static void xt_rateest_tg_destroy(const struct xt_tgdtor_param *par)
xt_rateest_put(par->net, info->est);
}
static struct xt_target xt_rateest_tg_reg __read_mostly = {
.name = "RATEEST",
.revision = 0,
.family = NFPROTO_UNSPEC,
.target = xt_rateest_tg,
.checkentry = xt_rateest_tg_checkentry,
.destroy = xt_rateest_tg_destroy,
.targetsize = sizeof(struct xt_rateest_target_info),
.usersize = offsetof(struct xt_rateest_target_info, est),
.me = THIS_MODULE,
static struct xt_target xt_rateest_tg_reg[] __read_mostly = {
{
.name = "RATEEST",
.revision = 0,
.family = NFPROTO_IPV4,
.target = xt_rateest_tg,
.checkentry = xt_rateest_tg_checkentry,
.destroy = xt_rateest_tg_destroy,
.targetsize = sizeof(struct xt_rateest_target_info),
.usersize = offsetof(struct xt_rateest_target_info, est),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "RATEEST",
.revision = 0,
.family = NFPROTO_IPV6,
.target = xt_rateest_tg,
.checkentry = xt_rateest_tg_checkentry,
.destroy = xt_rateest_tg_destroy,
.targetsize = sizeof(struct xt_rateest_target_info),
.usersize = offsetof(struct xt_rateest_target_info, est),
.me = THIS_MODULE,
},
#endif
};
static __net_init int xt_rateest_net_init(struct net *net)
@@ -214,12 +229,12 @@ static int __init xt_rateest_tg_init(void)
if (err)
return err;
return xt_register_target(&xt_rateest_tg_reg);
return xt_register_targets(xt_rateest_tg_reg, ARRAY_SIZE(xt_rateest_tg_reg));
}
static void __exit xt_rateest_tg_fini(void)
{
xt_unregister_target(&xt_rateest_tg_reg);
xt_unregister_targets(xt_rateest_tg_reg, ARRAY_SIZE(xt_rateest_tg_reg));
unregister_pernet_subsys(&xt_rateest_net_ops);
}


@@ -157,7 +157,7 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
{
.name = "SECMARK",
.revision = 0,
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.checkentry = secmark_tg_check_v0,
.destroy = secmark_tg_destroy,
.target = secmark_tg_v0,
@@ -167,7 +167,7 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
{
.name = "SECMARK",
.revision = 1,
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.checkentry = secmark_tg_check_v1,
.destroy = secmark_tg_destroy,
.target = secmark_tg_v1,
@@ -175,6 +175,29 @@ static struct xt_target secmark_tg_reg[] __read_mostly = {
.usersize = offsetof(struct xt_secmark_target_info_v1, secid),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "SECMARK",
.revision = 0,
.family = NFPROTO_IPV6,
.checkentry = secmark_tg_check_v0,
.destroy = secmark_tg_destroy,
.target = secmark_tg_v0,
.targetsize = sizeof(struct xt_secmark_target_info),
.me = THIS_MODULE,
},
{
.name = "SECMARK",
.revision = 1,
.family = NFPROTO_IPV6,
.checkentry = secmark_tg_check_v1,
.destroy = secmark_tg_destroy,
.target = secmark_tg_v1,
.targetsize = sizeof(struct xt_secmark_target_info_v1),
.usersize = offsetof(struct xt_secmark_target_info_v1, secid),
.me = THIS_MODULE,
},
#endif
};
static int __init secmark_tg_init(void)


@@ -29,25 +29,38 @@ trace_tg(struct sk_buff *skb, const struct xt_action_param *par)
return XT_CONTINUE;
}
static struct xt_target trace_tg_reg __read_mostly = {
.name = "TRACE",
.revision = 0,
.family = NFPROTO_UNSPEC,
.table = "raw",
.target = trace_tg,
.checkentry = trace_tg_check,
.destroy = trace_tg_destroy,
.me = THIS_MODULE,
static struct xt_target trace_tg_reg[] __read_mostly = {
{
.name = "TRACE",
.revision = 0,
.family = NFPROTO_IPV4,
.table = "raw",
.target = trace_tg,
.checkentry = trace_tg_check,
.destroy = trace_tg_destroy,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "TRACE",
.revision = 0,
.family = NFPROTO_IPV6,
.table = "raw",
.target = trace_tg,
.checkentry = trace_tg_check,
.destroy = trace_tg_destroy,
.me = THIS_MODULE,
},
#endif
};
static int __init trace_tg_init(void)
{
return xt_register_target(&trace_tg_reg);
return xt_register_targets(trace_tg_reg, ARRAY_SIZE(trace_tg_reg));
}
static void __exit trace_tg_exit(void)
{
xt_unregister_target(&trace_tg_reg);
xt_unregister_targets(trace_tg_reg, ARRAY_SIZE(trace_tg_reg));
}
module_init(trace_tg_init);


@@ -208,13 +208,24 @@ static struct xt_match addrtype_mt_reg[] __read_mostly = {
},
{
.name = "addrtype",
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.revision = 1,
.match = addrtype_mt_v1,
.checkentry = addrtype_mt_checkentry_v1,
.matchsize = sizeof(struct xt_addrtype_info_v1),
.me = THIS_MODULE
}
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "addrtype",
.family = NFPROTO_IPV6,
.revision = 1,
.match = addrtype_mt_v1,
.checkentry = addrtype_mt_checkentry_v1,
.matchsize = sizeof(struct xt_addrtype_info_v1),
.me = THIS_MODULE
},
#endif
};
static int __init addrtype_mt_init(void)


@@ -146,24 +146,37 @@ static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par)
nf_ct_netns_put(par->net, par->family);
}
static struct xt_match xt_cluster_match __read_mostly = {
.name = "cluster",
.family = NFPROTO_UNSPEC,
.match = xt_cluster_mt,
.checkentry = xt_cluster_mt_checkentry,
.matchsize = sizeof(struct xt_cluster_match_info),
.destroy = xt_cluster_mt_destroy,
.me = THIS_MODULE,
static struct xt_match xt_cluster_match[] __read_mostly = {
{
.name = "cluster",
.family = NFPROTO_IPV4,
.match = xt_cluster_mt,
.checkentry = xt_cluster_mt_checkentry,
.matchsize = sizeof(struct xt_cluster_match_info),
.destroy = xt_cluster_mt_destroy,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "cluster",
.family = NFPROTO_IPV6,
.match = xt_cluster_mt,
.checkentry = xt_cluster_mt_checkentry,
.matchsize = sizeof(struct xt_cluster_match_info),
.destroy = xt_cluster_mt_destroy,
.me = THIS_MODULE,
},
#endif
};
static int __init xt_cluster_mt_init(void)
{
return xt_register_match(&xt_cluster_match);
return xt_register_matches(xt_cluster_match, ARRAY_SIZE(xt_cluster_match));
}
static void __exit xt_cluster_mt_fini(void)
{
xt_unregister_match(&xt_cluster_match);
xt_unregister_matches(xt_cluster_match, ARRAY_SIZE(xt_cluster_match));
}
MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>");


@@ -111,9 +111,11 @@ static int connbytes_mt_check(const struct xt_mtchk_param *par)
return -EINVAL;
ret = nf_ct_netns_get(par->net, par->family);
if (ret < 0)
if (ret < 0) {
pr_info_ratelimited("cannot load conntrack support for proto=%u\n",
par->family);
return ret;
}
/*
* This filter cannot function correctly unless connection tracking


@@ -117,26 +117,41 @@ static void connlimit_mt_destroy(const struct xt_mtdtor_param *par)
nf_ct_netns_put(par->net, par->family);
}
static struct xt_match connlimit_mt_reg __read_mostly = {
.name = "connlimit",
.revision = 1,
.family = NFPROTO_UNSPEC,
.checkentry = connlimit_mt_check,
.match = connlimit_mt,
.matchsize = sizeof(struct xt_connlimit_info),
.usersize = offsetof(struct xt_connlimit_info, data),
.destroy = connlimit_mt_destroy,
.me = THIS_MODULE,
static struct xt_match connlimit_mt_reg[] __read_mostly = {
{
.name = "connlimit",
.revision = 1,
.family = NFPROTO_IPV4,
.checkentry = connlimit_mt_check,
.match = connlimit_mt,
.matchsize = sizeof(struct xt_connlimit_info),
.usersize = offsetof(struct xt_connlimit_info, data),
.destroy = connlimit_mt_destroy,
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "connlimit",
.revision = 1,
.family = NFPROTO_IPV6,
.checkentry = connlimit_mt_check,
.match = connlimit_mt,
.matchsize = sizeof(struct xt_connlimit_info),
.usersize = offsetof(struct xt_connlimit_info, data),
.destroy = connlimit_mt_destroy,
.me = THIS_MODULE,
},
#endif
};
static int __init connlimit_mt_init(void)
{
return xt_register_match(&connlimit_mt_reg);
return xt_register_matches(connlimit_mt_reg, ARRAY_SIZE(connlimit_mt_reg));
}
static void __exit connlimit_mt_exit(void)
{
xt_unregister_match(&connlimit_mt_reg);
xt_unregister_matches(connlimit_mt_reg, ARRAY_SIZE(connlimit_mt_reg));
}
module_init(connlimit_mt_init);


@@ -151,7 +151,7 @@ static struct xt_target connmark_tg_reg[] __read_mostly = {
{
.name = "CONNMARK",
.revision = 1,
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.checkentry = connmark_tg_check,
.target = connmark_tg,
.targetsize = sizeof(struct xt_connmark_tginfo1),
@@ -161,13 +161,35 @@ static struct xt_target connmark_tg_reg[] __read_mostly = {
{
.name = "CONNMARK",
.revision = 2,
.family = NFPROTO_UNSPEC,
.family = NFPROTO_IPV4,
.checkentry = connmark_tg_check,
.target = connmark_tg_v2,
.targetsize = sizeof(struct xt_connmark_tginfo2),
.destroy = connmark_tg_destroy,
.me = THIS_MODULE,
}
},
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "CONNMARK",
.revision = 1,
.family = NFPROTO_IPV6,
.checkentry = connmark_tg_check,
.target = connmark_tg,
.targetsize = sizeof(struct xt_connmark_tginfo1),
.destroy = connmark_tg_destroy,
.me = THIS_MODULE,
},
{
.name = "CONNMARK",
.revision = 2,
.family = NFPROTO_IPV6,
.checkentry = connmark_tg_check,
.target = connmark_tg_v2,
.targetsize = sizeof(struct xt_connmark_tginfo2),
.destroy = connmark_tg_destroy,
.me = THIS_MODULE,
},
#endif
};
static struct xt_match connmark_mt_reg __read_mostly = {


@@ -39,13 +39,35 @@ mark_mt(const struct sk_buff *skb, struct xt_action_param *par)
return ((skb->mark & info->mask) == info->mark) ^ info->invert;
}
static struct xt_target mark_tg_reg __read_mostly = {
.name = "MARK",
.revision = 2,
.family = NFPROTO_UNSPEC,
.target = mark_tg,
.targetsize = sizeof(struct xt_mark_tginfo2),
.me = THIS_MODULE,
static struct xt_target mark_tg_reg[] __read_mostly = {
{
.name = "MARK",
.revision = 2,
.family = NFPROTO_IPV4,
.target = mark_tg,
.targetsize = sizeof(struct xt_mark_tginfo2),
.me = THIS_MODULE,
},
#if IS_ENABLED(CONFIG_IP_NF_ARPTABLES)
{
.name = "MARK",
.revision = 2,
.family = NFPROTO_ARP,
.target = mark_tg,
.targetsize = sizeof(struct xt_mark_tginfo2),
.me = THIS_MODULE,
},
#endif
#if IS_ENABLED(CONFIG_IP6_NF_IPTABLES)
{
.name = "MARK",
.revision = 2,
.family = NFPROTO_IPV6,
.target = mark_tg,
.targetsize = sizeof(struct xt_mark_tginfo2),
.me = THIS_MODULE,
},
#endif
};
static struct xt_match mark_mt_reg __read_mostly = {
@@ -61,12 +83,12 @@ static int __init mark_mt_init(void)
{
int ret;
ret = xt_register_target(&mark_tg_reg);
ret = xt_register_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
if (ret < 0)
return ret;
ret = xt_register_match(&mark_mt_reg);
if (ret < 0) {
xt_unregister_target(&mark_tg_reg);
xt_unregister_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
return ret;
}
return 0;
@@ -75,7 +97,7 @@ static int __init mark_mt_init(void)
static void __exit mark_mt_exit(void)
{
xt_unregister_match(&mark_mt_reg);
xt_unregister_target(&mark_tg_reg);
xt_unregister_targets(mark_tg_reg, ARRAY_SIZE(mark_tg_reg));
}
module_init(mark_mt_init);


@@ -2136,8 +2136,9 @@ void __netlink_clear_multicast_users(struct sock *ksk, unsigned int group)
{
struct sock *sk;
struct netlink_table *tbl = &nl_table[ksk->sk_protocol];
struct hlist_node *tmp;
sk_for_each_bound(sk, &tbl->mc_list)
sk_for_each_bound_safe(sk, tmp, &tbl->mc_list)
netlink_update_socket_mc(nlk_sk(sk), group, 0);
}
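The switch to sk_for_each_bound_safe() above matters because netlink_update_socket_mc() can drop the socket from mc_list when its last subscription goes away, so the plain iterator could walk through an unlinked node; the _safe variant caches the next node before the loop body runs. A generic sketch of the same idiom (struct item and prune_items() are invented for illustration):

/* Sketch only: demonstrates the remove-while-iterating idiom, not code from the patch. */
#include <linux/list.h>
#include <linux/slab.h>

struct item {
    struct hlist_node node;
    int key;
};

static void prune_items(struct hlist_head *items, int key)
{
    struct hlist_node *tmp; /* holds the next node before the body runs */
    struct item *it;

    hlist_for_each_entry_safe(it, tmp, items, node) {
        if (it->key == key) {
            hlist_del(&it->node); /* unlinking the current node is safe here */
            kfree(it);
        }
    }
}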


@@ -285,23 +285,17 @@ static int route_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
return err;
}
static const struct rtnl_msg_handler phonet_rtnl_msg_handlers[] __initdata_or_module = {
{THIS_MODULE, PF_PHONET, RTM_NEWADDR, addr_doit, NULL, 0},
{THIS_MODULE, PF_PHONET, RTM_DELADDR, addr_doit, NULL, 0},
{THIS_MODULE, PF_PHONET, RTM_GETADDR, NULL, getaddr_dumpit, 0},
{THIS_MODULE, PF_PHONET, RTM_NEWROUTE, route_doit, NULL, 0},
{THIS_MODULE, PF_PHONET, RTM_DELROUTE, route_doit, NULL, 0},
{THIS_MODULE, PF_PHONET, RTM_GETROUTE, NULL, route_dumpit,
RTNL_FLAG_DUMP_UNLOCKED},
};
int __init phonet_netlink_register(void)
{
int err = rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_NEWADDR,
addr_doit, NULL, 0);
if (err)
return err;
/* Further rtnl_register_module() cannot fail */
rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_DELADDR,
addr_doit, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_GETADDR,
NULL, getaddr_dumpit, 0);
rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_NEWROUTE,
route_doit, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_DELROUTE,
route_doit, NULL, 0);
rtnl_register_module(THIS_MODULE, PF_PHONET, RTM_GETROUTE,
NULL, route_dumpit, RTNL_FLAG_DUMP_UNLOCKED);
return 0;
return rtnl_register_many(phonet_rtnl_msg_handlers);
}


@@ -1056,7 +1056,7 @@ bool rxrpc_direct_abort(struct sk_buff *skb, enum rxrpc_abort_reason why,
int rxrpc_io_thread(void *data);
static inline void rxrpc_wake_up_io_thread(struct rxrpc_local *local)
{
wake_up_process(local->io_thread);
wake_up_process(READ_ONCE(local->io_thread));
}
static inline bool rxrpc_protocol_error(struct sk_buff *skb, enum rxrpc_abort_reason why)


@@ -27,11 +27,17 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
{
struct sk_buff_head *rx_queue;
struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
struct task_struct *io_thread;
if (unlikely(!local)) {
kfree_skb(skb);
return 0;
}
io_thread = READ_ONCE(local->io_thread);
if (!io_thread) {
kfree_skb(skb);
return 0;
}
if (skb->tstamp == 0)
skb->tstamp = ktime_get_real();
@@ -47,7 +53,7 @@ int rxrpc_encap_rcv(struct sock *udp_sk, struct sk_buff *skb)
#endif
skb_queue_tail(rx_queue, skb);
rxrpc_wake_up_io_thread(local);
wake_up_process(io_thread);
return 0;
}
@@ -565,7 +571,7 @@ int rxrpc_io_thread(void *data)
__set_current_state(TASK_RUNNING);
rxrpc_see_local(local, rxrpc_local_stop);
rxrpc_destroy_local(local);
local->io_thread = NULL;
WRITE_ONCE(local->io_thread, NULL);
rxrpc_see_local(local, rxrpc_local_stopped);
return 0;
}
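The rxrpc change pairs a WRITE_ONCE() when the I/O thread clears its own pointer with READ_ONCE() snapshots in the wake-up paths, so a packet arriving before the thread is published (or after it is cleared) is dropped instead of dereferencing an invalid task pointer. A rough sketch of that publish/snapshot idiom (struct worker and its helpers are invented, not rxrpc code):

/* Sketch only: illustrates the READ_ONCE/WRITE_ONCE snapshot pattern. */
#include <linux/compiler.h>
#include <linux/sched.h>

struct worker {
    struct task_struct *thread; /* written with WRITE_ONCE, read with READ_ONCE */
};

static void worker_kick(struct worker *w)
{
    struct task_struct *t = READ_ONCE(w->thread); /* snapshot once */

    if (t) /* may not be set up yet, or already torn down */
        wake_up_process(t);
}

static void worker_stop(struct worker *w)
{
    WRITE_ONCE(w->thread, NULL); /* publish teardown to concurrent readers */
}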

Some files were not shown because too many files have changed in this diff.