commit d4438ce68b
Author: Eric Dumazet <edumazet@google.com>
CommitDate: 2025-03-06 15:26:02 -08:00

inet: call inet6_ehashfn() once from inet6_hash_connect()
Calling inet6_ehashfn() from __inet6_check_established()
has a significant performance cost, as shown in the Tested section below.

After the previous patch, we can compute the hash for port 0
in inet6_hash_connect(), and derive each hash in
__inet_hash_connect() from this initial hash:

hash(saddr, lport, daddr, dport) == hash(saddr, 0, daddr, dport) + lport

Apply the same principle to __inet_check_established(),
although inet_ehashfn() has a smaller cost; a sketch of the
idea follows.
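
A minimal userspace sketch of the idea (illustration only, not the
kernel code: ehashfn_base() and its mixing constants are hypothetical
stand-ins for the port-0 hash inet6_ehashfn(saddr, 0, daddr, dport),
and the additive property above is assumed). The full hash runs once
per connect() attempt; each candidate port then costs one addition:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for inet6_ehashfn(saddr, 0, daddr, dport);
 * toy mixing constants, for illustration only.
 */
static uint32_t ehashfn_base(uint32_t saddr, uint32_t daddr, uint16_t dport)
{
	uint32_t h = saddr * 2654435761u;

	h ^= daddr * 2246822519u;
	h += dport * 2654435761u;
	return h;
}

int main(void)
{
	uint32_t saddr = 0x0a000001, daddr = 0x0a000002; /* 10.0.0.1 -> 10.0.0.2 */
	uint16_t dport = 80;

	/* The full hash is computed once, before the port search ... */
	uint32_t hash0 = ehashfn_base(saddr, daddr, dport);

	/* ... and each candidate local port derives its hash with a single
	 * addition instead of re-running the full hash:
	 * hash(saddr, lport, daddr, dport) == hash(saddr, 0, daddr, dport) + lport
	 */
	for (uint32_t lport = 32768; lport < 32772; lport++) {
		uint32_t hash = hash0 + lport;

		printf("lport %u -> hash %08x\n",
		       (unsigned)lport, (unsigned)hash);
	}
	return 0;
}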

Tested:

Server: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog
Client: ulimit -n 40000; neper/tcp_crr -T 200 -F 30000 -6 --nolog -c -H server

Before this patch:

  utime_start=0.286131
  utime_end=4.378886
  stime_start=11.952556
  stime_end=1991.655533
  num_transactions=1446830
  latency_min=0.001061085
  latency_max=12.075275028
  latency_mean=0.376375302
  latency_stddev=1.361969596
  num_samples=306383
  throughput=151866.56

perf top:

 50.01%  [kernel]       [k] __inet6_check_established
 20.65%  [kernel]       [k] __inet_hash_connect
 15.81%  [kernel]       [k] inet6_ehashfn
  2.92%  [kernel]       [k] rcu_all_qs
  2.34%  [kernel]       [k] __cond_resched
  0.50%  [kernel]       [k] _raw_spin_lock
  0.34%  [kernel]       [k] sched_balance_trigger
  0.24%  [kernel]       [k] queued_spin_lock_slowpath

After this patch:

  utime_start=0.315047
  utime_end=9.257617
  stime_start=7.041489
  stime_end=1923.688387
  num_transactions=3057968
  latency_min=0.003041375
  latency_max=7.056589232
  latency_mean=0.141075048    # Better latency metrics
  latency_stddev=0.526900516
  num_samples=312996
  throughput=320677.21        # 111% increase, and 229% for the series

perf top: inet6_ehashfn() is no longer seen.

 39.67%  [kernel]       [k] __inet_hash_connect
 37.06%  [kernel]       [k] __inet6_check_established
  4.79%  [kernel]       [k] rcu_all_qs
  3.82%  [kernel]       [k] __cond_resched
  1.76%  [kernel]       [k] sched_balance_domains
  0.82%  [kernel]       [k] _raw_spin_lock
  0.81%  [kernel]       [k] sched_balance_rq
  0.81%  [kernel]       [k] sched_balance_trigger
  0.76%  [kernel]       [k] queued_spin_lock_slowpath

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Tested-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Link: https://patch.msgid.link/20250305034550.879255-3-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>