Commit Graph

Matt Macy
81be655266 iflib: ensure that tx interrupts are enabled, plus cleanups
Doing a 'dd' over iSCSI will reliably cause stalls. Tx
cleaning _should_ reliably happen as data is sent.
However, currently if the transmit queue fills, it will
wait until the iflib timer (hz/2) runs.

This change causes the tx taskq thread to be run
if there are completed descriptors.

While here:

- make timer interrupt delay a sysctl

- simplify txd_db_check handling

- comment on INTR types

Background on the change:

Initially doorbell updates were minimized by only writing to the register
on every fourth packet. If txq_drain would return without writing to the
doorbell it scheduled a callout on the next tick to do the doorbell write
to ensure that the write otherwise happened "soon". At that time a sysctl
was added for users to avoid the potential added latency by simply writing
to the doorbell register on every packet. This worked perfectly well for
e1000 and ixgbe ... and appeared to work well on ixl. However, as it
turned out, there was a race in this approach that would lock up the ixl MAC.
It was possible for a lower producer index to be written after a higher one.
On e1000 and ixgbe this was harmless - on ixl it was fatal. My initial
response was to add a lock around doorbell writes - fixing the problem but
adding an unacceptable amount of lock contention.

The next iteration was to use transmit interrupts to drive delayed doorbell
writes. If there were no packets in the queue, all doorbell writes would be
immediate; as the queue started to fill up, we could delay doorbell writes
further and further. At the start of drain, if we've cleaned any packets we
know we've moved the state machine along and we write the doorbell (an
obvious missing optimization was to skip that doorbell write if db_pending
is zero). This change required that tx interrupts be scheduled periodically
as opposed to just when the hardware txq was full. However, that just leads
to our next problem.
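
To make the batching decision concrete, here is a minimal, self-contained C
sketch of the drain-time logic described above. The structure fields,
function name and thresholds are illustrative assumptions, not the actual
iflib code.

    /*
     * Illustrative sketch only (hypothetical names/thresholds, not iflib):
     * decide during drain whether to write the doorbell now or defer it.
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct txq_snapshot {
        uint32_t ring_size;   /* total descriptors in the hw ring */
        uint32_t in_use;      /* descriptors currently owned by the hw */
        uint32_t db_pending;  /* descriptors queued since last doorbell */
        bool     cleaned;     /* completed descriptors reclaimed this pass */
    };

    static bool
    should_write_doorbell(const struct txq_snapshot *q)
    {
        if (q->db_pending == 0)
            return (false);   /* the "obvious missing optimization" */
        if (q->cleaned)
            return (true);    /* state machine advanced: flush now */
        if (q->in_use < q->ring_size / 4)
            return (true);    /* nearly empty ring: favor latency */
        /* As the ring fills, batch more aggressively before ringing. */
        return (q->db_pending >= q->ring_size / 8);
    }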

Initially dedicated msix vectors were used for both tx and rx. However, it
was often possible to use up all available vectors before we set up all the
queues we wanted. By having rx and tx share a vector for a given queue we
could halve the number of vectors used by a given configuration. The problem
here is that with this change only e1000 passed the necessary value to have
the fast interrupt drive tx when appropriate.

Reported by: mav@
Tested by: mav@
Reviewed by:    gallatin@
MFC after:      1 month
Sponsored by:   iXsystems
Differential Revision:  https://reviews.freebsd.org/D27683
2021-01-07 14:07:35 -08:00
Alexander V. Chernikov
d68cf57b7f Refactor rt_addrmsg() and rt_routemsg().
Summary:
* Refactor rt_addrmsg(): make V_rt_add_addr_allfibs decision locally.
* Fix rt_routemsg() and multipath by accepting a nexthop instead of an interface pointer.
* Refactor rtsock_routemsg(): avoid accessing rtentry fields directly.
* Simplify in_addprefix() by moving the prefix search to a separate function.

Reviewers: #network

Subscribers: imp, ae, bz

Differential Revision: https://reviews.freebsd.org/D28011
2021-01-07 19:38:19 +00:00
Kristof Provost
5a3b9507d7 pf: Convert pfi_kkif to use counter_u64
Improve caching behaviour by using counter_u64 rather than variables
shared between cores.

The result of converting all counters to counter(9) (i.e. this full
patch series) is a significant improvement in throughput. As tested by
olivier@, on Intel Xeon E5-2697Av4 (16 cores, 32 threads) hardware with
Mellanox ConnectX-4 MCX416A-CCAT (100GBase-SR4) nics we see:

x FreeBSD 20201223: inet packets-per-second
+ FreeBSD 20201223 with pf patches: inet packets-per-second
+--------------------------------------------------------------------------+
|                                                                        + |
| xx                                                                     + |
|xxx                                                                    +++|
||A|                                                                       |
|                                                                       |A||
+--------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   5       9216962       9526356       9343902     9371057.6     116720.36
+   5      19427190      19698400      19502922      19546509     109084.92
Difference at 95.0% confidence
        1.01755e+07 +/- 164756
        108.584% +/- 2.9359%
        (Student's t, pooled s = 112967)
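
For readers unfamiliar with counter(9), the pattern behind this series looks
roughly like the sketch below. The struct and helper names are hypothetical;
only the counter_u64_* calls are the real KPI, and the real pfi_kkif layout
differs.

    /* Hedged sketch of counter(9) usage; kif_stats is illustrative. */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/counter.h>
    #include <sys/malloc.h>

    struct kif_stats {
        counter_u64_t pkts;
        counter_u64_t bytes;
    };

    static void
    kif_stats_init(struct kif_stats *st)
    {
        st->pkts = counter_u64_alloc(M_WAITOK);
        st->bytes = counter_u64_alloc(M_WAITOK);
    }

    static void
    kif_stats_update(struct kif_stats *st, uint64_t len)
    {
        /* Per-CPU increments: no cache line shared between cores. */
        counter_u64_add(st->pkts, 1);
        counter_u64_add(st->bytes, len);
    }

    static uint64_t
    kif_stats_pkts(struct kif_stats *st)
    {
        /* The cross-CPU sum is only computed when the value is read. */
        return (counter_u64_fetch(st->pkts));
    }

    static void
    kif_stats_fini(struct kif_stats *st)
    {
        counter_u64_free(st->pkts);
        counter_u64_free(st->bytes);
    }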

Reviewed by:	philip
MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27763
2021-01-05 23:35:37 +01:00
Kristof Provost
26c841e2a4 pf: Allocate and free pfi_kkif in separate functions
Factor out allocating and freeing pfi_kkif structures. This will be
useful when we change the counters to be counter_u64, so we don't have
to deal with that complexity in the multiple locations where we allocate
pfi_kkif structures.

No functional change.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27762
2021-01-05 23:35:37 +01:00
Kristof Provost
320c11165b pf: Split pfi_kif into a user and kernel space structure
No functional change.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27761
2021-01-05 23:35:37 +01:00
Kristof Provost
c3adacdad4 pf: Change pf_krule counters to use counter_u64
This improves the cache behaviour of pf and results in improved
throughput.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27760
2021-01-05 23:35:37 +01:00
Kristof Provost
c7bdafe2f1 pf: Remove unused fields from pf_krule
The u_* counters are used only to communicate with userspace, as
userspace cannot use counter_u64. As pf_krule is not passed to userspace,
these fields are now obsolete.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27759
2021-01-05 23:35:36 +01:00
Kristof Provost
e86bddea9f pf: Split pf_rule into kernel and user space versions
No functional change intended.

MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27758
2021-01-05 23:35:36 +01:00
Kristof Provost
dc865dae89 pf: Migrate pf_rule and related structs to pf.h
As part of the split between user and kernel mode structures we're
moving all user space usable definitions into pf.h.

No functional change intended.

MFC after:      2 weeks
Sponsored by:   Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27757
2021-01-05 23:35:36 +01:00
Kristof Provost
fbbf270eef pf: Use counter_u64 in pf_src_node
Reviewed by:	philip
MFC after:      2 weeks
Sponsored by:   Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27756
2021-01-05 23:35:36 +01:00
Kristof Provost
17ad7334ca pf: Split pf_src_node into a kernel and userspace struct
Introduce a kernel version of struct pf_src_node (pf_ksrc_node).

This will allow us to improve the in-kernel data structure without
breaking userspace compatibility.

Reviewed by:	philip
MFC after:	2 weeks
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27707
2021-01-05 23:35:36 +01:00
Alexander V. Chernikov
9c0ff6a8bb Remove now-unused RT_GATEWAY* definitions.
They were used to simplify the nexthop transition and hence are not needed
 anymore.
2021-01-04 21:45:46 +00:00
Hans Petter Selasky
747feea146 Streamline the infiniband code according to the ethernet code.
Fix LINT-NOIP kernel build.

Submitted by:	rlibby @
Differential Revision:	https://reviews.freebsd.org/D27861
MFC after:	1 week
Sponsored by:	Mellanox Technologies // NVIDIA Networking
2020-12-31 10:07:02 +01:00
Hans Petter Selasky
ec52ff6d14 Streamline the infiniband code according to the ethernet code.
Specifically implement the if_requestencap callback function for infiniband.
Most of the changes are simply a cut and paste of the equivalent ethernet part.

Reviewed by:	melifaro @
Differential Revision:	https://reviews.freebsd.org/D27631
MFC after:	1 week
Sponsored by:	Mellanox Technologies // NVIDIA Networking
2020-12-29 18:01:57 +01:00
Hans Petter Selasky
19ecb5e8da Fix for IPoIB over lagg(4).
Need to update both the link layer address and the broadcast address when the active link changes for IP over infiniband.
This is because the broadcast address contains the so-called P-key, which is interface dependent.

Reviewed by:	kib @
Differential Revision:	https://reviews.freebsd.org/D27658
MFC after:	1 week
Sponsored by:	Mellanox Technologies // NVIDIA Networking
2020-12-29 17:35:06 +01:00
Ryan Libby
833dbf1e22 route: quiet -Wredundant-decls
Remove declaration duplicated in
f5baf8bb12

Reviewed by:	melifaro
Sponsored by:	Dell EMC Isilon
Differential Revision:	https://reviews.freebsd.org/D27790
2020-12-27 16:32:27 -08:00
Alexander V. Chernikov
f733d9701b Fix default route handling in radix4_lockless algo.
Improve nexthop debugging.

Reported by:	Florian Smeets <flo at smeets.xyz>
2020-12-26 22:51:02 +00:00
Alexander V. Chernikov
4e19e0d92a Use light-weight versions of routing lookup functions in ng_netflow.
Use the recently-added combination of `fib[46]_lookup_rt()`, which
 returns the rtentry & raw nexthop, with `rt_get_inet[6]_plen()`, which
 returns the address/prefix length of the prefix inside `rt`.

Add an `nhop_select_func()` wrapper around the inlined `nhop_select()` to
 allow callers external to the routing subsystem to select the proper
 nexthop from the multipath group without including internal headers.

The new calls do not require reference-counting objects and reduce
 the amount of copied/processed rtentry data.

Differential Revision: https://reviews.freebsd.org/D27675
2020-12-26 11:27:38 +00:00
Alexander V. Chernikov
f5baf8bb12 Add modular fib lookup framework.
This change introduces a framework that allows longest prefix match (LPM)
 lookup algorithms to be dynamically attached or detached in order to
 speed up datapath route table lookups.

The framework takes care of handling initial synchronisation,
 route subscription, nhop/nhop group references and indexing,
 dataplane attachments and fib instance algorithm setup/teardown.
The framework features automatic algorithm selection, allowing it to
 pick the best matching algorithm on-the-fly based on the number of
 routes in the routing table.

Currently the framework code is guarded by the FIB_ALGO config option.
The idea is to enable it by default in the next couple of weeks.

The following algorithms are provided by default:
IPv4:
* bsearch4 (lockless binary search in a special IP array), tailored for
  small-fib (<16 routes); see the sketch after this list
* radix4_lockless (lockless immutable radix, re-created on every rtable change),
  tailored for small-fib (<1000 routes)
* radix4 (base system radix backend)
* dpdk_lpm4 (DPDK DIR24-8-based lookups), lockless data structure, optimized
  for large-fib (D27412)
IPv6:
* radix6_lockless (lockless immutable radix, re-created on every rtable change),
  tailored for small-fib (<1000 routes)
* radix6 (base system radix backend)
* dpdk_lpm6 (DPDK DIR24-8-based lookups), lockless data structure, optimized
  for large-fib (D27412)
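
As an illustration of the bsearch4 idea only (the data layout and names
below are assumptions, not the kernel implementation): a small FIB can be
flattened into disjoint address intervals sorted by their start address, so
a lookup becomes a plain, lockless binary search.

    #include <stddef.h>
    #include <stdint.h>

    struct fib_interval {
        uint32_t start;      /* first address covered, host byte order */
        uint16_t nhop_idx;   /* nexthop index; 0 means "no route" */
    };

    /* Return the nexthop index for dst: the last interval with start <= dst. */
    static uint16_t
    interval_lookup(const struct fib_interval *fib, size_t n, uint32_t dst)
    {
        size_t lo = 0, hi = n;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (fib[mid].start <= dst)
                lo = mid + 1;
            else
                hi = mid;
        }
        return (lo == 0 ? 0 : fib[lo - 1].nhop_idx);
    }

Rebuilding the array on every rtable change keeps the lookup itself
lockless, the same trade-off the radix*_lockless algorithms make.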

Performance changes:
Micro benchmarks (I7-7660U, single-core lookups, 2048k dst, code in D27604):
IPv4:
8 routes:
  radix4: ~20mpps
  radix4_lockless: ~24.8mpps
  bsearch4: ~69mpps
  dpdk_lpm4: ~67 mpps
700k routes:
  radix4_lockless: 3.3mpps
  dpdk_lpm4: 46mpps

IPv6:
8 routes:
  radix6_lockless: ~20mpps
  dpdk_lpm6: ~70mpps
100k routes:
  radix6_lockless: 13.9mpps
  dpdk_lpm6: 57mpps

Forwarding benchmarks:
+ 10-15% IPv4 forwarding performance (small-fib, bsearch4)
+ 25% IPv4 forwarding performance (full-view, dpdk_lpm4)
+ 20% IPv6 forwarding performance (full-view, dpdk_lpm6)

Control:
The framework adds the following runtime sysctls:

List algos
* net.route.algo.inet.algo_list: bsearch4, radix4_lockless, radix4
* net.route.algo.inet6.algo_list: radix6_lockless, radix6, dpdk_lpm6
Debug level (7=LOG_DEBUG, per-route)
net.route.algo.debug_level: 5
Algo selection (currently only for fib 0):
net.route.algo.inet.algo: bsearch4
net.route.algo.inet6.algo: radix6_lockless

Support for manually changing algos in non-default fib will be added
soon. Some sysctl names will be changed in the near future.

Differential Revision: https://reviews.freebsd.org/D27401
2020-12-25 11:33:17 +00:00
Ryan Libby
2fb4a03d55 rtsock: quiet -Wunused-variable in LINT-NOIP kernels
Fixup after r368769 / d68fb8d978.

Reported by:	mjg
Reviewed by:	melifaro
Sponsored by:	Dell EMC Isilon
Differential Revision:	https://reviews.freebsd.org/D27730
2020-12-24 12:34:18 -08:00
Mark Johnston
92be2847e8 rtsock: Avoid copying uninitialized padding bytes
When copying sockaddrs out to userspace, we pad them to a multiple of
the platform alignment (sizeof(long)).  However, some sockaddr sizes,
such as struct sockaddr_dl, are not an integer multiple of the
alignment, so we may end up copying out uninitialized bytes.

Fix this by always bouncing through a pre-zeroed sockaddr_storage.
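
A minimal sketch of the bounce-buffer idea, assuming a kernel helper along
these lines (the function name is hypothetical; copyout(9), roundup2() and
sockaddr_storage are real):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/socket.h>

    /* Copy sa out padded to a multiple of sizeof(long), padding zeroed.
     * Assumes sa->sa_len <= sizeof(struct sockaddr_storage). */
    static int
    copyout_sockaddr_padded(const struct sockaddr *sa, void *uaddr)
    {
        struct sockaddr_storage ss;
        size_t padded = roundup2(sa->sa_len, sizeof(long));

        memset(&ss, 0, sizeof(ss));    /* pre-zeroed: padding bytes are 0 */
        memcpy(&ss, sa, sa->sa_len);   /* only sa_len bytes are real data */
        return (copyout(&ss, uaddr, padded));
    }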

Reported by:	KASAN
Reviewed by:	melifaro
MFC after:	3 days
Sponsored by:	The FreeBSD Foundation
Differential Revision:	https://reviews.freebsd.org/D27729
2020-12-23 11:16:40 -05:00
Kristof Provost
1c00efe98e pf: Use counter(9) for pf_state byte/packet tracking
This improves cache behaviour by not writing to the same variable from
multiple cores simultaneously.

pf_state is only used in the kernel, so it can be safely modified.

Reviewed by:	Lutz Donnerhacke, philip
MFC after:	1 week
Sponsored by:	Orange Business Services
Differential Revision:	https://reviews.freebsd.org/D27661
2020-12-23 12:03:21 +01:00
Kristof Provost
c3f69af03a pf: Fix unaligned checksum updates
The algorithm we use to update checksums only works correctly if the
updated data is aligned on 16-bit boundaries (relative to the start of
the packet).

Import the OpenBSD fix for this issue.
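
For context, the incremental update works on 16-bit words, which is why the
patched data must sit on even offsets within the packet. The sketch below is
a simplified version of that update (RFC 1624 style), not necessarily the
exact code this commit touches.

    #include <stdint.h>

    /* Fold a change of one aligned 16-bit word (old -> new) into an
     * existing one's-complement checksum field.  When the modified bytes
     * sit at an odd offset, old/new must be the containing aligned words,
     * otherwise the result is wrong -- the bug fixed here. */
    static uint16_t
    cksum_fixup(uint16_t cksum, uint16_t old, uint16_t new)
    {
        uint32_t l;

        l = cksum + old - new;
        l = (l >> 16) + (l & 0xffff);
        return ((uint16_t)(l & 0xffff));
    }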

PR:		240416
Obtained from:	OpenBSD
MFC after:	1 week
Reviewed by:	tuexen (previous version)
Differential Revision:	https://reviews.freebsd.org/D27696
2020-12-23 12:03:20 +01:00
Hans Petter Selasky
ddce63fcb6 Remove unneeded variable initialization.
And switch from int to bool while at it.

Reviewed by:	melifaro@
Differential Revision:	https://reviews.freebsd.org/D27725
MFC after:	1 week
Sponsored by:	Mellanox Technologies // NVIDIA Networking
2020-12-23 12:04:46 +01:00
Konstantin Belousov
994e47023a vxlan: stop checking CSUM_ENCAP_VXLAN when converting inner CSUM flags into normal, for decapsulation.
The packet, if processed at this point, was already parsed to be UDP
directed to a vxlan port.

Connect-X 4+ does not provide an easy method to infer which parser
processed the packet, so the driver cannot set the flag without a lot of
effort, only to satisfy the formal requirements.

Reviewed by:	bryanv, np
Sponsored by:	Mellanox Technologies/NVidia Networking
Differential revision:	https://reviews.freebsd.org/D27449
MFC after:	1 week
2020-12-23 10:54:06 +02:00
Alexander V. Chernikov
d68fb8d978 Switch direct rt field accesses in rtsock.c to newly-created field accessors.
rtsock code was built around the assumption that each rtentry record
 in the system radix tree is a ready-to-use sockaddr. This assumption
 turned out to be not quite true:
* masks have their length tweaked, so we have the rtsock_fix_netmask() hack
* IPv6 addresses have their scope embedded, so we have another explicit
 de-embedding hack.

Change the code to decouple rtentry internals from rtsock code using
 newly-created rtentry accessors. This will eventually allow eliminating
 both of the hacks and changing the rtentry dst/mask format.

Differential Revision:	https://reviews.freebsd.org/D27451
2020-12-18 22:00:57 +00:00
Brooks Davis
f3f2ee76ad style(9): Correct whitespace in struct definitions
struct ifconf and struct ifreq use the odd style "struct<tab>foo".
struct ifdrv seems to have tried to follow this but was committed with
spaces in place of most tabs resulting in "struct<space><space>ifdrv".

MFC after:	3 days
2020-12-11 01:00:07 +00:00
Gleb Smirnoff
5ee33a9076 Fixup r368446 with KERN_TLS. 2020-12-08 23:54:09 +00:00
Gleb Smirnoff
e1074ed6a0 The list of ports in the configuration path shall be protected by locks;
epoch shall be used only for the fast path.  Thus use LAGG_XLOCK() in
lagg_[un]register_vlan.  This fixes a sleeping-in-epoch panic.

PR:		240609
2020-12-08 16:46:00 +00:00
Gleb Smirnoff
87bf9b9cbe Convert LAGG_RLOCK() to NET_EPOCH_ENTER(). No functional changes. 2020-12-08 16:36:46 +00:00
Mark Johnston
c065d4e5e9 iflib: Avoid leaking the freelist bitmaps upon driver detach
Submitted by:	Sai Rajesh Tallamraju <stallamr@netapp.com>
MFC after:	2 weeks
Sponsored by:	NetApp, Inc.
Differential Revision:	https://reviews.freebsd.org/D27342
2020-12-07 14:53:14 +00:00
Mark Johnston
102540192c iflib: Detach tasks upon device registration failure
In some error paths we would fail to detach from the iflib taskqueue
groups.  Also move the detach code into its own subroutine instead of
duplicating it.

Submitted by:	Sai Rajesh Tallamraju <stallamr@netapp.com>
MFC after:	2 weeks
Sponsored by:	NetApp, Inc.
Differential Revision:	https://reviews.freebsd.org/D27342
2020-12-07 14:52:57 +00:00
Alexander V. Chernikov
df9053920f Add IPv4/IPv6 rtentry prefix accessors.
Multiple consumers like ipfw, netflow or new route lookup algorithms
 need to get the prefix data out of struct rtentry.
Instead of providing direct access to the rtentry, create IPv4/IPv6
 accessors to abstract struct rtentry internals and avoid including
 internal routing headers for external consumers.

While here, move struct route_nhop_data to the public header, so external
 consumers can actually use lookup functions returning rt&nhop data.

Differential Revision:	https://reviews.freebsd.org/D27416
2020-12-03 22:23:57 +00:00
Kristof Provost
7f883a9b5b net: Revert vnet/epair cleanup race mitigation
Revert the mitigation code for the vnet/epair cleanup race (done in r365457).
r368237 introduced a more reliable fix.

MFC after:	2 weeks
Sponsored by:	Modirum MDPay
2020-12-01 16:34:43 +00:00
Kristof Provost
e133271fc1 if: Fix panic when destroying vnet and epair simultaneously
When destroying a vnet and an epair (with one end in the vnet) we often
panicked. This was the result of the destruction of the epair, which destroys
both ends simultaneously, happening while vnet_if_return() was moving the
struct ifnet to its home vnet. This can result in a freed ifnet being re-added
to the home vnet V_ifnet list. That in turn panics the next time the ifnet is
used.

Prevent this race by ensuring that vnet_if_return() cannot run at the same time
as if_detach() or epair_clone_destroy().

PR:		238870, 234985, 244703, 250870
MFC after:	2 weeks
Sponsored by:	Modirum MDPay
Differential Revision:	https://reviews.freebsd.org/D27378
2020-12-01 16:23:59 +00:00
Alexander V. Chernikov
77df2c21cb Renumber NHR_* flags after NHR_IFAIF removal in r368127.
Suggested by:	rpokala
2020-11-30 21:42:55 +00:00
Alexander V. Chernikov
d1d941c5b9 Remove RADIX_MPATH config option.
ROUTE_MPATH is the new config option controlling the new multipath routing
 implementation. Remove the last pieces of RADIX_MPATH-related code and
 the config option.

Reviewed by:	glebius
Differential Revision:	https://reviews.freebsd.org/D27244
2020-11-29 19:43:33 +00:00
Matt Macy
2338da0373 Import kernel WireGuard support
Data path largely shared with the OpenBSD implementation by
Matt Dunwoodie <ncon@nconroy.net>

Reviewed by:	grehan@freebsd.org
MFC after:	1 month
Sponsored by:	Rubicon LLC, (Netgate)
Differential Revision:	https://reviews.freebsd.org/D26137
2020-11-29 19:38:03 +00:00
Alexander V. Chernikov
3b1654cb14 Introduce rib_walk_ext_internal() to allow iteration with rnh pointer.
This handles the case when the rib is not yet attached to, or already
 detached from, the system rib array.

Differential Revision:	https://reviews.freebsd.org/D27406
2020-11-29 13:54:49 +00:00
Alexander V. Chernikov
f47fa26065 Add nhop_ref_any() to unify referencing nhop or nexthop group.
It allows code within the routing subsystem to transparently reference nexthops
 and nexthop groups, similar to nhop_free_any(), abstracting ROUTE_MPATH
 details.

Differential Revision:	https://reviews.freebsd.org/D27410
2020-11-29 13:52:06 +00:00
Alexander V. Chernikov
b712e3e343 Refactor fib4/fib6 functions.
No functional changes.

* Make the lookup path of fib<4|6>_lookup_debugnet() separate functions
 (fib<46>_lookup_rt()). These will be used in the control plane code
 requiring unlocked radix operations and the actual prefix pointer.
* Make the lookup part of fib<4|6>_check_urpf() separate functions.
 This change simplifies the switch to alternative lookup implementations,
 which helps the introduction of algorithmic lookups.
* While here, use static initializers for IPv4/IPv6 keys

Differential Revision:	https://reviews.freebsd.org/D27405
2020-11-29 13:41:49 +00:00
Alexander V. Chernikov
98d5c4e5c8 Add tracking for rib/nhops/nhgrp objects and provide cumulative number accessors.
The resulting KPI can be used by routing table consumers to estimate the required
 scale for route table export.

* Add tracking for rib routes
* Add accessors for number of nexthops/nexthop objects
* Simplify rib_unsubscribe: store the rnh we're attached to instead of looking it up
 again on destruction. This helps in the cases when the rnh is not yet linked or is already unlinked.

Differential Revision:	https://reviews.freebsd.org/D27404
2020-11-29 13:27:24 +00:00
Alexander V. Chernikov
ef6ef7e5da Add nhgrp_get_idx() as a counterpart for nhop_get_idx().
It allows the routing-related code to reference nexthop groups by index
 instead of storing a pointer.
2020-11-28 15:46:40 +00:00
Alexander V. Chernikov
7a6dc73c98 Clean up nexthop request flags:
* remove NHR_IFAIF as it was used by the previous version of the nexthop KPI
* update NHR_REF description
2020-11-28 15:11:59 +00:00
Konstantin Belousov
cd85379104 Make MAXPHYS tunable. Bump MAXPHYS to 1M.
Replace MAXPHYS by runtime variable maxphys. It is initialized from
MAXPHYS by default, but can be also adjusted with the tunable kern.maxphys.

Make b_pages[] array in struct buf flexible.  Size b_pages[] for buffer
cache buffers exactly to atop(maxbcachebuf) (currently it is sized to
atop(MAXPHYS)), and b_pages[] for pbufs is sized to atop(maxphys) + 1.
The +1 for pbufs allows several pbuf consumers, among them vmapbuf(),
to use unaligned buffers still sized to maxphys, esp. when such
buffers come from userspace (*).  Overall, we save a significant amount
of otherwise wasted memory in b_pages[] for buffer cache buffers,
while bumping MAXPHYS to desired high value.

Eliminate all direct uses of the MAXPHYS constant in kernel and driver
sources, except the place which initializes maxphys.  Some random (and
arguably weird) uses of MAXPHYS, e.g. in the linuxolator, are converted
straight.  Some drivers, which use MAXPHYS to size embedded structures,
get a private MAXPHYS-like constant; their conversion is out of scope
for this work.

Changes to cam/, dev/ahci, dev/ata, dev/mpr, dev/mpt, dev/mvs,
dev/siis, were either submitted by, or based on changes by, mav.

Suggested by: mav (*)
Reviewed by:	imp, mav, imp, mckusick, scottl (intermediate versions)
Tested by:	pho
Sponsored by:	The FreeBSD Foundation
Differential revision:	https://reviews.freebsd.org/D27225
2020-11-28 12:12:51 +00:00
Kristof Provost
bca0e1d2ac if: Fix non-VIMAGE build
if_link_ifnet() and if_unlink_ifnet() are needed even when VIMAGE is not
enabled.

MFC after:	2 weeks
Sponsored by:	Modirum MDPay
2020-11-25 17:15:24 +00:00
Kristof Provost
a779388f8b if: Protect V_ifnet in vnet_if_return()
When we terminate a vnet (i.e. jail) we move interfaces back to their home
vnet. We need to protect our access to the V_ifnet CK_LIST.

We could enter NET_EPOCH, but if_detach_internal() (called from if_vmove())
waits for net epoch callback completion. That's not possible from NET_EPOCH.
Instead, we take the IFNET_WLOCK, build a list of the interfaces that need to
move and, once we've released the lock, move them back to their home vnet.

We cannot hold the IFNET_WLOCK() during if_vmove(), because that results in a
LOR between ifnet_sx, in_multi_sx and iflib ctx lock.

Separate out moving the ifp into or out of V_ifnet, so we can hold the lock as
we do the list manipulation, but do not hold it while we call if_vmove().
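
The shape of the fix, as a hedged sketch with hypothetical names (the real
code operates on struct ifnet and the V_ifnet CK_LIST): unlink the
candidates while holding the write lock, then perform the sleep-prone move
after the lock is dropped.

    #include <stddef.h>

    struct iface {
        struct iface *pending_next;    /* scratch linkage for local list */
    };

    /* Stand-ins for IFNET_WLOCK()/IFNET_WUNLOCK(), the V_ifnet unlink and
     * if_vmove(); all hypothetical. */
    void ifnet_wlock(void);
    void ifnet_wunlock(void);
    struct iface *unlink_next_returning_iface(void);
    void move_to_home_vnet(struct iface *);

    void
    return_ifaces_to_home_vnet(void)
    {
        struct iface *pending = NULL, *ifp;

        ifnet_wlock();
        while ((ifp = unlink_next_returning_iface()) != NULL) {
            ifp->pending_next = pending;    /* unlinked, queued privately */
            pending = ifp;
        }
        ifnet_wunlock();

        /* Lock dropped: the move may sleep or wait for epoch callbacks. */
        while ((ifp = pending) != NULL) {
            pending = ifp->pending_next;
            move_to_home_vnet(ifp);
        }
    }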

Reviewed by:	melifaro
MFC after:	2 weeks
Sponsored by:	Modirum MDPay
Differential Revision:	https://reviews.freebsd.org/D27279
2020-11-25 15:07:22 +00:00
Kristof Provost
a60100fdfc if: Remove ifnet_rwlock
It no longer serves any purpose, as evidenced by the fact that we never take it
without ifnet_sxlock.

Sponsored by:	Modirum MDPay
Differential Revision:	https://reviews.freebsd.org/D27278
2020-11-25 10:56:38 +00:00
Alexander V. Chernikov
7511a63825 Refactor rib iterator functions.
* Make rib_walk() order of arguments consistent with the rest of RIB api
* Add rib_walk_ext(), allowing a callback to be executed before/after iteration.
* Rename rt_foreach_fib_walk_del -> rib_foreach_table_walk_del
* Rename rt_forach_fib_walk -> rib_foreach_table_walk
* Move rib_foreach_table_walk{_del} to route/route_helpers.c
* Slightly refactor rib_foreach_table_walk{_del} to make the implementation
 consistent and prepare for upcoming iterator optimizations.

Differential Revision:	https://reviews.freebsd.org/D27219
2020-11-22 20:21:10 +00:00
Mitchell Horne
70af7ce99a Make net/ifq.h C++ friendly
Don't use "new" as an identifier, and add explicit casts from void *.

As a general policy, FreeBSD doesn't make any C++ compatibility
guarantees for kernel headers like it does for userland, but it is a
small effort to do so in this case, to the benefit of a downstream
consumer (NetApp).

Reviewed by:	rscheff
Sponsored by:	NetApp, Inc.
Sponsored by:	Klara, Inc.
Differential Revision:	https://reviews.freebsd.org/D27286
2020-11-20 14:45:45 +00:00