"df", "rf" and "offset". This allows to match on specific
bits of ip_off field.
For compatibility reasons lack of keyword means "offset".
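For example, assuming the new keyword syntax, a rule that matches only
packets with a non-zero fragment offset could look like this (the rule
number and action are illustrative):

    ipfw add 100 count ip from any to any frag offset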
Reviewed by: ae
Differential Revision: https://reviews.freebsd.org/D26021
Upper-level protocols defer checksum calculation in the hope that the
network card provides checksum offloading. The CSUM_DELAY_DATA flag is
used to indicate that checksum calculation was deferred, and the IP
output routine checks for this flag before passing the mbuf to the lower
layer. Forwarded packets do not have this flag.
NAT64 uses checksum adjustment when it translates IP headers.
In most cases NAT64 handles forwarded packets, but when it handles
locally originated packets we need to finish the deferred checksum
calculation in order to adjust it correctly.
Add a check for the presence of the CSUM_DELAY_DATA flag and finish the
checksum calculation before adjustment.
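A minimal sketch of the pattern, assuming the translation code has the
mbuf at hand (in_delayed_cksum() is the existing helper that finalizes
a deferred IPv4 payload checksum):

    if (m->m_pkthdr.csum_flags & CSUM_DELAY_DATA) {
            in_delayed_cksum(m);    /* finish the deferred checksum */
            m->m_pkthdr.csum_flags &= ~CSUM_DELAY_DATA;
    }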
Reported and tested by: Evgeniy Khramtsov <evgeniy at khramtsov org>
MFC after: 1 week
When dummynet initializes, it prints a debug message with the current VNET
pointer, unnecessarily revealing the kernel memory layout. This appears to be
left over from when the first pieces of vimage support were added.
PR: 238658
Submitted by: huangfq.daxian@gmail.com
Reviewed by: markj, bz, gnn, kp, melifaro
Approved by: jtl (co-mentor), bz (co-mentor)
Event: July 2020 Bugathon
MFC after: 3 days
Differential Revision: https://reviews.freebsd.org/D25619
fib[46]_lookup_nh_ represents the pre-epoch generation of the fib(9) API,
providing fewer guarantees about pointer validity and requiring on-stack
data copying.
With no callers remaining, remove the fib[46]_lookup_nh_ functions.
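For reference, a sketch of the epoch-aware replacement: fib4_lookup()
returns a nexthop pointer that is only guaranteed valid while inside the
network epoch (variable names here are illustrative):

    struct nhop_object *nh;

    NET_EPOCH_ASSERT();
    nh = fib4_lookup(fibnum, dst, 0, NHR_NONE, 0);
    if (nh != NULL) {
            /* nh may be dereferenced without copying while in epoch */
    }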
Submitted by: Neel Chauhan <neel AT neelc DOT org>
Differential Revision: https://reviews.freebsd.org/D25445
This is in preparation for enabling a loadable SCTP stack. Analogous to
IPSEC/IPSEC_SUPPORT, the SCTP_SUPPORT kernel option must be configured
in order to support a loadable SCTP implementation.
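For example, a kernel configuration that omits the static SCTP stack but
can load sctp.ko at runtime would contain (sketch):

    options SCTP_SUPPORT    # hooks for a loadable SCTP stack
    # no "options SCTP": the stack itself is not compiled in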
Discussed with: tuexen
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Right now we optionally allocate 8 counters per table entry, so in
addition to the memory consumed by the counters themselves, we require 8
pointers' worth of space in each entry even when counters are not
allocated (the default).
Instead, define a UMA zone that returns contiguous per-CPU counter
arrays for use in table entries. On amd64 this reduces sizeof(struct
pfr_kentry) from 216 to 160. The smaller size also results in better
slab efficiency, so memory usage for large tables is reduced by about
28%.
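A hedged sketch of such a zone; the zone name, field name and the use of
8 counters mirror the description above but are not the exact pf code:

    /* one contiguous per-CPU array of 8 uint64_t counters per entry */
    zone = uma_zcreate("pfr_kentry_counters", 8 * sizeof(uint64_t),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_PCPU);

    /* when constructing a table entry that needs counters */
    ke->pfrke_counters = uma_zalloc_pcpu(zone, M_WAITOK | M_ZERO);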
Reviewed by: kp
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D24843
pf by default does not do per-table address accounting unless the
"counters" keyword is specified in the corresponding pf.conf table
definition. Yet, we always allocate 12 per-CPU counters per table entry.
For large tables this carries a lot of overhead, so only allocate
counters when they will actually be used.
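For reference, the keyword is part of the pf.conf table definition; with
a hypothetical table:

    table <badhosts> counters persist
    block in quick from <badhosts>

Tables declared without "counters" now carry no counter allocations at
all.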
A further enhancement might be to use a dedicated UMA zone to allocate
counter arrays for table entries, since close to half of the structure
size comes from counter pointers. A related issue is the cost of
zeroing counters, since counter_u64_zero() calls smp_rendezvous() on
some architectures.
Reported by: loos, Jim Pingle <jimp@netgate.com>
Reviewed by: kp
MFC after: 2 weeks
Sponsored by: Rubicon Communications, LLC (Netgate)
Differential Revision: https://reviews.freebsd.org/D24803
The nexthop objects implementation, introduced in r359823,
added the sys/net/route directory, intended to hold all
routing-related code. Move the recently introduced route_temporal.c and
the private route_var.h header there.
Differential Revision: https://reviews.freebsd.org/D24597
The pf_frag_mtx mutex protects the fragments queue. The fragments queue
is already virtualised (i.e. per-vnet), so it makes no sense to block
jail A from accessing its fragments queue while jail B is accessing its
own fragments queue.
Virtualise the lock for improved concurrency.
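A sketch of the virtualised-lock pattern, assuming declarations of this
shape (the exact pf names may differ):

    VNET_DEFINE_STATIC(struct mtx, pf_frag_mtx);
    #define V_pf_frag_mtx   VNET(pf_frag_mtx)

    /* each vnet now contends only on its own lock */
    mtx_lock(&V_pf_frag_mtx);
    /* ... work on this vnet's fragment queue ... */
    mtx_unlock(&V_pf_frag_mtx);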
Differential Revision: https://reviews.freebsd.org/D24504
If we pass an anchor name which doesn't exist, pfr_table_count() returns
-1, which leads to an overflow in mallocarray() and thus a panic.
Explicitly check that pfr_table_count() does not return an error.
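A minimal sketch of the check; the error value and variable names are
illustrative:

    n = pfr_table_count(filter, flags);
    if (n < 0)
            return (ENOENT);        /* unknown anchor, don't pass -1 on */
    tbls = mallocarray(n, sizeof(*tbls), M_TEMP, M_WAITOK | M_ZERO);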
Reported-by: syzbot+bd09d55d897d63d5f4f4@syzkaller.appspotmail.com
Reviewed by: melifaro
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D24539
Both DIOCCHANGEADDR and DIOCADDADDR take a struct pf_pooladdr from
userspace. They failed to validate the dyn pointer contained in its
struct pf_addr_wrap member structure.
This triggered assertion failures under fuzz testing in
pfi_dynaddr_setup(). Happily, the dyn variable was overwritten there, but
we should verify that it's set to NULL anyway.
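A sketch of the validation on the ioctl path, assuming the usual
pf_pooladdr field layout:

    /* never trust a kernel pointer supplied by userspace */
    if (pp->addr.addr.p.dyn != NULL) {
            error = EINVAL;
            break;
    }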
Reported-by: syzbot+93e93150bc29f9b4b85f@syzkaller.appspotmail.com
Reviewed by: emaste
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D24431
Switch the uRPF check to the dedicated uRPF lookup provided by fib(9).
Switch the MSS calculation to the latest fib(9) KPI.
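A sketch of the dedicated helper in use; the surrounding logic is
illustrative:

    /* true if src, seen on this ifnet, passes the reverse-path check */
    if (!fib4_check_urpf(M_GETFIB(m), src, 0, NHR_NONE,
        m->m_pkthdr.rcvif))
            return (0);     /* fail the urpf/antispoof rule */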
Reviewed by: kp
Differential Revision: https://reviews.freebsd.org/D24386
r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are
still not MPSAFE (or already are but aren't properly marked).
Use it in preparation for a general review of all nodes.
This is a non-functional change that adds annotations to SYSCTL_NODE and
SYSCTL_PROC nodes using one of the soon-to-be-required flags.
Mark all obvious cases as MPSAFE. All entries that haven't been marked
as MPSAFE before are marked as NEEDGIANT by default.
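For illustration, the two markings look like this on hypothetical nodes
(net.example and example_handler are not real):

    SYSCTL_PROC(_net_example, OID_AUTO, reviewed_knob,
        CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_MPSAFE, 0, 0,
        example_handler, "I", "known MPSAFE, Giant not taken");
    SYSCTL_PROC(_net_example, OID_AUTO, legacy_knob,
        CTLTYPE_INT | CTLFLAG_RW | CTLFLAG_NEEDGIANT, 0, 0,
        example_handler, "I", "handler still runs under Giant");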
Approved by: kib (mentor, blanket)
Commented by: kib, gallatin, melifaro
Differential Revision: https://reviews.freebsd.org/D23718
r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are
still not MPSAFE (or already are but aren’t properly marked).
Use it in preparation for a general review of all nodes.
Mark all nodes in pf, pfsync and carp as MPSAFE.
Reviewed by: kp
Approved by: kib (mentor, blanket)
Differential Revision: https://reviews.freebsd.org/D23634
If we have a 'set skip on <ifgroup>' rule this flag is set on the group
kif, but it must also be set on all members. pfctl does this when the
rules are set, but if members are added to the group afterwards we must
also apply the flag to the new member. If not, new group members will
not be skipped until the rules are reloaded.
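For example, with a hypothetical interface group:

    # pf.conf
    set skip on mgmt

After this change, an interface joining the group later (e.g. via
"ifconfig em1 group mgmt") is skipped immediately instead of only after
the next ruleset reload.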
Reported by: dvl@
Reviewed by: glebius@
Differential Revision: https://reviews.freebsd.org/D23254
As of r356974 calls to ip_output() require us to be in the network epoch.
That wasn't the case for the calls done from pfsyncintr() and
pfsync_defer_tmo().
There's no reason for this to be a tunable. It's perfectly safe to
change this at runtime.
Reviewed by: Lutz Donnerhacke
Differential Revision: https://reviews.freebsd.org/D22737
Rework tcpopts_parse() to be more strict. Use a const pointer. Add length
checks for specific TCP options. The main purpose of the change is
to avoid possible accesses beyond the mbuf's data.
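A minimal sketch of the kind of bounds checking involved; this is not
the exact tcpopts_parse() code:

    const u_char *cp = opts;        /* start of the TCP options */
    int opt, optlen;

    for (int cnt = optslen; cnt > 0; cnt -= optlen, cp += optlen) {
            opt = cp[0];
            if (opt == TCPOPT_EOL)
                    break;
            if (opt == TCPOPT_NOP)
                    optlen = 1;
            else {
                    if (cnt < 2)
                            return (0);     /* truncated option header */
                    optlen = cp[1];
                    if (optlen < 2 || optlen > cnt)
                            return (0);     /* length escapes the buffer */
            }
            /* ... handle the individual option ... */
    }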
Reported by: Maxime Villard
Reviewed by: melifaro, emaste
MFC after: 1 week
datagrams.
Previously the destination address from the original datagram was used.
That looked confusing, especially in traceroute6 output.
Also honor the IPSTEALTH kernel option and decrement TTL/HLIM only when
stealth mode is disabled.
Reported by: Marco van Tol <marco at tols org>
Reviewed by: melifaro
MFC after: 2 weeks
Sponsored by: Yandex LLC
Differential Revision: https://reviews.freebsd.org/D22631
PULLUP_LEN_LOCKED().
PULLUP_LEN_LOCKED() could update the mbuf, so we need to update the
related pointers that may be used by subsequent opcodes.
Reported by: Maxime Villard <max at m00nbsd net>
MFC after: 1 week
icmp_reflect(), called through icmp_error(), requires us to be in NET_EPOCH.
Failing to hold it leads to the following panic (with INVARIANTS):
panic: Assertion in_epoch(net_epoch_preempt) failed at /usr/src/sys/netinet/ip_icmp.c:742
cpuid = 2
time = 1571233273
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe00e0977920
vpanic() at vpanic+0x17e/frame 0xfffffe00e0977980
panic() at panic+0x43/frame 0xfffffe00e09779e0
icmp_reflect() at icmp_reflect+0x625/frame 0xfffffe00e0977aa0
icmp_error() at icmp_error+0x720/frame 0xfffffe00e0977b10
pf_intr() at pf_intr+0xd5/frame 0xfffffe00e0977b50
ithread_loop() at ithread_loop+0x1c6/frame 0xfffffe00e0977bb0
fork_exit() at fork_exit+0x80/frame 0xfffffe00e0977bf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe00e0977bf0
Note that we now enter NET_EPOCH twice if we enter ip_output() from pf_intr(),
but ip_output() will soon be converted to a function that requires epoch, so
entering NET_EPOCH directly from pf_intr() makes more sense.
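A sketch of the pattern; pf_intr()'s real body does more than shown:

    static void
    pf_intr(void *v)
    {
            struct epoch_tracker et;

            NET_EPOCH_ENTER(et);
            /* ... dequeue deferred packets, call ip_output() ... */
            NET_EPOCH_EXIT(et);
    }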
Discussed with: glebius@
When epoch(9) was introduced to the network stack, it was basically
dropped in place of the existing locking, which was mutexes and
rwlocks. For the sake of performance, mutex-covered areas were kept
as small as possible, and so the epoch-covered areas became equally small.
However, epoch doesn't introduce any contention; it just delays
memory reclaim. So there is no performance reason to minimise
epoch-covered areas. Meanwhile, entering and exiting an epoch
also has non-zero CPU cost, so doing it less often is a win.
Not least is also code maintainability. In the new paradigm
we can assume that at any stage of processing a packet we are
inside the network epoch. This makes coding both the input and output
paths much easier.
On the output path we already enter the epoch quite early: in
ip_output() and in ip6_output().
This patch does the same for the input path. All ISR processing,
network-related callouts, and other ways of injecting packets into the
network stack shall be performed in the net epoch. Any leaf function
that walks the network configuration now asserts the epoch.
The tricky part is the configuration code paths: ioctls and sysctls. They
also call into leaf functions, so some need to be changed.
This patch introduces more epoch recursions (see EPOCH_TRACE)
than we had before. They will be cleaned up separately, as several
of them aren't trivial. Note that, unlike lock recursion, epoch
recursion is safe; it just wastes a bit of resources.
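For instance, a leaf lookup can now assert the invariant instead of
taking a reference or lock (a sketch; foo and V_foo_list are
hypothetical):

    static struct foo *
    foo_lookup(uint32_t key)
    {
            struct foo *f;

            NET_EPOCH_ASSERT();     /* caller must be in the net epoch */
            CK_LIST_FOREACH(f, &V_foo_list, entry)
                    if (f->key == key)
                            return (f);     /* stable until epoch exit */
            return (NULL);
    }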
Reviewed by: gallatin, hselasky, cy, adrian, kristof
Differential Revision: https://reviews.freebsd.org/D19111
Avoid a potential structure padding leak. r350294 identified a leak via
static analysis; although there's no report of a leak with the
DIOCGETSRCNODES ioctl, it's good practice to zero the memory.
Suggested by: kp
MFC after: 3 days
Sponsored by: The FreeBSD Foundation
ACTION_PTR() returns a pointer to the start of the rule's action section,
but a rule can hold several modifiers like O_LOG, O_TAG and O_ALTQ,
and only after them is the real action opcode stored.
The ipfw_get_action() function inspects the rule action section, skips
all modifiers and returns the action opcode.
Use this function in ipfw_reset_eaction() and flush_nat_ptrs().
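A sketch of what the helper does; the actual implementation may differ
in details:

    ipfw_insn *
    ipfw_get_action(struct ip_fw *rule)
    {
            ipfw_insn *cmd;

            cmd = ACTION_PTR(rule);
            /* skip the modifiers stored before the real action */
            while (cmd->opcode == O_LOG || cmd->opcode == O_TAG ||
                cmd->opcode == O_ALTQ)
                    cmd += F_LEN(cmd);
            return (cmd);
    }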
MFC after: 1 week
Sponsored by: Yandex LLC
Remove our (very partial) support for RFC2675 Jumbograms. They're not
used, not actually supported and not a good idea.
Reviewed by: thj@
Differential Revision: https://reviews.freebsd.org/D21086
After r343619 ipfw uses its own locking for the packet flow. The
PULLUP_LEN() macro is used in ipfw_chk() to do m_pullup(). When
m_pullup() fails, it just returns via `goto pullup_failed`. There are two
places where PULLUP_LEN() is called with IPFW_PF_RLOCK() held.
Add a PULLUP_LEN_LOCKED() macro for use in these places, so the lock can
be released when m_pullup() fails.
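A hedged sketch of the macro's shape, assuming the read lock is held on
entry (the real macro differs in details):

    #define PULLUP_LEN_LOCKED(_len, p, T) do {              \
            int x = (_len) + (T);                           \
            if ((m)->m_len < x) {                           \
                    args->m = m = m_pullup(m, x);           \
                    if (m == NULL) {                        \
                            IPFW_PF_RUNLOCK(chain);         \
                            goto pullup_failed;             \
                    }                                       \
            }                                               \
            p = (mtod(m, char *) + (_len));                 \
    } while (0)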
Sponsored by: Yandex LLC
After r343631 pfil hooks are invoked within a net_epoch_preempt section;
this allows extra locking to be avoided. Add a NET_EPOCH_ASSERT()
assertion to each ipfw_bpf_*tap*() call to require that it be called
from inside an epoch section.
Use NET_EPOCH_WAIT() in ipfw_clone_destroy() to wait until it becomes
safe to free() the ifnet. And use an on-stack ifnet pointer in each
ipfw_bpf_*tap*() call to avoid a NULL pointer dereference in case the
V_*log_if global variable becomes NULL during the call.
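A sketch of the on-stack-pointer pattern; V_log_if stands in for the
V_*log_if globals:

    struct ifnet *ifp;

    NET_EPOCH_ASSERT();
    ifp = V_log_if;         /* snapshot: the global may be cleared */
    if (ifp != NULL)
            BPF_MTAP(ifp, m);       /* ifp stays valid until epoch exit */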
Sponsored by: Yandex LLC