have already done this, so I have styled the patch on their work:
1) introduce an ip_newid() static inline function that checks
the sysctl and then decides whether it should return a sequential
or random IP ID (a sketch follows this entry).
2) named the sysctl net.inet.ip.random_id
3) IPv6 flow IDs and fragment IDs are now always random.
Flow IDs and frag IDs are significantly less common in the
IPv6 world (i.e. rarely generated per packet), so the
performance concerns are smaller.
The sysctl defaults to 0 (sequential IP IDs).
Reviewed by: andre, silby, mlaier, ume
Based on: NetBSD
MFC after: 2 months
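For illustration, the helper looks roughly like the following. This is a
minimal sketch, not necessarily the committed code; the sysctl-backed flag
"ip_do_randomid" and the randomizer "ip_randomid()" are assumed names.

    /*
     * Sketch only: pick a sequential or random IP ID depending on the
     * net.inet.ip.random_id sysctl (backed by ip_do_randomid here).
     */
    static __inline u_int16_t
    ip_newid(void)
    {

            if (ip_do_randomid)                 /* net.inet.ip.random_id != 0 */
                    return (ip_randomid());     /* randomized IP ID */
            return (htons(ip_id++));            /* traditional sequential ID */
    }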
pf_cksum_fixup() was called without the last argument from
normalization; also fix up the checksum when random-id modifies ip_id.
This previously led to incorrect checksums for packets
modified by scrub random-id.
(Originally) Submitted by: yongari
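For reference, the kind of RFC 1624-style incremental fixup involved looks
roughly like this; a sketch only, the real pf_cksum_fixup() may differ in
detail. The final "udp" argument is the one the normalization call site had
been omitting.

    /*
     * Sketch of an incremental checksum fixup as used when random-id
     * rewrites ip_id.  A UDP checksum of zero means "no checksum" and
     * has to be special-cased, hence the last argument.
     */
    static u_int16_t
    cksum_fixup(u_int16_t cksum, u_int16_t old, u_int16_t new, u_int8_t udp)
    {
            u_int32_t l;

            if (udp && cksum == 0)
                    return (0x0000);            /* no UDP checksum: keep it */
            l = cksum + old - new;
            l = (l >> 16) + (l & 0xffff);
            l &= 0xffff;
            if (udp && l == 0)
                    return (0xffff);            /* never produce "no checksum" */
            return ((u_int16_t)l);
    }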
(that does not compile with !gcc). Moreover we get the benefit for all archs
that have a hand-optimized in_cksum_skip().
Submitted by: yongari
Tested by: me (i386, extensively), pf4freebsd ML (various)
calls further down the stack. If we find the cksum to be okay we pretend
that the hardware did all the work and hence keep the upper layers from
checking again.
Submitted by: Pyun YongHyeon
This fixes the checksum for some drivers with partial H/W checksum offload.
Reported by: Simon 'corecode' Schubert, Devon H. O'Dell, hmp
Reviewed by: Pyun YongHyeon
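In mbuf terms this amounts to something like the following sketch, assuming
the packet's checksum has just been verified in software (the wrapper name
is made up for illustration):

    /*
     * Sketch: after verifying the TCP/UDP checksum in software, mark the
     * mbuf as if the hardware had already validated it, so the upper
     * layers skip their own check.
     */
    static void
    mark_cksum_ok(struct mbuf *m)
    {

            m->m_pkthdr.csum_flags |= CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
            m->m_pkthdr.csum_data = 0xffff;
    }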
MFC:
Fix by dhartmei@
change pf_route() loop detection: introduce a counter (the number of times
a packet has already been routed) in the mbuf tag and allow at most four
passes. Fixes some legitimate cases broken by the previous change.
Reviewed by: dhartmei
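A sketch of that counter-in-a-tag idea follows; the tag type and layout shown
here are illustrative, not the literal pf code.

    /*
     * Illustrative only: remember in an mbuf tag how many times pf_route()
     * has already handled this packet and give up after four passes.
     */
    static int
    pf_routed_too_often(struct mbuf *m)
    {
            struct m_tag *mtag;
            int *routed;

            mtag = m_tag_find(m, PACKET_TAG_PF_ROUTED, NULL);
            if (mtag == NULL) {
                    mtag = m_tag_get(PACKET_TAG_PF_ROUTED, sizeof(int),
                        M_NOWAIT);
                    if (mtag == NULL)
                            return (1);         /* no memory: treat as loop */
                    *(int *)(mtag + 1) = 0;
                    m_tag_prepend(m, mtag);
            }
            routed = (int *)(mtag + 1);
            return ((*routed)++ > 3);           /* allow at most four passes */
    }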
icmp_error() packets. While here, retire PACKET_TAG_PF_GENERATED (which
served the same purpose) and use M_SKIP_FIREWALL in pf as well. This should
speed things up a bit as we get rid of the tag allocations.
Discussed with: juli
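The flag-based scheme is as simple as it sounds; a sketch, with "m" being
the packet's mbuf:

    /* When generating the packet, e.g. for icmp_error() replies: */
    m->m_flags |= M_SKIP_FIREWALL;          /* do not feed it back to pf */

    /* Early in the firewall check: */
    if (m->m_flags & M_SKIP_FIREWALL)
            return (PF_PASS);               /* locally generated, pass it */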
- Split the code out into if_clone.[ch].
- Locked struct if_clone. [1]
- Add a per-cloner match function rather than simply matching names of
the form <name><unit> and <name>.
- Use the match function to allow creation of <interface>.<tag>
vlan interfaces. The old way is preserved unchanged!
- Also use the match function to allow creation of stf(4) interfaces named
stf0, stf, or 6to4 (see the sketch after this list). The only major
user-visible change is that "ifconfig stf" creates the interface stf rather
than stf0 and does not print "stf0" to stdout.
- Allow destroy functions to fail so they can refuse to delete
interfaces. Currently, we forbid the deletion of interfaces which
were created in the init function, particularly lo0, pflog0, and
pfsync0. In the case of lo0 this was a panic implementation so it
does not count as a user-visible change. :-)
- Since most interfaces do not need the new functionality, a family of
wrapper functions, ifc_simple_*(), was created to wrap old-style
cloner functions.
- The IF_CLONE_INITIALIZER macro is replaced with a new incompatible
IFC_CLONE_INITIALIZER and ifc_simple consumers use IFC_SIMPLE_DECLARE
instead.
Submitted by: Maurycy Pawlowski-Wieronski <maurycy at fouk.org> [1]
Reviewed by: andre, mlaier
Discussed on: net
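As an example of what a per-cloner match hook buys us, a match callback for
stf(4) along the lines described above could look like this; a sketch only,
the exact hook signature is an assumption:

    /*
     * Sketch of a per-cloner match callback: accept the interface names
     * "stf", "stf0" and "6to4" instead of only <name><unit>.
     */
    static int
    stf_clone_match(struct if_clone *ifc, const char *name)
    {

            if (strcmp(name, "stf") == 0 ||
                strcmp(name, "stf0") == 0 ||
                strcmp(name, "6to4") == 0)
                    return (1);
            return (0);
    }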
* block packets that fail to create state table entries
* only allow non-fragmented packets to influence whether or not a logged
packet is the same as the one logged before.
* correct the ICMP packet checksum fixing up when processing ICMP errors for NAT
* implement a maximum for the number of entries in the NAT table (NAT_TABLE_MAX
and ipf_nattable_max)
* frsynclist() wasn't paying attention to all the places where interface
names appear, as it should.
* fix comparing ICMP packets with established TCP state where only 8 bytes
of header are returned in the ICMP error.
MFC after: 1 week
- prevent an endless loop with route-to lo0, fixes PR 3736 (dhartmei@)
- The rule_number parameter for pf_get_pool() needs to be 32 bits, not 8 -
this fixes corruption of the address pools with large rulesets.
(mcbride@, pb@)
Reviewed-by: dhartmei
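To illustrate the truncation that caused the corruption: with an 8-bit
parameter, any rule number above 255 silently wraps, so pf_get_pool() ends
up looking at the wrong address pool (the variable names below are purely
illustrative).

    /* Illustrative only: an 8-bit parameter wraps large rule numbers. */
    u_int32_t rule_nr   = 300;          /* rule number in a large ruleset */
    u_int8_t  truncated = rule_nr;      /* becomes 300 - 256 == 44 */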
Version 3.5 brings:
- Atomic commits of ruleset changes (reduce the chance of ending up in an
inconsistent state).
- A 30% reduction in the size of state table entries.
- Source-tracking (limit number of clients and states per client).
- Sticky-address (the flexibility of round-robin with the benefits of
source-hash).
- Significant improvements to interface handling.
- and many more ...
ALTQ-enabled versions of the IFQ_* macros by default, as requested by several
others. This is a follow-up to the quick fix I committed yesterday which
turned off the ALTQ checks for non-ALTQ kernels.
rig a PREPEND macro for ALTQ as the POLL/DEQUEUE semantic is very bad in
terms of locking. We make this a fully functional queue to allow "bulk
dequeue" which will further reduce the locking overhead (for non-altq
enabled devices). Drivers will access this via the following macros, which
will show up in <net/if_var.h> once we expose ALTQ to the build:
IFQ_DRV_DEQUEUE(ifq, m) - takes a mbuf off the queue (driver queue first)
IFQ_DRV_PREPEND(ifq, m) - pushes a mbuf back to the driver queue
IFQ_DRV_PURGE(ifq) - drops all packets in both queues
IFQ_DRV_IS_EMPTY(ifq) - checks for pending mbufs in either queue
One has to make sure that the first three are protected by a driver mutex.
At the moment most network drivers still require Giant, so this is not an
issue. Even those that have their own mutex usually hold it in if_start and
the like, so this requirement is almost always satisfied.
This evolved from a discussion with Andrew Gallatin.
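For driver authors, a typical if_start loop using these macros would look
roughly like the following; the softc, its mutex and the encapsulation
helper foo_encap() are made-up names for illustration.

    /*
     * Sketch of an if_start routine using the IFQ_DRV_* macros; the
     * driver mutex protects the first three macros as noted above.
     */
    static void
    foo_start(struct ifnet *ifp)
    {
            struct foo_softc *sc = ifp->if_softc;
            struct mbuf *m;

            mtx_lock(&sc->sc_mtx);
            while (!IFQ_DRV_IS_EMPTY(&ifp->if_snd)) {
                    IFQ_DRV_DEQUEUE(&ifp->if_snd, m);
                    if (m == NULL)
                            break;
                    if (foo_encap(sc, m) != 0) {
                            /* TX ring full: push the mbuf back and stop. */
                            IFQ_DRV_PREPEND(&ifp->if_snd, m);
                            break;
                    }
            }
            mtx_unlock(&sc->sc_mtx);
    }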
- add locking
- disable ALTQ3_COMPAT by default (do not remove the code to keep the diff
towards KAME small)
- put some more code under ALTQ3 conditional compilation as it should be
- account for if_xname
- some more minor compile fixes
As people started wondering:
The strange path layout "altq/altq" is there to avoid "-Isys/contrib" and
make it "-Isys/contrib/altq" instead, as we will need at least <altq/altq.h>
and <altq/if_altq.h> for kernel compilation.
The "freebsd4_..." in the privious commit is just the best tag name in the
KAME tree I could find to classify this in order to track its history. It
does *not* mean that this will go to 4-STABLE or anything of that kind.
HEAD at this point). This will not exactly live in a vendor branch, but will
have the vendor backing to make it easier to exchange diffs.
This will be followed by a diff which takes most of the .c files off the
vendor branch in order to:
- add locking
- disable ALTQ3_COMPAT code (which is outdated and "un-lockable")
There is work in progress to refine the configuration API. Import this "as
is" now to have more exposure time before 5-STABLE.
This is only the import, it will be some more days until you will actually
be able to compile ALTQ support into your kernel so don't hold your breath.
HEADUPs will be posted on current@ and net@ before this is actually enabled.
No-objection: re(scottl), core(rwatson)