Commit Graph

32 Commits

glebius
ae58ce75c3 Merge rev. 1.125 from OpenBSD:
date: 2009/06/12 02:03:51;  author: dlg;  state: Exp;  lines: +59 -69
  Rewrite the way states from pfsync are merged into the local state tree
  and the conditions under which pfsync will notify its peers of a stale update.

  Each side (i.e., the sending and receiving side) of the state update is
  compared separately. Any side that is further along than the local state
  tree is merged. If any side in the local state table is further along, an
  update is sent out telling the peers about it.
2012-11-07 07:35:05 +00:00
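
A minimal userland sketch of the merge rule described in the entry above, assuming hypothetical per-side progress counters (the real code compares the pf_state_peer fields of each side): whichever side is further along in the update gets merged, and a side on which the local state is ahead triggers a notification to the peers.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical per-side progress counter; pfsync really compares the
     * pf_state_peer fields of the source and destination sides. */
    struct side {
        uint32_t seq;
    };

    struct state {
        struct side src, dst;
    };

    /* Merge one side of a received update into the local state.  Returns
     * true if the local side is ahead, i.e. the peer is stale and should
     * be told about it. */
    static bool
    merge_side(struct side *local, const struct side *update)
    {
        if (update->seq > local->seq) {
            local->seq = update->seq;          /* update is ahead: merge */
            return (false);
        }
        return (update->seq < local->seq);     /* local is ahead: notify */
    }

    int
    main(void)
    {
        struct state local = { { 5 }, { 9 } };
        struct state update = { { 7 }, { 3 } };
        bool notify = false;

        /* Each side is compared and merged separately. */
        notify |= merge_side(&local.src, &update.src);
        notify |= merge_side(&local.dst, &update.dst);

        printf("src %u dst %u, notify peers: %s\n",
            (unsigned)local.src.seq, (unsigned)local.dst.seq,
            notify ? "yes" : "no");
        return (0);
    }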
glebius
d989f8363f It may happen that pfsync holds the last reference on a state. In this
case the keys have already been freed. If such a state is encountered,
just release the last reference.

Not sure this can happen as a runtime race, but it can be reproduced by
the following scenario:

- enable pfsync
- disable pfsync
- wait some time
- enable pfsync
2012-11-07 07:30:40 +00:00
melifaro
e570ee3854 Add an assertion to enforce the 'nat global' locking requirements changed by r241908.
Suggested by:	adrian, glebius
MFC after:	3 days
2012-11-05 22:54:00 +00:00
melifaro
6056a71b0e Use the unified print_dyn_rule_flags() function for debugging messages
instead of hand-made printfs in every place.

MFC after:	1 week
2012-11-05 22:30:56 +00:00
ae
4354018055 Remove the recently added sysctl variable net.pfil.forward.
Instead, add protocol-specific mbuf flags M_IP_NEXTHOP and
M_IP6_NEXTHOP. Use them to indicate that the mbuf chain
contains the PACKET_TAG_IPFORWARD tag, and do a tag lookup
only when this flag is set.

Suggested by:	andre
2012-11-02 01:20:55 +00:00
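
A rough userland illustration of the flag-gating pattern from the entry above, using a made-up packet structure and tag type rather than the real mbuf API: the per-packet flag is cheap to test, so the comparatively expensive walk of the tag list happens only when the flag says a forwarding tag may be present.

    #include <stddef.h>
    #include <stdio.h>

    #define M_IP_NEXTHOP  0x01      /* flag: chain carries a next-hop tag */
    #define TAG_IPFORWARD 42        /* made-up tag type for the example */

    struct tag {
        int         type;
        struct tag *next;
    };

    struct pkt {                    /* stand-in for an mbuf packet header */
        int         flags;
        struct tag *tags;
    };

    /* Walk the tag list only if the flag was set when the tag was attached. */
    static struct tag *
    find_fwd_tag(struct pkt *p)
    {
        struct tag *t;

        if ((p->flags & M_IP_NEXTHOP) == 0)
            return (NULL);          /* common case: no list walk at all */
        for (t = p->tags; t != NULL; t = t->next)
            if (t->type == TAG_IPFORWARD)
                return (t);
        return (NULL);
    }

    int
    main(void)
    {
        struct tag fwd = { TAG_IPFORWARD, NULL };
        struct pkt tagged = { M_IP_NEXTHOP, &fwd };
        struct pkt plain = { 0, NULL };

        printf("tagged: %s, plain: %s\n",
            find_fwd_tag(&tagged) ? "found" : "none",
            find_fwd_tag(&plain) ? "found" : "none");
        return (0);
    }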
glebius
f79061ff05 o Remove the last argument to ip_fragment(), and obtain all needed
  information on checksums directly from mbuf flags. This simplifies the code.
o Clear CSUM_IP from the mbuf in ip_fragment() if we did checksums in
  hardware. Some drivers may not announce CSUM_IP in their if_hwassist,
  yet still try to do checksums if CSUM_IP is set on the mbuf. An example
  is em(4).
o While here, consistently use CSUM_IP instead of its alias CSUM_DELAY_IP.
  After this change CSUM_DELAY_IP vanishes from the stack.

Submitted by:	Sebastian Kuzminsky <seb lineratesystems.com>
2012-10-26 21:06:33 +00:00
ae
71112b5a8e Remove the IPFIREWALL_FORWARD kernel option and make it possible to turn
on the related functionality at runtime via the sysctl variable
net.pfil.forward. It is turned off by default.

Sponsored by:	Yandex LLC
Discussed with:	net@
MFC after:	2 weeks
2012-10-25 09:39:14 +00:00
glebius
5cc3ac5902 Switch the entire IPv4 stack to keep the IP packet header
in network byte order. Any host byte order processing is
done in local variables and host byte order values are
never[1] written to a packet.

  After this change a packet processed by the stack isn't
modified at all[2] except for the TTL.

  After this change a network stack hacker doesn't need to
scratch his head trying to figure out what the byte order
is at a given place in the stack.

[1] One exception still remains. Raw sockets convert to host
byte order before passing a packet to an application. This
will probably remain for ages for compatibility.

[2] ip_input() still subtracts the header length from ip->ip_len,
but this is planned to be fixed soon.

Reviewed by:	luigi, Maxim Dounin <mdounin mdounin.ru>
Tested by:	ray, Olivier Cochard-Labbe <olivier cochard.me>
2012-10-22 21:09:03 +00:00
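
A small userland sketch of the convention, using struct ip from <netinet/ip.h>: multi-byte header fields stay in network byte order inside the packet, and any arithmetic is done on ntohs() copies held in local variables, which are never written back.

    #include <sys/types.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    /* Compute the payload length of a packet whose header is kept in
     * network byte order end to end; only local variables hold host order. */
    static unsigned int
    payload_len(const struct ip *ip)
    {
        unsigned int hlen = ip->ip_hl << 2;     /* header length in bytes */
        unsigned int tlen = ntohs(ip->ip_len);  /* host-order copy, local only */

        return (tlen - hlen);
    }

    int
    main(void)
    {
        struct ip ip;

        memset(&ip, 0, sizeof(ip));
        ip.ip_v = IPVERSION;
        ip.ip_hl = sizeof(struct ip) >> 2;      /* 5 words, 20 bytes */
        ip.ip_len = htons(60);                  /* stored in network order */

        printf("payload: %u bytes\n", payload_len(&ip));
        /* ip.ip_len is still htons(60); the packet itself was not modified. */
        return (0);
    }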
melifaro
e377b20c99 Remove an unnecessary chain read lock in the ipfw nat 'global' code.
Document the case when the ipfw chain lock must be held while calling ipfw_nat().

MFC after:	2 weeks
2012-10-22 19:22:31 +00:00
glebius
05f24a6b77 Make the "struct if_clone" opaque to users of the cloning API. Users
now use function calls:

  if_clone_simple()
  if_clone_advanced()

to initialize a cloner, instead of macros that initialize the if_clone
structure.

Discussed with:		brooks, bz, 1 year ago
2012-10-16 13:37:54 +00:00
kevlo
ceb08698f2 Revert previous commit...
Pointyhat to:	kevlo (myself)
2012-10-10 08:36:38 +00:00
kevlo
8747a46991 Prefer NULL over 0 for pointers 2012-10-09 08:27:40 +00:00
kevlo
543514a78b Fix typo: s/unknow/unknown 2012-10-09 06:15:16 +00:00
glebius
3435e7fe3f Any pfil(9) hooks should be called with the VNET context already set.
Reviewed by:	bz
2012-10-08 23:02:32 +00:00
glebius
64d975866c Catch up with r241245 and do not return the packet back in host byte order. 2012-10-08 22:58:28 +00:00
glebius
9086143e8c After r241245 it appeared that in_delayed_cksum(), which still expects
host byte order, was sometimes called with net byte order. Since we are
moving towards net byte order throughout the stack, the function was
converted to expect net byte order, and its consumers were fixed appropriately:
  - ip_output() and ipfilter(4) are not changed, since they already call
    in_delayed_cksum() with the header in net byte order.
  - divert(4), ng_nat(4) and ipfw_nat(4) no longer need to swap byte order
    there and back.
  - The mrouting code and IPv6 ipsec now need to switch byte order there and
    back, but I hope this is a temporary solution.
  - In ipsec(4) the switch to net byte order is shifted prior to in_delayed_cksum().
  - pf_route() catches up with the r241245 changes to ip_output().
2012-10-08 08:03:58 +00:00
glebius
f3a0231bff A step towards resolving the mess with byte ordering for AF_INET. After this change:
  - All packets in the NETISR_IP queue are in net byte order.
  - ip_input() is entered in net byte order and converts the packet
    to host byte order right _after_ processing the pfil(9) hooks.
  - ip_output() is entered in host byte order and converts the packet
    to net byte order right _before_ processing the pfil(9) hooks.
  - ip_fragment() accepts and emits the packet in net byte order.
  - ip_forward() and ip_mloopback() use host byte order (untouched actually).
  - ip_fastforward() no longer modifies the packet at all (except ip_ttl).
  - Swapping of byte order there and back is removed from the following modules:
    pf(4), ipfw(4), enc(4), if_bridge(4).
  - Swapping of byte order is added to ipfilter(4), based on __FreeBSD_version.
  - __FreeBSD_version is bumped.
  - The pfil(9) manual page is updated.

Reviewed by:	ray, luigi, eri, melifaro
Tested by:	glebius (LE), ray (BE)
2012-10-06 10:02:11 +00:00
glebius
30f3c300d8 The pfil(9) layer guarantees us the presence of the protocol header,
so remove an extra check that is always false.

P.S. Also, the goto there led to unlocking a rwlock that was not locked.
2012-10-06 07:06:57 +00:00
glebius
7cce4fce1b To reduce the volume of pfsync traffic:
- Scan the request update queue to prevent doubles.
- Do not push an undersized datagram in pfsync_update_request().
2012-10-02 12:44:46 +00:00
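
A hedged sketch of the de-duplication scan mentioned in the entry above, using a plain linked list of made-up request records rather than the real pfsync queue: before queueing a new update request, the pending queue is scanned and the insert is skipped if an equivalent entry is already there.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct request {                /* stand-in for a queued update request */
        uint64_t        state_id;
        struct request *next;
    };

    static struct request *queue_head;

    /* Enqueue a request only if an identical one is not already pending. */
    static bool
    enqueue_request(uint64_t state_id)
    {
        struct request *r;

        for (r = queue_head; r != NULL; r = r->next)
            if (r->state_id == state_id)
                return (false);     /* double: nothing to do */

        r = malloc(sizeof(*r));
        if (r == NULL)
            return (false);
        r->state_id = state_id;
        r->next = queue_head;
        queue_head = r;
        return (true);
    }

    int
    main(void)
    {
        printf("first insert: %d\n", enqueue_request(7));   /* 1 */
        printf("duplicate:    %d\n", enqueue_request(7));   /* 0 */
        return (0);
    }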
glebius
2c4db2ae64 Clear and re-set up all function pointers that glue pf(4) and pfsync(4)
together whenever pfsync0 is brought down or up, respectively.
2012-09-29 20:11:00 +00:00
glebius
ad137f3f93 Simplify the send-out queue code:
- The write method of a queue is now void; the length of an item is
  taken as a queue property.
- Write methods don't need to know about the mbuf; just supply a buf
  to them.
- No need for a safe queue iterator in pfsync_sendout().

Obtained from:	OpenBSD
2012-09-29 20:02:26 +00:00
glebius
5c64acd0e7 Simplify and somewhat redesign the interaction between pf_purge_thread() and
pf_purge_expired_states().

Now the pf purging daemon stores the current hash table index on the stack
in pf_purge_thread(), and supplies it to the next iteration of
pf_purge_expired_states(). The latter returns the new index back.

The important change is that whenever pf_purge_expired_states() wraps
around the array it returns immediately. This makes our knowledge about the
status of a state expiry run more consistent. Prior to this change it
could happen that the n-th run stopped on the i-th entry and returned (1),
meaning a full run was complete, then the next (n+1)-th full run stopped on
the j-th entry, where j < i, and that broke the mark-and-sweep algorithm
that saves referenced rules. A referenced rule was freed, and this later
led to a crash.
2012-09-28 20:43:03 +00:00
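
A simplified sketch of the iteration contract described in the entry above, with a made-up table size and slice length: the caller keeps the bucket index, the purge function walks a slice of buckets and returns the index to resume from, and it returns immediately on wrap-around so every full run covers each bucket exactly once.

    #include <stdio.h>

    #define HASH_BUCKETS 8          /* made-up table size */
    #define SLICE        3          /* buckets visited per invocation */

    /* Purge one slice of buckets starting at idx; return the index to
     * resume from.  On wrap-around, stop immediately and report it, so a
     * full pass is always bucket 0 .. HASH_BUCKETS-1 in order. */
    static unsigned int
    purge_expired(unsigned int idx, int *full_run_done)
    {
        unsigned int visited;

        *full_run_done = 0;
        for (visited = 0; visited < SLICE; visited++) {
            /* ... expire states hanging off bucket 'idx' here ... */
            if (++idx >= HASH_BUCKETS) {
                *full_run_done = 1;
                return (0);         /* wrapped: next run starts from 0 */
            }
        }
        return (idx);
    }

    int
    main(void)
    {
        unsigned int idx = 0;
        int done;

        do {
            idx = purge_expired(idx, &done);
            printf("resume at %u, full run %s\n", idx, done ? "yes" : "no");
        } while (!done);
        return (0);
    }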
glebius
82dab36482 EBUSY is a better reply for refusing to unload pf(4) or pfsync(4).
Submitted by:	pluknet
2012-09-22 19:03:11 +00:00
glebius
ab6639fea0 When the connection rate limit hits and we overload a source into a table,
we are actually editing the table, which means editing rules,
thus we need writer access to them.

Fix this by offloading the table update to the same taskqueue
we already use for flushing. Since the taskqueue's major task is now
overloading, and flushing is optional, do a mechanical rename
s/flush/overload/ in the code related to the taskqueue.

Since overloading tasks do unsafe referencing of rules, provide
a bandaid in pf_purge_unlinked_rules(): if the latter sees any
queued tasks, it skips purging for this run.

In table code:
- Assert any lock in pfr_lookup_addr().
- Assert writer lock in pfr_route_kentry().
2012-09-22 10:14:47 +00:00
glebius
9c41e27b31 In pfr_insert_kentry() return ENOMEM if memory allocation failed. 2012-09-22 10:04:48 +00:00
glebius
0adf1aff0e Fix fallout from r236397 in pfr_update_stats() that was missed
later in r237155. We need to zero the sockaddr before the lookup. While
here, make pfr_update_stats() panic on an unknown af.
2012-09-22 10:02:44 +00:00
glebius
6c77a7b69f Reduce copy/paste when freeing a source node. 2012-09-20 07:04:08 +00:00
glebius
9062851653 Utilize Jenkins hash with random seed for source nodes storage. 2012-09-20 06:52:05 +00:00
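
A hedged userland sketch of the seeded-hash idea, with a trivial FNV-style mix standing in for the Jenkins hash actually used: the seed is chosen randomly at initialization, so the mapping of source addresses to hash buckets is not predictable to an outside party.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUCKETS 256                 /* made-up source-node table size */

    static uint32_t hashseed;           /* picked once, randomly, at init */

    /* Trivial FNV-1a style mix; a stand-in for the Jenkins hash. */
    static uint32_t
    mix(const void *key, size_t len, uint32_t seed)
    {
        const unsigned char *p = key;
        uint32_t h = 2166136261u ^ seed;

        while (len-- > 0) {
            h ^= *p++;
            h *= 16777619u;
        }
        return (h);
    }

    int
    main(void)
    {
        uint32_t addr = 0x0a000001;     /* example key: 10.0.0.1 */

        srandom((unsigned)time(NULL));
        hashseed = (uint32_t)random();  /* random seed: unpredictable buckets */

        printf("bucket %u of %d\n",
            (unsigned)(mix(&addr, sizeof(addr), hashseed) % BUCKETS), BUCKETS);
        return (0);
    }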
glebius
df8f633169 Provide a kernel compile-time option to make the pf(4) default rule drop.
This is important to secure the small timeframe at boot time when the
network is already configured but pf(4) is not yet.

PR:		kern/171622
Submitted by:	Olivier Cochard-Labbé <olivier cochard.me>
2012-09-18 11:07:19 +00:00
glebius
c3ead4d7df Make ruleset anchors in pf(4) reentrant. We've got two problems here:
1) The ruleset parser uses a global variable for the anchor stack.
2) When processing a wildcard anchor, matching anchors are marked.

To fix the first one:

o Allocate the anchor processing stack on the stack. To make this allocation
  as small as possible, the following measures are taken:
  - The maximum stack size is reduced from 64 to 32.
  - The struct pf_anchor_stackframe is trimmed by one pointer, the parent.
    We can always obtain the parent via the rule pointer.
  - When pf_test_rule() calls pf_get_translation(), the former lends
    its stack to the latter, to avoid a recursive allocation of 32 entries.

The second one turned out to be trickier. The code that marks anchors was
added in OpenBSD rev. 1.516 of pf.c. According to the commit log, the idea
is to enable the "quick" keyword on an anchor rule. The feature isn't
documented anywhere. The most obscure part of 1.516 was that the code
examines the "match" mark on a just-processed child, which couldn't have
been put there by the current frame. Since this wasn't documented even in
the commit message and its functionality is not clear to me, I decided
to drop this examination for now. The rest of 1.516 is redone in a
thread-safe manner: the mark isn't put on the anchor itself, but on the
current stack frame. To avoid growing the stack frame, we utilize the LSB
of the rule pointer, relying on kernel malloc(9) returning
pointer-aligned addresses.

Discussed with:		dhartmei
2012-09-18 10:54:56 +00:00
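
A minimal userland sketch of the low-bit trick from the entry above: since malloc() returns suitably aligned memory, the least significant bit of a stored rule pointer is always zero and can carry the per-frame "match" mark, as long as the bit is masked off before the pointer is dereferenced.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MARK_BIT ((uintptr_t)1)     /* LSB is free: malloc'd memory is aligned */

    struct rule { int nr; };            /* stand-in for a pf rule */

    struct stackframe {                 /* stand-in for pf_anchor_stackframe */
        struct rule *r;                 /* tagged pointer: LSB = "match" mark */
    };

    static void
    frame_set_rule(struct stackframe *f, struct rule *r)
    {
        f->r = r;                       /* mark starts cleared */
    }

    static void
    frame_set_match(struct stackframe *f)
    {
        f->r = (struct rule *)((uintptr_t)f->r | MARK_BIT);
    }

    static int
    frame_is_match(const struct stackframe *f)
    {
        return (((uintptr_t)f->r & MARK_BIT) != 0);
    }

    static struct rule *
    frame_rule(const struct stackframe *f)
    {
        return ((struct rule *)((uintptr_t)f->r & ~MARK_BIT));
    }

    int
    main(void)
    {
        struct rule *r = malloc(sizeof(*r));
        struct stackframe f;

        if (r == NULL)
            return (1);
        r->nr = 42;
        frame_set_rule(&f, r);
        frame_set_match(&f);
        printf("rule %d, match %d\n", frame_rule(&f)->nr, frame_is_match(&f));
        free(r);
        return (0);
    }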
glebius
0409dbb19c Fix DIOCNATLOOK: zero key padding before performing lookup. 2012-09-18 09:15:32 +00:00
glebius
0ccf4838d7 o Create the directory sys/netpfil, where all packet filters should
reside, and move ipfw(4) and pf(4) there.

o Move the most modified parts of pf out of contrib.

Actual movements:

sys/contrib/pf/net/*.c		-> sys/netpfil/pf/
sys/contrib/pf/net/*.h		-> sys/net/
contrib/pf/pfctl/*.c		-> sbin/pfctl
contrib/pf/pfctl/*.h		-> sbin/pfctl
contrib/pf/pfctl/pfctl.8	-> sbin/pfctl
contrib/pf/pfctl/*.4		-> share/man/man4
contrib/pf/pfctl/*.5		-> share/man/man5

sys/netinet/ipfw		-> sys/netpfil/ipfw

The arguable movement is pf/net/*.h -> sys/net. There are
future plans to refactor pf includes, so I decided not to
break things twice.

Unmodified bits of pf are left in contrib: authpf, ftp-proxy,
tftp-proxy, pflogd.

The ipfw(4) movement is planned to be merged to stable/9,
to make head and stable match.

Discussed with:		bz, luigi
2012-09-14 11:51:49 +00:00