/*-
 * Copyright (c) 2001 McAfee, Inc.
 * Copyright (c) 2006 Andre Oppermann, Internet Business Solutions AG
 * All rights reserved.
 *
 * This software was developed for the FreeBSD Project by Jonathan Lemon
 * and McAfee Research, the Security Research Division of McAfee, Inc. under
 * DARPA/SPAWAR contract N66001-01-C-8035 ("CBOSS"), as part of the
 * DARPA CHATS research program.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include "opt_inet.h"
|
2001-11-22 04:50:44 +00:00
|
|
|
#include "opt_inet6.h"
|
|
|
|
#include "opt_ipsec.h"
|
2002-07-31 19:06:49 +00:00
|
|
|
#include "opt_mac.h"
|
2001-11-22 04:50:44 +00:00
|
|
|
|
|
|
|
#include <sys/param.h>
|
|
|
|
#include <sys/systm.h>
|
|
|
|
#include <sys/kernel.h>
|
|
|
|
#include <sys/sysctl.h>
|
Fix bugs in the TCP syncache timeout code. including:
When system ticks are positive, for entries in the cache
bucket, syncache_timer() ran on every tick (doing nothing
useful) instead of the supposed 3, 6, 12, and 24 seconds
later (when it's time to retransmit SYN,ACK).
When ticks are negative, syncache_timer() was scheduled
for the too far future (up to ~25 days on systems with
HZ=1000), no SYN,ACK retransmits were attempted at all,
and syncache entries added in that period that correspond
to non-established connections stay there forever.
Only HEAD and RELENG_7 are affected.
Reviewed by: silby, kmacy (earlier version)
Submitted by: Maxim Dounin, ru
2007-12-19 16:56:28 +00:00
|
|
|
#include <sys/limits.h>
|
2006-06-17 17:32:38 +00:00
|
|
|
#include <sys/lock.h>
|
|
|
|
#include <sys/mutex.h>
|
2001-11-22 04:50:44 +00:00
|
|
|
#include <sys/malloc.h>
|
|
|
|
#include <sys/mbuf.h>
|
|
|
|
#include <sys/md5.h>
|
|
|
|
#include <sys/proc.h> /* for proc0 declaration */
|
|
|
|
#include <sys/random.h>
|
|
|
|
#include <sys/socket.h>
|
|
|
|
#include <sys/socketvar.h>
|
2007-05-18 21:13:01 +00:00
|
|
|
#include <sys/syslog.h>
|
2001-11-22 04:50:44 +00:00
|
|
|
|
2006-09-13 13:21:17 +00:00
|
|
|
#include <vm/uma.h>
|
|
|
|
|
2001-11-22 04:50:44 +00:00
|
|
|
#include <net/if.h>
|
|
|
|
#include <net/route.h>
|
|
|
|
|
|
|
|
#include <netinet/in.h>
|
|
|
|
#include <netinet/in_systm.h>
|
|
|
|
#include <netinet/ip.h>
|
|
|
|
#include <netinet/in_var.h>
|
|
|
|
#include <netinet/in_pcb.h>
|
|
|
|
#include <netinet/ip_var.h>
|
2005-11-18 20:12:40 +00:00
|
|
|
#include <netinet/ip_options.h>
|
2001-11-22 04:50:44 +00:00
|
|
|
#ifdef INET6
|
|
|
|
#include <netinet/ip6.h>
|
|
|
|
#include <netinet/icmp6.h>
|
|
|
|
#include <netinet6/nd6.h>
|
|
|
|
#include <netinet6/ip6_var.h>
|
|
|
|
#include <netinet6/in6_pcb.h>
|
|
|
|
#endif
|
|
|
|
#include <netinet/tcp.h>
|
|
|
|
#include <netinet/tcp_fsm.h>
|
|
|
|
#include <netinet/tcp_seq.h>
|
|
|
|
#include <netinet/tcp_timer.h>
|
|
|
|
#include <netinet/tcp_var.h>
|
2007-07-27 00:57:06 +00:00
|
|
|
#include <netinet/tcp_syncache.h>
|
2007-12-17 07:56:27 +00:00
|
|
|
#include <netinet/tcp_offload.h>
|
2001-11-22 04:50:44 +00:00
|
|
|
#ifdef INET6
|
|
|
|
#include <netinet6/tcp6_var.h>
|
|
|
|
#endif
|
|
|
|
|
2007-07-03 12:13:45 +00:00
|
|
|
#ifdef IPSEC
|
2002-10-16 02:25:05 +00:00
|
|
|
#include <netipsec/ipsec.h>
|
|
|
|
#ifdef INET6
|
|
|
|
#include <netipsec/ipsec6.h>
|
|
|
|
#endif
|
|
|
|
#include <netipsec/key.h>
|
2007-07-03 12:13:45 +00:00
|
|
|
#endif /*IPSEC*/
|
2002-10-16 02:25:05 +00:00
|
|
|
|
2001-11-22 04:50:44 +00:00
|
|
|
#include <machine/in_cksum.h>
|
|
|
|
|
2006-10-22 11:52:19 +00:00
|
|
|
#include <security/mac/mac_framework.h>
|
|
|
|
|

static int tcp_syncookies = 1;
SYSCTL_INT(_net_inet_tcp, OID_AUTO, syncookies, CTLFLAG_RW,
    &tcp_syncookies, 0,
    "Use TCP SYN cookies if the syncache overflows");

static int tcp_syncookiesonly = 0;
SYSCTL_INT(_net_inet_tcp, OID_AUTO, syncookies_only, CTLFLAG_RW,
    &tcp_syncookiesonly, 0,
    "Use only TCP SYN cookies");

#define SYNCOOKIE_SECRET_SIZE	8	/* dwords */
#define SYNCOOKIE_LIFETIME	16	/* seconds */

struct syncache {
	TAILQ_ENTRY(syncache)	sc_hash;
	struct		in_conninfo sc_inc;	/* addresses */
	int		sc_rxttime;		/* retransmit time */
	u_int16_t	sc_rxmits;		/* retransmit counter */

	u_int32_t	sc_tsreflect;		/* timestamp to reflect */
	u_int32_t	sc_ts;			/* our timestamp to send */
	u_int32_t	sc_tsoff;		/* ts offset w/ syncookies */
	u_int32_t	sc_flowlabel;		/* IPv6 flowlabel */
	tcp_seq		sc_irs;			/* seq from peer */
	tcp_seq		sc_iss;			/* our ISS */
	struct		mbuf *sc_ipopts;	/* source route */

	u_int16_t	sc_peer_mss;		/* peer's MSS */
	u_int16_t	sc_wnd;			/* advertised window */
	u_int8_t	sc_ip_ttl;		/* IPv4 TTL */
	u_int8_t	sc_ip_tos;		/* IPv4 TOS */
	u_int8_t	sc_requested_s_scale:4,
			sc_requested_r_scale:4;
	u_int8_t	sc_flags;
#define SCF_NOOPT	0x01			/* no TCP options */
#define SCF_WINSCALE	0x02			/* negotiated window scaling */
#define SCF_TIMESTAMP	0x04			/* negotiated timestamps */
						/* MSS is implicit */
#define SCF_UNREACH	0x10			/* icmp unreachable received */
#define SCF_SIGNATURE	0x20			/* send MD5 digests */
#define SCF_SACK	0x80			/* send SACK option */
#ifndef TCP_OFFLOAD_DISABLE
	struct toe_usrreqs *sc_tu;		/* TOE operations */
	void		*sc_toepcb;		/* TOE protocol block */
#endif
#ifdef MAC
	struct label	*sc_label;		/* MAC label reference */
#endif
};

#ifdef TCP_OFFLOAD_DISABLE
#define TOEPCB_ISSET(sc)	(0)
#else
#define TOEPCB_ISSET(sc)	((sc)->sc_toepcb != NULL)
#endif
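
/*
 * Descriptive note: TOEPCB_ISSET() reports whether a syncache entry is
 * owned by a TCP offload engine (sc_toepcb set).  Later in this file it
 * is used to relax checks that do not apply to offloaded connections,
 * e.g. the ISS+1 ACK validation in syncache_expand().
 */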

struct syncache_head {
	struct mtx	sch_mtx;
	TAILQ_HEAD(sch_head, syncache)	sch_bucket;
	struct callout	sch_timer;
	int		sch_nextc;
	u_int		sch_length;
	u_int		sch_oddeven;
	u_int32_t	sch_secbits_odd[SYNCOOKIE_SECRET_SIZE];
	u_int32_t	sch_secbits_even[SYNCOOKIE_SECRET_SIZE];
	u_int		sch_reseed;		/* time_uptime, seconds */
};
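
/*
 * Descriptive note: each hash bucket row is self-contained.  It carries
 * its own mutex (sch_mtx), its own retransmit callout (sch_timer) and its
 * own pair of syncookie secrets (sch_secbits_odd/sch_secbits_even), so
 * bucket rows can be locked and timed out independently of each other.
 */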

static void	 syncache_drop(struct syncache *, struct syncache_head *);
static void	 syncache_free(struct syncache *);
static void	 syncache_insert(struct syncache *, struct syncache_head *);
struct syncache *syncache_lookup(struct in_conninfo *, struct syncache_head **);
static int	 syncache_respond(struct syncache *);
static struct socket *syncache_socket(struct syncache *, struct socket *,
		    struct mbuf *m);
static void	 syncache_timeout(struct syncache *sc, struct syncache_head *sch,
		    int docallout);
static void	 syncache_timer(void *);
static void	 syncookie_generate(struct syncache_head *, struct syncache *,
		    u_int32_t *);
static struct syncache
		*syncookie_lookup(struct in_conninfo *, struct syncache_head *,
		    struct syncache *, struct tcpopt *, struct tcphdr *,
		    struct socket *);

/*
 * Transmit the SYN,ACK fewer times than TCP_MAXRXTSHIFT specifies.
 * 3 retransmits corresponds to a timeout of 3 * (1 + 2 + 4 + 8) == 45 seconds,
 * the odds are that the user has given up attempting to connect by then.
 */
#define SYNCACHE_MAXREXMTS		3
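
/*
 * Worked example (assuming the stock TCPTV_RTOBASE of 3 seconds and the
 * usual doubling tcp_backoff[] table): syncache_timeout() arms the entry
 * for 3 * 1, 3 * 2, 3 * 4 and 3 * 8 second intervals in turn, so the
 * SYN|ACK goes out at roughly t = 0, 3, 9 and 21 seconds and the entry
 * is dropped around t = 3 + 6 + 12 + 24 = 45 seconds, matching the
 * comment above.
 */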

/* Arbitrary values */
#define TCP_SYNCACHE_HASHSIZE		512
#define TCP_SYNCACHE_BUCKETLIMIT	30

struct tcp_syncache {
	struct	syncache_head *hashbase;
	uma_zone_t zone;
	u_int	hashsize;
	u_int	hashmask;
	u_int	bucket_limit;
	u_int	cache_count;		/* XXX: unprotected */
	u_int	cache_limit;
	u_int	rexmt_limit;
	u_int	hash_secret;
};
static struct tcp_syncache tcp_syncache;

SYSCTL_NODE(_net_inet_tcp, OID_AUTO, syncache, CTLFLAG_RW, 0, "TCP SYN cache");

SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, bucketlimit, CTLFLAG_RDTUN,
    &tcp_syncache.bucket_limit, 0, "Per-bucket hash limit for syncache");

SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, cachelimit, CTLFLAG_RDTUN,
    &tcp_syncache.cache_limit, 0, "Overall entry limit for syncache");

SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, count, CTLFLAG_RD,
    &tcp_syncache.cache_count, 0, "Current number of entries in syncache");

SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, hashsize, CTLFLAG_RDTUN,
    &tcp_syncache.hashsize, 0, "Size of TCP syncache hashtable");

SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, rexmtlimit, CTLFLAG_RW,
    &tcp_syncache.rexmt_limit, 0, "Limit on SYN/ACK retransmissions");

int	tcp_sc_rst_sock_fail = 1;
SYSCTL_INT(_net_inet_tcp_syncache, OID_AUTO, rst_on_sock_fail, CTLFLAG_RW,
    &tcp_sc_rst_sock_fail, 0, "Send reset on socket allocation failure");

static MALLOC_DEFINE(M_SYNCACHE, "syncache", "TCP syncache");

#define SYNCACHE_HASH(inc, mask)	\
	((tcp_syncache.hash_secret ^	\
	  (inc)->inc_faddr.s_addr ^	\
	  ((inc)->inc_faddr.s_addr >> 16) ^	\
	  (inc)->inc_fport ^ (inc)->inc_lport) & mask)

#define SYNCACHE_HASH6(inc, mask)	\
	((tcp_syncache.hash_secret ^	\
	  (inc)->inc6_faddr.s6_addr32[0] ^	\
	  (inc)->inc6_faddr.s6_addr32[3] ^	\
	  (inc)->inc_fport ^ (inc)->inc_lport) & mask)
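
/*
 * Descriptive note: both hash macros mix the per-boot hash_secret with
 * the foreign address and the port pair, then mask the result with
 * 'mask' (hashmask == hashsize - 1), which is why syncache_init()
 * insists on a power-of-2 hash size.
 */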

#define ENDPTS_EQ(a, b) (				\
	(a)->ie_fport == (b)->ie_fport &&		\
	(a)->ie_lport == (b)->ie_lport &&		\
	(a)->ie_faddr.s_addr == (b)->ie_faddr.s_addr &&	\
	(a)->ie_laddr.s_addr == (b)->ie_laddr.s_addr	\
)

#define ENDPTS6_EQ(a, b) (memcmp(a, b, sizeof(*a)) == 0)

#define SCH_LOCK(sch)		mtx_lock(&(sch)->sch_mtx)
#define SCH_UNLOCK(sch)		mtx_unlock(&(sch)->sch_mtx)
#define SCH_LOCK_ASSERT(sch)	mtx_assert(&(sch)->sch_mtx, MA_OWNED)

/*
 * Requires the syncache entry to be already removed from the bucket list.
 */
static void
syncache_free(struct syncache *sc)
{

	if (sc->sc_ipopts)
		(void) m_free(sc->sc_ipopts);
#ifdef MAC
	mac_syncache_destroy(&sc->sc_label);
#endif

	uma_zfree(tcp_syncache.zone, sc);
}

void
syncache_init(void)
{
	int i;

	tcp_syncache.cache_count = 0;
	tcp_syncache.hashsize = TCP_SYNCACHE_HASHSIZE;
	tcp_syncache.bucket_limit = TCP_SYNCACHE_BUCKETLIMIT;
	tcp_syncache.rexmt_limit = SYNCACHE_MAXREXMTS;
	tcp_syncache.hash_secret = arc4random();

	TUNABLE_INT_FETCH("net.inet.tcp.syncache.hashsize",
	    &tcp_syncache.hashsize);
	TUNABLE_INT_FETCH("net.inet.tcp.syncache.bucketlimit",
	    &tcp_syncache.bucket_limit);
	if (!powerof2(tcp_syncache.hashsize) || tcp_syncache.hashsize == 0) {
		printf("WARNING: syncache hash size is not a power of 2.\n");
		tcp_syncache.hashsize = TCP_SYNCACHE_HASHSIZE;
	}
	tcp_syncache.hashmask = tcp_syncache.hashsize - 1;

	/* Set limits. */
	tcp_syncache.cache_limit =
	    tcp_syncache.hashsize * tcp_syncache.bucket_limit;
	TUNABLE_INT_FETCH("net.inet.tcp.syncache.cachelimit",
	    &tcp_syncache.cache_limit);

	/* Allocate the hash table. */
	MALLOC(tcp_syncache.hashbase, struct syncache_head *,
	    tcp_syncache.hashsize * sizeof(struct syncache_head),
	    M_SYNCACHE, M_WAITOK | M_ZERO);

	/* Initialize the hash buckets. */
	for (i = 0; i < tcp_syncache.hashsize; i++) {
		TAILQ_INIT(&tcp_syncache.hashbase[i].sch_bucket);
		mtx_init(&tcp_syncache.hashbase[i].sch_mtx, "tcp_sc_head",
		    NULL, MTX_DEF);
		callout_init_mtx(&tcp_syncache.hashbase[i].sch_timer,
		    &tcp_syncache.hashbase[i].sch_mtx, 0);
		tcp_syncache.hashbase[i].sch_length = 0;
	}

	/* Create the syncache entry zone. */
	tcp_syncache.zone = uma_zcreate("syncache", sizeof(struct syncache),
	    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
	uma_zone_set_max(tcp_syncache.zone, tcp_syncache.cache_limit);
}
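
/*
 * Sizing note (defaults only): with TCP_SYNCACHE_HASHSIZE == 512 and
 * TCP_SYNCACHE_BUCKETLIMIT == 30 the code above yields a default
 * cache_limit of 512 * 30 == 15360 entries, unless the
 * net.inet.tcp.syncache.* loader tunables override hashsize,
 * bucketlimit or cachelimit.
 */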

/*
 * Inserts a syncache entry into the specified bucket row.
 * Locks and unlocks the syncache_head autonomously.
 */
static void
syncache_insert(struct syncache *sc, struct syncache_head *sch)
{
	struct syncache *sc2;

	SCH_LOCK(sch);

	/*
	 * Make sure that we don't overflow the per-bucket limit.
	 * If the bucket is full, toss the oldest element.
	 */
	if (sch->sch_length >= tcp_syncache.bucket_limit) {
		KASSERT(!TAILQ_EMPTY(&sch->sch_bucket),
		    ("sch->sch_length incorrect"));
		sc2 = TAILQ_LAST(&sch->sch_bucket, sch_head);
		syncache_drop(sc2, sch);
		tcpstat.tcps_sc_bucketoverflow++;
	}

	/* Put it into the bucket. */
	TAILQ_INSERT_HEAD(&sch->sch_bucket, sc, sc_hash);
	sch->sch_length++;

	/* Reinitialize the bucket row's timer. */
	if (sch->sch_length == 1)
		sch->sch_nextc = ticks + INT_MAX;
	syncache_timeout(sc, sch, 1);

	SCH_UNLOCK(sch);

	tcp_syncache.cache_count++;
	tcpstat.tcps_sc_added++;
}

/*
 * Remove and free entry from syncache bucket row.
 * Expects locked syncache head.
 */
static void
syncache_drop(struct syncache *sc, struct syncache_head *sch)
{

	SCH_LOCK_ASSERT(sch);

	TAILQ_REMOVE(&sch->sch_bucket, sc, sc_hash);
	sch->sch_length--;

#ifndef TCP_OFFLOAD_DISABLE
	if (sc->sc_tu)
		sc->sc_tu->tu_syncache_event(TOE_SC_DROP, sc->sc_toepcb);
#endif
	syncache_free(sc);
	tcp_syncache.cache_count--;
}

/*
 * Engage/reengage time on bucket row.
 */
static void
syncache_timeout(struct syncache *sc, struct syncache_head *sch, int docallout)
{
	sc->sc_rxttime = ticks +
		TCPTV_RTOBASE * (tcp_backoff[sc->sc_rxmits]);
	sc->sc_rxmits++;
	if (TSTMP_LT(sc->sc_rxttime, sch->sch_nextc)) {
		sch->sch_nextc = sc->sc_rxttime;
		if (docallout)
			callout_reset(&sch->sch_timer, sch->sch_nextc - ticks,
			    syncache_timer, (void *)sch);
	}
}

/*
 * Walk the timer queues, looking for SYN,ACKs that need to be retransmitted.
 * If we have retransmitted an entry the maximum number of times, expire it.
 * One separate timer for each bucket row.
 */
static void
syncache_timer(void *xsch)
{
	struct syncache_head *sch = (struct syncache_head *)xsch;
	struct syncache *sc, *nsc;
	int tick = ticks;
	char *s;

	/* NB: syncache_head has already been locked by the callout. */
	SCH_LOCK_ASSERT(sch);

	/*
	 * In the following cycle we may remove some entries and/or
	 * advance some timeouts, so re-initialize the bucket timer.
	 */
	sch->sch_nextc = tick + INT_MAX;

	TAILQ_FOREACH_SAFE(sc, &sch->sch_bucket, sc_hash, nsc) {
		/*
		 * We do not check if the listen socket still exists
		 * and accept the case where the listen socket may be
		 * gone by the time we resend the SYN/ACK.  We do
		 * not expect this to happen often.  If it does,
		 * then the RST will be sent by the time the remote
		 * host does the SYN/ACK->ACK.
		 */
		if (TSTMP_GT(sc->sc_rxttime, tick)) {
			if (TSTMP_LT(sc->sc_rxttime, sch->sch_nextc))
				sch->sch_nextc = sc->sc_rxttime;
			continue;
		}
		if (sc->sc_rxmits > tcp_syncache.rexmt_limit) {
			if ((s = tcp_log_addrs(&sc->sc_inc, NULL, NULL, NULL))) {
				log(LOG_DEBUG, "%s; %s: Retransmits exhausted, "
				    "giving up and removing syncache entry\n",
				    s, __func__);
				free(s, M_TCPLOG);
			}
			syncache_drop(sc, sch);
			tcpstat.tcps_sc_stale++;
			continue;
		}
		if ((s = tcp_log_addrs(&sc->sc_inc, NULL, NULL, NULL))) {
			log(LOG_DEBUG, "%s; %s: Response timeout, "
			    "retransmitting (%u) SYN|ACK\n",
			    s, __func__, sc->sc_rxmits);
			free(s, M_TCPLOG);
		}

		(void) syncache_respond(sc);
		tcpstat.tcps_sc_retransmitted++;
		syncache_timeout(sc, sch, 0);
	}
	if (!TAILQ_EMPTY(&(sch)->sch_bucket))
		callout_reset(&(sch)->sch_timer, (sch)->sch_nextc - tick,
		    syncache_timer, (void *)(sch));
}

/*
 * Find an entry in the syncache.
 * Always returns with a locked syncache_head plus a matching entry or NULL.
 */
struct syncache *
syncache_lookup(struct in_conninfo *inc, struct syncache_head **schp)
{
	struct syncache *sc;
	struct syncache_head *sch;

#ifdef INET6
	if (inc->inc_isipv6) {
		sch = &tcp_syncache.hashbase[
		    SYNCACHE_HASH6(inc, tcp_syncache.hashmask)];
		*schp = sch;

		SCH_LOCK(sch);

		/* Circle through bucket row to find matching entry. */
		TAILQ_FOREACH(sc, &sch->sch_bucket, sc_hash) {
			if (ENDPTS6_EQ(&inc->inc_ie, &sc->sc_inc.inc_ie))
				return (sc);
		}
	} else
#endif
	{
		sch = &tcp_syncache.hashbase[
		    SYNCACHE_HASH(inc, tcp_syncache.hashmask)];
		*schp = sch;

		SCH_LOCK(sch);

		/* Circle through bucket row to find matching entry. */
		TAILQ_FOREACH(sc, &sch->sch_bucket, sc_hash) {
#ifdef INET6
			if (sc->sc_inc.inc_isipv6)
				continue;
#endif
			if (ENDPTS_EQ(&inc->inc_ie, &sc->sc_inc.inc_ie))
				return (sc);
		}
	}
	SCH_LOCK_ASSERT(*schp);
	return (NULL);			/* always returns with locked sch */
}

/*
 * This function is called when we get a RST for a
 * non-existent connection, so that we can see if the
 * connection is in the syn cache.  If it is, zap it.
 */
void
syncache_chkrst(struct in_conninfo *inc, struct tcphdr *th)
{
	struct syncache *sc;
	struct syncache_head *sch;
	char *s = NULL;

	sc = syncache_lookup(inc, &sch);	/* returns locked sch */
	SCH_LOCK_ASSERT(sch);

	/*
	 * Any RST to our SYN|ACK must not carry ACK, SYN or FIN flags.
	 * See RFC 793 page 65, section SEGMENT ARRIVES.
	 */
	if (th->th_flags & (TH_ACK|TH_SYN|TH_FIN)) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: Spurious RST with ACK, SYN or "
			    "FIN flag set, segment ignored\n", s, __func__);
		tcpstat.tcps_badrst++;
		goto done;
	}

	/*
	 * No corresponding connection was found in syncache.
	 * If syncookies are enabled and possibly exclusively
	 * used, or we are under memory pressure, a valid RST
	 * may not find a syncache entry.  In that case we're
	 * done and no SYN|ACK retransmissions will happen.
	 * Otherwise the RST was misdirected or spoofed.
	 */
	if (sc == NULL) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: Spurious RST without matching "
			    "syncache entry (possibly syncookie only), "
			    "segment ignored\n", s, __func__);
		tcpstat.tcps_badrst++;
		goto done;
	}

	/*
	 * If the RST bit is set, check the sequence number to see
	 * if this is a valid reset segment.
	 * RFC 793 page 37:
	 *   In all states except SYN-SENT, all reset (RST) segments
	 *   are validated by checking their SEQ-fields.  A reset is
	 *   valid if its sequence number is in the window.
	 *
	 *   The sequence number in the reset segment is normally an
	 *   echo of our outgoing acknowledgement numbers, but some hosts
	 *   send a reset with the sequence number at the rightmost edge
	 *   of our receive window, and we have to handle this case.
	 */
	if (SEQ_GEQ(th->th_seq, sc->sc_irs) &&
	    SEQ_LEQ(th->th_seq, sc->sc_irs + sc->sc_wnd)) {
		syncache_drop(sc, sch);
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: Our SYN|ACK was rejected, "
			    "connection attempt aborted by remote endpoint\n",
			    s, __func__);
		tcpstat.tcps_sc_reset++;
	} else {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: RST with invalid SEQ %u != "
			    "IRS %u (+WND %u), segment ignored\n",
			    s, __func__, th->th_seq, sc->sc_irs, sc->sc_wnd);
		tcpstat.tcps_badrst++;
	}

done:
	if (s != NULL)
		free(s, M_TCPLOG);
	SCH_UNLOCK(sch);
}

void
syncache_badack(struct in_conninfo *inc)
{
	struct syncache *sc;
	struct syncache_head *sch;

	sc = syncache_lookup(inc, &sch);	/* returns locked sch */
	SCH_LOCK_ASSERT(sch);
	if (sc != NULL) {
		syncache_drop(sc, sch);
		tcpstat.tcps_sc_badack++;
	}
	SCH_UNLOCK(sch);
}

void
syncache_unreach(struct in_conninfo *inc, struct tcphdr *th)
{
	struct syncache *sc;
	struct syncache_head *sch;

	sc = syncache_lookup(inc, &sch);	/* returns locked sch */
	SCH_LOCK_ASSERT(sch);
	if (sc == NULL)
		goto done;

	/* If the sequence number != sc_iss, then it's a bogus ICMP msg */
	if (ntohl(th->th_seq) != sc->sc_iss)
		goto done;

	/*
	 * If we've retransmitted 3 times and this is our second error,
	 * we remove the entry.  Otherwise, we allow it to continue on.
	 * This prevents us from incorrectly nuking an entry during a
	 * spurious network outage.
	 *
	 * See tcp_notify().
	 */
	if ((sc->sc_flags & SCF_UNREACH) == 0 || sc->sc_rxmits < 3 + 1) {
		sc->sc_flags |= SCF_UNREACH;
		goto done;
	}
	syncache_drop(sc, sch);
	tcpstat.tcps_sc_unreach++;
done:
	SCH_UNLOCK(sch);
}

/*
 * Build a new TCP socket structure from a syncache entry.
 */
static struct socket *
syncache_socket(struct syncache *sc, struct socket *lso, struct mbuf *m)
{
	struct inpcb *inp = NULL;
	struct socket *so;
	struct tcpcb *tp;
	char *s;

	INP_INFO_WLOCK_ASSERT(&tcbinfo);

	/*
	 * Ok, create the full blown connection, and set things up
	 * as they would have been set up if we had created the
	 * connection when the SYN arrived.  If we can't create
	 * the connection, abort it.
	 */
	so = sonewconn(lso, SS_ISCONNECTED);
	if (so == NULL) {
		/*
		 * Drop the connection; we will either send a RST or
		 * have the peer retransmit its SYN again after its
		 * RTO and try again.
		 */
		tcpstat.tcps_listendrop++;
		if ((s = tcp_log_addrs(&sc->sc_inc, NULL, NULL, NULL))) {
			log(LOG_DEBUG, "%s; %s: Socket create failed "
			    "due to limits or memory shortage\n",
			    s, __func__);
			free(s, M_TCPLOG);
		}
		goto abort2;
	}
#ifdef MAC
	SOCK_LOCK(so);
	mac_socketpeer_set_from_mbuf(m, so);
	SOCK_UNLOCK(so);
#endif

	inp = sotoinpcb(so);
	inp->inp_inc.inc_fibnum = sc->sc_inc.inc_fibnum;
	so->so_fibnum = sc->sc_inc.inc_fibnum;
	INP_WLOCK(inp);

	/* Insert new socket into PCB hash list. */
	inp->inp_inc.inc_isipv6 = sc->sc_inc.inc_isipv6;
#ifdef INET6
	if (sc->sc_inc.inc_isipv6) {
		inp->in6p_laddr = sc->sc_inc.inc6_laddr;
	} else {
		inp->inp_vflag &= ~INP_IPV6;
		inp->inp_vflag |= INP_IPV4;
#endif
		inp->inp_laddr = sc->sc_inc.inc_laddr;
#ifdef INET6
	}
#endif
	inp->inp_lport = sc->sc_inc.inc_lport;
	if (in_pcbinshash(inp) != 0) {
		/*
		 * Undo the assignments above if we failed to
		 * put the PCB on the hash lists.
		 */
#ifdef INET6
		if (sc->sc_inc.inc_isipv6)
			inp->in6p_laddr = in6addr_any;
		else
#endif
			inp->inp_laddr.s_addr = INADDR_ANY;
		inp->inp_lport = 0;
		goto abort;
	}
#ifdef IPSEC
	/* Copy old policy into new socket's. */
	if (ipsec_copy_policy(sotoinpcb(lso)->inp_sp, inp->inp_sp))
		printf("syncache_socket: could not copy policy\n");
#endif
#ifdef INET6
	if (sc->sc_inc.inc_isipv6) {
		struct inpcb *oinp = sotoinpcb(lso);
		struct in6_addr laddr6;
		struct sockaddr_in6 sin6;
		/*
		 * Inherit socket options from the listening socket.
		 * Note that in6p_inputopts are not (and should not be)
		 * copied, since it stores previously received options and is
		 * used to detect if each new option is different than the
		 * previous one and hence should be passed to a user.
		 * If we copied in6p_inputopts, a user would not be able to
		 * receive options just after calling the accept system call.
		 */
		inp->inp_flags |= oinp->inp_flags & INP_CONTROLOPTS;
		if (oinp->in6p_outputopts)
			inp->in6p_outputopts =
			    ip6_copypktopts(oinp->in6p_outputopts, M_NOWAIT);

		sin6.sin6_family = AF_INET6;
		sin6.sin6_len = sizeof(sin6);
		sin6.sin6_addr = sc->sc_inc.inc6_faddr;
		sin6.sin6_port = sc->sc_inc.inc_fport;
		sin6.sin6_flowinfo = sin6.sin6_scope_id = 0;
		laddr6 = inp->in6p_laddr;
		if (IN6_IS_ADDR_UNSPECIFIED(&inp->in6p_laddr))
			inp->in6p_laddr = sc->sc_inc.inc6_laddr;
		if (in6_pcbconnect(inp, (struct sockaddr *)&sin6,
		    thread0.td_ucred)) {
			inp->in6p_laddr = laddr6;
			goto abort;
		}
		/* Override flowlabel from in6_pcbconnect. */
		inp->in6p_flowinfo &= ~IPV6_FLOWLABEL_MASK;
		inp->in6p_flowinfo |= sc->sc_flowlabel;
	} else
#endif
	{
		struct in_addr laddr;
		struct sockaddr_in sin;

		inp->inp_options = (m) ? ip_srcroute(m) : NULL;

		if (inp->inp_options == NULL) {
			inp->inp_options = sc->sc_ipopts;
			sc->sc_ipopts = NULL;
		}

		sin.sin_family = AF_INET;
		sin.sin_len = sizeof(sin);
		sin.sin_addr = sc->sc_inc.inc_faddr;
		sin.sin_port = sc->sc_inc.inc_fport;
		bzero((caddr_t)sin.sin_zero, sizeof(sin.sin_zero));
		laddr = inp->inp_laddr;
		if (inp->inp_laddr.s_addr == INADDR_ANY)
			inp->inp_laddr = sc->sc_inc.inc_laddr;
		if (in_pcbconnect(inp, (struct sockaddr *)&sin,
		    thread0.td_ucred)) {
			inp->inp_laddr = laddr;
			goto abort;
		}
	}
	tp = intotcpcb(inp);
	tp->t_state = TCPS_SYN_RECEIVED;
	tp->iss = sc->sc_iss;
	tp->irs = sc->sc_irs;
	tcp_rcvseqinit(tp);
	tcp_sendseqinit(tp);
	tp->snd_wl1 = sc->sc_irs;
	tp->snd_max = tp->iss + 1;
	tp->snd_nxt = tp->iss + 1;
	tp->rcv_up = sc->sc_irs + 1;
	tp->rcv_wnd = sc->sc_wnd;
	tp->rcv_adv += tp->rcv_wnd;
	tp->last_ack_sent = tp->rcv_nxt;

	tp->t_flags = sototcpcb(lso)->t_flags & (TF_NOPUSH|TF_NODELAY);
	if (sc->sc_flags & SCF_NOOPT)
		tp->t_flags |= TF_NOOPT;
	else {
		if (sc->sc_flags & SCF_WINSCALE) {
			tp->t_flags |= TF_REQ_SCALE|TF_RCVD_SCALE;
			tp->snd_scale = sc->sc_requested_s_scale;
			tp->request_r_scale = sc->sc_requested_r_scale;
		}
		if (sc->sc_flags & SCF_TIMESTAMP) {
			tp->t_flags |= TF_REQ_TSTMP|TF_RCVD_TSTMP;
			tp->ts_recent = sc->sc_tsreflect;
			tp->ts_recent_age = ticks;
			tp->ts_offset = sc->sc_tsoff;
		}
#ifdef TCP_SIGNATURE
		if (sc->sc_flags & SCF_SIGNATURE)
			tp->t_flags |= TF_SIGNATURE;
#endif
		if (sc->sc_flags & SCF_SACK)
			tp->t_flags |= TF_SACK_PERMIT;
	}

	/*
	 * Set up MSS and get cached values from tcp_hostcache.
	 * This might overwrite some of the defaults we just set.
	 */
	tcp_mss(tp, sc->sc_peer_mss);

	/*
	 * If the SYN,ACK was retransmitted, reset cwnd to 1 segment.
	 */
	if (sc->sc_rxmits)
		tp->snd_cwnd = tp->t_maxseg;
	tcp_timer_activate(tp, TT_KEEP, tcp_keepinit);

	INP_WUNLOCK(inp);

	tcpstat.tcps_accepts++;
	return (so);

abort:
	INP_WUNLOCK(inp);
abort2:
	if (so != NULL)
		soabort(so);
	return (NULL);
}
|
|
|
|
|
|
|
|
/*
 * This function gets called when we receive an ACK for a
 * socket in the LISTEN state.  We look up the connection
 * in the syncache, and if it's there, we pull it out of
 * the cache and turn it into a full-blown connection in
 * the SYN-RECEIVED state.
 */
int
syncache_expand(struct in_conninfo *inc, struct tcpopt *to, struct tcphdr *th,
    struct socket **lsop, struct mbuf *m)
{
	struct syncache *sc;
	struct syncache_head *sch;
	struct syncache scs;
	char *s;

	/*
	 * Global TCP locks are held because we manipulate the PCB lists
	 * and create a new socket.
	 */
	INP_INFO_WLOCK_ASSERT(&tcbinfo);
	KASSERT((th->th_flags & (TH_RST|TH_ACK|TH_SYN)) == TH_ACK,
	    ("%s: can handle only ACK", __func__));

	sc = syncache_lookup(inc, &sch);	/* returns locked sch */
	SCH_LOCK_ASSERT(sch);
	if (sc == NULL) {
		/*
		 * There is no syncache entry, so see if this ACK is
		 * a returning syncookie.  To do this, first:
		 *  A. See if this socket has had a syncache entry dropped in
		 *     the past.  We don't want to accept a bogus syncookie
		 *     if we've never received a SYN.
		 *  B. check that the syncookie is valid.  If it is, then
		 *     cobble up a fake syncache entry, and return.
		 */
		if (!tcp_syncookies) {
			SCH_UNLOCK(sch);
			if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
				log(LOG_DEBUG, "%s; %s: Spurious ACK, "
				    "segment rejected (syncookies disabled)\n",
				    s, __func__);
			goto failed;
		}
		bzero(&scs, sizeof(scs));
		sc = syncookie_lookup(inc, sch, &scs, to, th, *lsop);
		SCH_UNLOCK(sch);
		if (sc == NULL) {
			if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
				log(LOG_DEBUG, "%s; %s: Segment failed "
				    "SYNCOOKIE authentication, segment rejected "
				    "(probably spoofed)\n", s, __func__);
			goto failed;
		}
	} else {
		/* Pull out the entry to unlock the bucket row. */
		TAILQ_REMOVE(&sch->sch_bucket, sc, sc_hash);
		sch->sch_length--;
		tcp_syncache.cache_count--;
		SCH_UNLOCK(sch);
	}

	/*
	 * Segment validation:
	 * ACK must match our initial sequence number + 1 (the SYN|ACK).
	 */
	if (th->th_ack != sc->sc_iss + 1 && !TOEPCB_ISSET(sc)) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: ACK %u != ISS+1 %u, segment "
			    "rejected\n", s, __func__, th->th_ack, sc->sc_iss);
		goto failed;
	}
	/*
	 * The SEQ must fall in the window starting at the received
	 * initial receive sequence number + 1 (the SYN).
	 */
	if ((SEQ_LEQ(th->th_seq, sc->sc_irs) ||
	    SEQ_GT(th->th_seq, sc->sc_irs + sc->sc_wnd)) &&
	    !TOEPCB_ISSET(sc)) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: SEQ %u != IRS+1 %u, segment "
			    "rejected\n", s, __func__, th->th_seq, sc->sc_irs);
		goto failed;
	}
	if (!(sc->sc_flags & SCF_TIMESTAMP) && (to->to_flags & TOF_TS)) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: Timestamp not expected, "
			    "segment rejected\n", s, __func__);
		goto failed;
	}
	/*
	 * If timestamps were negotiated the reflected timestamp
	 * must be equal to what we actually sent in the SYN|ACK.
	 */
	if ((to->to_flags & TOF_TS) && to->to_tsecr != sc->sc_ts &&
	    !TOEPCB_ISSET(sc)) {
		if ((s = tcp_log_addrs(inc, th, NULL, NULL)))
			log(LOG_DEBUG, "%s; %s: TSECR %u != TS %u, "
			    "segment rejected\n",
			    s, __func__, to->to_tsecr, sc->sc_ts);
		goto failed;
	}

	*lsop = syncache_socket(sc, *lsop, m);

	if (*lsop == NULL)
		tcpstat.tcps_sc_aborted++;
	else
		tcpstat.tcps_sc_completed++;
/* how do we find the inp for the new socket? */
	if (sc != &scs)
		syncache_free(sc);
	return (1);
failed:
	if (sc != NULL && sc != &scs)
		syncache_free(sc);
	if (s != NULL)
		free(s, M_TCPLOG);
	*lsop = NULL;
	return (0);
}
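/*
 * Illustrative, standalone sketch (not part of tcp_syncache.c) of the
 * segment checks syncache_expand() applies above, using plain uint32_t
 * serial-number arithmetic in place of the kernel's tcp_seq/SEQ_* macros.
 * The struct, helper names and simplified timestamp handling here are
 * stand-ins for this example only.
 */
#include <stdint.h>

struct sc_example {
	uint32_t iss;		/* our initial send sequence number */
	uint32_t irs;		/* peer's initial sequence number */
	uint32_t wnd;		/* window we advertised in the SYN|ACK */
	uint32_t ts;		/* timestamp we sent, if negotiated */
	int	 has_ts;	/* did we negotiate timestamps? */
};

/* Modular sequence comparisons, equivalent to SEQ_GT()/SEQ_LEQ(). */
static int seq_gt(uint32_t a, uint32_t b)  { return ((int32_t)(a - b) > 0); }
static int seq_leq(uint32_t a, uint32_t b) { return ((int32_t)(a - b) <= 0); }

/* Returns 1 if the final ACK of the handshake would be accepted. */
static int
ack_acceptable(const struct sc_example *sc, uint32_t th_ack, uint32_t th_seq,
    int has_tsecr, uint32_t tsecr)
{
	/* ACK must acknowledge exactly our SYN|ACK: ISS + 1. */
	if (th_ack != sc->iss + 1)
		return (0);
	/* SEQ must fall inside (IRS, IRS + wnd]. */
	if (seq_leq(th_seq, sc->irs) || seq_gt(th_seq, sc->irs + sc->wnd))
		return (0);
	/* If timestamps were negotiated, TSecr must echo what we sent. */
	if (sc->has_ts && has_tsecr && tsecr != sc->ts)
		return (0);
	return (1);
}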
/*
 * Given a LISTEN socket and an inbound SYN request, add
 * this to the syn cache, and send back a segment:
 *	<SEQ=ISS><ACK=RCV_NXT><CTL=SYN,ACK>
 * to the source.
 *
 * IMPORTANT NOTE: We do _NOT_ ACK data that might accompany the SYN.
 * Doing so would require that we hold onto the data and deliver it
 * to the application.  However, if we are the target of a SYN-flood
 * DoS attack, an attacker could send data which would eventually
 * consume all available buffer space if it were ACKed.  By not ACKing
 * the data, we avoid this DoS scenario.
 */
static void
_syncache_add(struct in_conninfo *inc, struct tcpopt *to, struct tcphdr *th,
    struct inpcb *inp, struct socket **lsop, struct mbuf *m,
    struct toe_usrreqs *tu, void *toepcb)
{
	struct tcpcb *tp;
	struct socket *so;
	struct syncache *sc = NULL;
	struct syncache_head *sch;
	struct mbuf *ipopts = NULL;
	u_int32_t flowtmp;
	int win, sb_hiwat, ip_ttl, ip_tos, noopt;
	char *s;
#ifdef INET6
	int autoflowlabel = 0;
#endif
#ifdef MAC
	struct label *maclabel;
#endif
	struct syncache scs;

	INP_INFO_WLOCK_ASSERT(&tcbinfo);
	INP_WLOCK_ASSERT(inp);			/* listen socket */
	KASSERT((th->th_flags & (TH_RST|TH_ACK|TH_SYN)) == TH_SYN,
	    ("%s: unexpected tcp flags", __func__));

	/*
	 * Combine all so/tp operations very early to drop the INP lock as
	 * soon as possible.
	 */
	so = *lsop;
	tp = sototcpcb(so);

#ifdef INET6
	if (inc->inc_isipv6 &&
	    (inp->in6p_flags & IN6P_AUTOFLOWLABEL))
		autoflowlabel = 1;
#endif
	ip_ttl = inp->inp_ip_ttl;
	ip_tos = inp->inp_ip_tos;
	win = sbspace(&so->so_rcv);
	sb_hiwat = so->so_rcv.sb_hiwat;
	noopt = (tp->t_flags & TF_NOOPT);

	so = NULL;
	tp = NULL;
#ifdef MAC
	if (mac_syncache_init(&maclabel) != 0) {
		INP_WUNLOCK(inp);
		INP_INFO_WUNLOCK(&tcbinfo);
		goto done;
	} else
		mac_syncache_create(maclabel, inp);
#endif
	INP_WUNLOCK(inp);
	INP_INFO_WUNLOCK(&tcbinfo);

	/*
	 * Remember the IP options, if any.
	 */
#ifdef INET6
	if (!inc->inc_isipv6)
#endif
		ipopts = (m) ? ip_srcroute(m) : NULL;

	/*
	 * See if we already have an entry for this connection.
	 * If we do, resend the SYN,ACK, and reset the retransmit timer.
	 *
	 * XXX: should the syncache be re-initialized with the contents
	 * of the new SYN here (which may have different options?)
	 *
	 * XXX: We do not check the sequence number to see if this is a
	 * real retransmit or a new connection attempt.  The question is
	 * how to handle such a case; either ignore it as spoofed, or
	 * drop the current entry and create a new one?
	 */
	sc = syncache_lookup(inc, &sch);	/* returns locked entry */
	SCH_LOCK_ASSERT(sch);
	if (sc != NULL) {
#ifndef TCP_OFFLOAD_DISABLE
		if (sc->sc_tu)
			sc->sc_tu->tu_syncache_event(TOE_SC_ENTRY_PRESENT,
			    sc->sc_toepcb);
#endif
		tcpstat.tcps_sc_dupsyn++;
		if (ipopts) {
			/*
			 * If we were remembering a previous source route,
			 * forget it and use the new one we've been given.
			 */
			if (sc->sc_ipopts)
				(void) m_free(sc->sc_ipopts);
			sc->sc_ipopts = ipopts;
		}
		/*
		 * Update timestamp if present.
		 */
		if ((sc->sc_flags & SCF_TIMESTAMP) && (to->to_flags & TOF_TS))
			sc->sc_tsreflect = to->to_tsval;
		else
			sc->sc_flags &= ~SCF_TIMESTAMP;
#ifdef MAC
		/*
		 * Since we have already unconditionally allocated label
		 * storage, free it up.  The syncache entry will already
		 * have an initialized label we can use.
		 */
		mac_syncache_destroy(&maclabel);
		KASSERT(sc->sc_label != NULL,
		    ("%s: label not initialized", __func__));
#endif
		/* Retransmit SYN|ACK and reset retransmit count. */
		if ((s = tcp_log_addrs(&sc->sc_inc, th, NULL, NULL))) {
			log(LOG_DEBUG, "%s; %s: Received duplicate SYN, "
			    "resetting timer and retransmitting SYN|ACK\n",
			    s, __func__);
			free(s, M_TCPLOG);
		}
		if (!TOEPCB_ISSET(sc) && syncache_respond(sc) == 0) {
			sc->sc_rxmits = 0;
			syncache_timeout(sc, sch, 1);
			tcpstat.tcps_sndacks++;
			tcpstat.tcps_sndtotal++;
		}
		SCH_UNLOCK(sch);
		goto done;
	}

	sc = uma_zalloc(tcp_syncache.zone, M_NOWAIT | M_ZERO);
	if (sc == NULL) {
		/*
		 * The zone allocator couldn't provide more entries.
		 * Treat this as if the cache was full; drop the oldest
		 * entry and insert the new one.
		 */
		tcpstat.tcps_sc_zonefail++;
		if ((sc = TAILQ_LAST(&sch->sch_bucket, sch_head)) != NULL)
			syncache_drop(sc, sch);
		sc = uma_zalloc(tcp_syncache.zone, M_NOWAIT | M_ZERO);
		if (sc == NULL) {
			if (tcp_syncookies) {
				bzero(&scs, sizeof(scs));
				sc = &scs;
			} else {
				SCH_UNLOCK(sch);
				if (ipopts)
					(void) m_free(ipopts);
				goto done;
			}
		}
	}

	/*
	 * Fill in the syncache values.
	 */
#ifdef MAC
	sc->sc_label = maclabel;
#endif
	sc->sc_ipopts = ipopts;
	sc->sc_inc.inc_fibnum = inp->inp_inc.inc_fibnum;
	bcopy(inc, &sc->sc_inc, sizeof(struct in_conninfo));
#ifdef INET6
	if (!inc->inc_isipv6)
#endif
	{
		sc->sc_ip_tos = ip_tos;
		sc->sc_ip_ttl = ip_ttl;
	}
#ifndef TCP_OFFLOAD_DISABLE
	sc->sc_tu = tu;
	sc->sc_toepcb = toepcb;
#endif
	sc->sc_irs = th->th_seq;
	sc->sc_iss = arc4random();
	sc->sc_flags = 0;
	sc->sc_flowlabel = 0;

	/*
	 * Initial receive window: clip sbspace to [0 .. TCP_MAXWIN].
	 * win was derived from socket earlier in the function.
	 */
	win = imax(win, 0);
	win = imin(win, TCP_MAXWIN);
	sc->sc_wnd = win;

	if (tcp_do_rfc1323) {
		/*
		 * A timestamp received in a SYN makes
		 * it ok to send timestamp requests and replies.
		 */
		if (to->to_flags & TOF_TS) {
			sc->sc_tsreflect = to->to_tsval;
			sc->sc_ts = ticks;
			sc->sc_flags |= SCF_TIMESTAMP;
		}
		if (to->to_flags & TOF_SCALE) {
			int wscale = 0;

			/*
			 * Pick the smallest possible scaling factor that
			 * will still allow us to scale up to sb_max, aka
			 * kern.ipc.maxsockbuf.
			 *
			 * We do this because there are broken firewalls that
			 * will corrupt the window scale option, leading to
			 * the other endpoint believing that our advertised
			 * window is unscaled.  At scale factors larger than
			 * 5 the unscaled window will drop below 1500 bytes,
			 * leading to serious problems when traversing these
			 * broken firewalls.
			 *
			 * With the default maxsockbuf of 256K, a scale factor
			 * of 3 will be chosen by this algorithm.  Those who
			 * choose a larger maxsockbuf should watch out
			 * for the compatibility problems mentioned above.
			 *
			 * RFC1323: The Window field in a SYN (i.e., a <SYN>
			 * or <SYN,ACK>) segment itself is never scaled.
			 */
			while (wscale < TCP_MAX_WINSHIFT &&
			    (TCP_MAXWIN << wscale) < sb_max)
				wscale++;
			sc->sc_requested_r_scale = wscale;
			sc->sc_requested_s_scale = to->to_wscale;
			sc->sc_flags |= SCF_WINSCALE;
		}
	}
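	/*
	 * Illustrative arithmetic for the scale factor selection above:
	 * with TCP_MAXWIN = 65535 and the default sb_max of 256K (262144
	 * bytes), the loop stops at wscale = 3, since 65535 << 2 = 262140
	 * is still below 262144 while 65535 << 3 = 524280 is not.
	 */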
#ifdef TCP_SIGNATURE
	/*
	 * If listening socket requested TCP digests, and received SYN
	 * contains the option, flag this in the syncache so that
	 * syncache_respond() will do the right thing with the SYN+ACK.
	 * XXX: Currently we always record the option by default and will
	 * attempt to use it in syncache_respond().
	 */
	if (to->to_flags & TOF_SIGNATURE)
		sc->sc_flags |= SCF_SIGNATURE;
#endif
	if (to->to_flags & TOF_SACKPERM)
		sc->sc_flags |= SCF_SACK;
	if (to->to_flags & TOF_MSS)
		sc->sc_peer_mss = to->to_mss;	/* peer mss may be zero */
	if (noopt)
		sc->sc_flags |= SCF_NOOPT;

	if (tcp_syncookies) {
		syncookie_generate(sch, sc, &flowtmp);
#ifdef INET6
		if (autoflowlabel)
			sc->sc_flowlabel = flowtmp;
#endif
	} else {
#ifdef INET6
		if (autoflowlabel)
			sc->sc_flowlabel =
			    (htonl(ip6_randomflowlabel()) & IPV6_FLOWLABEL_MASK);
#endif
	}
	SCH_UNLOCK(sch);
	/*
	 * Do a standard 3-way handshake.
	 */
	if (TOEPCB_ISSET(sc) || syncache_respond(sc) == 0) {
		if (tcp_syncookies && tcp_syncookiesonly && sc != &scs)
			syncache_free(sc);
		else if (sc != &scs)
			syncache_insert(sc, sch);	/* locks and unlocks sch */
		tcpstat.tcps_sndacks++;
		tcpstat.tcps_sndtotal++;
	} else {
		if (sc != &scs)
			syncache_free(sc);
		tcpstat.tcps_sc_dropped++;
	}

done:
#ifdef MAC
	if (sc == &scs)
		mac_syncache_destroy(&maclabel);
#endif
	if (m) {
		*lsop = NULL;
		m_freem(m);
	}
	return;
}
static int
syncache_respond(struct syncache *sc)
{
	struct ip *ip = NULL;
	struct mbuf *m;
	struct tcphdr *th;
	int optlen, error;
	u_int16_t hlen, tlen, mssopt;
	struct tcpopt to;
#ifdef INET6
	struct ip6_hdr *ip6 = NULL;
#endif

	hlen =
#ifdef INET6
	       (sc->sc_inc.inc_isipv6) ? sizeof(struct ip6_hdr) :
#endif
		sizeof(struct ip);
	tlen = hlen + sizeof(struct tcphdr);

	/* Determine MSS we advertize to other end of connection. */
	mssopt = tcp_mssopt(&sc->sc_inc);
	if (sc->sc_peer_mss)
		mssopt = max( min(sc->sc_peer_mss, mssopt), tcp_minmss);

	/* XXX: Assume that the entire packet will fit in a header mbuf. */
	KASSERT(max_linkhdr + tlen + TCP_MAXOLEN <= MHLEN,
	    ("syncache: mbuf too small"));

	/* Create the IP+TCP header from scratch. */
	m = m_gethdr(M_DONTWAIT, MT_DATA);
	if (m == NULL)
		return (ENOBUFS);
#ifdef MAC
	mac_syncache_create_mbuf(sc->sc_label, m);
#endif
	m->m_data += max_linkhdr;
	m->m_len = tlen;
	m->m_pkthdr.len = tlen;
	m->m_pkthdr.rcvif = NULL;

#ifdef INET6
	if (sc->sc_inc.inc_isipv6) {
		ip6 = mtod(m, struct ip6_hdr *);
		ip6->ip6_vfc = IPV6_VERSION;
		ip6->ip6_nxt = IPPROTO_TCP;
		ip6->ip6_src = sc->sc_inc.inc6_laddr;
		ip6->ip6_dst = sc->sc_inc.inc6_faddr;
		ip6->ip6_plen = htons(tlen - hlen);
		/* ip6_hlim is set after checksum */
		ip6->ip6_flow &= ~IPV6_FLOWLABEL_MASK;
		ip6->ip6_flow |= sc->sc_flowlabel;

		th = (struct tcphdr *)(ip6 + 1);
	} else
#endif
	{
		ip = mtod(m, struct ip *);
		ip->ip_v = IPVERSION;
		ip->ip_hl = sizeof(struct ip) >> 2;
		ip->ip_len = tlen;
		ip->ip_id = 0;
		ip->ip_off = 0;
		ip->ip_sum = 0;
		ip->ip_p = IPPROTO_TCP;
		ip->ip_src = sc->sc_inc.inc_laddr;
		ip->ip_dst = sc->sc_inc.inc_faddr;
		ip->ip_ttl = sc->sc_ip_ttl;
		ip->ip_tos = sc->sc_ip_tos;

		/*
		 * See if we should do MTU discovery.  Route lookups are
		 * expensive, so we will only unset the DF bit if:
		 *
		 *	1) path_mtu_discovery is disabled
		 *	2) the SCF_UNREACH flag has been set
		 */
		if (path_mtu_discovery && ((sc->sc_flags & SCF_UNREACH) == 0))
			ip->ip_off |= IP_DF;

		th = (struct tcphdr *)(ip + 1);
	}
	th->th_sport = sc->sc_inc.inc_lport;
	th->th_dport = sc->sc_inc.inc_fport;

	th->th_seq = htonl(sc->sc_iss);
	th->th_ack = htonl(sc->sc_irs + 1);
	th->th_off = sizeof(struct tcphdr) >> 2;
	th->th_x2 = 0;
	th->th_flags = TH_SYN|TH_ACK;
	th->th_win = htons(sc->sc_wnd);
	th->th_urp = 0;

	/* Tack on the TCP options. */
	if ((sc->sc_flags & SCF_NOOPT) == 0) {
		to.to_flags = 0;

		to.to_mss = mssopt;
		to.to_flags = TOF_MSS;
		if (sc->sc_flags & SCF_WINSCALE) {
			to.to_wscale = sc->sc_requested_r_scale;
			to.to_flags |= TOF_SCALE;
		}
		if (sc->sc_flags & SCF_TIMESTAMP) {
			/* Virgin timestamp or TCP cookie enhanced one. */
			to.to_tsval = sc->sc_ts;
			to.to_tsecr = sc->sc_tsreflect;
			to.to_flags |= TOF_TS;
		}
		if (sc->sc_flags & SCF_SACK)
			to.to_flags |= TOF_SACKPERM;
#ifdef TCP_SIGNATURE
		if (sc->sc_flags & SCF_SIGNATURE)
			to.to_flags |= TOF_SIGNATURE;
#endif
		optlen = tcp_addoptions(&to, (u_char *)(th + 1));

		/* Adjust headers by option size. */
		th->th_off = (sizeof(struct tcphdr) + optlen) >> 2;
		m->m_len += optlen;
		m->m_pkthdr.len += optlen;

#ifdef TCP_SIGNATURE
		if (sc->sc_flags & SCF_SIGNATURE)
			tcp_signature_compute(m, sizeof(struct ip), 0, optlen,
			    to.to_signature, IPSEC_DIR_OUTBOUND);
#endif
#ifdef INET6
		if (sc->sc_inc.inc_isipv6)
			ip6->ip6_plen = htons(ntohs(ip6->ip6_plen) + optlen);
		else
#endif
			ip->ip_len += optlen;
	} else
		optlen = 0;

#ifdef INET6
	if (sc->sc_inc.inc_isipv6) {
		th->th_sum = 0;
		th->th_sum = in6_cksum(m, IPPROTO_TCP, hlen,
		    tlen + optlen - hlen);
		ip6->ip6_hlim = in6_selecthlim(NULL, NULL);
		error = ip6_output(m, NULL, NULL, 0, NULL, NULL, NULL);
	} else
#endif
	{
		th->th_sum = in_pseudo(ip->ip_src.s_addr, ip->ip_dst.s_addr,
		    htons(tlen + optlen - hlen + IPPROTO_TCP));
		m->m_pkthdr.csum_flags = CSUM_TCP;
		m->m_pkthdr.csum_data = offsetof(struct tcphdr, th_sum);
		error = ip_output(m, sc->sc_ipopts, NULL, 0, NULL, NULL);
	}
	return (error);
}
void
syncache_add(struct in_conninfo *inc, struct tcpopt *to, struct tcphdr *th,
    struct inpcb *inp, struct socket **lsop, struct mbuf *m)
{
	_syncache_add(inc, to, th, inp, lsop, m, NULL, NULL);
}

void
syncache_offload_add(struct in_conninfo *inc, struct tcpopt *to,
    struct tcphdr *th, struct inpcb *inp, struct socket **lsop,
    struct toe_usrreqs *tu, void *toepcb)
{

	INP_INFO_WLOCK(&tcbinfo);
	INP_WLOCK(inp);
	_syncache_add(inc, to, th, inp, lsop, NULL, tu, toepcb);
}
/*
 * The purpose of SYN cookies is to avoid keeping track of all SYN's we
 * receive and to be able to handle SYN floods from bogus source addresses
 * (where we will never receive any reply).  SYN floods try to exhaust all
 * our memory and available slots in the SYN cache table to cause a denial
 * of service to legitimate users of the local host.
 *
 * The idea of SYN cookies is to encode and include all necessary information
 * about the connection setup state within the SYN-ACK we send back and thus
 * to get along without keeping any local state until the ACK to the SYN-ACK
 * arrives (if ever).  Everything we need to know should be available from
 * the information we encoded in the SYN-ACK.
 *
 * More information about the theory behind SYN cookies and its first
 * discussion and specification can be found at:
 *  http://cr.yp.to/syncookies.html    (overview)
 *  http://cr.yp.to/syncookies/archive (gory details)
 *
 * This implementation extends the original idea and first implementation
 * of FreeBSD by using not only the initial sequence number field to store
 * information but also the timestamp field if present.  This way we can
 * keep track of the entire state we need to know to recreate the session in
 * its original form.  Almost all TCP speakers implement RFC1323 timestamps
 * these days.  For those that do not we still have to live with the known
 * shortcomings of the ISN only SYN cookies.
 *
 * Cookie layers:
 *
 * Initial sequence number we send:
 * 31|................................|0
 *    DDDDDDDDDDDDDDDDDDDDDDDDDMMMRRRP
 *    D = MD5 Digest (first dword)
 *    M = MSS index
 *    R = Rotation of secret
 *    P = Odd or Even secret
 *
 * The MD5 Digest is computed over the following parameters:
 *  a) randomly rotated secret
 *  b) struct in_conninfo containing the remote/local ip/port (IPv4&IPv6)
 *  c) the received initial sequence number from remote host
 *  d) the rotation offset and odd/even bit
 *
 * Timestamp we send:
 * 31|................................|0
 *    DDDDDDDDDDDDDDDDDDDDDDSSSSRRRRA5
 *    D = MD5 Digest (third dword) (only as filler)
 *    S = Requested send window scale
 *    R = Requested receive window scale
 *    A = SACK allowed
 *    5 = TCP-MD5 enabled (not implemented yet)
 *    XORed with MD5 Digest (fourth dword)
 *
 * The timestamp isn't cryptographically secure and doesn't need to be.
 * The double use of the MD5 digest dwords ties it to a specific remote/
 * local host/port, remote initial sequence number and our local time
 * limited secret.  A received timestamp is reverted (XORed) and then
 * the contained MD5 dword is compared to the computed one to ensure the
 * timestamp belongs to the SYN-ACK we sent.  The other parameters may
 * have been tampered with but this isn't different from supplying bogus
 * values in the SYN in the first place.
 *
 * Some problems with SYN cookies remain however:
 * Consider the problem of a recreated (and retransmitted) cookie.  If the
 * original SYN was accepted, the connection is established.  The second
 * SYN is inflight, and if it arrives with an ISN that falls within the
 * receive window, the connection is killed.
 *
 * Notes:
 * A heuristic to determine when to accept syn cookies is not necessary.
 * An ACK flood would cause the syncookie verification to be attempted,
 * but a SYN flood causes syncookies to be generated.  Both are of equal
 * cost, so there's no point in trying to optimize the ACK flood case.
 * Also, if you don't process certain ACKs for some reason, then all someone
 * would have to do is launch a SYN and ACK flood at the same time, which
 * would stop cookie verification and defeat the entire purpose of syncookies.
 */
static int tcp_sc_msstab[] = { 0, 256, 468, 536, 996, 1452, 1460, 8960 };
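/*
 * Illustrative, standalone sketch of the ISN cookie layout documented
 * above (D|M|R|P bit fields).  This is not part of tcp_syncache.c; it
 * only shows how the fields are packed and recovered, mirroring what
 * syncookie_generate() and syncookie_lookup() do below.  The function
 * names are stand-ins for this example only.
 */
#include <stdint.h>

static uint32_t
cookie_isn_pack(uint32_t digest, uint32_t mss_idx, uint32_t rot, uint32_t odd)
{
	uint32_t isn;

	isn  = odd & 0x1;		/* P: odd/even secret, 1 bit */
	isn |= (rot & 0x7) << 1;	/* R: secret rotation, 3 bits */
	isn |= (mss_idx & 0x7) << 4;	/* M: MSS table index, 3 bits */
	isn |= digest << 7;		/* D: low 25 bits of the MD5 word */
	return (isn);
}

static void
cookie_isn_unpack(uint32_t isn, uint32_t *mss_idx, uint32_t *rot, uint32_t *odd)
{
	*odd = isn & 0x1;
	*rot = (isn >> 1) & 0x7;
	*mss_idx = (isn >> 4) & 0x7;
	/* The digest bits (isn >> 7) are validated by recomputing the MD5. */
}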
static void
syncookie_generate(struct syncache_head *sch, struct syncache *sc,
    u_int32_t *flowlabel)
{
	MD5_CTX ctx;
	u_int32_t md5_buffer[MD5_DIGEST_LENGTH / sizeof(u_int32_t)];
	u_int32_t data;
	u_int32_t *secbits;
	u_int off, pmss, mss;
	int i;

	SCH_LOCK_ASSERT(sch);

	/* Which of the two secrets to use. */
	secbits = sch->sch_oddeven ?
			sch->sch_secbits_odd : sch->sch_secbits_even;

	/* Reseed secret if too old. */
	if (sch->sch_reseed < time_uptime) {
		sch->sch_oddeven = sch->sch_oddeven ? 0 : 1;	/* toggle */
		secbits = sch->sch_oddeven ?
				sch->sch_secbits_odd : sch->sch_secbits_even;
		for (i = 0; i < SYNCOOKIE_SECRET_SIZE; i++)
			secbits[i] = arc4random();
		sch->sch_reseed = time_uptime + SYNCOOKIE_LIFETIME;
	}

	/* Secret rotation offset. */
	off = sc->sc_iss & 0x7;			/* iss was randomized before */

	/* Maximum segment size calculation. */
	pmss = max( min(sc->sc_peer_mss, tcp_mssopt(&sc->sc_inc)), tcp_minmss);
	for (mss = sizeof(tcp_sc_msstab) / sizeof(int) - 1; mss > 0; mss--)
		if (tcp_sc_msstab[mss] <= pmss)
			break;

	/* Fold parameters and MD5 digest into the ISN we will send. */
	data = sch->sch_oddeven;/* odd or even secret, 1 bit */
	data |= off << 1;	/* secret offset, derived from iss, 3 bits */
	data |= mss << 4;	/* mss, 3 bits */

	MD5Init(&ctx);
	MD5Update(&ctx, ((u_int8_t *)secbits) + off,
	    SYNCOOKIE_SECRET_SIZE * sizeof(*secbits) - off);
	MD5Update(&ctx, secbits, off);
	MD5Update(&ctx, &sc->sc_inc, sizeof(sc->sc_inc));
	MD5Update(&ctx, &sc->sc_irs, sizeof(sc->sc_irs));
	MD5Update(&ctx, &data, sizeof(data));
	MD5Final((u_int8_t *)&md5_buffer, &ctx);

	data |= (md5_buffer[0] << 7);
	sc->sc_iss = data;

#ifdef INET6
	*flowlabel = md5_buffer[1] & IPV6_FLOWLABEL_MASK;
#endif

	/* Additional parameters are stored in the timestamp if present. */
	if (sc->sc_flags & SCF_TIMESTAMP) {
		data =  ((sc->sc_flags & SCF_SIGNATURE) ? 1 : 0); /* TCP-MD5, 1 bit */
		data |= ((sc->sc_flags & SCF_SACK) ? 1 : 0) << 1; /* SACK, 1 bit */
		data |= sc->sc_requested_s_scale << 2;	/* SWIN scale, 4 bits */
		data |= sc->sc_requested_r_scale << 6;	/* RWIN scale, 4 bits */
		data |= md5_buffer[2] << 10;		/* more digest bits */
		data ^= md5_buffer[3];
		sc->sc_ts = data;
		sc->sc_tsoff = data - ticks;		/* after XOR */
	}

	tcpstat.tcps_sc_sendcookie++;
	return;
}
static struct syncache *
syncookie_lookup(struct in_conninfo *inc, struct syncache_head *sch,
    struct syncache *sc, struct tcpopt *to, struct tcphdr *th,
    struct socket *so)
{
        MD5_CTX ctx;
        u_int32_t md5_buffer[MD5_DIGEST_LENGTH / sizeof(u_int32_t)];
        u_int32_t data = 0;
        u_int32_t *secbits;
        tcp_seq ack, seq;
        int off, mss, wnd, flags;

        SCH_LOCK_ASSERT(sch);

        /*
         * Pull information out of SYN-ACK/ACK and
         * revert sequence number advances.
         */
        ack = th->th_ack - 1;
        seq = th->th_seq - 1;
        off = (ack >> 1) & 0x7;
        mss = (ack >> 4) & 0x7;
        flags = ack & 0x7f;

        /* Which of the two secrets to use. */
        secbits = (flags & 0x1) ? sch->sch_secbits_odd : sch->sch_secbits_even;

        /*
         * The secret wasn't updated for the lifetime of a syncookie,
         * so this SYN-ACK/ACK is either too old (replay) or totally bogus.
         */
        if (sch->sch_reseed + SYNCOOKIE_LIFETIME < time_uptime) {
                return (NULL);
        }

        /* Recompute the digest so we can compare it. */
        MD5Init(&ctx);
        MD5Update(&ctx, ((u_int8_t *)secbits) + off,
            SYNCOOKIE_SECRET_SIZE * sizeof(*secbits) - off);
        MD5Update(&ctx, secbits, off);
        MD5Update(&ctx, inc, sizeof(*inc));
        MD5Update(&ctx, &seq, sizeof(seq));
        MD5Update(&ctx, &flags, sizeof(flags));
        MD5Final((u_int8_t *)&md5_buffer, &ctx);

        /* Does the digest part of our ACK'ed ISS match? */
        if ((ack & (~0x7f)) != (md5_buffer[0] << 7))
                return (NULL);

        /* Does the digest part of our reflected timestamp match? */
        if (to->to_flags & TOF_TS) {
                data = md5_buffer[3] ^ to->to_tsecr;
                if ((data & (~0x3ff)) != (md5_buffer[2] << 10))
                        return (NULL);
        }

        /* Fill in the syncache values. */
        bcopy(inc, &sc->sc_inc, sizeof(struct in_conninfo));
        sc->sc_ipopts = NULL;

        sc->sc_irs = seq;
        sc->sc_iss = ack;

#ifdef INET6
        if (inc->inc_isipv6) {
                if (sotoinpcb(so)->in6p_flags & IN6P_AUTOFLOWLABEL)
                        sc->sc_flowlabel = md5_buffer[1] & IPV6_FLOWLABEL_MASK;
        } else
#endif
        {
                sc->sc_ip_ttl = sotoinpcb(so)->inp_ip_ttl;
                sc->sc_ip_tos = sotoinpcb(so)->inp_ip_tos;
        }

        /* Additional parameters that were encoded in the timestamp. */
        if (data) {
                sc->sc_flags |= SCF_TIMESTAMP;
                sc->sc_tsreflect = to->to_tsval;
                sc->sc_ts = to->to_tsecr;
                sc->sc_tsoff = to->to_tsecr - ticks;
                sc->sc_flags |= (data & 0x1) ? SCF_SIGNATURE : 0;
                sc->sc_flags |= ((data >> 1) & 0x1) ? SCF_SACK : 0;
                sc->sc_requested_s_scale = min((data >> 2) & 0xf,
                    TCP_MAX_WINSHIFT);
                sc->sc_requested_r_scale = min((data >> 6) & 0xf,
                    TCP_MAX_WINSHIFT);
                if (sc->sc_requested_s_scale || sc->sc_requested_r_scale)
                        sc->sc_flags |= SCF_WINSCALE;
        } else
                sc->sc_flags |= SCF_NOOPT;

        wnd = sbspace(&so->so_rcv);
        wnd = imax(wnd, 0);
        wnd = imin(wnd, TCP_MAXWIN);
        sc->sc_wnd = wnd;

        sc->sc_rxmits = 0;
        sc->sc_peer_mss = tcp_sc_msstab[mss];

        tcpstat.tcps_sc_recvcookie++;
        return (sc);
}

/*
 * Returns the current number of syncache entries.  This number
 * will probably change before you get around to calling
 * syncache_pcblist.
 */

int
syncache_pcbcount(void)
{
        struct syncache_head *sch;
        int count, i;

        for (count = 0, i = 0; i < tcp_syncache.hashsize; i++) {
                /* No need to lock for a read. */
                sch = &tcp_syncache.hashbase[i];
                count += sch->sch_length;
        }
        return count;
}

/*
 * Exports the syncache entries to userland so that netstat can display
 * them alongside the other sockets.  This function is intended to be
 * called only from tcp_pcblist.
 *
 * Due to concurrency on an active system, the number of pcbs exported
 * may have no relation to max_pcbs.  max_pcbs merely indicates the
 * amount of space the caller allocated for this function to use.
 */
int
syncache_pcblist(struct sysctl_req *req, int max_pcbs, int *pcbs_exported)
{
        struct xtcpcb xt;
        struct syncache *sc;
        struct syncache_head *sch;
        int count, error, i;

        for (count = 0, error = 0, i = 0; i < tcp_syncache.hashsize; i++) {
                sch = &tcp_syncache.hashbase[i];
                SCH_LOCK(sch);
                TAILQ_FOREACH(sc, &sch->sch_bucket, sc_hash) {
                        if (count >= max_pcbs) {
                                SCH_UNLOCK(sch);
                                goto exit;
                        }
                        bzero(&xt, sizeof(xt));
                        xt.xt_len = sizeof(xt);
                        if (sc->sc_inc.inc_isipv6)
                                xt.xt_inp.inp_vflag = INP_IPV6;
                        else
                                xt.xt_inp.inp_vflag = INP_IPV4;
                        bcopy(&sc->sc_inc, &xt.xt_inp.inp_inc,
                            sizeof(struct in_conninfo));
                        xt.xt_tp.t_inpcb = &xt.xt_inp;
                        xt.xt_tp.t_state = TCPS_SYN_RECEIVED;
                        xt.xt_socket.xso_protocol = IPPROTO_TCP;
                        xt.xt_socket.xso_len = sizeof(struct xsocket);
                        xt.xt_socket.so_type = SOCK_STREAM;
                        xt.xt_socket.so_state = SS_ISCONNECTING;
                        error = SYSCTL_OUT(req, &xt, sizeof xt);
                        if (error) {
                                SCH_UNLOCK(sch);
                                goto exit;
                        }
                        count++;
                }
                SCH_UNLOCK(sch);
        }
exit:
        *pcbs_exported = count;
        return error;
}
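
/*
 * A hedged usage sketch (never compiled): how a caller along the lines of
 * tcp_pcblist() might drive the two functions above.  The sysctl request
 * setup ("req") and the rest of the pcb export are assumed and omitted here.
 */
#if 0
        int error, n, pcbs_exported;

        /* Size the export by what is in the syncache right now... */
        n = syncache_pcbcount();
        /* ...and let syncache_pcblist() copy out at most that many entries. */
        error = syncache_pcblist(req, n, &pcbs_exported);
        if (error == 0 && pcbs_exported < n) {
                /* Entries expired or completed while we were exporting. */
        }
#endif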