/*-
 * Copyright (c) 2014 Yandex LLC
 * Copyright (c) 2014 Alexander V. Chernikov
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");

/*
 * Lookup table algorithms.
 *
 */

#include "opt_ipfw.h"
#include "opt_inet.h"
#ifndef INET
#error IPFIREWALL requires INET.
#endif /* INET */
#include "opt_inet6.h"

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/rwlock.h>
#include <sys/rmlock.h>
#include <sys/socket.h>
#include <sys/queue.h>
#include <net/if.h>	/* ip_fw.h requires IFNAMSIZ */
#include <net/radix.h>
#include <net/route.h>
#include <net/route_var.h>

#include <netinet/in.h>
#include <netinet/in_fib.h>
#include <netinet/ip_var.h>	/* struct ipfw_rule_ref */
#include <netinet/ip_fw.h>
#include <netinet6/in6_fib.h>

#include <netpfil/ipfw/ip_fw_private.h>
#include <netpfil/ipfw/ip_fw_table.h>

/*
 * IPFW table lookup algorithms.
 *
 * What is needed to add another table algo?
 *
 * Algo init:
 * * struct table_algo has to be filled with:
 *   name: "type:algoname" format, e.g. "addr:radix". Currently
 *    there are the following types: "addr", "iface", "number" and "flow".
 *   type: one of IPFW_TABLE_* types
 *   flags: one or more TA_FLAGS_*
 *   ta_buf_size: size of structure used to store add/del item state.
 *    Needs to be less than TA_BUF_SZ.
 *   callbacks: see below for description.
 * * ipfw_add_table_algo / ipfw_del_table_algo has to be called.
 *
 * Callbacks description:
 *
 * -init: request to initialize new table instance.
 * typedef int (ta_init)(struct ip_fw_chain *ch, void **ta_state,
 *     struct table_info *ti, char *data, uint8_t tflags);
 * MANDATORY, unlocked. (M_WAITOK). Returns 0 on success.
 *
 *  Allocate all structures needed for normal operations.
 *  * Caller may want to parse @data for some algo-specific
 *    options provided by userland.
 *  * Caller may want to save configuration state pointer to @ta_state.
 *  * Caller needs to save desired runtime structure pointer(s)
 *    inside @ti fields. Note that it is not correct to save
 *    @ti pointer at this moment. Use the -change_ti hook for that.
 *  * Caller has to set ti->lookup to the appropriate function
 *    pointer.
 *
 *
 * -destroy: request to destroy table instance.
 * typedef void (ta_destroy)(void *ta_state, struct table_info *ti);
 * MANDATORY, unlocked. (M_WAITOK).
 *
 *  Frees all table entries and all table structures allocated by -init.
 *
 *
 * -prepare_add: request to allocate state for adding new entry.
 * typedef int (ta_prepare_add)(struct ip_fw_chain *ch, struct tentry_info *tei,
 *     void *ta_buf);
 * MANDATORY, unlocked. (M_WAITOK). Returns 0 on success.
 *
 *  Allocates state and fills it in with all necessary data (EXCEPT value)
 *  from @tei to minimize operations needed to be done under WLOCK.
 *  The "value" field has to be copied to the new entry in the @add callback.
 *  Buffer ta_buf of size ta->ta_buf_sz may be used to store
 *  allocated state.
 *
 *
 * -prepare_del: request to set state for deleting existing entry.
 * typedef int (ta_prepare_del)(struct ip_fw_chain *ch, struct tentry_info *tei,
 *     void *ta_buf);
 * MANDATORY, locked, UH. (M_NOWAIT). Returns 0 on success.
 *
 *  Buffer ta_buf of size ta->ta_buf_sz may be used to store
 *  allocated state. Caller should use on-stack ta_buf allocation
 *  instead of doing malloc().
 *
 *
 * -add: request to insert new entry into runtime/config structures.
 * typedef int (ta_add)(void *ta_state, struct table_info *ti,
 *     struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
 * MANDATORY, UH+WLOCK. (M_NOWAIT). Returns 0 on success.
 *
 *  Insert new entry using previously-allocated state in @ta_buf.
 *  * @tei may have the following flags:
 *    TEI_FLAGS_UPDATE: request to add or update entry.
 *    TEI_FLAGS_DONTADD: request to update (but not add) entry.
 *  * Caller is required to do the following:
 *    copy real entry value from @tei
 *    entry added: return 0, store 1 in @pnum
 *    entry updated: return 0, store 0 in @pnum, store old value in @tei,
 *      add TEI_FLAGS_UPDATED flag to @tei.
 *    entry exists: return EEXIST
 *    entry not found: return ENOENT
 *    other error: return non-zero error code.
 *
 *
 * -del: request to delete existing entry from runtime/config structures.
 * typedef int (ta_del)(void *ta_state, struct table_info *ti,
 *     struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
 * MANDATORY, UH+WLOCK. (M_NOWAIT). Returns 0 on success.
 *
 *  Delete entry using state previously set up in @ta_buf.
 *  * Caller is required to do the following:
 *    entry deleted: return 0, store 1 in @pnum, store old value in @tei.
 *    entry not found: return ENOENT
 *    other error: return non-zero error code.
 *
 *
 * -flush_entry: flush entry state created by -prepare_add / -del / others.
 * typedef void (ta_flush_entry)(struct ip_fw_chain *ch,
 *     struct tentry_info *tei, void *ta_buf);
 * MANDATORY, may be locked. (M_NOWAIT).
 *
 *  Delete state allocated by:
 *  -prepare_add (-add returned EEXIST|UPDATED)
 *  -prepare_del (if any)
 *  -del
 *  * Caller is required to handle empty @ta_buf correctly.
 *
 *
 * -find_tentry: finds entry specified by key @tei.
 * typedef int ta_find_tentry(void *ta_state, struct table_info *ti,
 *     ipfw_obj_tentry *tent);
 * OPTIONAL, locked (UH). (M_NOWAIT). Returns 0 on success.
 *
 *  Finds entry specified by given key.
 *  * Caller is required to do the following:
 *    entry found: returns 0, export entry to @tent
 *    entry not found: returns ENOENT
 *
 *
 * -need_modify: checks if @ti has enough space to hold another @count items.
 * typedef int (ta_need_modify)(void *ta_state, struct table_info *ti,
 *     uint32_t count, uint64_t *pflags);
 * OPTIONAL, locked (UH). (M_NOWAIT). Returns 0 if enough space exists.
 *
 *  Checks if given table has enough space to add @count items without
 *  resize. Caller may use @pflags to store desired modification data.
 *
 *
 * -prepare_mod: allocate structures for table modification.
 * typedef int (ta_prepare_mod)(void *ta_buf, uint64_t *pflags);
 * OPTIONAL(need_modify), unlocked. (M_WAITOK). Returns 0 on success.
 *
 *  Allocate all needed state for table modification. Caller
 *  should use `struct mod_item` to store new state in @ta_buf.
 *  Up to TA_BUF_SZ (128 bytes) can be stored in @ta_buf.
 *
 *
 * -fill_mod: copy some data to new state.
 * typedef int (ta_fill_mod)(void *ta_state, struct table_info *ti,
 *     void *ta_buf, uint64_t *pflags);
 * OPTIONAL(need_modify), locked (UH). (M_NOWAIT). Returns 0 on success.
 *
 *  Copy as much data as we can to minimize changes under WLOCK.
 *  For example, arrays can be merged inside this callback.
 *
 *
 * -modify: perform final modification.
 * typedef void (ta_modify)(void *ta_state, struct table_info *ti,
 *     void *ta_buf, uint64_t pflags);
 * OPTIONAL(need_modify), locked (UH+WLOCK). (M_NOWAIT).
 *
 *  Performs all changes necessary to switch to new structures.
 *  * Caller should save old pointers to @ta_buf storage.
 *
 *
 * -flush_mod: flush table modification state.
 * typedef void (ta_flush_mod)(void *ta_buf);
 * OPTIONAL(need_modify), unlocked. (M_WAITOK).
 *
 *  Performs flush for the following:
 *  - prepare_mod (modification was not necessary)
 *  - modify (for the old state)
 *
 *
 * -change_ti: monitor table info pointer changes.
 * typedef void (ta_change_ti)(void *ta_state, struct table_info *ti);
 * OPTIONAL, locked (UH). (M_NOWAIT).
 *
 *  Called when the @ti pointer changes. Called immediately after -init
 *  to set initial state.
 *
 *
 * -foreach: calls @f for each table entry.
 * typedef void ta_foreach(void *ta_state, struct table_info *ti,
 *     ta_foreach_f *f, void *arg);
 * MANDATORY, locked(UH). (M_NOWAIT).
 *
 *  Runs callback with specified argument for each table entry.
 *  Typically used for dumping table entries.
 *
 *
 * -dump_tentry: dump table entry in current @tentry format.
 * typedef int ta_dump_tentry(void *ta_state, struct table_info *ti, void *e,
 *     ipfw_obj_tentry *tent);
 * MANDATORY, locked(UH). (M_NOWAIT). Returns 0 on success.
 *
 *  Dumps entry @e to @tent.
 *
 *
 * -print_config: prints custom algorithm options into buffer.
 * typedef void (ta_print_config)(void *ta_state, struct table_info *ti,
 *     char *buf, size_t bufsize);
 * OPTIONAL. locked(UH). (M_NOWAIT).
 *
 *  Prints custom algorithm options in a format suitable to pass
 *  back to the -init callback.
 *
 *
 * -dump_tinfo: dumps algo-specific info.
 * typedef void ta_dump_tinfo(void *ta_state, struct table_info *ti,
 *     ipfw_ta_tinfo *tinfo);
 * OPTIONAL. locked(UH). (M_NOWAIT).
 *
 *  Dumps options like items size/hash size, etc.
 */
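
/*
 * Example (illustrative sketch): a new algorithm is plugged in by
 * filling a struct table_algo descriptor with the callbacks described
 * above and registering it via ipfw_add_table_algo().  The "example"
 * names below are placeholders; see ip_fw_table.h and the descriptors
 * for the algorithms in this file for the authoritative field layout.
 *
 *	struct table_algo addr_example = {
 *		.name		= "addr:example",
 *		.type		= IPFW_TABLE_ADDR,
 *		.ta_buf_size	= sizeof(struct ta_buf_example),
 *		.init		= ta_init_example,
 *		.destroy	= ta_destroy_example,
 *		.prepare_add	= ta_prepare_add_example,
 *		.prepare_del	= ta_prepare_del_example,
 *		.add		= ta_add_example,
 *		.del		= ta_del_example,
 *		.flush_entry	= ta_flush_example_entry,
 *		.foreach	= ta_foreach_example,
 *		.dump_tentry	= ta_dump_example_tentry,
 *		.find_tentry	= ta_find_example_tentry,
 *	};
 *
 *	ipfw_add_table_algo(ch, &addr_example, sizeof(addr_example), &idx);
 */
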
MALLOC_DEFINE(M_IPFW_TBL, "ipfw_tbl", "IpFw tables");

/*
 * Utility structures/functions common to more than one algo
 */

struct mod_item {
        void    *main_ptr;
        size_t  size;
        void    *main_ptr6;
        size_t  size6;
};

static int badd(const void *key, void *item, void *base, size_t nmemb,
    size_t size, int (*compar) (const void *, const void *));
static int bdel(const void *key, void *base, size_t nmemb, size_t size,
    int (*compar) (const void *, const void *));


/*
 * ADDR implementation using radix
 *
 */

/*
 * The radix code expects addr and mask to be arrays of bytes,
 * with the first byte being the length of the array. rn_inithead
 * is called with the offset in bits of the lookup key within the
 * array. If we use a sockaddr_in as the underlying type,
 * sin_len is conveniently located at offset 0, sin_addr is at
 * offset 4 and normally aligned.
 * But for portability, let's avoid assumptions and make the code explicit.
 */
#define KEY_LEN(v)      *((uint8_t *)&(v))
/*
 * Do not require radix to compare more than actual IPv4/IPv6 address
 */
#define KEY_LEN_INET    (offsetof(struct sockaddr_in, sin_addr) + sizeof(in_addr_t))
#define KEY_LEN_INET6   (offsetof(struct sa_in6, sin6_addr) + sizeof(struct in6_addr))

#define OFF_LEN_INET    (8 * offsetof(struct sockaddr_in, sin_addr))
#define OFF_LEN_INET6   (8 * offsetof(struct sa_in6, sin6_addr))

struct radix_addr_entry {
        struct radix_node       rn[2];
        struct sockaddr_in      addr;
        uint32_t                value;
        uint8_t                 masklen;
};

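/*
 * Compact sockaddr used as the IPv6 radix key: sockaddr_in6 minus
 * port, flowinfo and scope id, so sin6_addr sits right after the
 * 4-byte header and KEY_LEN_INET6 ends with the address.
 */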
struct sa_in6 {
        uint8_t                 sin6_len;
        uint8_t                 sin6_family;
        uint8_t                 pad[2];
        struct in6_addr         sin6_addr;
};

struct radix_addr_xentry {
        struct radix_node       rn[2];
        struct sa_in6           addr6;
        uint32_t                value;
        uint8_t                 masklen;
};

struct radix_cfg {
        struct radix_node_head *head4;
        struct radix_node_head *head6;
        size_t                  count4;
        size_t                  count6;
};

struct ta_buf_radix
{
        void *ent_ptr;
        struct sockaddr *addr_ptr;
        struct sockaddr *mask_ptr;
        union {
                struct {
                        struct sockaddr_in sa;
                        struct sockaddr_in ma;
                } a4;
                struct {
                        struct sa_in6 sa;
                        struct sa_in6 ma;
                } a6;
        } addr;
};

static int ta_lookup_radix(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val);
static int ta_init_radix(struct ip_fw_chain *ch, void **ta_state,
    struct table_info *ti, char *data, uint8_t tflags);
static int flush_radix_entry(struct radix_node *rn, void *arg);
static void ta_destroy_radix(void *ta_state, struct table_info *ti);
static void ta_dump_radix_tinfo(void *ta_state, struct table_info *ti,
    ipfw_ta_tinfo *tinfo);
static int ta_dump_radix_tentry(void *ta_state, struct table_info *ti,
    void *e, ipfw_obj_tentry *tent);
static int ta_find_radix_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent);
static void ta_foreach_radix(void *ta_state, struct table_info *ti,
    ta_foreach_f *f, void *arg);
static void tei_to_sockaddr_ent(struct tentry_info *tei, struct sockaddr *sa,
    struct sockaddr *ma, int *set_mask);
static int ta_prepare_add_radix(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_add_radix(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static int ta_prepare_del_radix(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_del_radix(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static void ta_flush_radix_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_need_modify_radix(void *ta_state, struct table_info *ti,
    uint32_t count, uint64_t *pflags);

static int
ta_lookup_radix(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val)
{
        struct radix_node_head *rnh;

        if (keylen == sizeof(in_addr_t)) {
                struct radix_addr_entry *ent;
                struct sockaddr_in sa;
                KEY_LEN(sa) = KEY_LEN_INET;
                sa.sin_addr.s_addr = *((in_addr_t *)key);
                rnh = (struct radix_node_head *)ti->state;
                ent = (struct radix_addr_entry *)(rnh->rnh_matchaddr(&sa, &rnh->rh));
                if (ent != NULL) {
                        *val = ent->value;
                        return (1);
                }
        } else {
                struct radix_addr_xentry *xent;
                struct sa_in6 sa6;
                KEY_LEN(sa6) = KEY_LEN_INET6;
                memcpy(&sa6.sin6_addr, key, sizeof(struct in6_addr));
                rnh = (struct radix_node_head *)ti->xstate;
                xent = (struct radix_addr_xentry *)(rnh->rnh_matchaddr(&sa6, &rnh->rh));
                if (xent != NULL) {
                        *val = xent->value;
                        return (1);
                }
        }

        return (0);
}

/*
 * New table
 */
static int
ta_init_radix(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
        struct radix_cfg *cfg;

        if (!rn_inithead(&ti->state, OFF_LEN_INET))
                return (ENOMEM);
        if (!rn_inithead(&ti->xstate, OFF_LEN_INET6)) {
                rn_detachhead(&ti->state);
                return (ENOMEM);
        }

        cfg = malloc(sizeof(struct radix_cfg), M_IPFW, M_WAITOK | M_ZERO);

        *ta_state = cfg;
        ti->lookup = ta_lookup_radix;

        return (0);
}
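
/*
 * Removes a single entry from the radix tree and frees it; used as an
 * rnh_walktree() callback when the whole table is being destroyed.
 */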
static int
flush_radix_entry(struct radix_node *rn, void *arg)
{
        struct radix_node_head * const rnh = arg;
        struct radix_addr_entry *ent;

        ent = (struct radix_addr_entry *)
            rnh->rnh_deladdr(rn->rn_key, rn->rn_mask, &rnh->rh);
        if (ent != NULL)
                free(ent, M_IPFW_TBL);
        return (0);
}

static void
ta_destroy_radix(void *ta_state, struct table_info *ti)
{
        struct radix_cfg *cfg;
        struct radix_node_head *rnh;

        cfg = (struct radix_cfg *)ta_state;

        rnh = (struct radix_node_head *)(ti->state);
        rnh->rnh_walktree(&rnh->rh, flush_radix_entry, rnh);
        rn_detachhead(&ti->state);

        rnh = (struct radix_node_head *)(ti->xstate);
        rnh->rnh_walktree(&rnh->rh, flush_radix_entry, rnh);
        rn_detachhead(&ti->xstate);

        free(cfg, M_IPFW);
}

/*
 * Provide algo-specific table info
 */
static void
ta_dump_radix_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{
        struct radix_cfg *cfg;

        cfg = (struct radix_cfg *)ta_state;

        tinfo->flags = IPFW_TATFLAGS_AFDATA | IPFW_TATFLAGS_AFITEM;
        tinfo->taclass4 = IPFW_TACLASS_RADIX;
        tinfo->count4 = cfg->count4;
        tinfo->itemsize4 = sizeof(struct radix_addr_entry);
        tinfo->taclass6 = IPFW_TACLASS_RADIX;
        tinfo->count6 = cfg->count6;
        tinfo->itemsize6 = sizeof(struct radix_addr_xentry);
}

static int
ta_dump_radix_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
        struct radix_addr_entry *n;
#ifdef INET6
        struct radix_addr_xentry *xn;
#endif

        n = (struct radix_addr_entry *)e;

        /* Guess IPv4/IPv6 radix by sockaddr family */
        if (n->addr.sin_family == AF_INET) {
                tent->k.addr.s_addr = n->addr.sin_addr.s_addr;
                tent->masklen = n->masklen;
                tent->subtype = AF_INET;
                tent->v.kidx = n->value;
#ifdef INET6
        } else {
                xn = (struct radix_addr_xentry *)e;
                memcpy(&tent->k, &xn->addr6.sin6_addr, sizeof(struct in6_addr));
                tent->masklen = xn->masklen;
                tent->subtype = AF_INET6;
                tent->v.kidx = xn->value;
#endif
        }

        return (0);
}

static int
ta_find_radix_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent)
{
        struct radix_node_head *rnh;
        void *e;

        e = NULL;
        if (tent->subtype == AF_INET) {
                struct sockaddr_in sa;
                KEY_LEN(sa) = KEY_LEN_INET;
                sa.sin_addr.s_addr = tent->k.addr.s_addr;
                rnh = (struct radix_node_head *)ti->state;
                e = rnh->rnh_matchaddr(&sa, &rnh->rh);
        } else {
                struct sa_in6 sa6;
                KEY_LEN(sa6) = KEY_LEN_INET6;
                memcpy(&sa6.sin6_addr, &tent->k.addr6, sizeof(struct in6_addr));
                rnh = (struct radix_node_head *)ti->xstate;
                e = rnh->rnh_matchaddr(&sa6, &rnh->rh);
        }

        if (e != NULL) {
                ta_dump_radix_tentry(ta_state, ti, e, tent);
                return (0);
        }

        return (ENOENT);
}

static void
ta_foreach_radix(void *ta_state, struct table_info *ti, ta_foreach_f *f,
    void *arg)
{
        struct radix_node_head *rnh;

        rnh = (struct radix_node_head *)(ti->state);
        rnh->rnh_walktree(&rnh->rh, (walktree_f_t *)f, arg);

        rnh = (struct radix_node_head *)(ti->xstate);
        rnh->rnh_walktree(&rnh->rh, (walktree_f_t *)f, arg);
}

#ifdef INET6
static inline void ipv6_writemask(struct in6_addr *addr6, uint8_t mask);

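/*
 * Set the leading @mask bits of @addr6 to one, producing an IPv6
 * netmask of prefix length @mask; 32-bit words beyond the last one
 * written are left untouched.
 */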
static inline void
ipv6_writemask(struct in6_addr *addr6, uint8_t mask)
{
        uint32_t *cp;

        for (cp = (uint32_t *)addr6; mask >= 32; mask -= 32)
                *cp++ = 0xFFFFFFFF;
        if (mask > 0)
                *cp = htonl(mask ? ~((1 << (32 - mask)) - 1) : 0);
}
#endif
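
/*
 * Convert the address/prefix from @tei into the sockaddr-style key @sa
 * and netmask @ma used by the radix code.  *@set_mask is set to 1 when
 * the prefix is shorter than a full host address, so the caller knows
 * the mask has to be handed to the radix insertion code as well.
 */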
static void
|
|
|
|
tei_to_sockaddr_ent(struct tentry_info *tei, struct sockaddr *sa,
|
|
|
|
struct sockaddr *ma, int *set_mask)
|
|
|
|
{
|
|
|
|
int mlen;
|
2014-10-10 17:24:56 +00:00
|
|
|
#ifdef INET
|
2014-08-01 08:28:18 +00:00
|
|
|
struct sockaddr_in *addr, *mask;
|
2014-10-10 17:24:56 +00:00
|
|
|
#endif
|
|
|
|
#ifdef INET6
|
2014-08-08 21:09:22 +00:00
|
|
|
struct sa_in6 *addr6, *mask6;
|
2014-10-10 17:24:56 +00:00
|
|
|
#endif
|
2014-08-01 08:28:18 +00:00
|
|
|
in_addr_t a4;
|
|
|
|
|
|
|
|
mlen = tei->masklen;
|
|
|
|
|
|
|
|
if (tei->subtype == AF_INET) {
|
|
|
|
#ifdef INET
|
|
|
|
addr = (struct sockaddr_in *)sa;
|
|
|
|
mask = (struct sockaddr_in *)ma;
|
|
|
|
/* Set 'total' structure length */
|
|
|
|
KEY_LEN(*addr) = KEY_LEN_INET;
|
|
|
|
KEY_LEN(*mask) = KEY_LEN_INET;
|
|
|
|
addr->sin_family = AF_INET;
|
|
|
|
mask->sin_addr.s_addr =
|
|
|
|
htonl(mlen ? ~((1 << (32 - mlen)) - 1) : 0);
|
|
|
|
a4 = *((in_addr_t *)tei->paddr);
|
|
|
|
addr->sin_addr.s_addr = a4 & mask->sin_addr.s_addr;
|
|
|
|
if (mlen != 32)
|
|
|
|
*set_mask = 1;
|
|
|
|
else
|
|
|
|
*set_mask = 0;
|
|
|
|
#endif
|
|
|
|
#ifdef INET6
|
|
|
|
} else if (tei->subtype == AF_INET6) {
|
|
|
|
/* IPv6 case */
|
2014-08-08 21:09:22 +00:00
|
|
|
addr6 = (struct sa_in6 *)sa;
|
|
|
|
mask6 = (struct sa_in6 *)ma;
|
2014-08-01 08:28:18 +00:00
|
|
|
/* Set 'total' structure length */
|
|
|
|
KEY_LEN(*addr6) = KEY_LEN_INET6;
|
|
|
|
KEY_LEN(*mask6) = KEY_LEN_INET6;
|
|
|
|
addr6->sin6_family = AF_INET6;
|
|
|
|
ipv6_writemask(&mask6->sin6_addr, mlen);
|
|
|
|
memcpy(&addr6->sin6_addr, tei->paddr, sizeof(struct in6_addr));
|
|
|
|
APPLY_MASK(&addr6->sin6_addr, &mask6->sin6_addr);
|
|
|
|
if (mlen != 128)
|
|
|
|
*set_mask = 1;
|
|
|
|
else
|
|
|
|
*set_mask = 0;
|
|
|
|
#endif
|
2014-10-10 18:31:35 +00:00
|
|
|
}
|
2014-08-01 08:28:18 +00:00
|
|
|
}
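A worked illustration of tei_to_sockaddr_ent() above, using one of the prefixes that appears in the commit-message examples later in this file:

/*
 * Example: an AF_INET tei with paddr = 10.0.0.5 and masklen = 30 yields
 * mask->sin_addr = 255.255.255.252 and addr->sin_addr = 10.0.0.4 (host
 * bits are cleared), with *set_mask = 1.  A /32 entry keeps the address
 * unchanged and leaves *set_mask = 0, so the caller passes no mask.
 */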
|
2014-06-14 10:58:39 +00:00
|
|
|
|
|
|
|
static int
|
2014-08-03 09:53:34 +00:00
|
|
|
ta_prepare_add_radix(struct ip_fw_chain *ch, struct tentry_info *tei,
|
2014-07-28 19:01:25 +00:00
|
|
|
void *ta_buf)
|
2014-06-14 10:58:39 +00:00
|
|
|
{
|
2014-08-14 21:43:20 +00:00
|
|
|
struct ta_buf_radix *tb;
|
|
|
|
struct radix_addr_entry *ent;
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET6
|
2014-08-14 21:43:20 +00:00
|
|
|
struct radix_addr_xentry *xent;
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-08-01 08:28:18 +00:00
|
|
|
struct sockaddr *addr, *mask;
|
|
|
|
int mlen, set_mask;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-14 21:43:20 +00:00
|
|
|
tb = (struct ta_buf_radix *)ta_buf;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
|
|
|
mlen = tei->masklen;
|
2014-08-01 08:28:18 +00:00
|
|
|
set_mask = 0;
|
2014-07-03 22:25:59 +00:00
|
|
|
|
|
|
|
if (tei->subtype == AF_INET) {
|
2014-06-14 10:58:39 +00:00
|
|
|
#ifdef INET
|
|
|
|
if (mlen > 32)
|
|
|
|
return (EINVAL);
|
|
|
|
ent = malloc(sizeof(*ent), M_IPFW_TBL, M_WAITOK | M_ZERO);
|
2014-07-09 18:52:12 +00:00
|
|
|
ent->masklen = mlen;
|
2014-08-01 08:28:18 +00:00
|
|
|
|
|
|
|
addr = (struct sockaddr *)&ent->addr;
|
|
|
|
mask = (struct sockaddr *)&tb->addr.a4.ma;
|
2014-06-14 10:58:39 +00:00
|
|
|
tb->ent_ptr = ent;
|
|
|
|
#endif
|
|
|
|
#ifdef INET6
|
2014-07-03 22:25:59 +00:00
|
|
|
} else if (tei->subtype == AF_INET6) {
|
2014-06-14 10:58:39 +00:00
|
|
|
/* IPv6 case */
|
|
|
|
if (mlen > 128)
|
|
|
|
return (EINVAL);
|
|
|
|
xent = malloc(sizeof(*xent), M_IPFW_TBL, M_WAITOK | M_ZERO);
|
2014-07-09 18:52:12 +00:00
|
|
|
xent->masklen = mlen;
|
2014-08-01 08:28:18 +00:00
|
|
|
|
|
|
|
addr = (struct sockaddr *)&xent->addr6;
|
|
|
|
mask = (struct sockaddr *)&tb->addr.a6.ma;
|
2014-06-14 10:58:39 +00:00
|
|
|
tb->ent_ptr = xent;
|
|
|
|
#endif
|
|
|
|
} else {
|
|
|
|
/* Unknown CIDR type */
|
|
|
|
return (EINVAL);
|
|
|
|
}
|
|
|
|
|
2014-08-01 08:28:18 +00:00
|
|
|
tei_to_sockaddr_ent(tei, addr, mask, &set_mask);
|
|
|
|
/* Set pointers */
|
|
|
|
tb->addr_ptr = addr;
|
|
|
|
if (set_mask != 0)
|
|
|
|
tb->mask_ptr = mask;
|
|
|
|
|
2014-06-14 10:58:39 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2014-08-03 09:53:34 +00:00
|
|
|
ta_add_radix(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
2014-06-14 10:58:39 +00:00
|
|
|
{
|
2014-08-03 12:19:45 +00:00
|
|
|
struct radix_cfg *cfg;
|
2014-06-14 10:58:39 +00:00
|
|
|
struct radix_node_head *rnh;
|
|
|
|
struct radix_node *rn;
|
2014-08-14 21:43:20 +00:00
|
|
|
struct ta_buf_radix *tb;
|
2014-08-03 08:32:54 +00:00
|
|
|
uint32_t *old_value, value;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-03 12:19:45 +00:00
|
|
|
cfg = (struct radix_cfg *)ta_state;
|
2014-08-14 21:43:20 +00:00
|
|
|
tb = (struct ta_buf_radix *)ta_buf;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-30 17:18:11 +00:00
|
|
|
/* Save current entry value from @tei */
|
|
|
|
if (tei->subtype == AF_INET) {
|
2014-06-14 10:58:39 +00:00
|
|
|
rnh = ti->state;
|
2014-08-30 17:18:11 +00:00
|
|
|
((struct radix_addr_entry *)tb->ent_ptr)->value = tei->value;
|
|
|
|
} else {
|
2014-06-14 10:58:39 +00:00
|
|
|
rnh = ti->xstate;
|
2014-08-30 17:18:11 +00:00
|
|
|
((struct radix_addr_xentry *)tb->ent_ptr)->value = tei->value;
|
|
|
|
}
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-01 15:17:46 +00:00
|
|
|
/* Search for an entry first */
|
2016-01-25 06:33:15 +00:00
|
|
|
rn = rnh->rnh_lookup(tb->addr_ptr, tb->mask_ptr, &rnh->rh);
|
2014-08-01 15:17:46 +00:00
|
|
|
if (rn != NULL) {
|
2014-07-03 22:25:59 +00:00
|
|
|
if ((tei->flags & TEI_FLAGS_UPDATE) == 0)
|
|
|
|
return (EEXIST);
|
|
|
|
/* Record already exists. Update value if we're asked to */
|
2014-08-03 08:32:54 +00:00
|
|
|
if (tei->subtype == AF_INET)
|
2014-08-14 21:43:20 +00:00
|
|
|
old_value = &((struct radix_addr_entry *)rn)->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
else
|
2014-08-14 21:43:20 +00:00
|
|
|
old_value = &((struct radix_addr_xentry *)rn)->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
|
|
|
|
value = *old_value;
|
|
|
|
*old_value = tei->value;
|
|
|
|
tei->value = value;
|
2014-07-03 22:25:59 +00:00
|
|
|
|
|
|
|
/* Indicate that update has happened instead of addition */
|
|
|
|
tei->flags |= TEI_FLAGS_UPDATED;
|
2014-07-29 08:00:13 +00:00
|
|
|
*pnum = 0;
|
2014-07-09 18:52:12 +00:00
|
|
|
|
|
|
|
return (0);
|
2014-07-03 22:25:59 +00:00
|
|
|
}
|
|
|
|
|
2014-08-01 15:17:46 +00:00
|
|
|
if ((tei->flags & TEI_FLAGS_DONTADD) != 0)
|
|
|
|
return (EFBIG);
|
|
|
|
|
2016-01-25 06:33:15 +00:00
|
|
|
rn = rnh->rnh_addaddr(tb->addr_ptr, tb->mask_ptr, &rnh->rh,tb->ent_ptr);
|
2014-08-01 15:17:46 +00:00
|
|
|
if (rn == NULL) {
|
|
|
|
/* Unknown error */
|
|
|
|
return (EINVAL);
|
|
|
|
}
|
|
|
|
|
2014-08-03 12:19:45 +00:00
|
|
|
if (tei->subtype == AF_INET)
|
|
|
|
cfg->count4++;
|
|
|
|
else
|
|
|
|
cfg->count6++;
|
2014-07-03 22:25:59 +00:00
|
|
|
tb->ent_ptr = NULL;
|
2014-07-29 08:00:13 +00:00
|
|
|
*pnum = 1;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2014-08-03 09:53:34 +00:00
|
|
|
ta_prepare_del_radix(struct ip_fw_chain *ch, struct tentry_info *tei,
|
2014-07-28 19:01:25 +00:00
|
|
|
void *ta_buf)
|
2014-06-14 10:58:39 +00:00
|
|
|
{
|
2014-08-14 21:43:20 +00:00
|
|
|
struct ta_buf_radix *tb;
|
2014-08-01 08:28:18 +00:00
|
|
|
struct sockaddr *addr, *mask;
|
|
|
|
int mlen, set_mask;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-14 21:43:20 +00:00
|
|
|
tb = (struct ta_buf_radix *)ta_buf;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
|
|
|
mlen = tei->masklen;
|
2014-08-01 08:28:18 +00:00
|
|
|
set_mask = 0;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-07-03 22:25:59 +00:00
|
|
|
if (tei->subtype == AF_INET) {
|
2014-07-09 18:52:12 +00:00
|
|
|
if (mlen > 32)
|
|
|
|
return (EINVAL);
|
2014-08-01 08:28:18 +00:00
|
|
|
|
|
|
|
addr = (struct sockaddr *)&tb->addr.a4.sa;
|
|
|
|
mask = (struct sockaddr *)&tb->addr.a4.ma;
|
2014-06-14 10:58:39 +00:00
|
|
|
#ifdef INET6
|
2014-07-03 22:25:59 +00:00
|
|
|
} else if (tei->subtype == AF_INET6) {
|
2014-06-14 10:58:39 +00:00
|
|
|
if (mlen > 128)
|
|
|
|
return (EINVAL);
|
2014-08-01 08:28:18 +00:00
|
|
|
|
|
|
|
addr = (struct sockaddr *)&tb->addr.a6.sa;
|
|
|
|
mask = (struct sockaddr *)&tb->addr.a6.ma;
|
2014-06-14 10:58:39 +00:00
|
|
|
#endif
|
|
|
|
} else
|
|
|
|
return (EINVAL);
|
|
|
|
|
2014-08-01 08:28:18 +00:00
|
|
|
tei_to_sockaddr_ent(tei, addr, mask, &set_mask);
|
|
|
|
tb->addr_ptr = addr;
|
|
|
|
if (set_mask != 0)
|
|
|
|
tb->mask_ptr = mask;
|
|
|
|
|
2014-06-14 10:58:39 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2014-08-03 09:53:34 +00:00
|
|
|
ta_del_radix(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
2014-06-14 10:58:39 +00:00
|
|
|
{
|
2014-08-03 12:19:45 +00:00
|
|
|
struct radix_cfg *cfg;
|
2014-06-14 10:58:39 +00:00
|
|
|
struct radix_node_head *rnh;
|
|
|
|
struct radix_node *rn;
|
2014-08-14 21:43:20 +00:00
|
|
|
struct ta_buf_radix *tb;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-03 12:19:45 +00:00
|
|
|
cfg = (struct radix_cfg *)ta_state;
|
2014-08-14 21:43:20 +00:00
|
|
|
tb = (struct ta_buf_radix *)ta_buf;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-07-03 22:25:59 +00:00
|
|
|
if (tei->subtype == AF_INET)
|
2014-06-14 10:58:39 +00:00
|
|
|
rnh = ti->state;
|
|
|
|
else
|
|
|
|
rnh = ti->xstate;
|
|
|
|
|
2016-01-25 06:33:15 +00:00
|
|
|
rn = rnh->rnh_deladdr(tb->addr_ptr, tb->mask_ptr, &rnh->rh);
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-11 17:34:25 +00:00
|
|
|
if (rn == NULL)
|
|
|
|
return (ENOENT);
|
|
|
|
|
2014-08-03 08:32:54 +00:00
|
|
|
/* Save entry value to @tei */
|
|
|
|
if (tei->subtype == AF_INET)
|
2014-08-14 21:43:20 +00:00
|
|
|
tei->value = ((struct radix_addr_entry *)rn)->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
else
|
2014-08-14 21:43:20 +00:00
|
|
|
tei->value = ((struct radix_addr_xentry *)rn)->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
|
2014-06-14 10:58:39 +00:00
|
|
|
tb->ent_ptr = rn;
|
|
|
|
|
2014-08-03 12:19:45 +00:00
|
|
|
if (tei->subtype == AF_INET)
|
|
|
|
cfg->count4--;
|
|
|
|
else
|
|
|
|
cfg->count6--;
|
2014-07-29 08:00:13 +00:00
|
|
|
*pnum = 1;
|
|
|
|
|
2014-06-14 10:58:39 +00:00
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
2014-08-03 09:53:34 +00:00
|
|
|
ta_flush_radix_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
|
2014-07-28 19:01:25 +00:00
|
|
|
void *ta_buf)
|
2014-06-14 10:58:39 +00:00
|
|
|
{
|
2014-08-14 21:43:20 +00:00
|
|
|
struct ta_buf_radix *tb;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-08-14 21:43:20 +00:00
|
|
|
tb = (struct ta_buf_radix *)ta_buf;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-07-03 22:25:59 +00:00
|
|
|
if (tb->ent_ptr != NULL)
|
|
|
|
free(tb->ent_ptr, M_IPFW_TBL);
|
2014-06-14 10:58:39 +00:00
|
|
|
}
|
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
static int
|
2014-08-12 14:09:15 +00:00
|
|
|
ta_need_modify_radix(void *ta_state, struct table_info *ti, uint32_t count,
|
2014-08-02 17:18:47 +00:00
|
|
|
uint64_t *pflags)
|
|
|
|
{
|
|
|
|
|
|
|
|
/*
|
2014-08-03 09:04:36 +00:00
|
|
|
* radix does not require additional memory allocations
|
2014-08-02 17:18:47 +00:00
|
|
|
* other than the nodes themselves. Adding new masks to the tree does
|
|
|
|
* require new nodes, too, but we don't have any API to call (and we
|
|
|
|
* don't know which sizes we need).
|
|
|
|
*/
|
2014-08-12 14:09:15 +00:00
|
|
|
return (0);
|
2014-08-02 17:18:47 +00:00
|
|
|
}
|
|
|
|
|
2014-08-14 21:43:20 +00:00
|
|
|
struct table_algo addr_radix = {
|
|
|
|
.name = "addr:radix",
|
|
|
|
.type = IPFW_TABLE_ADDR,
|
2014-08-01 07:35:17 +00:00
|
|
|
.flags = TA_FLAG_DEFAULT,
|
2014-08-14 21:43:20 +00:00
|
|
|
.ta_buf_size = sizeof(struct ta_buf_radix),
|
2014-06-14 10:58:39 +00:00
|
|
|
.init = ta_init_radix,
|
|
|
|
.destroy = ta_destroy_radix,
|
2014-08-03 09:53:34 +00:00
|
|
|
.prepare_add = ta_prepare_add_radix,
|
|
|
|
.prepare_del = ta_prepare_del_radix,
|
|
|
|
.add = ta_add_radix,
|
|
|
|
.del = ta_del_radix,
|
|
|
|
.flush_entry = ta_flush_radix_entry,
|
2014-06-14 10:58:39 +00:00
|
|
|
.foreach = ta_foreach_radix,
|
2014-07-06 18:16:04 +00:00
|
|
|
.dump_tentry = ta_dump_radix_tentry,
|
|
|
|
.find_tentry = ta_find_radix_tentry,
|
2014-08-03 12:19:45 +00:00
|
|
|
.dump_tinfo = ta_dump_radix_tinfo,
|
2014-08-12 14:09:15 +00:00
|
|
|
.need_modify = ta_need_modify_radix,
|
2014-06-14 10:58:39 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
|
* Add new ipfw cidr algorithm: hash table.
The algorithm works with both IPv4 and IPv6 prefixes; /32 and /128
masks are assumed by default.
It works the following way: the input IP address is masked to the specified
mask, hashed and searched for inside the hash bucket.
The current implementation does not support the "lookup" method or hash auto-resize.
This will be changed soon.
Some examples:
ipfw table mi_test2 create type cidr algo cidr:hash
ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
ipfw table mi_test2 info
+++ table(mi_test2), set(0) +++
type: cidr, kindex: 7
valtype: number, references: 0
algorithm: cidr:hash
items: 0, size: 220
ipfw table mi_test info
+++ table(mi_test), set(0) +++
type: cidr, kindex: 6
valtype: number, references: 0
algorithm: cidr:hash masks=/30,/64
items: 0, size: 220
ipfw table mi_test add 10.0.0.5/30
ipfw table mi_test add 10.0.0.8/30
ipfw table mi_test add 2a02:6b8:b010::1/64 25
ipfw table mi_test list
+++ table(mi_test), set(0) +++
10.0.0.4/30 0
10.0.0.8/30 0
2a02:6b8:b010::/64 25
2014-07-29 19:49:38 +00:00
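A hedged sketch of the lookup scheme just described, reduced to the IPv4 case: drop the host bits of the incoming address, hash the result, then walk one chained bucket. All names here (struct v4ent, v4_hash_lookup) are illustrative; the real implementation below uses struct chashentry, hash_ip() and the packed ti->data word.

#include <stddef.h>
#include <stdint.h>

struct v4ent {
	struct v4ent	*next;
	uint32_t	 key;	/* address >> (32 - masklen), host order */
	uint32_t	 value;
};

static int
v4_hash_lookup(struct v4ent **buckets, uint32_t nbuckets, uint8_t masklen,
    uint32_t addr, uint32_t *value)
{
	struct v4ent *e;
	uint32_t key;

	/* Keep only the prefix bits (masklen is assumed to be 0..32). */
	key = (masklen == 0) ? 0 : (addr >> (32 - masklen));
	/* Hash into one of the chained buckets and search it linearly. */
	for (e = buckets[key % nbuckets]; e != NULL; e = e->next) {
		if (e->key == key) {
			*value = e->value;
			return (1);
		}
	}
	return (0);
}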
|
|
|
/*
|
2014-08-14 21:43:20 +00:00
|
|
|
* addr:hash cmds
|
2014-07-29 19:49:38 +00:00
|
|
|
*
|
|
|
|
*
|
|
|
|
* ti->data:
|
|
|
|
* [inv.mask4][inv.mask6][log2hsize4][log2hsize6]
|
|
|
|
* [ 8][ 8][ 8][ 8]
|
|
|
|
*
|
|
|
|
* inv.mask4: 32 - mask
|
|
|
|
* inv.mask6:
|
|
|
|
* 1) _slow lookup: mask
|
|
|
|
* 2) _aligned: (128 - mask) / 8
|
|
|
|
* 3) _64: 8
|
2014-07-30 12:39:49 +00:00
|
|
|
*
|
|
|
|
*
|
|
|
|
* pflags:
|
|
|
|
* [v4=1/v6=0][hsize]
|
|
|
|
* [ 32][ 32]
|
2014-07-29 19:49:38 +00:00
|
|
|
*/
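To make the packing above concrete, here is a hedged helper that decodes ti->data the same way the ta_lookup_chash_* functions below do; chash_unpack_tidata() is a hypothetical name used only for illustration and assumes the usual fixed-width types are available.

static inline void
chash_unpack_tidata(uint64_t data, uint8_t *imask4, uint8_t *imask6,
    uint32_t *hsize4, uint32_t *hsize6)
{

	*imask4 = data >> 24;			/* inv.mask4: 32 - mask */
	*imask6 = (data & 0xFF0000) >> 16;	/* inv.mask6, per the table above */
	*hsize4 = 1 << ((data & 0xFFFF) >> 8);	/* buckets = 2^log2hsize4 */
	*hsize6 = 1 << (data & 0xFF);		/* buckets = 2^log2hsize6 */
}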
|
|
|
|
|
|
|
|
struct chashentry;
|
|
|
|
|
|
|
|
SLIST_HEAD(chashbhead, chashentry);
|
|
|
|
|
|
|
|
struct chash_cfg {
|
|
|
|
struct chashbhead *head4;
|
|
|
|
struct chashbhead *head6;
|
|
|
|
size_t size4;
|
|
|
|
size_t size6;
|
2014-07-30 12:39:49 +00:00
|
|
|
size_t items4;
|
|
|
|
size_t items6;
|
2014-07-29 19:49:38 +00:00
|
|
|
uint8_t mask4;
|
|
|
|
uint8_t mask6;
|
|
|
|
};
|
|
|
|
|
|
|
|
struct chashentry {
|
|
|
|
SLIST_ENTRY(chashentry) next;
|
|
|
|
uint32_t value;
|
|
|
|
uint32_t type;
|
|
|
|
union {
|
|
|
|
uint32_t a4; /* Host format */
|
|
|
|
struct in6_addr a6; /* Network format */
|
|
|
|
} a;
|
|
|
|
};
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
struct ta_buf_chash
|
|
|
|
{
|
|
|
|
void *ent_ptr;
|
|
|
|
struct chashentry ent;
|
|
|
|
};
|
|
|
|
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET
|
2014-10-10 17:24:56 +00:00
|
|
|
static __inline uint32_t hash_ip(uint32_t addr, int hsize);
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
|
|
|
#ifdef INET6
|
2014-10-10 17:24:56 +00:00
|
|
|
static __inline uint32_t hash_ip6(struct in6_addr *addr6, int hsize);
|
|
|
|
static __inline uint16_t hash_ip64(struct in6_addr *addr6, int hsize);
|
|
|
|
static __inline uint32_t hash_ip6_slow(struct in6_addr *addr6, void *key,
|
|
|
|
int mask, int hsize);
|
|
|
|
static __inline uint32_t hash_ip6_al(struct in6_addr *addr6, void *key, int mask,
|
|
|
|
int hsize);
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-10-10 17:24:56 +00:00
|
|
|
static int ta_lookup_chash_slow(struct table_info *ti, void *key, uint32_t keylen,
|
|
|
|
uint32_t *val);
|
|
|
|
static int ta_lookup_chash_aligned(struct table_info *ti, void *key,
|
|
|
|
uint32_t keylen, uint32_t *val);
|
|
|
|
static int ta_lookup_chash_64(struct table_info *ti, void *key, uint32_t keylen,
|
|
|
|
uint32_t *val);
|
|
|
|
static int chash_parse_opts(struct chash_cfg *cfg, char *data);
|
|
|
|
static void ta_print_chash_config(void *ta_state, struct table_info *ti,
|
|
|
|
char *buf, size_t bufsize);
|
2014-10-22 21:20:37 +00:00
|
|
|
static int ta_log2(uint32_t v);
|
2014-10-10 17:24:56 +00:00
|
|
|
static int ta_init_chash(struct ip_fw_chain *ch, void **ta_state,
|
|
|
|
struct table_info *ti, char *data, uint8_t tflags);
|
|
|
|
static void ta_destroy_chash(void *ta_state, struct table_info *ti);
|
|
|
|
static void ta_dump_chash_tinfo(void *ta_state, struct table_info *ti,
|
|
|
|
ipfw_ta_tinfo *tinfo);
|
|
|
|
static int ta_dump_chash_tentry(void *ta_state, struct table_info *ti,
|
|
|
|
void *e, ipfw_obj_tentry *tent);
|
|
|
|
static uint32_t hash_ent(struct chashentry *ent, int af, int mlen,
|
|
|
|
uint32_t size);
|
|
|
|
static int tei_to_chash_ent(struct tentry_info *tei, struct chashentry *ent);
|
|
|
|
static int ta_find_chash_tentry(void *ta_state, struct table_info *ti,
|
|
|
|
ipfw_obj_tentry *tent);
|
|
|
|
static void ta_foreach_chash(void *ta_state, struct table_info *ti,
|
|
|
|
ta_foreach_f *f, void *arg);
|
|
|
|
static int ta_prepare_add_chash(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf);
|
|
|
|
static int ta_add_chash(void *ta_state, struct table_info *ti,
|
|
|
|
struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
|
|
|
|
static int ta_prepare_del_chash(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf);
|
|
|
|
static int ta_del_chash(void *ta_state, struct table_info *ti,
|
|
|
|
struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
|
|
|
|
static void ta_flush_chash_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf);
|
|
|
|
static int ta_need_modify_chash(void *ta_state, struct table_info *ti,
|
|
|
|
uint32_t count, uint64_t *pflags);
|
|
|
|
static int ta_prepare_mod_chash(void *ta_buf, uint64_t *pflags);
|
|
|
|
static int ta_fill_mod_chash(void *ta_state, struct table_info *ti, void *ta_buf,
|
|
|
|
uint64_t *pflags);
|
|
|
|
static void ta_modify_chash(void *ta_state, struct table_info *ti, void *ta_buf,
|
|
|
|
uint64_t pflags);
|
|
|
|
static void ta_flush_mod_chash(void *ta_buf);
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET
|
2014-07-29 19:49:38 +00:00
|
|
|
static __inline uint32_t
|
|
|
|
hash_ip(uint32_t addr, int hsize)
|
|
|
|
{
|
|
|
|
|
|
|
|
return (addr % (hsize - 1));
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET6
|
2014-07-29 19:49:38 +00:00
|
|
|
static __inline uint32_t
|
|
|
|
hash_ip6(struct in6_addr *addr6, int hsize)
|
|
|
|
{
|
|
|
|
uint32_t i;
|
|
|
|
|
|
|
|
i = addr6->s6_addr32[0] ^ addr6->s6_addr32[1] ^
|
|
|
|
addr6->s6_addr32[2] ^ addr6->s6_addr32[3];
|
|
|
|
|
|
|
|
return (i % (hsize - 1));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static __inline uint16_t
|
|
|
|
hash_ip64(struct in6_addr *addr6, int hsize)
|
|
|
|
{
|
|
|
|
uint32_t i;
|
|
|
|
|
|
|
|
i = addr6->s6_addr32[0] ^ addr6->s6_addr32[1];
|
|
|
|
|
|
|
|
return (i % (hsize - 1));
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static __inline uint32_t
|
|
|
|
hash_ip6_slow(struct in6_addr *addr6, void *key, int mask, int hsize)
|
|
|
|
{
|
|
|
|
struct in6_addr mask6;
|
|
|
|
|
|
|
|
ipv6_writemask(&mask6, mask);
|
|
|
|
memcpy(addr6, key, sizeof(struct in6_addr));
|
|
|
|
APPLY_MASK(addr6, &mask6);
|
|
|
|
return (hash_ip6(addr6, hsize));
|
|
|
|
}
|
|
|
|
|
|
|
|
static __inline uint32_t
|
|
|
|
hash_ip6_al(struct in6_addr *addr6, void *key, int mask, int hsize)
|
|
|
|
{
|
|
|
|
uint64_t *paddr;
|
|
|
|
|
|
|
|
paddr = (uint64_t *)addr6;
|
|
|
|
*paddr = 0;
|
|
|
|
*(paddr + 1) = 0;
|
|
|
|
memcpy(addr6, key, mask);
|
|
|
|
return (hash_ip6(addr6, hsize));
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
static int
|
|
|
|
ta_lookup_chash_slow(struct table_info *ti, void *key, uint32_t keylen,
|
|
|
|
uint32_t *val)
|
|
|
|
{
|
|
|
|
struct chashbhead *head;
|
|
|
|
struct chashentry *ent;
|
|
|
|
uint16_t hash, hsize;
|
|
|
|
uint8_t imask;
|
|
|
|
|
|
|
|
if (keylen == sizeof(in_addr_t)) {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET
|
2014-07-29 19:49:38 +00:00
|
|
|
head = (struct chashbhead *)ti->state;
|
|
|
|
imask = ti->data >> 24;
|
|
|
|
hsize = 1 << ((ti->data & 0xFFFF) >> 8);
|
|
|
|
uint32_t a;
|
|
|
|
a = ntohl(*((in_addr_t *)key));
|
|
|
|
a = a >> imask;
|
|
|
|
hash = hash_ip(a, hsize);
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
if (ent->a.a4 == a) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
} else {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET6
|
2014-07-29 19:49:38 +00:00
|
|
|
/* IPv6: worst scenario: non-round mask */
|
|
|
|
struct in6_addr addr6;
|
|
|
|
head = (struct chashbhead *)ti->xstate;
|
|
|
|
imask = (ti->data & 0xFF0000) >> 16;
|
|
|
|
hsize = 1 << (ti->data & 0xFF);
|
|
|
|
hash = hash_ip6_slow(&addr6, key, imask, hsize);
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
if (memcmp(&ent->a.a6, &addr6, 16) == 0) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_lookup_chash_aligned(struct table_info *ti, void *key, uint32_t keylen,
|
|
|
|
uint32_t *val)
|
|
|
|
{
|
|
|
|
struct chashbhead *head;
|
|
|
|
struct chashentry *ent;
|
|
|
|
uint16_t hash, hsize;
|
|
|
|
uint8_t imask;
|
|
|
|
|
|
|
|
if (keylen == sizeof(in_addr_t)) {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET
|
2014-07-29 19:49:38 +00:00
|
|
|
head = (struct chashbhead *)ti->state;
|
|
|
|
imask = ti->data >> 24;
|
|
|
|
hsize = 1 << ((ti->data & 0xFFFF) >> 8);
|
|
|
|
uint32_t a;
|
|
|
|
a = ntohl(*((in_addr_t *)key));
|
|
|
|
a = a >> imask;
|
|
|
|
hash = hash_ip(a, hsize);
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
if (ent->a.a4 == a) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
} else {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET6
|
2014-07-29 19:49:38 +00:00
|
|
|
/* IPv6: aligned to 8bit mask */
|
|
|
|
struct in6_addr addr6;
|
|
|
|
uint64_t *paddr, *ptmp;
|
|
|
|
head = (struct chashbhead *)ti->xstate;
|
|
|
|
imask = (ti->data & 0xFF0000) >> 16;
|
|
|
|
hsize = 1 << (ti->data & 0xFF);
|
|
|
|
|
|
|
|
hash = hash_ip6_al(&addr6, key, imask, hsize);
|
|
|
|
paddr = (uint64_t *)&addr6;
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
ptmp = (uint64_t *)&ent->a.a6;
|
|
|
|
if (paddr[0] == ptmp[0] && paddr[1] == ptmp[1]) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_lookup_chash_64(struct table_info *ti, void *key, uint32_t keylen,
|
|
|
|
uint32_t *val)
|
|
|
|
{
|
|
|
|
struct chashbhead *head;
|
|
|
|
struct chashentry *ent;
|
|
|
|
uint16_t hash, hsize;
|
|
|
|
uint8_t imask;
|
|
|
|
|
|
|
|
if (keylen == sizeof(in_addr_t)) {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET
|
2014-07-29 19:49:38 +00:00
|
|
|
head = (struct chashbhead *)ti->state;
|
|
|
|
imask = ti->data >> 24;
|
|
|
|
hsize = 1 << ((ti->data & 0xFFFF) >> 8);
|
|
|
|
uint32_t a;
|
|
|
|
a = ntohl(*((in_addr_t *)key));
|
|
|
|
a = a >> imask;
|
|
|
|
hash = hash_ip(a, hsize);
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
if (ent->a.a4 == a) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
} else {
|
2014-10-10 18:31:35 +00:00
|
|
|
#ifdef INET6
|
2014-07-29 19:49:38 +00:00
|
|
|
/* IPv6: /64 */
|
|
|
|
uint64_t a6, *paddr;
|
|
|
|
head = (struct chashbhead *)ti->xstate;
|
|
|
|
paddr = (uint64_t *)key;
|
|
|
|
hsize = 1 << (ti->data & 0xFF);
|
|
|
|
a6 = *paddr;
|
|
|
|
hash = hash_ip64((struct in6_addr *)key, hsize);
|
|
|
|
SLIST_FOREACH(ent, &head[hash], next) {
|
|
|
|
paddr = (uint64_t *)&ent->a.a6;
|
|
|
|
if (a6 == *paddr) {
|
|
|
|
*val = ent->value;
|
|
|
|
return (1);
|
|
|
|
}
|
|
|
|
}
|
2014-10-10 18:31:35 +00:00
|
|
|
#endif
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2014-08-03 09:04:36 +00:00
|
|
|
chash_parse_opts(struct chash_cfg *cfg, char *data)
|
2014-07-29 19:49:38 +00:00
|
|
|
{
|
|
|
|
char *pdel, *pend, *s;
|
|
|
|
int mask4, mask6;
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
mask4 = cfg->mask4;
|
|
|
|
mask6 = cfg->mask6;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
if (data == NULL)
|
|
|
|
return (0);
|
|
|
|
if ((pdel = strchr(data, ' ')) == NULL)
|
|
|
|
return (0);
|
|
|
|
while (*pdel == ' ')
|
|
|
|
pdel++;
|
|
|
|
if (strncmp(pdel, "masks=", 6) != 0)
|
|
|
|
return (EINVAL);
|
|
|
|
if ((s = strchr(pdel, ' ')) != NULL)
|
|
|
|
*s++ = '\0';
|
|
|
|
|
|
|
|
pdel += 6;
|
|
|
|
/* Need /XX[,/YY] */
|
|
|
|
if (*pdel++ != '/')
|
|
|
|
return (EINVAL);
|
|
|
|
mask4 = strtol(pdel, &pend, 10);
|
|
|
|
if (*pend == ',') {
|
|
|
|
/* ,/YY */
|
|
|
|
pdel = pend + 1;
|
|
|
|
if (*pdel++ != '/')
|
|
|
|
return (EINVAL);
|
|
|
|
mask6 = strtol(pdel, &pend, 10);
|
|
|
|
if (*pend != '\0')
|
|
|
|
return (EINVAL);
|
|
|
|
} else if (*pend != '\0')
|
|
|
|
return (EINVAL);
|
|
|
|
|
|
|
|
if (mask4 < 0 || mask4 > 32 || mask6 < 0 || mask6 > 128)
|
|
|
|
return (EINVAL);
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg->mask4 = mask4;
|
|
|
|
cfg->mask6 = mask6;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
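A hedged usage sketch for chash_parse_opts() above; chash_opts_example() is a hypothetical caller, and the option string follows the commit-message examples (the whole algorithm line, name included, is handed in and parsed in place).

static int
chash_opts_example(struct chash_cfg *cfg)
{
	char line[] = "addr:hash masks=/30,/64";
	int error;

	cfg->mask4 = 32;	/* defaults before parsing */
	cfg->mask6 = 128;
	error = chash_parse_opts(cfg, line);
	/* On success error == 0, and cfg->mask4 == 30, cfg->mask6 == 64. */
	return (error);
}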
|
|
|
|
|
|
|
|
static void
|
|
|
|
ta_print_chash_config(void *ta_state, struct table_info *ti, char *buf,
|
|
|
|
size_t bufsize)
|
|
|
|
{
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
if (cfg->mask4 != 32 || cfg->mask6 != 128)
|
2014-08-14 21:43:20 +00:00
|
|
|
snprintf(buf, bufsize, "%s masks=/%d,/%d", "addr:hash",
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg->mask4, cfg->mask6);
|
2014-07-29 19:49:38 +00:00
|
|
|
else
|
2014-08-14 21:43:20 +00:00
|
|
|
snprintf(buf, bufsize, "%s", "addr:hash");
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ipfw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options are passed there as a string)
* Assume every table type can be customized by flags; use u8 to store the "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() a separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][,dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
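
/*
 * Returns floor(log2(v)); used below to encode the IPv4/IPv6 bucket
 * counts compactly inside ti->data for the runtime lookup functions.
 */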
static int
ta_log2(uint32_t v)
{
	uint32_t r;

	r = 0;
	while (v >>= 1)
		r++;

	return (r);
}
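
/*
 * Chained hash algorithm ("addr:hash", introduced as "cidr:hash"):
 * prefixes are truncated to one mask length per address family
 * (/32 and /128 unless overridden), masked, hashed and chained into
 * per-bucket SLISTs.  Typical userland usage, taken from the original
 * commit message (which still uses the older "cidr" type name):
 *
 *   ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
 *   ipfw table mi_test add 10.0.0.5/30
 *   ipfw table mi_test add 2a02:6b8:b010::1/64 25
 */
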
/*
 * New table.
 * We assume 'data' to be either NULL or the following format:
 * 'addr:hash [masks=/32[,/128]]'
 */
static int
ta_init_chash(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
	int error, i;
	uint32_t hsize;
	struct chash_cfg *cfg;

	cfg = malloc(sizeof(struct chash_cfg), M_IPFW, M_WAITOK | M_ZERO);

	cfg->mask4 = 32;
	cfg->mask6 = 128;

	if ((error = chash_parse_opts(cfg, data)) != 0) {
		free(cfg, M_IPFW);
		return (error);
	}

	cfg->size4 = 128;
	cfg->size6 = 128;

	cfg->head4 = malloc(sizeof(struct chashbhead) * cfg->size4, M_IPFW,
	    M_WAITOK | M_ZERO);
	cfg->head6 = malloc(sizeof(struct chashbhead) * cfg->size6, M_IPFW,
	    M_WAITOK | M_ZERO);
	for (i = 0; i < cfg->size4; i++)
		SLIST_INIT(&cfg->head4[i]);
	for (i = 0; i < cfg->size6; i++)
		SLIST_INIT(&cfg->head6[i]);

	*ta_state = cfg;
	ti->state = cfg->head4;
	ti->xstate = cfg->head6;
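
	/*
	 * ti->data caches the per-AF mask lengths and the log2 bucket
	 * counts so the datapath lookup functions can run without
	 * dereferencing cfg; the exact bit layout is whatever the
	 * selected ta_lookup_chash_* variant expects.
	 */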
	/* Store data depending on v6 mask length */
	hsize = ta_log2(cfg->size4) << 8 | ta_log2(cfg->size6);
	if (cfg->mask6 == 64) {
		ti->data = (32 - cfg->mask4) << 24 | (128 - cfg->mask6) << 16 |
		    hsize;
		ti->lookup = ta_lookup_chash_64;
	} else if ((cfg->mask6 % 8) == 0) {
		ti->data = (32 - cfg->mask4) << 24 |
		    cfg->mask6 << 13 | hsize;
		ti->lookup = ta_lookup_chash_aligned;
	} else {
		/* don't do that! */
		ti->data = (32 - cfg->mask4) << 24 |
		    cfg->mask6 << 16 | hsize;
		ti->lookup = ta_lookup_chash_slow;
	}

	return (0);
}
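
/*
 * Destroys an instance: frees every chained entry in both address
 * families, then the bucket arrays and the configuration itself.
 */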
static void
ta_destroy_chash(void *ta_state, struct table_info *ti)
{
	struct chash_cfg *cfg;
	struct chashentry *ent, *ent_next;
	int i;

	cfg = (struct chash_cfg *)ta_state;

	for (i = 0; i < cfg->size4; i++)
		SLIST_FOREACH_SAFE(ent, &cfg->head4[i], next, ent_next)
			free(ent, M_IPFW_TBL);
	for (i = 0; i < cfg->size6; i++)
		SLIST_FOREACH_SAFE(ent, &cfg->head6[i], next, ent_next)
			free(ent, M_IPFW_TBL);

	free(cfg->head4, M_IPFW);
	free(cfg->head6, M_IPFW);

	free(cfg, M_IPFW);
}
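
/*
 * Exports per-AF algorithm details (class, bucket count, item count,
 * per-item size) via ipfw_ta_tinfo for the "ipfw table ... info" output.
 */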
static void
ta_dump_chash_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{
	struct chash_cfg *cfg;

	cfg = (struct chash_cfg *)ta_state;

	tinfo->flags = IPFW_TATFLAGS_AFDATA | IPFW_TATFLAGS_AFITEM;
	tinfo->taclass4 = IPFW_TACLASS_HASH;
	tinfo->size4 = cfg->size4;
	tinfo->count4 = cfg->items4;
	tinfo->itemsize4 = sizeof(struct chashentry);
	tinfo->taclass6 = IPFW_TACLASS_HASH;
	tinfo->size6 = cfg->size6;
	tinfo->count6 = cfg->items6;
	tinfo->itemsize6 = sizeof(struct chashentry);
}
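
/*
 * Converts an internal chashentry into the exported ipfw_obj_tentry
 * format: key, mask length, address family and value index.
 */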
static int
ta_dump_chash_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
	struct chash_cfg *cfg;
	struct chashentry *ent;

	cfg = (struct chash_cfg *)ta_state;
	ent = (struct chashentry *)e;
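
	/*
	 * Keys are stored pre-masked: the IPv4 key is kept shifted right
	 * by the unused bits (see tei_to_chash_ent()), so shift it back
	 * and convert to network byte order before exporting.
	 */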
	if (ent->type == AF_INET) {
		tent->k.addr.s_addr = htonl(ent->a.a4 << (32 - cfg->mask4));
		tent->masklen = cfg->mask4;
		tent->subtype = AF_INET;
		tent->v.kidx = ent->value;
#ifdef INET6
	} else {
		memcpy(&tent->k, &ent->a.a6, sizeof(struct in6_addr));
		tent->masklen = cfg->mask6;
		tent->subtype = AF_INET6;
		tent->v.kidx = ent->value;
#endif
	}

	return (0);
}
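
/*
 * Hashes an already-masked entry into a bucket index.  With an exactly
 * /64 IPv6 mask only the leading 64 bits can be non-zero, so the
 * cheaper hash_ip64() is used.
 */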
static uint32_t
hash_ent(struct chashentry *ent, int af, int mlen, uint32_t size)
{
	uint32_t hash;

	hash = 0;

	if (af == AF_INET) {
#ifdef INET
		hash = hash_ip(ent->a.a4, size);
#endif
	} else {
#ifdef INET6
		if (mlen == 64)
			hash = hash_ip64(&ent->a.a6, size);
		else
			hash = hash_ip6(&ent->a.a6, size);
#endif
	}

	return (hash);
}
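
/*
 * Fills a chashentry from a userland tentry_info: validates the mask
 * length and stores the address already masked (IPv4 in host order,
 * shifted right by the unused bits; IPv6 with the prefix mask applied).
 */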
static int
tei_to_chash_ent(struct tentry_info *tei, struct chashentry *ent)
{
	int mlen;
#ifdef INET6
	struct in6_addr mask6;
#endif

	mlen = tei->masklen;

	if (tei->subtype == AF_INET) {
#ifdef INET
		if (mlen > 32)
			return (EINVAL);
		ent->type = AF_INET;

		/* Calculate masked address */
		ent->a.a4 = ntohl(*((in_addr_t *)tei->paddr)) >> (32 - mlen);
#endif
#ifdef INET6
	} else if (tei->subtype == AF_INET6) {
		/* IPv6 case */
		if (mlen > 128)
			return (EINVAL);
		ent->type = AF_INET6;

		ipv6_writemask(&mask6, mlen);
		memcpy(&ent->a.a6, tei->paddr, sizeof(struct in6_addr));
		APPLY_MASK(&ent->a.a6, &mask6);
#endif
	} else {
		/* Unknown CIDR type */
		return (EINVAL);
	}

	return (0);
}
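
/*
 * Single-entry find callback: builds a masked key from the supplied
 * tentry, hashes it and scans the matching bucket, exporting the stored
 * entry via ta_dump_chash_tentry() on a match.
 */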
|
|
|
|
|
* Add new ipfw cidr algorihm: hash table.
Algorithm works with both IPv4 and IPv6 prefixes, /32 and /128
ranges are assumed by default.
It works the following way: input IP address is masked to specified
mask, hashed and searched inside hash bucket.
Current implementation does not support "lookup" method and hash auto-resize.
This will be changed soon.
some examples:
ipfw table mi_test2 create type cidr algo cidr:hash
ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
ipfw table mi_test2 info
+++ table(mi_test2), set(0) +++
type: cidr, kindex: 7
valtype: number, references: 0
algorithm: cidr:hash
items: 0, size: 220
ipfw table mi_test info
+++ table(mi_test), set(0) +++
type: cidr, kindex: 6
valtype: number, references: 0
algorithm: cidr:hash masks=/30,/64
items: 0, size: 220
ipfw table mi_test add 10.0.0.5/30
ipfw table mi_test add 10.0.0.8/30
ipfw table mi_test add 2a02:6b8:b010::1/64 25
ipfw table mi_test list
+++ table(mi_test), set(0) +++
10.0.0.4/30 0
10.0.0.8/30 0
2a02:6b8:b010::/64 25
2014-07-29 19:49:38 +00:00
|
|
|
static int
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
ta_find_chash_tentry(void *ta_state, struct table_info *ti,
|
|
|
|
ipfw_obj_tentry *tent)
|
* Add new ipfw cidr algorihm: hash table.
Algorithm works with both IPv4 and IPv6 prefixes, /32 and /128
ranges are assumed by default.
It works the following way: input IP address is masked to specified
mask, hashed and searched inside hash bucket.
Current implementation does not support "lookup" method and hash auto-resize.
This will be changed soon.
some examples:
ipfw table mi_test2 create type cidr algo cidr:hash
ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
ipfw table mi_test2 info
+++ table(mi_test2), set(0) +++
type: cidr, kindex: 7
valtype: number, references: 0
algorithm: cidr:hash
items: 0, size: 220
ipfw table mi_test info
+++ table(mi_test), set(0) +++
type: cidr, kindex: 6
valtype: number, references: 0
algorithm: cidr:hash masks=/30,/64
items: 0, size: 220
ipfw table mi_test add 10.0.0.5/30
ipfw table mi_test add 10.0.0.8/30
ipfw table mi_test add 2a02:6b8:b010::1/64 25
ipfw table mi_test list
+++ table(mi_test), set(0) +++
10.0.0.4/30 0
10.0.0.8/30 0
2a02:6b8:b010::/64 25
2014-07-29 19:49:38 +00:00
|
|
|
{
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-30 12:39:49 +00:00
|
|
|
struct chashbhead *head;
|
|
|
|
struct chashentry ent, *tmp;
|
|
|
|
struct tentry_info tei;
|
|
|
|
int error;
|
|
|
|
uint32_t hash;
|
* Add new ipfw cidr algorihm: hash table.
Algorithm works with both IPv4 and IPv6 prefixes, /32 and /128
ranges are assumed by default.
It works the following way: input IP address is masked to specified
mask, hashed and searched inside hash bucket.
Current implementation does not support "lookup" method and hash auto-resize.
This will be changed soon.
some examples:
ipfw table mi_test2 create type cidr algo cidr:hash
ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
ipfw table mi_test2 info
+++ table(mi_test2), set(0) +++
type: cidr, kindex: 7
valtype: number, references: 0
algorithm: cidr:hash
items: 0, size: 220
ipfw table mi_test info
+++ table(mi_test), set(0) +++
type: cidr, kindex: 6
valtype: number, references: 0
algorithm: cidr:hash masks=/30,/64
items: 0, size: 220
ipfw table mi_test add 10.0.0.5/30
ipfw table mi_test add 10.0.0.8/30
ipfw table mi_test add 2a02:6b8:b010::1/64 25
ipfw table mi_test list
+++ table(mi_test), set(0) +++
10.0.0.4/30 0
10.0.0.8/30 0
2a02:6b8:b010::/64 25
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-30 12:39:49 +00:00
|
|
|
|
|
|
|
memset(&ent, 0, sizeof(ent));
|
|
|
|
memset(&tei, 0, sizeof(tei));
|
|
|
|
|
2014-08-03 09:40:50 +00:00
|
|
|
if (tent->subtype == AF_INET) {
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
tei.paddr = &tent->k.addr;
|
2014-08-03 09:04:36 +00:00
|
|
|
tei.masklen = cfg->mask4;
|
2014-07-30 12:39:49 +00:00
|
|
|
tei.subtype = AF_INET;
|
|
|
|
|
|
|
|
if ((error = tei_to_chash_ent(&tei, &ent)) != 0)
|
|
|
|
return (error);
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head4;
|
|
|
|
hash = hash_ent(&ent, AF_INET, cfg->mask4, cfg->size4);
|
2014-07-30 12:39:49 +00:00
|
|
|
/* Check for existence */
|
|
|
|
SLIST_FOREACH(tmp, &head[hash], next) {
|
|
|
|
if (tmp->a.a4 != ent.a.a4)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
ta_dump_chash_tentry(ta_state, ti, tmp, tent);
|
|
|
|
return (0);
|
|
|
|
}
|
* Add new ipfw cidr algorithm: hash table.
The algorithm works with both IPv4 and IPv6 prefixes; /32 and /128
prefix lengths are assumed by default.
It works the following way: the input IP address is masked to the specified
mask, hashed, and searched for inside the corresponding hash bucket.
The current implementation does not support the "lookup" method or hash auto-resize.
This will be changed soon.
Some examples:
ipfw table mi_test2 create type cidr algo cidr:hash
ipfw table mi_test create type cidr algo "cidr:hash masks=/30,/64"
ipfw table mi_test2 info
+++ table(mi_test2), set(0) +++
type: cidr, kindex: 7
valtype: number, references: 0
algorithm: cidr:hash
items: 0, size: 220
ipfw table mi_test info
+++ table(mi_test), set(0) +++
type: cidr, kindex: 6
valtype: number, references: 0
algorithm: cidr:hash masks=/30,/64
items: 0, size: 220
ipfw table mi_test add 10.0.0.5/30
ipfw table mi_test add 10.0.0.8/30
ipfw table mi_test add 2a02:6b8:b010::1/64 25
ipfw table mi_test list
+++ table(mi_test), set(0) +++
10.0.0.4/30 0
10.0.0.8/30 0
2a02:6b8:b010::/64 25
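The lookup sequence described above (mask, hash, walk the bucket chain) can be sketched for the IPv4 case as follows. The structure and hash function are simplified assumptions and do not mirror the kernel's chashentry/chashbhead definitions.

#include <stdint.h>
#include <stddef.h>

struct v4_ent {
	struct v4_ent	*next;
	uint32_t	addr;		/* stored already masked */
	uint32_t	value;
};

/*
 * Mask the input address to the configured prefix length, hash it and
 * probe the chained bucket.
 */
static struct v4_ent *
v4_hash_lookup(struct v4_ent **buckets, uint32_t nbuckets,
    uint32_t addr, int masklen)
{
	uint32_t key, hash;
	struct v4_ent *e;

	key = (masklen == 0) ? 0 : addr & (0xFFFFFFFFu << (32 - masklen));
	hash = (key * 2654435761u) % nbuckets;	/* Knuth multiplicative hash */
	for (e = buckets[hash]; e != NULL; e = e->next)
		if (e->addr == key)
			return (e);
	return (NULL);
}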
2014-07-29 19:49:38 +00:00
|
|
|
} else {
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
tei.paddr = &tent->k.addr6;
|
2014-08-03 09:04:36 +00:00
|
|
|
tei.masklen = cfg->mask6;
|
2014-07-30 12:39:49 +00:00
|
|
|
tei.subtype = AF_INET6;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
if ((error = tei_to_chash_ent(&tei, &ent)) != 0)
|
|
|
|
return (error);
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head6;
|
|
|
|
hash = hash_ent(&ent, AF_INET6, cfg->mask6, cfg->size6);
|
2014-07-30 12:39:49 +00:00
|
|
|
/* Check for existence */
|
|
|
|
SLIST_FOREACH(tmp, &head[hash], next) {
|
|
|
|
if (memcmp(&tmp->a.a6, &ent.a.a6, 16) != 0)
|
|
|
|
continue;
|
|
|
|
ta_dump_chash_tentry(ta_state, ti, tmp, tent);
|
|
|
|
return (0);
|
|
|
|
}
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
2014-07-30 12:39:49 +00:00
|
|
|
|
2014-07-29 19:49:38 +00:00
|
|
|
return (ENOENT);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
ta_foreach_chash(void *ta_state, struct table_info *ti, ta_foreach_f *f,
|
|
|
|
void *arg)
|
|
|
|
{
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-29 19:49:38 +00:00
|
|
|
struct chashentry *ent, *ent_next;
|
|
|
|
int i;
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
for (i = 0; i < cfg->size4; i++)
|
|
|
|
SLIST_FOREACH_SAFE(ent, &cfg->head4[i], next, ent_next)
|
2014-07-29 19:49:38 +00:00
|
|
|
f(ent, arg);
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
for (i = 0; i < cfg->size6; i++)
|
|
|
|
SLIST_FOREACH_SAFE(ent, &cfg->head6[i], next, ent_next)
|
2014-07-29 19:49:38 +00:00
|
|
|
f(ent, arg);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_prepare_add_chash(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf)
|
|
|
|
{
|
|
|
|
struct ta_buf_chash *tb;
|
|
|
|
struct chashentry *ent;
|
2014-07-30 12:39:49 +00:00
|
|
|
int error;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
tb = (struct ta_buf_chash *)ta_buf;
|
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
ent = malloc(sizeof(*ent), M_IPFW_TBL, M_WAITOK | M_ZERO);
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
error = tei_to_chash_ent(tei, ent);
|
|
|
|
if (error != 0) {
|
|
|
|
free(ent, M_IPFW_TBL);
|
|
|
|
return (error);
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
2014-07-30 12:39:49 +00:00
|
|
|
tb->ent_ptr = ent;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_add_chash(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
2014-07-29 19:49:38 +00:00
|
|
|
{
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-29 19:49:38 +00:00
|
|
|
struct chashbhead *head;
|
|
|
|
struct chashentry *ent, *tmp;
|
|
|
|
struct ta_buf_chash *tb;
|
|
|
|
int exists;
|
2014-08-03 08:32:54 +00:00
|
|
|
uint32_t hash, value;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-29 19:49:38 +00:00
|
|
|
tb = (struct ta_buf_chash *)ta_buf;
|
|
|
|
ent = (struct chashentry *)tb->ent_ptr;
|
|
|
|
hash = 0;
|
|
|
|
exists = 0;
|
|
|
|
|
2014-08-30 17:18:11 +00:00
|
|
|
/* Read current value from @tei */
|
|
|
|
ent->value = tei->value;
|
|
|
|
|
|
|
|
/* Read current value */
|
2014-07-29 19:49:38 +00:00
|
|
|
if (tei->subtype == AF_INET) {
|
2014-08-03 09:04:36 +00:00
|
|
|
if (tei->masklen != cfg->mask4)
|
2014-07-29 19:49:38 +00:00
|
|
|
return (EINVAL);
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head4;
|
|
|
|
hash = hash_ent(ent, AF_INET, cfg->mask4, cfg->size4);
|
2014-07-30 12:39:49 +00:00
|
|
|
|
2014-07-29 19:49:38 +00:00
|
|
|
/* Check for existence */
|
|
|
|
SLIST_FOREACH(tmp, &head[hash], next) {
|
|
|
|
if (tmp->a.a4 == ent->a.a4) {
|
|
|
|
exists = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
2014-08-03 09:04:36 +00:00
|
|
|
if (tei->masklen != cfg->mask6)
|
2014-07-29 19:49:38 +00:00
|
|
|
return (EINVAL);
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head6;
|
|
|
|
hash = hash_ent(ent, AF_INET6, cfg->mask6, cfg->size6);
|
2014-07-29 19:49:38 +00:00
|
|
|
/* Check for existence */
|
|
|
|
SLIST_FOREACH(tmp, &head[hash], next) {
|
2014-07-30 12:39:49 +00:00
|
|
|
if (memcmp(&tmp->a.a6, &ent->a.a6, 16) == 0) {
|
2014-07-29 19:49:38 +00:00
|
|
|
exists = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (exists == 1) {
|
|
|
|
if ((tei->flags & TEI_FLAGS_UPDATE) == 0)
|
|
|
|
return (EEXIST);
|
|
|
|
/* Record already exists. Update value if we're asked to */
|
2014-08-03 08:32:54 +00:00
|
|
|
value = tmp->value;
|
2014-07-29 19:49:38 +00:00
|
|
|
tmp->value = tei->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
tei->value = value;
|
2014-07-29 19:49:38 +00:00
|
|
|
/* Indicate that update has happened instead of addition */
|
|
|
|
tei->flags |= TEI_FLAGS_UPDATED;
|
|
|
|
*pnum = 0;
|
|
|
|
} else {
|
2014-08-01 15:17:46 +00:00
|
|
|
if ((tei->flags & TEI_FLAGS_DONTADD) != 0)
|
|
|
|
return (EFBIG);
|
2014-07-29 19:49:38 +00:00
|
|
|
SLIST_INSERT_HEAD(&head[hash], ent, next);
|
|
|
|
tb->ent_ptr = NULL;
|
|
|
|
*pnum = 1;
|
2014-07-30 12:39:49 +00:00
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
/* Update counters */
|
|
|
|
if (tei->subtype == AF_INET)
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg->items4++;
|
2014-08-02 17:18:47 +00:00
|
|
|
else
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg->items6++;
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_prepare_del_chash(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf)
|
|
|
|
{
|
|
|
|
struct ta_buf_chash *tb;
|
|
|
|
|
|
|
|
tb = (struct ta_buf_chash *)ta_buf;
|
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
return (tei_to_chash_ent(tei, &tb->ent));
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
|
|
|
ta_del_chash(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
2014-07-29 19:49:38 +00:00
|
|
|
{
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-29 19:49:38 +00:00
|
|
|
struct chashbhead *head;
|
2014-08-03 09:04:36 +00:00
|
|
|
struct chashentry *tmp, *tmp_next, *ent;
|
2014-07-29 19:49:38 +00:00
|
|
|
struct ta_buf_chash *tb;
|
|
|
|
uint32_t hash;
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-29 19:49:38 +00:00
|
|
|
tb = (struct ta_buf_chash *)ta_buf;
|
2014-08-03 09:04:36 +00:00
|
|
|
ent = &tb->ent;
|
2014-07-29 19:49:38 +00:00
|
|
|
|
|
|
|
if (tei->subtype == AF_INET) {
|
2014-08-03 09:04:36 +00:00
|
|
|
if (tei->masklen != cfg->mask4)
|
2014-07-29 19:49:38 +00:00
|
|
|
return (EINVAL);
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head4;
|
|
|
|
hash = hash_ent(ent, AF_INET, cfg->mask4, cfg->size4);
|
2014-07-29 19:49:38 +00:00
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
SLIST_FOREACH_SAFE(tmp, &head[hash], next, tmp_next) {
|
|
|
|
if (tmp->a.a4 != ent->a.a4)
|
2014-08-03 08:32:54 +00:00
|
|
|
continue;
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
SLIST_REMOVE(&head[hash], tmp, chashentry, next);
|
|
|
|
cfg->items4--;
|
|
|
|
tb->ent_ptr = tmp;
|
|
|
|
tei->value = tmp->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
*pnum = 1;
|
|
|
|
return (0);
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
} else {
|
2014-08-03 09:04:36 +00:00
|
|
|
if (tei->masklen != cfg->mask6)
|
2014-07-29 19:49:38 +00:00
|
|
|
return (EINVAL);
|
2014-08-03 09:04:36 +00:00
|
|
|
head = cfg->head6;
|
|
|
|
hash = hash_ent(ent, AF_INET6, cfg->mask6, cfg->size6);
|
|
|
|
SLIST_FOREACH_SAFE(tmp, &head[hash], next, tmp_next) {
|
|
|
|
if (memcmp(&tmp->a.a6, &ent->a.a6, 16) != 0)
|
2014-08-03 08:32:54 +00:00
|
|
|
continue;
|
|
|
|
|
2014-08-03 09:04:36 +00:00
|
|
|
SLIST_REMOVE(&head[hash], tmp, chashentry, next);
|
|
|
|
cfg->items6--;
|
|
|
|
tb->ent_ptr = tmp;
|
|
|
|
tei->value = tmp->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
*pnum = 1;
|
|
|
|
return (0);
|
2014-07-29 19:49:38 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return (ENOENT);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
ta_flush_chash_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
|
|
|
|
void *ta_buf)
|
|
|
|
{
|
|
|
|
struct ta_buf_chash *tb;
|
|
|
|
|
|
|
|
tb = (struct ta_buf_chash *)ta_buf;
|
|
|
|
|
|
|
|
if (tb->ent_ptr != NULL)
|
|
|
|
free(tb->ent_ptr, M_IPFW_TBL);
|
|
|
|
}
|
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
/*
|
|
|
|
* Hash growing callbacks.
|
|
|
|
*/
|
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
static int
|
2014-08-12 14:09:15 +00:00
|
|
|
ta_need_modify_chash(void *ta_state, struct table_info *ti, uint32_t count,
|
2014-08-02 17:18:47 +00:00
|
|
|
uint64_t *pflags)
|
|
|
|
{
|
|
|
|
struct chash_cfg *cfg;
|
|
|
|
uint64_t data;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Since we don't know the exact number of IPv4/IPv6 records in @count,
|
|
|
|
* ignore the non-zero @count value entirely. Check the current hash sizes
|
|
|
|
* and return appropriate data.
|
|
|
|
*/
|
|
|
|
|
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
|
|
|
|
|
|
|
data = 0;
|
|
|
|
if (cfg->items4 > cfg->size4 && cfg->size4 < 65536)
|
|
|
|
data |= (cfg->size4 * 2) << 16;
|
|
|
|
if (cfg->items6 > cfg->size6 && cfg->size6 < 65536)
|
|
|
|
data |= cfg->size6 * 2;
|
|
|
|
|
|
|
|
if (data != 0) {
|
|
|
|
*pflags = data;
|
2014-08-12 14:09:15 +00:00
|
|
|
return (1);
|
2014-08-02 17:18:47 +00:00
|
|
|
}
|
|
|
|
|
2014-08-12 14:09:15 +00:00
|
|
|
return (0);
|
2014-08-02 17:18:47 +00:00
|
|
|
}
|
|
|
|
|
2014-07-30 12:39:49 +00:00
|
|
|
/*
|
|
|
|
* Allocate new, larger chash.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ta_prepare_mod_chash(void *ta_buf, uint64_t *pflags)
|
|
|
|
{
|
|
|
|
struct mod_item *mi;
|
|
|
|
struct chashbhead *head;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
mi = (struct mod_item *)ta_buf;
|
|
|
|
|
|
|
|
memset(mi, 0, sizeof(struct mod_item));
|
2014-08-02 17:18:47 +00:00
|
|
|
mi->size = (*pflags >> 16) & 0xFFFF;
|
|
|
|
mi->size6 = *pflags & 0xFFFF;
|
|
|
|
if (mi->size > 0) {
|
|
|
|
head = malloc(sizeof(struct chashbhead) * mi->size,
|
|
|
|
M_IPFW, M_WAITOK | M_ZERO);
|
|
|
|
for (i = 0; i < mi->size; i++)
|
|
|
|
SLIST_INIT(&head[i]);
|
|
|
|
mi->main_ptr = head;
|
|
|
|
}
|
2014-07-30 12:39:49 +00:00
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
if (mi->size6 > 0) {
|
|
|
|
head = malloc(sizeof(struct chashbhead) * mi->size6,
|
|
|
|
M_IPFW, M_WAITOK | M_ZERO);
|
|
|
|
for (i = 0; i < mi->size6; i++)
|
|
|
|
SLIST_INIT(&head[i]);
|
|
|
|
mi->main_ptr6 = head;
|
|
|
|
}
|
2014-07-30 12:39:49 +00:00
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Copy data from old runtime array to new one.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ta_fill_mod_chash(void *ta_state, struct table_info *ti, void *ta_buf,
|
|
|
|
uint64_t *pflags)
|
|
|
|
{
|
|
|
|
|
|
|
|
/* It is not possible to do a rehash if we're not holding the WLOCK. */
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Switch old & new arrays.
|
|
|
|
*/
|
2014-08-12 14:09:15 +00:00
|
|
|
static void
|
2014-07-30 12:39:49 +00:00
|
|
|
ta_modify_chash(void *ta_state, struct table_info *ti, void *ta_buf,
|
|
|
|
uint64_t pflags)
|
|
|
|
{
|
|
|
|
struct mod_item *mi;
|
2014-08-02 17:18:47 +00:00
|
|
|
struct chash_cfg *cfg;
|
2014-07-30 12:39:49 +00:00
|
|
|
struct chashbhead *old_head, *new_head;
|
|
|
|
struct chashentry *ent, *ent_next;
|
|
|
|
int af, i, mlen;
|
|
|
|
uint32_t nhash;
|
2014-08-02 17:18:47 +00:00
|
|
|
size_t old_size, new_size;
|
2014-07-30 12:39:49 +00:00
|
|
|
|
|
|
|
mi = (struct mod_item *)ta_buf;
|
2014-08-02 17:18:47 +00:00
|
|
|
cfg = (struct chash_cfg *)ta_state;
|
2014-07-30 12:39:49 +00:00
|
|
|
|
|
|
|
/* Check which hash we need to grow and whether we still need to */
|
2014-08-02 17:18:47 +00:00
|
|
|
if (mi->size > 0 && cfg->size4 < mi->size) {
|
|
|
|
new_head = (struct chashbhead *)mi->main_ptr;
|
|
|
|
new_size = mi->size;
|
|
|
|
old_size = cfg->size4;
|
2014-07-30 12:39:49 +00:00
|
|
|
old_head = ti->state;
|
2014-08-02 17:18:47 +00:00
|
|
|
mlen = cfg->mask4;
|
2014-07-30 12:39:49 +00:00
|
|
|
af = AF_INET;
|
2014-08-02 17:18:47 +00:00
|
|
|
|
|
|
|
for (i = 0; i < old_size; i++) {
|
|
|
|
SLIST_FOREACH_SAFE(ent, &old_head[i], next, ent_next) {
|
|
|
|
nhash = hash_ent(ent, af, mlen, new_size);
|
|
|
|
SLIST_INSERT_HEAD(&new_head[nhash], ent, next);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ti->state = new_head;
|
|
|
|
cfg->head4 = new_head;
|
|
|
|
cfg->size4 = mi->size;
|
|
|
|
mi->main_ptr = old_head;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mi->size6 > 0 && cfg->size6 < mi->size6) {
|
|
|
|
new_head = (struct chashbhead *)mi->main_ptr6;
|
|
|
|
new_size = mi->size6;
|
|
|
|
old_size = cfg->size6;
|
2014-07-30 12:39:49 +00:00
|
|
|
old_head = ti->xstate;
|
2014-08-02 17:18:47 +00:00
|
|
|
mlen = cfg->mask6;
|
2014-07-30 12:39:49 +00:00
|
|
|
af = AF_INET6;
|
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
for (i = 0; i < old_size; i++) {
|
|
|
|
SLIST_FOREACH_SAFE(ent, &old_head[i], next, ent_next) {
|
|
|
|
nhash = hash_ent(ent, af, mlen, new_size);
|
|
|
|
SLIST_INSERT_HEAD(&new_head[nhash], ent, next);
|
|
|
|
}
|
2014-07-30 12:39:49 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
ti->xstate = new_head;
|
2014-08-02 17:18:47 +00:00
|
|
|
cfg->head6 = new_head;
|
|
|
|
cfg->size6 = mi->size6;
|
|
|
|
mi->main_ptr6 = old_head;
|
2014-07-30 12:39:49 +00:00
|
|
|
}
|
|
|
|
|
2014-08-02 17:18:47 +00:00
|
|
|
/* Update lower 32 bits with new values */
|
|
|
|
ti->data &= 0xFFFFFFFF00000000;
|
2014-10-22 21:20:37 +00:00
|
|
|
ti->data |= ta_log2(cfg->size4) << 8 | ta_log2(cfg->size6);
|
2014-07-30 12:39:49 +00:00
|
|
|
}
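For reference, the word written into ti->data above packs log2 of the bucket counts (IPv4 in bits 8..15, IPv6 in bits 0..7); a decoder might look like the sketch below. The runtime lookup that actually consumes ti->data is not part of this fragment, so this is only an assumption about how the encoding is read back.

static inline void
chash_sizes_from_data(uint64_t data, uint32_t *size4, uint32_t *size6)
{
	/* Inverse of: data |= ta_log2(size4) << 8 | ta_log2(size6) */
	*size4 = 1u << ((data >> 8) & 0xFF);
	*size6 = 1u << (data & 0xFF);
}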
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Free unneeded array.
|
|
|
|
*/
|
|
|
|
static void
|
|
|
|
ta_flush_mod_chash(void *ta_buf)
|
|
|
|
{
|
|
|
|
struct mod_item *mi;
|
|
|
|
|
|
|
|
mi = (struct mod_item *)ta_buf;
|
|
|
|
if (mi->main_ptr != NULL)
|
|
|
|
free(mi->main_ptr, M_IPFW);
|
2014-08-02 17:18:47 +00:00
|
|
|
if (mi->main_ptr6 != NULL)
|
|
|
|
free(mi->main_ptr6, M_IPFW);
|
2014-07-30 12:39:49 +00:00
|
|
|
}
|
|
|
|
|
2014-08-14 21:43:20 +00:00
|
|
|
struct table_algo addr_hash = {
|
|
|
|
.name = "addr:hash",
|
|
|
|
.type = IPFW_TABLE_ADDR,
|
2014-08-01 07:35:17 +00:00
|
|
|
.ta_buf_size = sizeof(struct ta_buf_chash),
|
2014-07-29 19:49:38 +00:00
|
|
|
.init = ta_init_chash,
|
|
|
|
.destroy = ta_destroy_chash,
|
|
|
|
.prepare_add = ta_prepare_add_chash,
|
|
|
|
.prepare_del = ta_prepare_del_chash,
|
|
|
|
.add = ta_add_chash,
|
|
|
|
.del = ta_del_chash,
|
|
|
|
.flush_entry = ta_flush_chash_entry,
|
|
|
|
.foreach = ta_foreach_chash,
|
|
|
|
.dump_tentry = ta_dump_chash_tentry,
|
|
|
|
.find_tentry = ta_find_chash_tentry,
|
|
|
|
.print_config = ta_print_chash_config,
|
2014-08-03 12:19:45 +00:00
|
|
|
.dump_tinfo = ta_dump_chash_tinfo,
|
2014-08-12 14:09:15 +00:00
|
|
|
.need_modify = ta_need_modify_chash,
|
2014-07-30 12:39:49 +00:00
|
|
|
.prepare_mod = ta_prepare_mod_chash,
|
|
|
|
.fill_mod = ta_fill_mod_chash,
|
|
|
|
.modify = ta_modify_chash,
|
|
|
|
.flush_mod = ta_flush_mod_chash,
|
2014-07-29 19:49:38 +00:00
|
|
|
};
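The addr_hash descriptor above registers the grow callbacks in the order the table core is expected to call them; the sketch below shows that intended sequence. The locking calls and the local ta_buf handling are placeholders for illustration only; the real sequencing lives in the table core, not in this file.

static int
grow_chash_sketch(struct table_algo *ta, void *ta_state,
    struct table_info *ti, uint32_t count)
{
	char ta_buf[64];	/* assumes ta->ta_buf_size fits here */
	uint64_t pflags;
	int error;

	/* Cheap check: does either hash need to grow at all? */
	if (ta->need_modify(ta_state, ti, count, &pflags) == 0)
		return (0);

	/* Allocate the larger bucket arrays; may sleep, so no lock held. */
	if ((error = ta->prepare_mod(ta_buf, &pflags)) != 0)
		return (error);

	/* Rehash and switch pointers while holding the writer lock. */
	/* IPFW_WLOCK(ch); -- placeholder for the real lock */
	(void)ta->fill_mod(ta_state, ti, ta_buf, &pflags);
	ta->modify(ta_state, ti, ta_buf, pflags);
	/* IPFW_WUNLOCK(ch); */

	/* Free whichever arrays are no longer referenced. */
	ta->flush_mod(ta_buf);
	return (0);
}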
|
|
|
|
|
|
|
|
|
2014-06-14 10:58:39 +00:00
|
|
|
/*
 * Iface table cmds.
 *
 * Implementation:
 *
 * Runtime part:
 * - sorted array of "struct ifidx" pointed by ti->state.
 *   Array is allocated with rounding up to IFIDX_CHUNK. Only existing
 *   interfaces are stored in array, however its allocated size is
 *   sufficient to hold all table records if needed.
 * - current array size is stored in ti->data
 *
 * Table data:
 * - "struct iftable_cfg" is allocated to store table state (ta_state).
 * - All table records are stored inside namedobj instance.
 *
 */

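/*
 * Illustrative sketch (interface names are hypothetical): a table
 * holding "em0" and "lo0" keeps one struct ifentry per name in the
 * namedobj instance, while ti->state contains entries only for the
 * interfaces that currently exist, sorted by kernel ifindex, e.g.
 *
 *   ti->state -> { .kidx = 1, .value = V1 }, { .kidx = 2, .value = V2 }
 *   ti->data  == 2
 *
 * so the dataplane lookup is a plain binary search by ifindex.
 */
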
struct ifidx {
	uint16_t	kidx;
	uint16_t	spare;
	uint32_t	value;
};

#define	DEFAULT_IFIDX_SIZE	64

struct iftable_cfg;

struct ifentry {
	struct named_object	no;
	struct ipfw_ifc		ic;
	struct iftable_cfg	*icfg;
	uint32_t		value;
	int			linked;
};

struct iftable_cfg {
	struct namedobj_instance	*ii;
	struct ip_fw_chain	*ch;
	struct table_info	*ti;
	void		*main_ptr;
	size_t		size;	/* Number of items allocated in array */
	size_t		count;	/* Number of all items */
	size_t		used;	/* Number of items _active_ now */
};

struct ta_buf_ifidx
{
	struct ifentry *ife;
	uint32_t value;
};

int compare_ifidx(const void *k, const void *v);
static struct ifidx * ifidx_find(struct table_info *ti, void *key);
static int ta_lookup_ifidx(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val);
static int ta_init_ifidx(struct ip_fw_chain *ch, void **ta_state,
    struct table_info *ti, char *data, uint8_t tflags);
static void ta_change_ti_ifidx(void *ta_state, struct table_info *ti);
static int destroy_ifidx_locked(struct namedobj_instance *ii,
    struct named_object *no, void *arg);
static void ta_destroy_ifidx(void *ta_state, struct table_info *ti);
static void ta_dump_ifidx_tinfo(void *ta_state, struct table_info *ti,
    ipfw_ta_tinfo *tinfo);
static int ta_prepare_add_ifidx(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_add_ifidx(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static int ta_prepare_del_ifidx(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_del_ifidx(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static void ta_flush_ifidx_entry(struct ip_fw_chain *ch,
    struct tentry_info *tei, void *ta_buf);
static void if_notifier(struct ip_fw_chain *ch, void *cbdata, uint16_t ifindex);
static int ta_need_modify_ifidx(void *ta_state, struct table_info *ti,
    uint32_t count, uint64_t *pflags);
static int ta_prepare_mod_ifidx(void *ta_buf, uint64_t *pflags);
static int ta_fill_mod_ifidx(void *ta_state, struct table_info *ti,
    void *ta_buf, uint64_t *pflags);
static void ta_modify_ifidx(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t pflags);
static void ta_flush_mod_ifidx(void *ta_buf);
static int ta_dump_ifidx_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent);
static int ta_find_ifidx_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent);
static int foreach_ifidx(struct namedobj_instance *ii, struct named_object *no,
    void *arg);
static void ta_foreach_ifidx(void *ta_state, struct table_info *ti,
    ta_foreach_f *f, void *arg);

int
compare_ifidx(const void *k, const void *v)
{
	const struct ifidx *ifidx;
	uint16_t key;

	key = *((const uint16_t *)k);
	ifidx = (const struct ifidx *)v;

	if (key < ifidx->kidx)
		return (-1);
	else if (key > ifidx->kidx)
		return (1);

	return (0);
}

/*
 * Adds item @item with key @key into ascending-sorted array @base.
 * Assumes @base has enough additional storage.
 *
 * Returns 1 on success, 0 on duplicate key.
 */
static int
badd(const void *key, void *item, void *base, size_t nmemb,
    size_t size, int (*compar) (const void *, const void *))
{
	int min, max, mid, shift, res;
	caddr_t paddr;

	if (nmemb == 0) {
		memcpy(base, item, size);
		return (1);
	}

	/* Binary search */
	min = 0;
	max = nmemb - 1;
	mid = 0;
	while (min <= max) {
		mid = (min + max) / 2;
		res = compar(key, (const void *)((caddr_t)base + mid * size));
		if (res == 0)
			return (0);

		if (res > 0)
			min = mid + 1;
		else
			max = mid - 1;
	}

	/* Item not found. */
	res = compar(key, (const void *)((caddr_t)base + mid * size));
	if (res > 0)
		shift = mid + 1;
	else
		shift = mid;

	paddr = (caddr_t)base + shift * size;
	if (nmemb > shift)
		memmove(paddr + size, paddr, (nmemb - shift) * size);

	memcpy(paddr, item, size);

	return (1);
}

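/*
 * Worked example for badd() (values are illustrative): inserting key 5
 * into the keys {2, 4, 7, 9} ends the binary search at mid == 2 (key 7)
 * with res < 0, so shift == 2; the tail {7, 9} is moved one slot right
 * via memmove() and the new item is copied into slot 2, giving
 * {2, 4, 5, 7, 9}.  Inserting an already existing key returns 0 and
 * leaves the array untouched.
 */
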
/*
 * Deletes item with key @key from ascending-sorted array @base.
 *
 * Returns 1 on success, 0 for non-existent key.
 */
static int
bdel(const void *key, void *base, size_t nmemb, size_t size,
    int (*compar) (const void *, const void *))
{
	caddr_t item;
	size_t sz;

	item = (caddr_t)bsearch(key, base, nmemb, size, compar);

	if (item == NULL)
		return (0);

	sz = (caddr_t)base + nmemb * size - item;

	if (sz > 0)
		memmove(item, item + size, sz);

	return (1);
}

static struct ifidx *
ifidx_find(struct table_info *ti, void *key)
{
	struct ifidx *ifi;

	ifi = bsearch(key, ti->state, ti->data, sizeof(struct ifidx),
	    compare_ifidx);

	return (ifi);
}

static int
ta_lookup_ifidx(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val)
{
	struct ifidx *ifi;

	ifi = ifidx_find(ti, key);

	if (ifi != NULL) {
		*val = ifi->value;
		return (1);
	}

	return (0);
}

static int
ta_init_ifidx(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
	struct iftable_cfg *icfg;

	icfg = malloc(sizeof(struct iftable_cfg), M_IPFW, M_WAITOK | M_ZERO);

	icfg->ii = ipfw_objhash_create(DEFAULT_IFIDX_SIZE);
	icfg->size = DEFAULT_IFIDX_SIZE;
	icfg->main_ptr = malloc(sizeof(struct ifidx) * icfg->size, M_IPFW,
	    M_WAITOK | M_ZERO);
	icfg->ch = ch;

	*ta_state = icfg;
	ti->state = icfg->main_ptr;
	ti->lookup = ta_lookup_ifidx;

	return (0);
}

/*
 * Handle tableinfo @ti pointer change (on table array resize).
 */
static void
ta_change_ti_ifidx(void *ta_state, struct table_info *ti)
{
	struct iftable_cfg *icfg;

	icfg = (struct iftable_cfg *)ta_state;
	icfg->ti = ti;
}

static int
destroy_ifidx_locked(struct namedobj_instance *ii, struct named_object *no,
    void *arg)
{
	struct ifentry *ife;
	struct ip_fw_chain *ch;

	ch = (struct ip_fw_chain *)arg;
	ife = (struct ifentry *)no;

	ipfw_iface_del_notify(ch, &ife->ic);
	ipfw_iface_unref(ch, &ife->ic);
	free(ife, M_IPFW_TBL);
	return (0);
}

/*
 * Destroys table @ti
 */
static void
ta_destroy_ifidx(void *ta_state, struct table_info *ti)
{
	struct iftable_cfg *icfg;
	struct ip_fw_chain *ch;

	icfg = (struct iftable_cfg *)ta_state;
	ch = icfg->ch;

	if (icfg->main_ptr != NULL)
		free(icfg->main_ptr, M_IPFW);

	IPFW_UH_WLOCK(ch);
	ipfw_objhash_foreach(icfg->ii, destroy_ifidx_locked, ch);
	IPFW_UH_WUNLOCK(ch);

	ipfw_objhash_destroy(icfg->ii);

	free(icfg, M_IPFW);
}

/*
 * Provide algo-specific table info
 */
static void
ta_dump_ifidx_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{
	struct iftable_cfg *cfg;

	cfg = (struct iftable_cfg *)ta_state;

	tinfo->taclass4 = IPFW_TACLASS_ARRAY;
	tinfo->size4 = cfg->size;
	tinfo->count4 = cfg->used;
	tinfo->itemsize4 = sizeof(struct ifidx);
}

/*
 * Prepare state to add to the table:
 * allocate ifentry and reference needed interface.
 */
static int
ta_prepare_add_ifidx(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
	struct ta_buf_ifidx *tb;
	char *ifname;
	struct ifentry *ife;

	tb = (struct ta_buf_ifidx *)ta_buf;

	/* Check if string is terminated */
	ifname = (char *)tei->paddr;
	if (strnlen(ifname, IF_NAMESIZE) == IF_NAMESIZE)
		return (EINVAL);

	ife = malloc(sizeof(struct ifentry), M_IPFW_TBL, M_WAITOK | M_ZERO);
	ife->ic.cb = if_notifier;
	ife->ic.cbdata = ife;

	if (ipfw_iface_ref(ch, ifname, &ife->ic) != 0) {
		free(ife, M_IPFW_TBL);
		return (EINVAL);
	}

	/* Use ipfw_iface 'ifname' field as stable storage */
	ife->no.name = ife->ic.iface->ifname;

	tb->ife = ife;

	return (0);
}

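/*
 * A reading of the split above (inferred from the M_WAITOK allocations
 * and the lock use elsewhere in this file, not stated explicitly here):
 * the prepare_add/add pair appears to let the sleepable work, namely
 * allocating the ifentry and referencing the interface, happen before
 * the table is locked, while ta_add_ifidx() below only links the
 * already prepared entry.
 */
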
static int
ta_add_ifidx(void *ta_state, struct table_info *ti, struct tentry_info *tei,
    void *ta_buf, uint32_t *pnum)
{
	struct iftable_cfg *icfg;
	struct ifentry *ife, *tmp;
	struct ta_buf_ifidx *tb;
	struct ipfw_iface *iif;
	struct ifidx *ifi;
	char *ifname;
	uint32_t value;

	tb = (struct ta_buf_ifidx *)ta_buf;
	ifname = (char *)tei->paddr;
	icfg = (struct iftable_cfg *)ta_state;
	ife = tb->ife;

	ife->icfg = icfg;
	ife->value = tei->value;

	tmp = (struct ifentry *)ipfw_objhash_lookup_name(icfg->ii, 0, ifname);

	if (tmp != NULL) {
		if ((tei->flags & TEI_FLAGS_UPDATE) == 0)
			return (EEXIST);

		/* Exchange values in @tmp and @tei */
		value = tmp->value;
		tmp->value = tei->value;
		tei->value = value;

		iif = tmp->ic.iface;
		if (iif->resolved != 0) {
			/* We have to update runtime value, too */
			ifi = ifidx_find(ti, &iif->ifindex);
			ifi->value = ife->value;
		}

		/* Indicate that update has happened instead of addition */
		tei->flags |= TEI_FLAGS_UPDATED;
		*pnum = 0;
		return (0);
	}

	if ((tei->flags & TEI_FLAGS_DONTADD) != 0)
		return (EFBIG);

	/* Link to internal list */
	ipfw_objhash_add(icfg->ii, &ife->no);

	/* Link notifier (possibly running its callback) */
	ipfw_iface_add_notify(icfg->ch, &ife->ic);
	icfg->count++;

	tb->ife = NULL;
	*pnum = 1;

	return (0);
}

/*
 * Prepare to delete key from table.
 * Do basic interface name checks.
 */
static int
ta_prepare_del_ifidx(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
	struct ta_buf_ifidx *tb;
	char *ifname;

	tb = (struct ta_buf_ifidx *)ta_buf;

	/* Check if string is terminated */
	ifname = (char *)tei->paddr;
	if (strnlen(ifname, IF_NAMESIZE) == IF_NAMESIZE)
		return (EINVAL);

	return (0);
}

/*
 * Remove key from both configuration list and
 * runtime array. Remove interface notification.
 */
static int
ta_del_ifidx(void *ta_state, struct table_info *ti, struct tentry_info *tei,
    void *ta_buf, uint32_t *pnum)
{
	struct iftable_cfg *icfg;
	struct ifentry *ife;
	struct ta_buf_ifidx *tb;
	char *ifname;
	uint16_t ifindex;
	int res;

	tb = (struct ta_buf_ifidx *)ta_buf;
	ifname = (char *)tei->paddr;
	icfg = (struct iftable_cfg *)ta_state;
	ife = tb->ife;

	ife = (struct ifentry *)ipfw_objhash_lookup_name(icfg->ii, 0, ifname);

	if (ife == NULL)
		return (ENOENT);

	if (ife->linked != 0) {
		/* We have to remove item from runtime */
		ifindex = ife->ic.iface->ifindex;

		res = bdel(&ifindex, icfg->main_ptr, icfg->used,
		    sizeof(struct ifidx), compare_ifidx);

		KASSERT(res == 1, ("index %d does not exist", ifindex));
		icfg->used--;
		ti->data = icfg->used;
		ife->linked = 0;
	}

	/* Unlink from local list */
	ipfw_objhash_del(icfg->ii, &ife->no);
	/* Unlink notifier and deref */
	ipfw_iface_del_notify(icfg->ch, &ife->ic);
	ipfw_iface_unref(icfg->ch, &ife->ic);

	icfg->count--;
	tei->value = ife->value;

	tb->ife = ife;
	*pnum = 1;

	return (0);
}

/*
 * Flush deleted entry.
 * Drops interface reference and frees entry.
 */
static void
ta_flush_ifidx_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
	struct ta_buf_ifidx *tb;

	tb = (struct ta_buf_ifidx *)ta_buf;

	if (tb->ife != NULL)
		free(tb->ife, M_IPFW_TBL);
}

/*
 * Handle interface announce/withdrawal for particular table.
 * Every real runtime array modification happens here.
 */
static void
if_notifier(struct ip_fw_chain *ch, void *cbdata, uint16_t ifindex)
{
	struct ifentry *ife;
	struct ifidx ifi;
	struct iftable_cfg *icfg;
	struct table_info *ti;
	int res;

	ife = (struct ifentry *)cbdata;
	icfg = ife->icfg;
	ti = icfg->ti;

	KASSERT(ti != NULL, ("ti=NULL, check change_ti handler"));

	if (ife->linked == 0 && ifindex != 0) {
		/* Interface announce */
		ifi.kidx = ifindex;
		ifi.spare = 0;
		ifi.value = ife->value;
		res = badd(&ifindex, &ifi, icfg->main_ptr, icfg->used,
		    sizeof(struct ifidx), compare_ifidx);
		KASSERT(res == 1, ("index %d already exists", ifindex));
		icfg->used++;
		ti->data = icfg->used;
		ife->linked = 1;
	} else if (ife->linked != 0 && ifindex == 0) {
		/* Interface withdrawal */
		ifindex = ife->ic.iface->ifindex;

		res = bdel(&ifindex, icfg->main_ptr, icfg->used,
		    sizeof(struct ifidx), compare_ifidx);

		KASSERT(res == 1, ("index %d does not exist", ifindex));
		icfg->used--;
		ti->data = icfg->used;
		ife->linked = 0;
	}
}

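/*
 * Example flow for if_notifier() (a sketch; the interface name is
 * hypothetical): an entry for "vlan7" may be added while no such
 * interface exists, so it stays with linked == 0 and is invisible to
 * the runtime array.  When the interface appears with ifindex N, the
 * callback badd()s { .kidx = N, .value = ife->value } and bumps
 * ti->data; when the interface goes away the entry is bdel()ed again,
 * so lookups never see stale indexes.
 */
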
/*
 * Table growing callbacks.
 */

static int
ta_need_modify_ifidx(void *ta_state, struct table_info *ti, uint32_t count,
    uint64_t *pflags)
{
	struct iftable_cfg *cfg;
	uint32_t size;

	cfg = (struct iftable_cfg *)ta_state;

	size = cfg->size;
	while (size < cfg->count + count)
		size *= 2;

	if (size != cfg->size) {
		*pflags = size;
		return (1);
	}

	return (0);
}

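/*
 * Growth arithmetic example (numbers are illustrative): with the
 * default allocation of 64 slots, a request to add 70 more entries to
 * a table already holding 60 makes the loop above double 64 -> 128 ->
 * 256, so *pflags is set to 256 and the modify callbacks below swap in
 * an array of that size.
 */
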
/*
 * Allocate new, larger runtime ifidx array.
 */
static int
ta_prepare_mod_ifidx(void *ta_buf, uint64_t *pflags)
{
	struct mod_item *mi;

	mi = (struct mod_item *)ta_buf;

	memset(mi, 0, sizeof(struct mod_item));
	mi->size = *pflags;
	mi->main_ptr = malloc(sizeof(struct ifidx) * mi->size, M_IPFW,
	    M_WAITOK | M_ZERO);

	return (0);
}

/*
 * Copy data from old runtime array to new one.
 */
static int
ta_fill_mod_ifidx(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t *pflags)
{
	struct mod_item *mi;
	struct iftable_cfg *icfg;

	mi = (struct mod_item *)ta_buf;
	icfg = (struct iftable_cfg *)ta_state;

	/* Check if we still need to grow array */
	if (icfg->size >= mi->size) {
		*pflags = 0;
		return (0);
	}

	memcpy(mi->main_ptr, icfg->main_ptr, icfg->used * sizeof(struct ifidx));

	return (0);
}

/*
 * Switch old & new arrays.
 */
static void
ta_modify_ifidx(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t pflags)
{
	struct mod_item *mi;
	struct iftable_cfg *icfg;
	void *old_ptr;

	mi = (struct mod_item *)ta_buf;
	icfg = (struct iftable_cfg *)ta_state;

	old_ptr = icfg->main_ptr;
	icfg->main_ptr = mi->main_ptr;
	icfg->size = mi->size;
	ti->state = icfg->main_ptr;

	mi->main_ptr = old_ptr;
}

/*
 * Free unneeded array.
 */
static void
ta_flush_mod_ifidx(void *ta_buf)
{
	struct mod_item *mi;

	mi = (struct mod_item *)ta_buf;
	if (mi->main_ptr != NULL)
		free(mi->main_ptr, M_IPFW);
}

static int
ta_dump_ifidx_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
	struct ifentry *ife;

	ife = (struct ifentry *)e;

	tent->masklen = 8 * IF_NAMESIZE;
	memcpy(&tent->k, ife->no.name, IF_NAMESIZE);
	tent->v.kidx = ife->value;

	return (0);
}

static int
ta_find_ifidx_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent)
{
	struct iftable_cfg *icfg;
	struct ifentry *ife;
	char *ifname;

	icfg = (struct iftable_cfg *)ta_state;
	ifname = tent->k.iface;

	if (strnlen(ifname, IF_NAMESIZE) == IF_NAMESIZE)
		return (EINVAL);

	ife = (struct ifentry *)ipfw_objhash_lookup_name(icfg->ii, 0, ifname);

	if (ife != NULL) {
		ta_dump_ifidx_tentry(ta_state, ti, ife, tent);
		return (0);
	}

	return (ENOENT);
}

2014-07-28 19:01:25 +00:00
|
|
|
struct wa_ifidx {
|
|
|
|
ta_foreach_f *f;
|
|
|
|
void *arg;
|
|
|
|
};
|
|
|
|
|
2016-05-06 03:18:51 +00:00
|
|
|
static int
|
2014-07-28 19:01:25 +00:00
|
|
|
foreach_ifidx(struct namedobj_instance *ii, struct named_object *no,
|
2014-06-14 10:58:39 +00:00
|
|
|
void *arg)
|
|
|
|
{
|
2014-07-28 19:01:25 +00:00
|
|
|
struct ifentry *ife;
|
|
|
|
struct wa_ifidx *wa;
|
2014-06-14 10:58:39 +00:00
|
|
|
|
2014-07-28 19:01:25 +00:00
|
|
|
ife = (struct ifentry *)no;
|
|
|
|
wa = (struct wa_ifidx *)arg;
|
|
|
|
|
|
|
|
wa->f(ife, wa->arg);
|
2016-05-06 03:18:51 +00:00
|
|
|
return (0);
|
2014-07-28 19:01:25 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void
|
|
|
|
ta_foreach_ifidx(void *ta_state, struct table_info *ti, ta_foreach_f *f,
|
|
|
|
void *arg)
|
|
|
|
{
|
|
|
|
struct iftable_cfg *icfg;
|
|
|
|
struct wa_ifidx wa;
|
|
|
|
|
|
|
|
icfg = (struct iftable_cfg *)ta_state;
|
|
|
|
|
|
|
|
wa.f = f;
|
|
|
|
wa.arg = arg;
|
|
|
|
|
|
|
|
ipfw_objhash_foreach(icfg->ii, foreach_ifidx, &wa);
|
2014-06-14 10:58:39 +00:00
|
|
|
}
|
|
|
|
|
struct table_algo iface_idx = {
	.name		= "iface:array",
	.type		= IPFW_TABLE_INTERFACE,
	.flags		= TA_FLAG_DEFAULT,
	.ta_buf_size	= sizeof(struct ta_buf_ifidx),
	.init		= ta_init_ifidx,
	.destroy	= ta_destroy_ifidx,
	.prepare_add	= ta_prepare_add_ifidx,
	.prepare_del	= ta_prepare_del_ifidx,
	.add		= ta_add_ifidx,
	.del		= ta_del_ifidx,
	.flush_entry	= ta_flush_ifidx_entry,
	.foreach	= ta_foreach_ifidx,
	.dump_tentry	= ta_dump_ifidx_tentry,
	.find_tentry	= ta_find_ifidx_tentry,
	.dump_tinfo	= ta_dump_ifidx_tinfo,
	.need_modify	= ta_need_modify_ifidx,
	.prepare_mod	= ta_prepare_mod_ifidx,
	.fill_mod	= ta_fill_mod_ifidx,
	.modify		= ta_modify_ifidx,
	.flush_mod	= ta_flush_mod_ifidx,
	.change_ti	= ta_change_ti_ifidx,
};

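/*
 * Illustrative usage sketch (table name and value are made up):
 * "iface:array" carries TA_FLAG_DEFAULT for interface tables, so a
 * plain
 *
 *   ipfw table IFT create type iface
 *   ipfw table IFT add em0 100
 *
 * should end up in this algorithm, with em0 inserted into the runtime
 * array only once the interface actually exists.
 */
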
/*
 * Number array cmds.
 *
 * Implementation:
 *
 * Runtime part:
 * - sorted array of "struct numarray" pointed by ti->state.
 *   Array is allocated with rounding up to NUMARRAY_CHUNK.
 * - current array size is stored in ti->data
 *
 */

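/*
 * Illustrative usage sketch (table name and values are made up):
 * number tables map plain 32-bit keys to values, e.g.
 *
 *   ipfw table NT create type number
 *   ipfw table NT add 2210 100
 *
 * and the runtime state is just the sorted numarray below, searched
 * with bsearch() and resized with the same badd()/bdel() helpers as
 * the interface algorithm.
 */
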
struct numarray {
	uint32_t	number;
	uint32_t	value;
};

struct numarray_cfg {
	void	*main_ptr;
	size_t	size;	/* Number of items allocated in array */
	size_t	used;	/* Number of items _active_ now */
};

struct ta_buf_numarray
{
	struct numarray na;
};

int compare_numarray(const void *k, const void *v);
static struct numarray *numarray_find(struct table_info *ti, void *key);
static int ta_lookup_numarray(struct table_info *ti, void *key,
    uint32_t keylen, uint32_t *val);
static int ta_init_numarray(struct ip_fw_chain *ch, void **ta_state,
    struct table_info *ti, char *data, uint8_t tflags);
static void ta_destroy_numarray(void *ta_state, struct table_info *ti);
static void ta_dump_numarray_tinfo(void *ta_state, struct table_info *ti,
    ipfw_ta_tinfo *tinfo);
static int ta_prepare_add_numarray(struct ip_fw_chain *ch,
    struct tentry_info *tei, void *ta_buf);
static int ta_add_numarray(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static int ta_del_numarray(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static void ta_flush_numarray_entry(struct ip_fw_chain *ch,
    struct tentry_info *tei, void *ta_buf);
static int ta_need_modify_numarray(void *ta_state, struct table_info *ti,
    uint32_t count, uint64_t *pflags);
static int ta_prepare_mod_numarray(void *ta_buf, uint64_t *pflags);
static int ta_fill_mod_numarray(void *ta_state, struct table_info *ti,
    void *ta_buf, uint64_t *pflags);
static void ta_modify_numarray(void *ta_state, struct table_info *ti,
    void *ta_buf, uint64_t pflags);
static void ta_flush_mod_numarray(void *ta_buf);
static int ta_dump_numarray_tentry(void *ta_state, struct table_info *ti,
    void *e, ipfw_obj_tentry *tent);
static int ta_find_numarray_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent);
static void ta_foreach_numarray(void *ta_state, struct table_info *ti,
    ta_foreach_f *f, void *arg);

int
compare_numarray(const void *k, const void *v)
{
	const struct numarray *na;
	uint32_t key;

	key = *((const uint32_t *)k);
	na = (const struct numarray *)v;

	if (key < na->number)
		return (-1);
	else if (key > na->number)
		return (1);

	return (0);
}

static struct numarray *
numarray_find(struct table_info *ti, void *key)
{
	struct numarray *ri;

	ri = bsearch(key, ti->state, ti->data, sizeof(struct numarray),
	    compare_numarray);

	return (ri);
}

static int
ta_lookup_numarray(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val)
{
	struct numarray *ri;

	ri = numarray_find(ti, key);

	if (ri != NULL) {
		*val = ri->value;
		return (1);
	}

	return (0);
}

static int
ta_init_numarray(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
	struct numarray_cfg *cfg;

	cfg = malloc(sizeof(*cfg), M_IPFW, M_WAITOK | M_ZERO);

	cfg->size = 16;
	cfg->main_ptr = malloc(sizeof(struct numarray) * cfg->size, M_IPFW,
	    M_WAITOK | M_ZERO);

	*ta_state = cfg;
	ti->state = cfg->main_ptr;
	ti->lookup = ta_lookup_numarray;

	return (0);
}

/*
 * Destroys table @ti
 */
static void
ta_destroy_numarray(void *ta_state, struct table_info *ti)
{
	struct numarray_cfg *cfg;

	cfg = (struct numarray_cfg *)ta_state;

	if (cfg->main_ptr != NULL)
		free(cfg->main_ptr, M_IPFW);

	free(cfg, M_IPFW);
}

/*
 * Provide algo-specific table info
 */
static void
ta_dump_numarray_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{
	struct numarray_cfg *cfg;

	cfg = (struct numarray_cfg *)ta_state;

	tinfo->taclass4 = IPFW_TACLASS_ARRAY;
	tinfo->size4 = cfg->size;
	tinfo->count4 = cfg->used;
	tinfo->itemsize4 = sizeof(struct numarray);
}

/*
 * Prepare for addition/deletion to an array.
 */
static int
ta_prepare_add_numarray(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
	struct ta_buf_numarray *tb;

	tb = (struct ta_buf_numarray *)ta_buf;

	tb->na.number = *((uint32_t *)tei->paddr);

	return (0);
}

static int
ta_add_numarray(void *ta_state, struct table_info *ti, struct tentry_info *tei,
    void *ta_buf, uint32_t *pnum)
{
	struct numarray_cfg *cfg;
	struct ta_buf_numarray *tb;
	struct numarray *ri;
	int res;
	uint32_t value;

	tb = (struct ta_buf_numarray *)ta_buf;
	cfg = (struct numarray_cfg *)ta_state;

	/* Read current value from @tei */
	tb->na.value = tei->value;

	ri = numarray_find(ti, &tb->na.number);

	if (ri != NULL) {
		if ((tei->flags & TEI_FLAGS_UPDATE) == 0)
			return (EEXIST);

		/* Exchange values between ri and @tei */
		value = ri->value;
		ri->value = tei->value;
		tei->value = value;

		/* Indicate that update has happened instead of addition */
		tei->flags |= TEI_FLAGS_UPDATED;
		*pnum = 0;
		return (0);
	}

	if ((tei->flags & TEI_FLAGS_DONTADD) != 0)
		return (EFBIG);

	res = badd(&tb->na.number, &tb->na, cfg->main_ptr, cfg->used,
	    sizeof(struct numarray), compare_numarray);

	KASSERT(res == 1, ("number %d already exists", tb->na.number));
	cfg->used++;
	ti->data = cfg->used;
	*pnum = 1;

	return (0);
}

|
|
/*
|
|
|
|
* Remove key from both configuration list and
|
|
|
|
* runtime array. Removed interface notification.
|
|
|
|
*/
|
|
|
|
static int
|
|
|
|
ta_del_numarray(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
2014-07-30 14:52:26 +00:00
|
|
|
{
|
|
|
|
struct numarray_cfg *cfg;
|
|
|
|
struct ta_buf_numarray *tb;
|
|
|
|
struct numarray *ri;
|
|
|
|
int res;
|
|
|
|
|
|
|
|
tb = (struct ta_buf_numarray *)ta_buf;
|
|
|
|
cfg = (struct numarray_cfg *)ta_state;
|
|
|
|
|
|
|
|
ri = numarray_find(ti, &tb->na.number);
|
|
|
|
if (ri == NULL)
|
|
|
|
return (ENOENT);
|
2014-08-03 08:32:54 +00:00
|
|
|
|
|
|
|
tei->value = ri->value;
|
2014-07-30 14:52:26 +00:00
|
|
|
|
|
|
|
res = bdel(&tb->na.number, cfg->main_ptr, cfg->used,
|
|
|
|
sizeof(struct numarray), compare_numarray);
|
|
|
|
|
|
|
|
KASSERT(res == 1, ("number %u does not exist", tb->na.number));
|
|
|
|
cfg->used--;
|
|
|
|
ti->data = cfg->used;
|
|
|
|
*pnum = 1;
|
|
|
|
|
|
|
|
return (0);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void
ta_flush_numarray_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{

	/* We don't have any state, do nothing */
}

/*
 * Table growing callbacks.
 */

static int
ta_need_modify_numarray(void *ta_state, struct table_info *ti, uint32_t count,
    uint64_t *pflags)
{
	struct numarray_cfg *cfg;
	size_t size;

	cfg = (struct numarray_cfg *)ta_state;

	size = cfg->size;
	while (size < cfg->used + count)
		size *= 2;

	if (size != cfg->size) {
		*pflags = size;
		return (1);
	}

	return (0);
}

/*
 * Allocate new, larger runtime array.
 */
static int
ta_prepare_mod_numarray(void *ta_buf, uint64_t *pflags)
{
	struct mod_item *mi;

	mi = (struct mod_item *)ta_buf;

	memset(mi, 0, sizeof(struct mod_item));
	mi->size = *pflags;
	mi->main_ptr = malloc(sizeof(struct numarray) * mi->size, M_IPFW,
	    M_WAITOK | M_ZERO);

	return (0);
}

/*
 * Copy data from old runtime array to new one.
 */
static int
ta_fill_mod_numarray(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t *pflags)
{
	struct mod_item *mi;
	struct numarray_cfg *cfg;

	mi = (struct mod_item *)ta_buf;
	cfg = (struct numarray_cfg *)ta_state;

	/* Check if we still need to grow array */
	if (cfg->size >= mi->size) {
		*pflags = 0;
		return (0);
	}

	memcpy(mi->main_ptr, cfg->main_ptr, cfg->used * sizeof(struct numarray));

	return (0);
}

/*
 * Switch old & new arrays.
 */
static void
ta_modify_numarray(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t pflags)
{
	struct mod_item *mi;
	struct numarray_cfg *cfg;
	void *old_ptr;

	mi = (struct mod_item *)ta_buf;
	cfg = (struct numarray_cfg *)ta_state;

	old_ptr = cfg->main_ptr;
	cfg->main_ptr = mi->main_ptr;
	cfg->size = mi->size;
	ti->state = cfg->main_ptr;

	mi->main_ptr = old_ptr;
}

/*
 * Free unneeded array.
 */
static void
ta_flush_mod_numarray(void *ta_buf)
{
	struct mod_item *mi;

	mi = (struct mod_item *)ta_buf;
	if (mi->main_ptr != NULL)
		free(mi->main_ptr, M_IPFW);
}

static int
ta_dump_numarray_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
	struct numarray *na;

	na = (struct numarray *)e;

	tent->k.key = na->number;
	tent->v.kidx = na->value;

	return (0);
}

static int
ta_find_numarray_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent)
{
	struct numarray_cfg *cfg;
	struct numarray *ri;

	cfg = (struct numarray_cfg *)ta_state;

	ri = numarray_find(ti, &tent->k.key);

	if (ri != NULL) {
		ta_dump_numarray_tentry(ta_state, ti, ri, tent);
		return (0);
	}

	return (ENOENT);
}

static void
ta_foreach_numarray(void *ta_state, struct table_info *ti, ta_foreach_f *f,
    void *arg)
{
	struct numarray_cfg *cfg;
	struct numarray *array;
	int i;

	cfg = (struct numarray_cfg *)ta_state;
	array = cfg->main_ptr;

	for (i = 0; i < cfg->used; i++)
		f(&array[i], arg);
}

struct table_algo number_array = {
	.name		= "number:array",
	.type		= IPFW_TABLE_NUMBER,
	.ta_buf_size	= sizeof(struct ta_buf_numarray),
	.init		= ta_init_numarray,
	.destroy	= ta_destroy_numarray,
	.prepare_add	= ta_prepare_add_numarray,
	.prepare_del	= ta_prepare_add_numarray,
	.add		= ta_add_numarray,
	.del		= ta_del_numarray,
	.flush_entry	= ta_flush_numarray_entry,
	.foreach	= ta_foreach_numarray,
	.dump_tentry	= ta_dump_numarray_tentry,
	.find_tentry	= ta_find_numarray_tentry,
	.dump_tinfo	= ta_dump_numarray_tinfo,
	.need_modify	= ta_need_modify_numarray,
	.prepare_mod	= ta_prepare_mod_numarray,
	.fill_mod	= ta_fill_mod_numarray,
	.modify		= ta_modify_numarray,
	.flush_mod	= ta_flush_mod_numarray,
};

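/*
 * Resize sketch for number:array (a reading of the callbacks wired up
 * above, not a verbatim quote of the framework contract): need_modify
 * reports the desired size via pflags, prepare_mod allocates the new
 * array, fill_mod copies the live entries, modify swaps the pointers,
 * and flush_mod releases the old storage.
 */
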
/*
 * flow:hash cmds
 *
 *
 * ti->data:
 * [inv.mask4][inv.mask6][log2hsize4][log2hsize6]
 * [        8][        8][         8][         8]
 *
 * inv.mask4: 32 - mask
 * inv.mask6:
 * 1) _slow lookup: mask
 * 2) _aligned: (128 - mask) / 8
 * 3) _64: 8
 *
 *
 * pflags:
 * [hsize4][hsize6]
 * [    16][    16]
 */

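/*
 * One plausible reading of the pflags layout above (hypothetical helper,
 * not used anywhere in this file): the IPv4 hash size occupies the upper
 * half-word and the IPv6 hash size the lower one, e.g.
 *
 *	pflags = ((uint64_t)hsize4 << 16) | hsize6;
 */
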
struct fhashentry;

SLIST_HEAD(fhashbhead, fhashentry);

struct fhashentry {
	SLIST_ENTRY(fhashentry)	next;
	uint8_t		af;
	uint8_t		proto;
	uint16_t	spare0;
	uint16_t	dport;
	uint16_t	sport;
	uint32_t	value;
	uint32_t	spare1;
};

struct fhashentry4 {
	struct fhashentry	e;
	struct in_addr		dip;
	struct in_addr		sip;
};

struct fhashentry6 {
	struct fhashentry	e;
	struct in6_addr		dip6;
	struct in6_addr		sip6;
};

struct fhash_cfg {
	struct fhashbhead	*head;
	size_t			size;
	size_t			items;
	struct fhashentry4	fe4;
	struct fhashentry6	fe6;
};

struct ta_buf_fhash {
	void	*ent_ptr;
	struct fhashentry6 fe6;
};

static __inline int cmp_flow_ent(struct fhashentry *a,
    struct fhashentry *b, size_t sz);
static __inline uint32_t hash_flow4(struct fhashentry4 *f, int hsize);
static __inline uint32_t hash_flow6(struct fhashentry6 *f, int hsize);
static uint32_t hash_flow_ent(struct fhashentry *ent, uint32_t size);
static int ta_lookup_fhash(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val);
static int ta_init_fhash(struct ip_fw_chain *ch, void **ta_state,
    struct table_info *ti, char *data, uint8_t tflags);
static void ta_destroy_fhash(void *ta_state, struct table_info *ti);
static void ta_dump_fhash_tinfo(void *ta_state, struct table_info *ti,
    ipfw_ta_tinfo *tinfo);
static int ta_dump_fhash_tentry(void *ta_state, struct table_info *ti,
    void *e, ipfw_obj_tentry *tent);
static int tei_to_fhash_ent(struct tentry_info *tei, struct fhashentry *ent);
static int ta_find_fhash_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent);
static void ta_foreach_fhash(void *ta_state, struct table_info *ti,
    ta_foreach_f *f, void *arg);
static int ta_prepare_add_fhash(struct ip_fw_chain *ch,
    struct tentry_info *tei, void *ta_buf);
static int ta_add_fhash(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static int ta_prepare_del_fhash(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_del_fhash(void *ta_state, struct table_info *ti,
    struct tentry_info *tei, void *ta_buf, uint32_t *pnum);
static void ta_flush_fhash_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf);
static int ta_need_modify_fhash(void *ta_state, struct table_info *ti,
    uint32_t count, uint64_t *pflags);
static int ta_prepare_mod_fhash(void *ta_buf, uint64_t *pflags);
static int ta_fill_mod_fhash(void *ta_state, struct table_info *ti,
    void *ta_buf, uint64_t *pflags);
static void ta_modify_fhash(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t pflags);
static void ta_flush_mod_fhash(void *ta_buf);

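/*
 * Key comparison below relies on the fhashentry layout: the 8 bytes
 * directly after the SLIST_ENTRY (af, proto, spare0, dport, sport) are
 * compared as a single uint64_t, while the addresses stored past the base
 * structure (2 * 4 bytes for IPv4, 2 * 16 bytes for IPv6) are compared
 * with memcmp(); the value field is deliberately left out of the key.
 */
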
static __inline int
cmp_flow_ent(struct fhashentry *a, struct fhashentry *b, size_t sz)
{
	uint64_t *ka, *kb;

	ka = (uint64_t *)(&a->next + 1);
	kb = (uint64_t *)(&b->next + 1);

	if (*ka == *kb && (memcmp(a + 1, b + 1, sz) == 0))
		return (1);

	return (0);
}

static __inline uint32_t
hash_flow4(struct fhashentry4 *f, int hsize)
{
	uint32_t i;

	i = (f->dip.s_addr) ^ (f->sip.s_addr) ^ (f->e.dport) ^ (f->e.sport);

	return (i % (hsize - 1));
}

static __inline uint32_t
hash_flow6(struct fhashentry6 *f, int hsize)
{
	uint32_t i;

	i = (f->dip6.__u6_addr.__u6_addr32[2]) ^
	    (f->dip6.__u6_addr.__u6_addr32[3]) ^
	    (f->sip6.__u6_addr.__u6_addr32[2]) ^
	    (f->sip6.__u6_addr.__u6_addr32[3]) ^
	    (f->e.dport) ^ (f->e.sport);

	return (i % (hsize - 1));
}

static uint32_t
hash_flow_ent(struct fhashentry *ent, uint32_t size)
{
	uint32_t hash;

	if (ent->af == AF_INET) {
		hash = hash_flow4((struct fhashentry4 *)ent, size);
	} else {
		hash = hash_flow6((struct fhashentry6 *)ent, size);
	}

	return (hash);
}

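/*
 * Observation on the hash functions above: the XOR-folded key is reduced
 * modulo (hsize - 1), so bucket indices fall in [0, hsize - 2] and the
 * last allocated bucket of the array is not populated by these paths.
 */
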
static int
ta_lookup_fhash(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val)
{
	struct fhashbhead *head;
	struct fhashentry *ent;
	struct fhashentry4 *m4;
	struct ipfw_flow_id *id;
	uint16_t hash, hsize;

	id = (struct ipfw_flow_id *)key;
	head = (struct fhashbhead *)ti->state;
	hsize = ti->data;
	m4 = (struct fhashentry4 *)ti->xstate;

	if (id->addr_type == 4) {
		struct fhashentry4 f;

		/* Copy hash mask */
		f = *m4;

		f.dip.s_addr &= id->dst_ip;
		f.sip.s_addr &= id->src_ip;
		f.e.dport &= id->dst_port;
		f.e.sport &= id->src_port;
		f.e.proto &= id->proto;
		hash = hash_flow4(&f, hsize);
		SLIST_FOREACH(ent, &head[hash], next) {
			if (cmp_flow_ent(ent, &f.e, 2 * 4) != 0) {
				*val = ent->value;
				return (1);
			}
		}
	} else if (id->addr_type == 6) {
		struct fhashentry6 f;
		uint64_t *fp, *idp;

		/* Copy hash mask */
		f = *((struct fhashentry6 *)(m4 + 1));

		/* Handle lack of __u6_addr.__u6_addr64 */
		fp = (uint64_t *)&f.dip6;
		idp = (uint64_t *)&id->dst_ip6;
		/* src IPv6 is stored after dst IPv6 */
		*fp++ &= *idp++;
		*fp++ &= *idp++;
		*fp++ &= *idp++;
		*fp &= *idp;
		f.e.dport &= id->dst_port;
		f.e.sport &= id->src_port;
		f.e.proto &= id->proto;
		hash = hash_flow6(&f, hsize);
		SLIST_FOREACH(ent, &head[hash], next) {
			if (cmp_flow_ent(ent, &f.e, 2 * 16) != 0) {
				*val = ent->value;
				return (1);
			}
		}
	}

	return (0);
}

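/*
 * In both branches above the packet fields are ANDed with the per-table
 * masks prepared by ta_init_fhash(), so fields that are not part of the
 * configured flow type collapse to zero before hashing and comparison.
 */
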
/*
 * New table.
 */
static int
ta_init_fhash(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
	int i;
	struct fhash_cfg *cfg;
	struct fhashentry4 *fe4;
	struct fhashentry6 *fe6;

	cfg = malloc(sizeof(struct fhash_cfg), M_IPFW, M_WAITOK | M_ZERO);

	cfg->size = 512;

	cfg->head = malloc(sizeof(struct fhashbhead) * cfg->size, M_IPFW,
	    M_WAITOK | M_ZERO);
	for (i = 0; i < cfg->size; i++)
		SLIST_INIT(&cfg->head[i]);

	/* Fill in fe masks based on @tflags */
	fe4 = &cfg->fe4;
	fe6 = &cfg->fe6;
	if (tflags & IPFW_TFFLAG_SRCIP) {
		memset(&fe4->sip, 0xFF, sizeof(fe4->sip));
		memset(&fe6->sip6, 0xFF, sizeof(fe6->sip6));
	}
	if (tflags & IPFW_TFFLAG_DSTIP) {
		memset(&fe4->dip, 0xFF, sizeof(fe4->dip));
		memset(&fe6->dip6, 0xFF, sizeof(fe6->dip6));
	}
	if (tflags & IPFW_TFFLAG_SRCPORT) {
		memset(&fe4->e.sport, 0xFF, sizeof(fe4->e.sport));
		memset(&fe6->e.sport, 0xFF, sizeof(fe6->e.sport));
	}
	if (tflags & IPFW_TFFLAG_DSTPORT) {
		memset(&fe4->e.dport, 0xFF, sizeof(fe4->e.dport));
		memset(&fe6->e.dport, 0xFF, sizeof(fe6->e.dport));
	}
	if (tflags & IPFW_TFFLAG_PROTO) {
		memset(&fe4->e.proto, 0xFF, sizeof(fe4->e.proto));
		memset(&fe6->e.proto, 0xFF, sizeof(fe6->e.proto));
	}

	fe4->e.af = AF_INET;
	fe6->e.af = AF_INET6;

	*ta_state = cfg;
	ti->state = cfg->head;
	ti->xstate = &cfg->fe4;
	ti->data = cfg->size;
	ti->lookup = ta_lookup_fhash;

	return (0);
}

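/*
 * Example (hypothetical table, matching the mask setup above): a table
 * created as "flow:src-ip,proto,dst-port" sets IPFW_TFFLAG_SRCIP,
 * IPFW_TFFLAG_PROTO and IPFW_TFFLAG_DSTPORT, so only sip/sip6, proto and
 * dport receive 0xFF masks; the remaining fields keep the zero mask left
 * by M_ZERO and are therefore cleared from the packet key before hashing
 * and comparison.
 */
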
static void
ta_destroy_fhash(void *ta_state, struct table_info *ti)
{
	struct fhash_cfg *cfg;
	struct fhashentry *ent, *ent_next;
	int i;

	cfg = (struct fhash_cfg *)ta_state;

	for (i = 0; i < cfg->size; i++)
		SLIST_FOREACH_SAFE(ent, &cfg->head[i], next, ent_next)
			free(ent, M_IPFW_TBL);

	free(cfg->head, M_IPFW);
	free(cfg, M_IPFW);
}

/*
 * Provide algo-specific table info.
 */
static void
ta_dump_fhash_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{
	struct fhash_cfg *cfg;

	cfg = (struct fhash_cfg *)ta_state;

	tinfo->flags = IPFW_TATFLAGS_AFITEM;
	tinfo->taclass4 = IPFW_TACLASS_HASH;
	tinfo->size4 = cfg->size;
	tinfo->count4 = cfg->items;
	tinfo->itemsize4 = sizeof(struct fhashentry4);
	tinfo->itemsize6 = sizeof(struct fhashentry6);
}

static int
ta_dump_fhash_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
	struct fhash_cfg *cfg;
	struct fhashentry *ent;
	struct fhashentry4 *fe4;
#ifdef INET6
	struct fhashentry6 *fe6;
#endif
	struct tflow_entry *tfe;

	cfg = (struct fhash_cfg *)ta_state;
	ent = (struct fhashentry *)e;
	tfe = &tent->k.flow;

	tfe->af = ent->af;
	tfe->proto = ent->proto;
	tfe->dport = htons(ent->dport);
	tfe->sport = htons(ent->sport);
	tent->v.kidx = ent->value;
	tent->subtype = ent->af;

	if (ent->af == AF_INET) {
		fe4 = (struct fhashentry4 *)ent;
		tfe->a.a4.sip.s_addr = htonl(fe4->sip.s_addr);
		tfe->a.a4.dip.s_addr = htonl(fe4->dip.s_addr);
		tent->masklen = 32;
#ifdef INET6
	} else {
		fe6 = (struct fhashentry6 *)ent;
		tfe->a.a6.sip6 = fe6->sip6;
		tfe->a.a6.dip6 = fe6->dip6;
		tent->masklen = 128;
#endif
	}

	return (0);
}

static int
tei_to_fhash_ent(struct tentry_info *tei, struct fhashentry *ent)
{
#ifdef INET
	struct fhashentry4 *fe4;
#endif
#ifdef INET6
	struct fhashentry6 *fe6;
#endif
	struct tflow_entry *tfe;

	tfe = (struct tflow_entry *)tei->paddr;

	ent->af = tei->subtype;
	ent->proto = tfe->proto;
	ent->dport = ntohs(tfe->dport);
	ent->sport = ntohs(tfe->sport);

	if (tei->subtype == AF_INET) {
#ifdef INET
		fe4 = (struct fhashentry4 *)ent;
		fe4->sip.s_addr = ntohl(tfe->a.a4.sip.s_addr);
		fe4->dip.s_addr = ntohl(tfe->a.a4.dip.s_addr);
#endif
#ifdef INET6
	} else if (tei->subtype == AF_INET6) {
		fe6 = (struct fhashentry6 *)ent;
		fe6->sip6 = tfe->a.a6.sip6;
		fe6->dip6 = tfe->a.a6.dip6;
#endif
	} else {
		/* Unknown CIDR type */
		return (EINVAL);
	}

	return (0);
}

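/*
 * The ntohs()/ntohl() calls here mirror the htons()/htonl() calls in
 * ta_dump_fhash_tentry(): entries are kept in host byte order inside the
 * table, while the exported ipfw_obj_tentry carries ports and IPv4
 * addresses in network byte order.
 */
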
static int
ta_find_fhash_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent)
{
	struct fhash_cfg *cfg;
	struct fhashbhead *head;
	struct fhashentry *ent, *tmp;
	struct fhashentry6 fe6;
	struct tentry_info tei;
	int error;
	uint32_t hash;
	size_t sz;

	cfg = (struct fhash_cfg *)ta_state;

	ent = &fe6.e;

	memset(&fe6, 0, sizeof(fe6));
	memset(&tei, 0, sizeof(tei));

	tei.paddr = &tent->k.flow;
	tei.subtype = tent->subtype;

	if ((error = tei_to_fhash_ent(&tei, ent)) != 0)
		return (error);

	head = cfg->head;
	hash = hash_flow_ent(ent, cfg->size);

	if (tei.subtype == AF_INET)
		sz = 2 * sizeof(struct in_addr);
	else
		sz = 2 * sizeof(struct in6_addr);

	/* Check for existence */
	SLIST_FOREACH(tmp, &head[hash], next) {
		if (cmp_flow_ent(tmp, ent, sz) != 0) {
			ta_dump_fhash_tentry(ta_state, ti, tmp, tent);
			return (0);
		}
	}

	return (ENOENT);
}

static void
ta_foreach_fhash(void *ta_state, struct table_info *ti, ta_foreach_f *f,
    void *arg)
{
	struct fhash_cfg *cfg;
	struct fhashentry *ent, *ent_next;
	int i;

	cfg = (struct fhash_cfg *)ta_state;

	for (i = 0; i < cfg->size; i++)
		SLIST_FOREACH_SAFE(ent, &cfg->head[i], next, ent_next)
			f(ent, arg);
}

static int
ta_prepare_add_fhash(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
	struct ta_buf_fhash *tb;
	struct fhashentry *ent;
	size_t sz;
	int error;

	tb = (struct ta_buf_fhash *)ta_buf;

	if (tei->subtype == AF_INET)
		sz = sizeof(struct fhashentry4);
	else if (tei->subtype == AF_INET6)
		sz = sizeof(struct fhashentry6);
	else
		return (EINVAL);

	ent = malloc(sz, M_IPFW_TBL, M_WAITOK | M_ZERO);

	error = tei_to_fhash_ent(tei, ent);
	if (error != 0) {
		free(ent, M_IPFW_TBL);
		return (error);
	}
	tb->ent_ptr = ent;

	return (0);
}

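/*
 * As with the other ta_prepare_* callbacks in this file, the entry is
 * allocated and converted here, ahead of the add itself, so ta_add_fhash()
 * only has to link a ready-made entry (a reading of the prepare/add split;
 * the locking contract is defined elsewhere in the table code).
 */
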
static int
|
|
|
|
ta_add_fhash(void *ta_state, struct table_info *ti, struct tentry_info *tei,
|
2014-08-02 17:18:47 +00:00
|
|
|
void *ta_buf, uint32_t *pnum)
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
{
|
|
|
|
struct fhash_cfg *cfg;
|
|
|
|
struct fhashbhead *head;
|
|
|
|
struct fhashentry *ent, *tmp;
|
|
|
|
struct ta_buf_fhash *tb;
|
|
|
|
int exists;
|
2014-08-03 08:32:54 +00:00
|
|
|
uint32_t hash, value;
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
size_t sz;
|
|
|
|
|
|
|
|
cfg = (struct fhash_cfg *)ta_state;
|
|
|
|
tb = (struct ta_buf_fhash *)ta_buf;
|
|
|
|
ent = (struct fhashentry *)tb->ent_ptr;
|
|
|
|
exists = 0;
|
|
|
|
|
2014-08-30 17:18:11 +00:00
|
|
|
/* Read current value from @tei */
|
|
|
|
ent->value = tei->value;
|
|
|
|
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
head = cfg->head;
|
|
|
|
hash = hash_flow_ent(ent, cfg->size);
|
|
|
|
|
|
|
|
if (tei->subtype == AF_INET)
|
|
|
|
sz = 2 * sizeof(struct in_addr);
|
|
|
|
else
|
|
|
|
sz = 2 * sizeof(struct in6_addr);
|
|
|
|
|
|
|
|
/* Check for existence */
|
|
|
|
SLIST_FOREACH(tmp, &head[hash], next) {
|
|
|
|
if (cmp_flow_ent(tmp, ent, sz) != 0) {
|
|
|
|
exists = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (exists == 1) {
|
|
|
|
if ((tei->flags & TEI_FLAGS_UPDATE) == 0)
|
|
|
|
return (EEXIST);
|
|
|
|
/* Record already exists. Update value if we're asked to */
|
2014-08-03 08:32:54 +00:00
|
|
|
/* Exchange values between tmp and @tei */
|
|
|
|
value = tmp->value;
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
tmp->value = tei->value;
|
2014-08-03 08:32:54 +00:00
|
|
|
tei->value = value;
|
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
                /* Indicate that update has happened instead of addition */
                tei->flags |= TEI_FLAGS_UPDATED;
                *pnum = 0;
        } else {
                if ((tei->flags & TEI_FLAGS_DONTADD) != 0)
                        return (EFBIG);

* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
                SLIST_INSERT_HEAD(&head[hash], ent, next);
                tb->ent_ptr = NULL;
                *pnum = 1;

                /* Update counters and check if we need to grow hash */
                cfg->items++;
        }

        return (0);
}
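
/*
 * Illustrative sketch (not compiled): the add path above is an
 * insert-or-update over a single SLIST chain selected by the hash value.
 * A minimal userland analogue, assuming only <sys/queue.h>; the names
 * toy_ent/toy_head/toy_put are hypothetical and not part of this module.
 */
#if 0
#include <sys/queue.h>
#include <stdlib.h>

struct toy_ent {
        SLIST_ENTRY(toy_ent) next;
        int key;
        int value;
};
SLIST_HEAD(toy_head, toy_ent);

/* Insert @key/@value into bucket @h, or update the value if @key exists. */
static int
toy_put(struct toy_head *h, int key, int value)
{
        struct toy_ent *tmp, *ent;

        SLIST_FOREACH(tmp, h, next) {
                if (tmp->key == key) {
                        tmp->value = value;     /* update in place */
                        return (0);
                }
        }
        ent = calloc(1, sizeof(*ent));
        if (ent == NULL)
                return (-1);
        ent->key = key;
        ent->value = value;
        SLIST_INSERT_HEAD(h, ent, next);        /* link new entry */
        return (1);
}
#endif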

static int
ta_prepare_del_fhash(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
        struct ta_buf_fhash *tb;

        tb = (struct ta_buf_fhash *)ta_buf;

        return (tei_to_fhash_ent(tei, &tb->fe6.e));
}

static int
ta_del_fhash(void *ta_state, struct table_info *ti, struct tentry_info *tei,
    void *ta_buf, uint32_t *pnum)
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
{
        struct fhash_cfg *cfg;
        struct fhashbhead *head;
        struct fhashentry *ent, *tmp;
        struct ta_buf_fhash *tb;
        uint32_t hash;
        size_t sz;

        cfg = (struct fhash_cfg *)ta_state;
        tb = (struct ta_buf_fhash *)ta_buf;
        ent = &tb->fe6.e;

        head = cfg->head;
        hash = hash_flow_ent(ent, cfg->size);

        if (tei->subtype == AF_INET)
                sz = 2 * sizeof(struct in_addr);
        else
                sz = 2 * sizeof(struct in6_addr);

        /* Check for existence */
        SLIST_FOREACH(tmp, &head[hash], next) {
                if (cmp_flow_ent(tmp, ent, sz) == 0)
                        continue;

                SLIST_REMOVE(&head[hash], tmp, fhashentry, next);
                tei->value = tmp->value;
                *pnum = 1;
                cfg->items--;
                tb->ent_ptr = tmp;
                return (0);
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
        }

        return (ENOENT);
}
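
/*
 * Illustrative sketch (not compiled): deletion walks the same chain and
 * unlinks the matching entry with SLIST_REMOVE(); the kernel code above
 * hands the unlinked entry back via tb->ent_ptr so it can be freed outside
 * the runtime lock.  Userland analogue, reusing the hypothetical toy_head
 * and toy_ent types from the sketch after ta_add_fhash():
 */
#if 0
static struct toy_ent *
toy_del(struct toy_head *h, int key)
{
        struct toy_ent *tmp;

        SLIST_FOREACH(tmp, h, next) {
                if (tmp->key != key)
                        continue;
                SLIST_REMOVE(h, tmp, toy_ent, next);
                return (tmp);           /* caller frees the entry */
        }
        return (NULL);
}
#endif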

static void
ta_flush_fhash_entry(struct ip_fw_chain *ch, struct tentry_info *tei,
    void *ta_buf)
{
        struct ta_buf_fhash *tb;

        tb = (struct ta_buf_fhash *)ta_buf;

        if (tb->ent_ptr != NULL)
                free(tb->ent_ptr, M_IPFW_TBL);
}

/*
 * Hash growing callbacks.
 */

static int
ta_need_modify_fhash(void *ta_state, struct table_info *ti, uint32_t count,
    uint64_t *pflags)
{
        struct fhash_cfg *cfg;

        cfg = (struct fhash_cfg *)ta_state;

        if (cfg->items > cfg->size && cfg->size < 65536) {
                *pflags = cfg->size * 2;
                return (1);
        }

        return (0);
}
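
/*
 * Sketch (illustrative only): the grow policy above requests a doubling of
 * the bucket count once the number of stored items exceeds the number of
 * buckets, capped at 64k buckets.  The same policy, standalone, with
 * hypothetical names:
 */
#if 0
#include <stdint.h>

static int
toy_need_grow(uint32_t items, uint32_t nbuckets, uint64_t *new_nbuckets)
{

        if (items > nbuckets && nbuckets < 65536) {
                *new_nbuckets = (uint64_t)nbuckets * 2;
                return (1);
        }
        return (0);
}
#endif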

/*
 * Allocate new, larger fhash.
 */
static int
ta_prepare_mod_fhash(void *ta_buf, uint64_t *pflags)
{
        struct mod_item *mi;
        struct fhashbhead *head;
        int i;

        mi = (struct mod_item *)ta_buf;

        memset(mi, 0, sizeof(struct mod_item));
        mi->size = *pflags;
        head = malloc(sizeof(struct fhashbhead) * mi->size, M_IPFW,
            M_WAITOK | M_ZERO);
        for (i = 0; i < mi->size; i++)
                SLIST_INIT(&head[i]);

        mi->main_ptr = head;

        return (0);
}
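
/*
 * Sketch (illustrative, userland analogue): allocating and initializing the
 * larger bucket array ahead of time keeps the heavy work outside the lock;
 * only the pointer swap in ta_modify_fhash() has to run locked.  Assumes
 * libc malloc and the hypothetical toy_head type from the earlier sketch.
 */
#if 0
static struct toy_head *
toy_alloc_buckets(size_t nbuckets)
{
        struct toy_head *head;
        size_t i;

        head = malloc(sizeof(*head) * nbuckets);
        if (head == NULL)
                return (NULL);
        for (i = 0; i < nbuckets; i++)
                SLIST_INIT(&head[i]);
        return (head);
}
#endif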

/*
 * Copy data from old runtime array to new one.
 */
static int
ta_fill_mod_fhash(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t *pflags)
{

        /* It is not possible to do a rehash if we're not holding WLOCK. */
        return (0);
}

/*
 * Switch old & new arrays.
 */
static void
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
ta_modify_fhash(void *ta_state, struct table_info *ti, void *ta_buf,
    uint64_t pflags)
{
        struct mod_item *mi;
        struct fhash_cfg *cfg;
        struct fhashbhead *old_head, *new_head;
        struct fhashentry *ent, *ent_next;
        int i;
        uint32_t nhash;
        size_t old_size;

        mi = (struct mod_item *)ta_buf;
        cfg = (struct fhash_cfg *)ta_state;

        old_size = cfg->size;
        old_head = ti->state;

        new_head = (struct fhashbhead *)mi->main_ptr;
        for (i = 0; i < old_size; i++) {
                SLIST_FOREACH_SAFE(ent, &old_head[i], next, ent_next) {
                        nhash = hash_flow_ent(ent, mi->size);
                        SLIST_INSERT_HEAD(&new_head[nhash], ent, next);
                }
        }

        ti->state = new_head;
        ti->data = mi->size;
        cfg->head = new_head;
        cfg->size = mi->size;

        mi->main_ptr = old_head;
}
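
/*
 * Sketch (illustrative, userland analogue of the rehash above): every entry
 * is moved by re-inserting it into the chain it hashes to in the larger
 * array; SLIST_FOREACH_SAFE() allows relinking while walking.  toy_head and
 * toy_ent are the hypothetical types from the earlier sketches and
 * toy_hash() stands in for hash_flow_ent().
 */
#if 0
static void
toy_rehash(struct toy_head *old_head, size_t old_n,
    struct toy_head *new_head, size_t new_n)
{
        struct toy_ent *ent, *ent_next;
        size_t i;

        for (i = 0; i < old_n; i++) {
                SLIST_FOREACH_SAFE(ent, &old_head[i], next, ent_next)
                        SLIST_INSERT_HEAD(&new_head[toy_hash(ent->key, new_n)],
                            ent, next);
        }
}
#endif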

/*
 * Free unneeded array.
 */
static void
ta_flush_mod_fhash(void *ta_buf)
{
        struct mod_item *mi;

        mi = (struct mod_item *)ta_buf;
        if (mi->main_ptr != NULL)
                free(mi->main_ptr, M_IPFW);
}

struct table_algo flow_hash = {
        .name = "flow:hash",
        .type = IPFW_TABLE_FLOW,
        .flags = TA_FLAG_DEFAULT,
        .ta_buf_size = sizeof(struct ta_buf_fhash),
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
        .init = ta_init_fhash,
        .destroy = ta_destroy_fhash,
        .prepare_add = ta_prepare_add_fhash,
        .prepare_del = ta_prepare_del_fhash,
        .add = ta_add_fhash,
        .del = ta_del_fhash,
        .flush_entry = ta_flush_fhash_entry,
        .foreach = ta_foreach_fhash,
        .dump_tentry = ta_dump_fhash_tentry,
        .find_tentry = ta_find_fhash_tentry,
        .dump_tinfo = ta_dump_fhash_tinfo,
        .need_modify = ta_need_modify_fhash,
* Add new "flow" table type to support N=1..5-tuple lookups
* Add "flow:hash" algorithm
Kernel changes:
* Add O_IP_FLOW_LOOKUP opcode to support "flow" lookups
* Add IPFW_TABLE_FLOW table type
* Add "struct tflow_entry" as strage for 6-tuple flows
* Add "flow:hash" algorithm. Basically it is auto-growing chained hash table.
Additionally, we store mask of fields we need to compare in each instance/
* Increase ipfw_obj_tentry size by adding struct tflow_entry
* Add per-algorithm stat (ifpw_ta_tinfo) to ipfw_xtable_info
* Increase algoname length: 32 -> 64 (algo options passed there as string)
* Assume every table type can be customized by flags, use u8 to store "tflags" field.
* Simplify ipfw_find_table_entry() by providing @tentry directly to algo callback.
* Fix bug in cidr:chash resize procedure.
Userland changes:
* add "flow table(NAME)" syntax to support n-tuple checking tables.
* make fill_flags() separate function to ease working with _s_x arrays
* change "table info" output to reflect longer "type" fields
Syntax:
ipfw table fl2 create type flow:[src-ip][,proto][,src-port][,dst-ip][dst-port] [algo flow:hash]
Examples:
0:02 [2] zfscurr0# ipfw table fl2 create type flow:src-ip,proto,dst-port algo flow:hash
0:02 [2] zfscurr0# ipfw table fl2 info
+++ table(fl2), set(0) +++
kindex: 0, type: flow:src-ip,proto,dst-port
valtype: number, references: 0
algorithm: flow:hash
items: 0, size: 280
0:02 [2] zfscurr0# ipfw table fl2 add 2a02:6b8::333,tcp,443 45000
0:02 [2] zfscurr0# ipfw table fl2 add 10.0.0.92,tcp,80 22000
0:02 [2] zfscurr0# ipfw table fl2 list
+++ table(fl2), set(0) +++
2a02:6b8::333,6,443 45000
10.0.0.92,6,80 22000
0:02 [2] zfscurr0# ipfw add 200 count tcp from me to 78.46.89.105 80 flow 'table(fl2)'
00200 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
0:03 [2] zfscurr0# ipfw show
00200 0 0 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 617 59416 allow ip from any to any
0:03 [2] zfscurr0# telnet -s 10.0.0.92 78.46.89.105 80
Trying 78.46.89.105...
..
0:04 [2] zfscurr0# ipfw show
00200 5 272 count tcp from me to 78.46.89.105 dst-port 80 flow table(fl2)
65535 682 66733 allow ip from any to any
2014-07-31 20:08:19 +00:00
|
|
|
        .prepare_mod = ta_prepare_mod_fhash,
        .fill_mod = ta_fill_mod_fhash,
        .modify = ta_modify_fhash,
        .flush_mod = ta_flush_mod_fhash,
};

/*
 * cidr:kfib algorithm: binds the kernel fib of a given number to a table.
 *
 * Example:
 * # ipfw table fib2 create algo "cidr:kfib fib=2"
 * # ipfw table fib2 info
 * +++ table(fib2), set(0) +++
 *  kindex: 2, type: cidr, locked
 *  valtype: number, references: 0
 *  algorithm: cidr:kfib fib=2
 *  items: 11, size: 288
 * # ipfw table fib2 list
 * +++ table(fib2), set(0) +++
 * 10.0.0.0/24 0
 * 127.0.0.1/32 0
 * ::/96 0
 * ::1/128 0
 * ::ffff:0.0.0.0/96 0
 * 2a02:978:2::/112 0
 * fe80::/10 0
 * fe80:1::/64 0
 * fe80:2::/64 0
 * fe80:3::/64 0
 * ff02::/16 0
 * # ipfw table fib2 lookup 10.0.0.5
 * 10.0.0.0/24 0
 * # ipfw table fib2 lookup 2a02:978:2::11
 * 2a02:978:2::/112 0
 * # ipfw table fib2 detail
 * +++ table(fib2), set(0) +++
 *  kindex: 2, type: cidr, locked
 *  valtype: number, references: 0
 *  algorithm: cidr:kfib fib=2
 *  items: 11, size: 288
 *  IPv4 algorithm radix info
 *   items: 0 itemsize: 200
 *  IPv6 algorithm radix info
 *   items: 0 itemsize: 200
 */

/*
 * Kernel fibs bindings.
 *
 * Implementation:
 *
 * Runtime part:
 * - fully relies on route API
 * - fib number is stored in ti->data
 *
 */

static int ta_lookup_kfib(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val);
static int kfib_parse_opts(int *pfib, char *data);
static void ta_print_kfib_config(void *ta_state, struct table_info *ti,
    char *buf, size_t bufsize);
static int ta_init_kfib(struct ip_fw_chain *ch, void **ta_state,
    struct table_info *ti, char *data, uint8_t tflags);
static void ta_destroy_kfib(void *ta_state, struct table_info *ti);
static void ta_dump_kfib_tinfo(void *ta_state, struct table_info *ti,
    ipfw_ta_tinfo *tinfo);
static int contigmask(uint8_t *p, int len);
static int ta_dump_kfib_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent);
static int ta_dump_kfib_tentry_int(struct sockaddr *paddr,
    struct sockaddr *pmask, ipfw_obj_tentry *tent);
static int ta_find_kfib_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent);
static void ta_foreach_kfib(void *ta_state, struct table_info *ti,
    ta_foreach_f *f, void *arg);

static int
ta_lookup_kfib(struct table_info *ti, void *key, uint32_t keylen,
    uint32_t *val)
{
#ifdef INET
        struct nhop4_basic nh4;
        struct in_addr in;
#endif
#ifdef INET6
        struct nhop6_basic nh6;
#endif
        int error;

        error = ENOENT;
#ifdef INET
        if (keylen == 4) {
                in.s_addr = *(in_addr_t *)key;
                error = fib4_lookup_nh_basic(ti->data,
                    in, 0, 0, &nh4);
        }
#endif
#ifdef INET6
        if (keylen == 6)
                error = fib6_lookup_nh_basic(ti->data,
                    (struct in6_addr *)key, 0, 0, 0, &nh6);
#endif

        if (error != 0)
                return (0);

        *val = 0;

        return (1);
}

/* Parse 'fib=%d' */
static int
kfib_parse_opts(int *pfib, char *data)
{
        char *pdel, *pend, *s;
        int fibnum;

        if (data == NULL)
                return (0);
        if ((pdel = strchr(data, ' ')) == NULL)
                return (0);
        while (*pdel == ' ')
                pdel++;
        if (strncmp(pdel, "fib=", 4) != 0)
                return (EINVAL);
        if ((s = strchr(pdel, ' ')) != NULL)
                *s++ = '\0';

        pdel += 4;
        /* Need \d+ */
        fibnum = strtol(pdel, &pend, 10);
        if (*pend != '\0')
                return (EINVAL);

        *pfib = fibnum;

        return (0);
}
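
/*
 * Sketch (illustrative): the option string handed to this algorithm looks
 * like "cidr:kfib fib=2"; the parser above skips the algorithm name, insists
 * on a "fib=" keyword and converts the decimal fib number.  A standalone
 * userland rendition of the same parsing; toy_parse_fib is a hypothetical
 * name.
 */
#if 0
#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int
toy_parse_fib(const char *data, int *pfib)
{
        const char *p;
        char *pend;
        long fibnum;

        if ((p = strchr(data, ' ')) == NULL)
                return (0);             /* no options: keep default fib 0 */
        while (*p == ' ')
                p++;
        if (strncmp(p, "fib=", 4) != 0)
                return (EINVAL);
        fibnum = strtol(p + 4, &pend, 10);
        if (*pend != '\0' && *pend != ' ')
                return (EINVAL);
        *pfib = (int)fibnum;
        return (0);
}
#endif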

static void
ta_print_kfib_config(void *ta_state, struct table_info *ti, char *buf,
    size_t bufsize)
{

        if (ti->data != 0)
                snprintf(buf, bufsize, "%s fib=%lu", "addr:kfib", ti->data);
        else
                snprintf(buf, bufsize, "%s", "addr:kfib");
}

static int
ta_init_kfib(struct ip_fw_chain *ch, void **ta_state, struct table_info *ti,
    char *data, uint8_t tflags)
{
        int error, fibnum;

        fibnum = 0;
        if ((error = kfib_parse_opts(&fibnum, data)) != 0)
                return (error);

        if (fibnum >= rt_numfibs)
                return (E2BIG);

        ti->data = fibnum;
        ti->lookup = ta_lookup_kfib;

        return (0);
}

/*
 * Destroys table @ti
 */
static void
ta_destroy_kfib(void *ta_state, struct table_info *ti)
{

}

/*
 * Provide algo-specific table info
 */
static void
ta_dump_kfib_tinfo(void *ta_state, struct table_info *ti, ipfw_ta_tinfo *tinfo)
{

        tinfo->flags = IPFW_TATFLAGS_AFDATA;
        tinfo->taclass4 = IPFW_TACLASS_RADIX;
        tinfo->count4 = 0;
        tinfo->itemsize4 = sizeof(struct rtentry);
        tinfo->taclass6 = IPFW_TACLASS_RADIX;
        tinfo->count6 = 0;
        tinfo->itemsize6 = sizeof(struct rtentry);
}

static int
contigmask(uint8_t *p, int len)
{
        int i, n;

        for (i = 0; i < len; i++)
                if ((p[i / 8] & (1 << (7 - (i % 8)))) == 0) /* first bit unset */
                        break;
        for (n = i + 1; n < len; n++)
                if ((p[n / 8] & (1 << (7 - (n % 8)))) != 0)
                        return (-1); /* mask not contiguous */
        return (i);
}
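
/*
 * Worked example (illustrative, not compiled): for the mask 255.255.255.0
 * the first loop above stops at bit 24 (the first clear bit) and the second
 * loop finds no set bit after it, so contigmask() returns a prefix length of
 * 24; a non-contiguous mask such as 255.0.255.0 returns -1.  If built
 * standalone next to the function above:
 */
#if 0
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
        uint8_t good[4] = { 0xff, 0xff, 0xff, 0x00 };
        uint8_t bad[4] = { 0xff, 0x00, 0xff, 0x00 };

        printf("%d\n", contigmask(good, 32));   /* prints 24 */
        printf("%d\n", contigmask(bad, 32));    /* prints -1 */
        return (0);
}
#endif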

static int
ta_dump_kfib_tentry(void *ta_state, struct table_info *ti, void *e,
    ipfw_obj_tentry *tent)
{
        struct rtentry *rte;

        rte = (struct rtentry *)e;

        return ta_dump_kfib_tentry_int(rt_key(rte), rt_mask(rte), tent);
}

static int
ta_dump_kfib_tentry_int(struct sockaddr *paddr, struct sockaddr *pmask,
    ipfw_obj_tentry *tent)
{
#ifdef INET
        struct sockaddr_in *addr, *mask;
#endif
#ifdef INET6
        struct sockaddr_in6 *addr6, *mask6;
#endif
        int len;

        len = 0;

        /* Guess IPv4/IPv6 radix by sockaddr family */
#ifdef INET
        if (paddr->sa_family == AF_INET) {
                addr = (struct sockaddr_in *)paddr;
                mask = (struct sockaddr_in *)pmask;
                tent->k.addr.s_addr = addr->sin_addr.s_addr;
                len = 32;
                if (mask != NULL)
                        len = contigmask((uint8_t *)&mask->sin_addr, 32);
                if (len == -1)
                        len = 0;
                tent->masklen = len;
                tent->subtype = AF_INET;
                tent->v.kidx = 0; /* Do we need to put GW here? */
        }
#endif
#ifdef INET6
        if (paddr->sa_family == AF_INET6) {
                addr6 = (struct sockaddr_in6 *)paddr;
                mask6 = (struct sockaddr_in6 *)pmask;
                memcpy(&tent->k, &addr6->sin6_addr, sizeof(struct in6_addr));
                len = 128;
                if (mask6 != NULL)
                        len = contigmask((uint8_t *)&mask6->sin6_addr, 128);
                if (len == -1)
                        len = 0;
                tent->masklen = len;
                tent->subtype = AF_INET6;
                tent->v.kidx = 0;
        }
#endif

        return (0);
}

static int
ta_find_kfib_tentry(void *ta_state, struct table_info *ti,
    ipfw_obj_tentry *tent)
{
        struct rt_addrinfo info;
        struct sockaddr_in6 key6, dst6, mask6;
        struct sockaddr *dst, *key, *mask;

        /* Prepare sockaddr for prefix/mask and info */
        bzero(&dst6, sizeof(dst6));
        dst6.sin6_len = sizeof(dst6);
        dst = (struct sockaddr *)&dst6;
        bzero(&mask6, sizeof(mask6));
        mask6.sin6_len = sizeof(mask6);
        mask = (struct sockaddr *)&mask6;

        bzero(&info, sizeof(info));
        info.rti_info[RTAX_DST] = dst;
        info.rti_info[RTAX_NETMASK] = mask;

        /* Prepare the lookup key */
        bzero(&key6, sizeof(key6));
        key6.sin6_family = tent->subtype;
        key = (struct sockaddr *)&key6;

        if (tent->subtype == AF_INET) {
                ((struct sockaddr_in *)&key6)->sin_addr = tent->k.addr;
                key6.sin6_len = sizeof(struct sockaddr_in);
        } else {
                key6.sin6_addr = tent->k.addr6;
                key6.sin6_len = sizeof(struct sockaddr_in6);
        }

        if (rib_lookup_info(ti->data, key, 0, 0, &info) != 0)
                return (ENOENT);
        if ((info.rti_addrs & RTA_NETMASK) == 0)
                mask = NULL;

        ta_dump_kfib_tentry_int(dst, mask, tent);

        return (0);
}

static void
ta_foreach_kfib(void *ta_state, struct table_info *ti, ta_foreach_f *f,
    void *arg)
{
        struct rib_head *rh;
        int error;

rh = rt_tables_get_rnh(ti->data, AF_INET);
|
|
|
|
if (rh != NULL) {
|
|
|
|
RIB_RLOCK(rh);
|
|
|
|
error = rh->rnh_walktree(&rh->head, (walktree_f_t *)f, arg);
|
|
|
|
RIB_RUNLOCK(rh);
|
* Add cidr:kfib algo type just for fun. It binds kernel fib
of given number to a table.
Example:
# ipfw table fib2 create algo "cidr:kfib fib=2"
# ipfw table fib2 info
+++ table(fib2), set(0) +++
kindex: 2, type: cidr, locked
valtype: number, references: 0
algorithm: cidr:kfib fib=2
items: 11, size: 288
# ipfw table fib2 list
+++ table(fib2), set(0) +++
10.0.0.0/24 0
127.0.0.1/32 0
::/96 0
::1/128 0
::ffff:0.0.0.0/96 0
2a02:978:2::/112 0
fe80::/10 0
fe80:1::/64 0
fe80:2::/64 0
fe80:3::/64 0
ff02::/16 0
# ipfw table fib2 lookup 10.0.0.5
10.0.0.0/24 0
# ipfw table fib2 lookup 2a02:978:2::11
2a02:978:2::/112 0
# ipfw table fib2 detail
+++ table(fib2), set(0) +++
kindex: 2, type: cidr, locked
valtype: number, references: 0
algorithm: cidr:kfib fib=2
items: 11, size: 288
IPv4 algorithm radix info
items: 0 itemsize: 200
IPv6 algorithm radix info
items: 0 itemsize: 200
2014-08-14 20:17:23 +00:00
|
|
|
}
|
|
|
|
|
MFP r287070,r287073: split radix implementation and route table structure.
There are number of radix consumers in kernel land (pf,ipfw,nfs,route)
with different requirements. In fact, first 3 don't have _any_ requirements
and first 2 does not use radix locking. On the other hand, routing
structure do have these requirements (rnh_gen, multipath, custom
to-be-added control plane functions, different locking).
Additionally, radix should not known anything about its consumers internals.
So, radix code now uses tiny 'struct radix_head' structure along with
internal 'struct radix_mask_head' instead of 'struct radix_node_head'.
Existing consumers still uses the same 'struct radix_node_head' with
slight modifications: they need to pass pointer to (embedded)
'struct radix_head' to all radix callbacks.
Routing code now uses new 'struct rib_head' with different locking macro:
RADIX_NODE_HEAD prefix was renamed to RIB_ (which stands for routing
information base).
New net/route_var.h header was added to hold routing subsystem internal
data. 'struct rib_head' was placed there. 'struct rtentry' will also
be moved there soon.
2016-01-25 06:33:15 +00:00
|
|
|
rh = rt_tables_get_rnh(ti->data, AF_INET6);
|
|
|
|
if (rh != NULL) {
|
|
|
|
RIB_RLOCK(rh);
|
|
|
|
error = rh->rnh_walktree(&rh->head, (walktree_f_t *)f, arg);
|
|
|
|
RIB_RUNLOCK(rh);
|
* Add cidr:kfib algo type just for fun. It binds kernel fib
of given number to a table.
Example:
# ipfw table fib2 create algo "cidr:kfib fib=2"
# ipfw table fib2 info
+++ table(fib2), set(0) +++
kindex: 2, type: cidr, locked
valtype: number, references: 0
algorithm: cidr:kfib fib=2
items: 11, size: 288
# ipfw table fib2 list
+++ table(fib2), set(0) +++
10.0.0.0/24 0
127.0.0.1/32 0
::/96 0
::1/128 0
::ffff:0.0.0.0/96 0
2a02:978:2::/112 0
fe80::/10 0
fe80:1::/64 0
fe80:2::/64 0
fe80:3::/64 0
ff02::/16 0
# ipfw table fib2 lookup 10.0.0.5
10.0.0.0/24 0
# ipfw table fib2 lookup 2a02:978:2::11
2a02:978:2::/112 0
# ipfw table fib2 detail
+++ table(fib2), set(0) +++
kindex: 2, type: cidr, locked
valtype: number, references: 0
algorithm: cidr:kfib fib=2
items: 11, size: 288
IPv4 algorithm radix info
items: 0 itemsize: 200
IPv6 algorithm radix info
items: 0 itemsize: 200
2014-08-14 20:17:23 +00:00
|
|
|
}
|
|
|
|
}
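
/*
 * "addr:kfib" algorithm: binds the kernel fib of a given number to a
 * read-only lookup table.  Example from the commit that introduced it
 * (under its original name "cidr:kfib"):
 *
 *	# ipfw table fib2 create algo "cidr:kfib fib=2"
 *	# ipfw table fib2 lookup 10.0.0.5
 *	10.0.0.0/24 0
 */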
struct table_algo addr_kfib = {
	.name		= "addr:kfib",
	.type		= IPFW_TABLE_ADDR,
	.flags		= TA_FLAG_READONLY,
	.ta_buf_size	= 0,
	.init		= ta_init_kfib,
	.destroy	= ta_destroy_kfib,
	.foreach	= ta_foreach_kfib,
	.dump_tentry	= ta_dump_kfib_tentry,
	.find_tentry	= ta_find_kfib_tentry,
	.dump_tinfo	= ta_dump_kfib_tinfo,
	.print_config	= ta_print_kfib_config,
};
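
/*
 * Registers all table algorithms defined in this file with the given
 * chain.  The index assigned to each algorithm is written back into
 * its 'idx' field so it can be unregistered in ipfw_table_algo_destroy().
 */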
void
ipfw_table_algo_init(struct ip_fw_chain *ch)
{
	size_t sz;

	/*
	 * Register all algorithms presented here.
	 */
	sz = sizeof(struct table_algo);
	ipfw_add_table_algo(ch, &addr_radix, sz, &addr_radix.idx);
	ipfw_add_table_algo(ch, &addr_hash, sz, &addr_hash.idx);
	ipfw_add_table_algo(ch, &iface_idx, sz, &iface_idx.idx);
	ipfw_add_table_algo(ch, &number_array, sz, &number_array.idx);
	ipfw_add_table_algo(ch, &flow_hash, sz, &flow_hash.idx);
	ipfw_add_table_algo(ch, &addr_kfib, sz, &addr_kfib.idx);
}
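
/*
 * Unregisters the algorithms added in ipfw_table_algo_init(), using
 * the indices saved at registration time.
 */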
void
ipfw_table_algo_destroy(struct ip_fw_chain *ch)
{

	ipfw_del_table_algo(ch, addr_radix.idx);
	ipfw_del_table_algo(ch, addr_hash.idx);
	ipfw_del_table_algo(ch, iface_idx.idx);
	ipfw_del_table_algo(ch, number_array.idx);
	ipfw_del_table_algo(ch, flow_hash.idx);
	ipfw_del_table_algo(ch, addr_kfib.idx);
}