o tcp_input() now handles TCP segment sanity checks and preparations
including the INPCB lookup and syncache.
o tcp_do_segment() handles all data and ACK processing and is IPv4/v6
agnostic.
Change all KASSERT() messages to the ("%s: ", __func__) format.
The changes in this commit are primarily mechanical; apart from the
function split, no functional changes are made.
Discussed with: rwatson
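A rough sketch of the resulting call flow (hypothetical, abbreviated
argument lists; the real prototypes live in tcp_input.c):

    void
    tcp_input(struct mbuf *m, int off)
    {
            /* Segment sanity checks: lengths, checksum, flags. */
            /* INPCB lookup; TIMEWAIT and syncache handling. */

            /* Everything else -- data and ACK processing -- is handed
             * to the IPv4/IPv6-agnostic worker: */
            tcp_do_segment(m, th, so, tp, drop_hdrlen, tlen);
    }

    /* KASSERT() messages now identify their function, e.g.: */
    KASSERT(headlocked, ("%s: head not locked", __func__));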
- Change exca_activate_resource() to call BUS_ACTIVATE_RESOURCE() before
calling exca_(io|mem)_map(), since the latter use rman_get_bus(tag|handle)
and the recent changes to nexus(4) require a resource to be activated
before its bus tag and handle can be read. This requirement existed
before, but the nexus(4) drivers on x86 and ia64 now enforce it.
Reviewed by: imp
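A hedged sketch of the new ordering (simplified; error handling is
trimmed and the exca_mem_map() arguments are placeholders):

    /* Activate the resource first, so that rman_get_bustag() and
     * rman_get_bushandle() return valid values under the stricter
     * nexus(4) behaviour ... */
    error = BUS_ACTIVATE_RESOURCE(device_get_parent(dev), child,
        type, rid, res);
    if (error)
            return (error);

    /* ... and only then map the window, which reads the tag/handle. */
    error = exca_mem_map(sc, 0, res);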
chunks. This allows runs to be any multiple of the page size. The
primary advantage is that large objects are no longer constrained to be
2^n pages, which can dramatically decrease internal fragmentation for
large objects. This also allows the sizes for runs that back small
objects to be more finely tuned.
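For example, under the old power-of-two constraint a large object needing
5 pages had to be backed by an 8-page run, wasting 3 pages; it can now be
backed by a run of exactly 5 pages.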
Free runs are searched for linearly using the chunk page map (with the
help of some heuristic optimizations). This changes the allocation
policy from "first best fit" to "first fit". A prototype red-black tree
implementation for tracking free runs that implemented "first best fit"
did not cause a measurable speed or memory usage difference for
realistic chunk sizes (though of course it is possible to construct
benchmarks that favor one allocation policy over another).
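A hedged sketch of the "first fit" scan (hypothetical type and field
names; the real code layers heuristics on top of this):

    /* Return the index of the first free span of at least "need" pages,
     * or SIZE_MAX if the chunk contains no such span. */
    static size_t
    first_fit(const struct pagemap_entry *map, size_t npages, size_t need)
    {
            size_t i, nfree = 0;

            for (i = 0; i < npages; i++) {
                    nfree = map[i].free ? nfree + 1 : 0;
                    if (nfree == need)
                            return (i - need + 1);  /* first fit, not best fit */
            }
            return (SIZE_MAX);
    }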
Refine the handling of fullness constraints for small runs to be more
tunable.
Restructure the per chunk page map to contain only two fields per entry,
rather than four. Also, increase each entry from 4 to 8 bytes, since this
allows 32-bit integers to be used without increasing the number of chunk
header pages.
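A hedged sketch of what a two-field, 8-byte entry could look like (field
names are hypothetical):

    struct pagemap_entry {
            uint32_t npages;        /* run size in pages (32-bit now fits) */
            uint32_t pos;           /* this page's position within the run */
    };                              /* 8 bytes per page map entry */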
Relax the maximum chunk size constraint. This is of no practical
interest; it is merely fallout from the chunk page map restructuring.
Revamp statistics gathering and reporting to be faster, clearer and more
informative. Statistics gathering is fast enough now to have little
to no impact on application speed, but it still requires approximately
two extra pages of memory per arena (per process). This memory overhead
may be acceptable for most systems, but we still need to leave
statistics gathering disabled by default in RELENG branches.
Rename NO_MALLOC_EXTRAS to MALLOC_PRODUCTION in order to make its intent
clearer (i.e. it should be defined in RELENG branches).
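The intended build-time switch, roughly (a sketch, not the exact
preprocessor logic in malloc.c):

    #ifndef MALLOC_PRODUCTION       /* define this in RELENG branches */
    #  define MALLOC_STATS          /* keep statistics gathering elsewhere */
    #endif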
calling pru_detach we can be absolutely sure that we no longer have any
references to the socket in the stack.
This closes a race between the lockless sbdestroy() and data arriving on
the socket.
Reviewed by: rwatson
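A hedged sketch of the required ordering (simplified; the surrounding
teardown code is not shown):

    /* Detach from the protocol first; after this the stack holds no
     * references to the socket, so no more data can arrive. */
    (*so->so_proto->pr_usrreqs->pru_detach)(so);

    /* Only now is the lockless socket buffer teardown safe. */
    sbdestroy(&so->so_snd, so);
    sbdestroy(&so->so_rcv, so);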
sequence. First, if rt_ifa is going to be changed, then call
ifa_rtrequest(RTM_DELETE). Second, if gateway is going to be changed,
then call rt_setgate(). Third, change rt_ifa.
With this change we are able to change a link-level route into a gateway
route, which wasn't possible before:
# ifconfig em0 192.168.22.1/24
# arp -s 192.168.22.99 00:11:22:33:44:55
# route change 192.168.22.99 192.168.22.199
# ping 192.168.22.99
db>
Reported by: avatar
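A hedged sketch of the ordering described above (schematic; reference
counting, locking, and error handling omitted):

    /* 1. rt_ifa is about to change: detach the old ifa first. */
    if (rt->rt_ifa != NULL && rt->rt_ifa->ifa_rtrequest != NULL)
            rt->rt_ifa->ifa_rtrequest(RTM_DELETE, rt, info);

    /* 2. The gateway changes: install the new one now. */
    if (gateway != NULL)
            rt_setgate(rt, rt_key(rt), gateway);

    /* 3. Only then switch rt_ifa over to the new ifa. */
    rt->rt_ifa = ifa;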
instance expiry of the ARP entries. Since we no longer abuse the IPv4
radix head lock, we can now enter arp_rtrequest() with a lock held on
an arbitrary rt_entry.
Reviewed by: bms
argument from a mutex to a lock_object. Add cv_*wait*() wrapper macros
that accept either a mutex, rwlock, or sx lock as the second argument and
convert it to a lock_object and then call _cv_*wait*(). Basically, the
visible difference is that you can now use rwlocks and sx locks with
condition variables using the same API as with mutexes.
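For example, waiting on a condition while holding an sx lock now looks
just like the mutex case (a minimal sketch; initialization omitted and
sc_ready is a placeholder condition):

    struct sx sc_lock;
    struct cv sc_cv;

    sx_xlock(&sc_lock);
    while (!sc_ready)
            cv_wait(&sc_cv, &sc_lock);      /* sx lock instead of a mutex */
    sx_xunlock(&sc_lock);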
macros.
- witness_check() replaces witness_check_mtx() and
witness_check_exclusive_sx() and checks for an exclusive acquire of
either a mutex, rwlock, or sx lock.
- witness_check_shared() replaces witness_check_shared_sx() and checks for
a shared acquire of either a rwlock or sx lock.
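Usage is then uniform across lock types, e.g. (hypothetical softc fields):

    witness_check(&sc->sc_mtx);             /* exclusive: mutex */
    witness_check(&sc->sc_rw);              /* exclusive: rwlock */
    witness_check_shared(&sc->sc_rw);       /* shared: rwlock or sx */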
until after the call to fdclose(). This closes an obscure race that could
result in the later call to fdclose() actually closing a different file
descriptor: if another thread close()'s the file descriptor being opened
before fdrop() is called, the fdrop() in kern_open() frees the file
object; a second thread (or a third) can then create a new file descriptor
that reuses both the same index and the same file pointer, tricking
fdclose() in the first thread into thinking the original file was still
open.
MFC after: 1 week
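A hedged sketch of the corrected error path in kern_open() (simplified):

    /* Tear down the descriptor table slot before dropping our reference,
     * so a concurrent close() cannot free the file and let the same
     * index and struct file be recycled underneath us. */
    fdclose(fdp, fp, indx, td);
    fdrop(fp, td);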