- reorder structure fields (XX_refs) a bit to group fields that are
modified at the same time. According to my tests it gives up to 10%
SMP performance benefit on a real workload due to reduced inter-CPU
cache thrashing.
- change q_flags from long to int, as long is not really needed there and
its use with atomics is questioned by some people.
- move the NGF_WORKQ flag into a separate field, q_flags2, as it is protected
by the queue mutex instead of the node writer protection used by the rest
of the flags.
- move the nd_work queue entry to the ng_queue structure, to which it is more
closely related, and make it an STAILQ instead of a TAILQ, as it is now a
classic FIFO.
- remove q_node pointer from ng_queue structure as it is not really needed.
- reimplement the item queue using STAILQ instead of a hand-rolled equivalent
(see the sketch after this list). Since the BT subsystem has its own item
queues using ng_item.el_next, update it as well.
- change depth field in ng_item from uintptr_t to u_int. It was made
uintptr_t to keep ABI compatibility.
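As an illustration of the STAILQ conversion, here is a minimal FIFO sketch
built on <sys/queue.h>. The structure and function names below (everything
except el_next) are hypothetical, not the actual ng_queue/ng_item layout:

    #include <sys/queue.h>

    struct item {
        STAILQ_ENTRY(item) el_next;     /* queue linkage */
        /* ... payload ... */
    };

    STAILQ_HEAD(itemq, item);           /* declares struct itemq */

    static struct itemq queue = STAILQ_HEAD_INITIALIZER(queue);

    /* Enqueue at the tail, dequeue from the head: a classic FIFO. */
    static void
    item_enqueue(struct item *it)
    {
        STAILQ_INSERT_TAIL(&queue, it, el_next);
    }

    static struct item *
    item_dequeue(void)
    {
        struct item *it;

        if ((it = STAILQ_FIRST(&queue)) != NULL)
            STAILQ_REMOVE_HEAD(&queue, el_next);
        return (it);
    }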
Reviewed by: julian, emax
Tested with: Netperf cluster
- Do not check for the destination hook's presence; netgraph will do it.
- Use u_int instead of int in some places to simplify type conversions.
- Use the NG_SEND_DATA_ONLY() macro instead of a hand-rolled equivalent.
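For reference, a hedged sketch of the intended pattern (the helper name and
hook argument are hypothetical, not the actual node code):

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <netgraph/ng_message.h>
    #include <netgraph/netgraph.h>

    /*
     * Hypothetical helper: transmit a locally built mbuf out "hook".
     * NG_SEND_DATA_ONLY() packages the mbuf into an item and forwards
     * it, consuming the mbuf in all cases and setting "error", so no
     * hand-rolled packaging/sending sequence is needed.
     */
    static int
    example_xmit(hook_p hook, struct mbuf *m)
    {
        int error;

        NG_SEND_DATA_ONLY(error, hook, m);
        return (error);
    }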
will never exit ngintr() while there are ready requests on the queue.
It was done years ago in the hope of parallel queue processing by several
net threads. But even if we sometimes have several threads, we have no
right to process the queue in parallel, as it would break the original
request serialization that is critically important for some setups.
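A minimal sketch of the intended behaviour, assuming a hypothetical work
queue (this is not the actual ngintr() code): the current thread keeps
draining the queue in strict FIFO order instead of handing entries to
other net threads.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct work {
        STAILQ_ENTRY(work) next;
    };

    struct workq {
        struct mtx mtx;
        STAILQ_HEAD(, work) head;
    };

    static void process(struct work *w);    /* handles one request, defined elsewhere */

    /* Drain the queue on the current thread, preserving request order. */
    static void
    drain_queue(struct workq *q)
    {
        struct work *w;

        mtx_lock(&q->mtx);
        while ((w = STAILQ_FIRST(&q->head)) != NULL) {
            STAILQ_REMOVE_HEAD(&q->head, next);
            mtx_unlock(&q->mtx);
            process(w);
            mtx_lock(&q->mtx);
        }
        mtx_unlock(&q->mtx);
    }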
of pptpgre and ksocket nodes for all calls between two peers. This patch
modifies the node's API by adding support for new "session_%04x" hook names,
while keeping backward compatibility.
Together with appropriate user-level support (in the latest mpd5) it gives
huge performance benefits in the case of multiple active calls between
two peers, by avoiding data duplication and extra socket processing.
In my benchmarks I got more than a 10x speedup for 200 simultaneous
PPTP calls between two peers.
In conclusion, it now allows building effective "clients <=> PAC <=> PNS"
setups.
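To illustrate the new naming scheme, here is a hedged, userland-style sketch
of mapping a "session_%04x" hook name to its 16-bit call ID (the helper name
is hypothetical; the real parsing in ng_pptpgre may differ):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return 0 and fill *cid for "session_%04x" names; return -1 for
     * the legacy hook names kept for backward compatibility. */
    static int
    hook_name_to_cid(const char *name, uint16_t *cid)
    {
        const char *prefix = "session_";
        unsigned long val;
        char *end;

        if (strncmp(name, prefix, strlen(prefix)) != 0)
            return (-1);
        val = strtoul(name + strlen(prefix), &end, 16);
        if (*end != '\0' || val > 0xffff)
            return (-1);
        *cid = (uint16_t)val;
        return (0);
    }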
Before this patch the callback returned the result of the last finished call
chain. Now it returns the last nonzero result from all call chain results in
this request.
Since this improvement gives reliable error reporting, it is now possible
to remove the dirty workaround in ng_socket that was made to return ENOBUFS
error statuses of request-response operations. That workaround was responsible
for returning ENOBUFS errors to completely unrelated requests working on the
socket at the same time.
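A small sketch of the new rule (the names are hypothetical, not the actual
ng_base code): the shared per-request state keeps the last nonzero completion
status instead of letting the last finished chain overwrite it.

    /* Hypothetical per-request state shared by all call chains. */
    struct apply_info {
        int error;      /* status handed to the apply callback */
        int refs;       /* call chains still outstanding */
    };

    static void
    apply_chain_done(struct apply_info *ai, int chain_error)
    {
        if (chain_error != 0)
            ai->error = chain_error;    /* keep the last nonzero result */
        /* before: ai->error = chain_error;  last finished chain always won */
    }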
if netgraph reported an error while delivering to the destination.
Reset the 'next send' counter to the last one requested by the peer on ack
timeout, to resend all subsequent packets after the lost one without
additional hints.
thrashing and improve performance.
Remove the waitflag argument from ng_ksocket_incoming2(); it means nothing
as the function call was queued by netgraph.
Remove the node validity check, as node validity is guaranteed by netgraph.
Update comments.
to avoid terrible, unpredictable effects on netgraph operation from their
exhaustion while allocating control messages.
Add a separate, configurable limit of 512 items for data item allocation
as DoS/overload protection.
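An illustrative sketch of the overload guard, assuming hypothetical counter
and function names (this is not the actual ng_base allocator):

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <machine/atomic.h>

    static u_int ngq_maxdata = 512;     /* configurable data item limit */
    static u_int ngq_data;              /* data items currently in use */

    /* Data items get their own counter and cap, so a data flood cannot
     * starve control-message allocation. */
    static int
    data_item_reserve(void)
    {
        if (atomic_fetchadd_int(&ngq_data, 1) >= ngq_maxdata) {
            atomic_subtract_int(&ngq_data, 1);
            return (ENOBUFS);           /* drop under DoS/overload */
        }
        return (0);
    }

    static void
    data_item_release(void)
    {
        atomic_subtract_int(&ngq_data, 1);
    }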
Discussed with: julian
- Introduce a finit() which is used to initialize the fields of struct file
in such a way that the ops vector is only valid after the data, type,
and flags are valid (see the sketch after this list).
- Protect f_flag and f_count with atomic operations.
- Remove the global list of all files and associated accounting.
- Rewrite the unp garbage collection such that it no longer requires
the global list of all files and instead uses a list of all unp sockets.
- Mark sockets in the accept queue so we don't incorrectly gc them.
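A simplified sketch of the ordering idea behind finit() (the real
kern_descrip.c code may differ in detail): publish f_data, f_type and f_flag
first, then the ops vector with a release barrier, so a reader that observes
a non-NULL f_ops also observes consistent fields.

    #include <sys/param.h>
    #include <sys/file.h>
    #include <machine/atomic.h>

    void
    finit(struct file *fp, u_int flag, short type, void *data,
        struct fileops *ops)
    {
        fp->f_data = data;
        fp->f_flag = flag;
        fp->f_type = type;
        /* Release store: f_ops becomes visible only after the rest. */
        atomic_store_rel_ptr((volatile uintptr_t *)&fp->f_ops,
            (uintptr_t)ops);
    }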
Tested by: kris, pho
argument. It allows ppp, mpd or any other node consumer to request a
connection to a specified access concentrator.
Proposed by: Alexander A. Burylov <burylov@mail.ru>
for that argument. This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.
Assign approximate why values to all current consumers of the
kdb_enter() interface.
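For instance, a converted call site might look like the sketch below (the
KDB_WHY_* constant names are as defined in sys/kdb.h; consult the header for
the full list):

    #include <sys/param.h>
    #include <sys/kdb.h>

    static void
    example_break(void)
    {
        /* was: kdb_enter("manual escape to debugger"); */
        kdb_enter(KDB_WHY_BREAK, "manual escape to debugger");
    }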
The previous value of 16 was too small for a real LAC, as a temporary
activity spike could easily overflow the queue, demanding tunnel
disconnection due to possible state inconsistency.
removing some copy&pasted code.
- Reduce copy and paste in ng_apply_item().
- Resurrect ng_send_fn() as a valid symbol, not a define.
Reviewed by: mav, julian
zero (0). The actual RFCOMM channel will be assigned after the listen(2)
call is done on an RFCOMM socket bound to the 'wildcard' RFCOMM
channel zero (0).
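A hedged userland sketch of the wildcard bind (header and struct field names
taken from the Bluetooth socket headers and worth double-checking against
ng_btsocket.h; error handling omitted):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string.h>
    #include <bluetooth.h>

    /* Bind an RFCOMM server socket to the wildcard channel 0; the real
     * channel is assigned by the stack when listen(2) is called. */
    static int
    rfcomm_wildcard_listen(void)
    {
        struct sockaddr_rfcomm sa;
        int s;

        s = socket(PF_BLUETOOTH, SOCK_STREAM, BLUETOOTH_PROTO_RFCOMM);
        memset(&sa, 0, sizeof(sa));     /* bdaddr stays all zeroes (any) */
        sa.rfcomm_len = sizeof(sa);
        sa.rfcomm_family = AF_BLUETOOTH;
        sa.rfcomm_channel = 0;          /* wildcard channel */
        bind(s, (struct sockaddr *)&sa, sizeof(sa));
        listen(s, 10);                  /* channel gets assigned here */
        return (s);
    }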
Address locking issues in ng_btsocket_rfcomm_bind()
Submitted by: Heiko Wundram (Beenic) < wundram at beenic dot net >
MFC after: 1 week
When an item is forwarded, the reference counter is incremented; when the
item is processed, the counter is decremented. When the counter reaches zero,
the apply handler gets called.
This now allows reporting the correct connect() call status to user level
at the right time.
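A hedged sketch of the counting scheme (the structure and function names are
illustrative, not the actual ng_base code):

    #include <sys/param.h>
    #include <machine/atomic.h>

    struct apply {
        void (*apply)(void *context, int error);
        void *context;
        u_int refs;     /* outstanding forwarded items */
        int error;
    };

    static void
    apply_ref(struct apply *ap)
    {
        atomic_add_int(&ap->refs, 1);           /* item forwarded */
    }

    static void
    apply_unref(struct apply *ap, int error)
    {
        if (error != 0)
            ap->error = error;
        /* Last reference gone: run the apply handler exactly once. */
        if (atomic_fetchadd_int(&ap->refs, -1) == 1)
            ap->apply(ap->context, ap->error);
    }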
simplifies the code and should speed up the pppoe_findsession() function,
which is called for every incoming packet.
Approved by: re (kensmith), glebius (mentor)
Without it, some errors may be left unnoticed and unhandled, which will
lead to hooks being left in a half-connected state.
Reviewed by: julian@
Approved by: re (kensmith), glebius (mentor)
previously conditionally acquired Giant based on debug.mpsafenet. As that
has now been removed, they are no longer required. Removing them
significantly simplifies error-handling in the socket layer, eliminating
quite a bit of unwinding of locking in error cases.
While here, clean up the now-unneeded opt_net.h, which was previously used
for the NET_WITH_GIANT kernel option. Clean up some related gotos for
consistency.
Reviewed by: bz, csjp
Tested by: kris
Approved by: re (kensmith)
64-bit counters are needed to simplify traffic accounting and
reduce system load at big PPP concentrators.
Approved by: re (rwatson), glebius (mentor)
Until now the node's transmit path was completely unprotected
and so was not thread-safe in multilink mode. Its receive path was
declared as WRITER as the simplest protection method, but that
reduces performance when compression or encryption is enabled.
Approved by: re (rwatson), glebius (mentor)
stack overflow in complicated traffic filtering setups.
There can be minor performance degradation in the MHLEN < len <= 256 case
due to an additional buffer allocation, but that case is rare.
Approved by: re (rwatson), glebius (mentor)
MFC after: 1 week
sysctl_handle_int is not sizeof the int type you want to export.
The type must always be an int or an unsigned int.
Remove the instances where a sizeof(variable) is passed, to stop
people from accidentally cutting and pasting these examples.
In a few places sysctl_handle_int was being used on 64-bit
types, which would truncate the value being exported. In these
cases use sysctl_handle_quad to export them, and change the format
to Q so that sysctl(1) can still print them.
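A hedged example of the corrected pattern for a 64-bit counter (the variable
and OID names are hypothetical):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static uint64_t example_pkts;       /* hypothetical 64-bit counter */

    static int
    sysctl_example_pkts(SYSCTL_HANDLER_ARGS)
    {
        uint64_t val = example_pkts;

        /* 64-bit value: use the quad handler, and pass 0 (never
         * sizeof(val)) as the arg2 argument. */
        return (sysctl_handle_quad(oidp, &val, 0, req));
    }
    SYSCTL_PROC(_debug, OID_AUTO, example_pkts,
        CTLTYPE_QUAD | CTLFLAG_RD, NULL, 0,
        sysctl_example_pkts, "Q", "Hypothetical 64-bit packet counter");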