Commit Graph

162 Commits

Author SHA1 Message Date
Julian Elischer
ac957cd271 A bunch of formatting fixes brought to light by, or created by, the Vimage
commit a few days ago.
2008-08-20 01:05:56 +00:00
Bjoern A. Zeeb
603724d3ab Commit step 1 of the vimage project, the (network stack)
virtualization work done by Marko Zec (zec@).

This is the first in a series of commits over the course
of the next few weeks.

Mark all uses of global variables to be virtualized
with a V_ prefix.
Use macros to map them back to their global names for
now, so this is a NOP change only.

We hope to have caught at least 85-90% of what is needed
so we do not invalidate a lot of outstanding patches again.

Obtained from:	//depot/projects/vimage-commit2/...
Reviewed by:	brooks, des, ed, mav, julian,
		jamie, kris, rwatson, zec, ...
		(various people I forgot, different versions)
		md5 (with a bit of help)
Sponsored by:	NLnet Foundation, The FreeBSD Foundation
X-MFC after:	never
V_Commit_Message_Reviewed_By:	more people than the patch
2008-08-17 23:27:27 +00:00
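The V_ prefix plus mapping-macro approach described in this commit can be sketched as follows; this is a minimal illustration, and the variable name is hypothetical rather than taken from the vimage headers:

    /*
     * Sketch only: during step 1, each virtualized global keeps its old
     * storage, and a V_-prefixed macro maps the new spelling back to it,
     * so converting call sites is a NOP change.
     */
    extern int sketch_ip_forwarding;                     /* hypothetical global */
    #define V_sketch_ip_forwarding sketch_ip_forwarding  /* V_foo -> foo, for now */

    /*
     * Call sites switch from sketch_ip_forwarding to V_sketch_ip_forwarding;
     * behaviour is unchanged until the macro is later redefined to select a
     * per-instance copy.
     */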
Robert Watson
59dd72d040 Remove NETISR_MPSAFE, which allows specific netisr handlers to be directly
dispatched without Giant, and add NETISR_FORCEQUEUE, which allows specific
netisr handlers to always be dispatched via a queue (deferred).  Mark the
usb and if_ppp netisr handlers as NETISR_FORCEQUEUE, and explicitly
acquire Giant in those handlers.

Previously, any netisr handler not marked NETISR_MPSAFE would necessarily
run deferred and with Giant acquired.  This change removes Giant
scaffolding from the netisr infrastructure, but NETISR_FORCEQUEUE allows
non-MPSAFE handlers to continue to force deferred dispatch so as to avoid
lock order reversals between their acquisition of Giant and any calling
context.

It is likely we will be able to remove NETISR_FORCEQUEUE once
IFF_NEEDSGIANT is removed, as non-MPSAFE usb and if_ppp drivers will no
longer be supported.

Reviewed by:	bz
MFC after:	1 month
X-MFC note:	We can't remove NETISR_MPSAFE from stable/7 for KPI reasons,
		but the rest can go back.
2008-07-04 00:21:38 +00:00
Alexander Motin
a04e98468d ng_address_hook() micro-optimization. Use local variables as they should be used.
It helps the compiler avoid some extra memory accesses.
2008-04-19 05:30:49 +00:00
Alexander Motin
6aa6d011e4 Use a separate UMA zone for data item allocation. It is a partial
rework of rev. 1.149.
It saves several percent of CPU time on SMP by using UMA's
internal per-CPU allocation limits instead of our own global variable,
updated with atomics on every allocation.

Tested with:    Netperf cluster
2008-04-16 19:52:29 +00:00
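A hedged sketch of the separate-zone idea using the standard uma(9) interface; the zone name, item type, and limit below are illustrative, not taken from this commit:

    #include <sys/param.h>
    #include <vm/uma.h>

    struct sketch_item {                    /* hypothetical data item */
            int     si_payload;
    };

    static uma_zone_t sketch_data_zone;

    static void
    sketch_data_zone_init(void)
    {
            /* A dedicated zone provides per-CPU caches and a per-zone limit. */
            sketch_data_zone = uma_zcreate("sketch_data_items",
                sizeof(struct sketch_item), NULL, NULL, NULL, NULL,
                UMA_ALIGN_PTR, 0);
            /*
             * Enforce the cap inside UMA instead of a shared counter that
             * every allocation and free would have to update with atomics.
             */
            uma_zone_set_max(sketch_data_zone, 512);
    }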
Alexander Motin
9852972bb5 Several changes breaking the netgraph module ABI, collected together:
 - reorder structure fields (XX_refs) a bit to group fields that are
   modified at the same time. According to my tests it gives up to 10%
   SMP performance benefit on real workloads due to reduced inter-CPU
   cache thrashing.
 - change q_flags from long to int, as long is not really needed there and
   its use with atomics is disputed by some people.
 - move the NGF_WORKQ flag into a separate field q_flags2, as it is protected
   by the queue mutex instead of the node writer protection used by the rest
   of the flags.
 - move the nd_work queue entry into the ng_queue structure, to which it is
   more closely related, and make it an STAILQ instead of a TAILQ, as it is
   now a classic FIFO (see the STAILQ sketch after this entry).
 - remove the q_node pointer from the ng_queue structure, as it is not
   really needed.
 - reimplement the item queue using STAILQ instead of our own equivalent
   implementation. Since the BT subsystem has its own item queues using
   ng_item.el_next, update it as well.
 - change the depth field in ng_item from uintptr_t to u_int. It was made
   uintptr_t to keep ABI compatibility.

Reviewed by:	julian, emax
Tested with:	Netperf cluster
2008-04-15 21:15:32 +00:00
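A small self-contained illustration of the STAILQ-based FIFO referred to above; the structure names are hypothetical:

    #include <sys/queue.h>

    struct sketch_qitem {
            STAILQ_ENTRY(sketch_qitem) sq_next;     /* singly-linked tail queue link */
            int     sq_payload;
    };

    STAILQ_HEAD(sketch_qhead, sketch_qitem);

    /* Classic FIFO: producers append at the tail, the consumer takes the head. */
    static void
    sketch_fifo_put(struct sketch_qhead *q, struct sketch_qitem *it)
    {
            STAILQ_INSERT_TAIL(q, it, sq_next);
    }

    static struct sketch_qitem *
    sketch_fifo_get(struct sketch_qhead *q)
    {
            struct sketch_qitem *it;

            it = STAILQ_FIRST(q);
            if (it != NULL)
                    STAILQ_REMOVE_HEAD(q, sq_next);
            return (it);
    }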
Alexander Motin
8f9ac44aa7 Add memory barriers to the node locking operations.
Add some comments.
2008-04-09 19:03:19 +00:00
Alexander Motin
394cb30a36 Rewrite the node's r/w/q-lock semantics using only atomics instead of a
mutex and atomics combination. The mutex is now used only for queue protection.
Also avoid unneeded extra swi scheduling calls.
2008-04-06 15:26:32 +00:00
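A hedged sketch of the acquire/release atomics style such a rewrite relies on (and of the kind of barrier added in the previous commit); the flag layout is hypothetical and far simpler than the real node lock:

    #include <sys/types.h>
    #include <machine/atomic.h>

    #define SKETCH_WRITER   0x00000001u     /* hypothetical writer bit */
    #define SKETCH_READER   0x00000002u     /* hypothetical reader increment */

    /* Try to enter the node as a reader without taking any mutex. */
    static int
    sketch_try_read(volatile u_int *flags)
    {
            u_int f;

            f = *flags;
            if (f & SKETCH_WRITER)
                    return (0);
            /* Acquire semantics: later loads happen after the lock is taken. */
            return (atomic_cmpset_acq_int(flags, f, f + SKETCH_READER));
    }

    static void
    sketch_drop_read(volatile u_int *flags)
    {
            /* Release semantics: earlier stores are visible before the unlock. */
            atomic_subtract_rel_int(flags, SKETCH_READER);
    }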
Alexander Motin
018fe3d10e Use the new atomic_fetchadd() primitive instead of a looping atomic_cmpset(). 2008-03-30 00:27:48 +00:00
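For context, a sketch of the two patterns being swapped here; the counter is purely illustrative:

    #include <sys/types.h>
    #include <machine/atomic.h>

    static volatile u_int sketch_counter;

    /* Old pattern: retry the compare-and-set until no other CPU races us. */
    static u_int
    sketch_bump_cmpset(void)
    {
            u_int old;

            do {
                    old = sketch_counter;
            } while (atomic_cmpset_int(&sketch_counter, old, old + 1) == 0);
            return (old);
    }

    /* New pattern: a single atomic fetch-and-add, no retry loop. */
    static u_int
    sketch_bump_fetchadd(void)
    {
            return (atomic_fetchadd_int(&sketch_counter, 1));
    }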
Alexander Motin
f573da1a0e There is no need to erase hook->hk_node before freeing the hook. 2008-03-29 22:53:58 +00:00
Alexander Motin
244586d6f1 Remove the ng_setisr() call from ng_dequeue(). It is useless, as we will
never exit ngintr() anyway while there are ready requests on the queue.
It was added years ago in the hope of parallel queue processing by several
net threads. But even if we sometimes have several threads, we have no
right to process the queue in parallel, as that would break the original
request serialization that is critically important for some setups.
2008-03-27 23:02:30 +00:00
Alexander Motin
e81de8afb0 Remove the impossible (hk_peer == NULL) check from ng_address_hook().
A valid hook can't have a NULL peer. Even an invalid one can't, as it is
reset to the dead hook, not to NULL.
2008-03-16 23:12:17 +00:00
Alexander Motin
10e873189c Improve apply callback error reporting:
Before this patch the callback returned the result of the last finished call
chain. Now it returns the last nonzero result from all call chain results in
this request.

Since this improvement gives reliable error reporting, it is now possible
to remove the dirty workaround in ng_socket, which was added to return ENOBUFS
error statuses for request-response operations. That workaround was responsible
for returning ENOBUFS errors to completely unrelated requests working on the
socket at the same time.
2008-03-11 21:58:48 +00:00
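A minimal sketch of the 'keep the last nonzero result' rule described above; the structure and helper are hypothetical:

    /*
     * Hypothetical apply context: every finished call chain reports its
     * result here, and only nonzero results overwrite the stored error,
     * so a later successful chain can no longer mask an earlier failure.
     */
    struct sketch_apply_ctx {
            int     sac_error;      /* last nonzero error seen, or 0 */
    };

    static void
    sketch_apply_report(struct sketch_apply_ctx *sac, int error)
    {
            if (error != 0)
                    sac->sac_error = error;
    }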
Alexander Motin
ed75521f5b Increase the default queue item allocation limit from 512 to 4096 items
to avoid terrible, unpredictable effects on netgraph operation when the items
are exhausted while allocating control messages.
Add a separate, configurable 512-item limit for data item allocation
for DoS/overload protection.

Discussed with:	julian
2008-03-05 22:12:34 +00:00
Alexander Motin
cfea3f8522 Implement a 128-entry node name hash for faster name searches.
Increase the node ID hash size from 32 to 128 entries.
2008-03-04 18:22:18 +00:00
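A hedged sketch of a fixed-size name hash of the kind described; the hash function and bucket layout are illustrative only:

    #include <sys/types.h>
    #include <sys/queue.h>

    #define SKETCH_NAME_HASH_SIZE   128     /* must stay a power of two */

    struct sketch_named {
            LIST_ENTRY(sketch_named) sn_link;
            char    sn_name[32];
    };

    static LIST_HEAD(, sketch_named) sketch_name_hash[SKETCH_NAME_HASH_SIZE];

    /* Simple string hash folded into the bucket range. */
    static u_int
    sketch_name_bucket(const char *name)
    {
            u_int h = 0;

            while (*name != '\0')
                    h = h * 31 + (u_char)*name++;
            return (h & (SKETCH_NAME_HASH_SIZE - 1));
    }

A lookup then walks only sketch_name_hash[sketch_name_bucket(name)] instead of every named node.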
Alexander Motin
db3408aed0 Fix incorrect constant used in rev. 1.146 that broke node writer locking. 2008-02-25 21:24:53 +00:00
Alexander Motin
f50597f5f1 Clean up and tune the ng_snd_item() function, as it is one of the
busiest netgraph functions.
Tune the stack protection constants to avoid a division operation.
2008-02-06 18:50:40 +00:00
Dmitry Morozovsky
f9773372c3 Fix one more grammo.
Noticed by:	ru
2008-02-02 08:41:53 +00:00
Dmitry Morozovsky
942fe01f61 Reword recent comment a bit. 2008-02-01 17:35:46 +00:00
Alexander Motin
d4529f987a Add comments about stack protection mechanism. 2008-02-01 11:01:15 +00:00
Alexander Motin
e72a98f4bf Some code reformat. 2008-01-31 10:13:04 +00:00
Alexander Motin
81a253a4ed Implement stack protection based on the GET_STACK_USAGE() macro.
This fixes system panics possible with complicated netgraph setups
and avoids unneeded extra queueing for stack unwrapping.
2008-01-31 08:51:48 +00:00
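A sketch of how a GET_STACK_USAGE()-based check is typically used; the threshold and the direct-call/queue decision are assumptions, not the exact ng_base.c logic:

    #include <sys/param.h>
    #include <sys/proc.h>           /* GET_STACK_USAGE() */

    /*
     * Decide between calling the next node directly and queueing the item,
     * based on how much kernel stack the current thread has already used.
     */
    static int
    sketch_stack_allows_direct_call(void)
    {
            size_t total, used;

            GET_STACK_USAGE(total, used);
            /* Illustrative threshold: keep at least a quarter of the stack free. */
            return (used < total - total / 4);
    }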
Robert Watson
3de213cc00 Add a new 'why' argument to kdb_enter(), and a set of constants to use
for that argument.  This will allow DDB to detect the broad category of
reason why the debugger has been entered, which it can use for the
purposes of deciding which DDB script to run.

Assign approximate why values to all current consumers of the
kdb_enter() interface.
2007-12-25 17:52:02 +00:00
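A sketch of the new calling convention; KDB_WHY_PANIC is one of the constants added for this interface, and the full set lives in sys/kdb.h:

    #include <sys/param.h>
    #include <sys/kdb.h>

    static void
    sketch_enter_debugger(void)
    {
            /* Old form: kdb_enter("message"); gave DDB no category to act on. */
            /* New form: a 'why' constant first, then the message string.      */
            kdb_enter(KDB_WHY_PANIC, "sketch: entering the debugger");
    }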
Gleb Smirnoff
b332b91f74 - Merge all the ng_send_fn2* functions into one, ng_send_fn2(),
removing some copy-and-pasted code.
- Reduce copy and paste in ng_apply_item().
- Resurrect ng_send_fn() as a valid symbol, not a define.

Reviewed by:	mav, julian
2007-11-14 11:25:58 +00:00
Alexander Motin
eb4687d223 Minor debug message fix. 2007-10-28 18:05:59 +00:00
Ruslan Ermilov
857304e6f1 Fix build with NETGRAPH_DEBUG. 2007-10-19 20:09:58 +00:00
Alexander Motin
e088dd4c44 Implement a new apply callback mechanism to handle item forwarding.
When an item is forwarded, the reference counter is incremented; when the
item is processed, the counter is decremented. When the counter reaches
zero, the apply handler is called.
This makes it possible to report the correct connect() call status from
user level at the right time.
2007-10-19 15:04:17 +00:00
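A hedged sketch of the forward/process reference counting described above; the structure and helpers are hypothetical:

    #include <sys/types.h>
    #include <machine/atomic.h>

    /* Hypothetical apply info: the callback fires once every forward finishes. */
    struct sketch_apply {
            void            (*sa_apply)(void *arg, int error);
            void            *sa_arg;
            int             sa_error;
            volatile u_int  sa_refs;        /* starts at 1 for the original item */
    };

    static void
    sketch_apply_forward(struct sketch_apply *sa)
    {
            atomic_add_int(&sa->sa_refs, 1);        /* item forwarded once more */
    }

    static void
    sketch_apply_done(struct sketch_apply *sa, int error)
    {
            if (error != 0)
                    sa->sa_error = error;
            /* Last reference gone: report the final status, e.g. to connect(). */
            if (atomic_fetchadd_int(&sa->sa_refs, -1) == 1)
                    sa->sa_apply(sa->sa_arg, sa->sa_error);
    }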
Alexander Motin
3fb87c2411 Add ng_send_fn() error handling inside ng_con_nodes().
Without it some errors may be left unnoticed and unhandled,
which will lead to hooks left in a half-connected state.

Reviewed by:	julian@
Approved by:	re (kensmith), glebius (mentor)
2007-08-18 11:59:17 +00:00
David Malone
041b706b2f Despite several examples in the kernel, the third argument of
sysctl_handle_int is not the sizeof() of the int type you want to export.
The type must always be an int or an unsigned int.

Remove the instances where a sizeof(variable) is passed, to stop
people accidentally cutting and pasting these examples.

In a few places sysctl_handle_int was being used on 64-bit
types, which would truncate the value to be exported. In these
cases use sysctl_handle_quad to export them, and change the format
to Q so that sysctl(1) can still print them.
2007-06-04 18:25:08 +00:00
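A sketch of the patterns described above; the parent node, names, and variables are illustrative, and the macro arguments follow the usual SYSCTL_PROC() layout:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    static u_int    sketch_small;   /* int-sized: sysctl_handle_int is fine */
    static uint64_t sketch_big;     /* 64-bit: needs the quad handler */

    /* Int-sized export: note that arg2 is 0, never sizeof(sketch_small). */
    SYSCTL_PROC(_debug, OID_AUTO, sketch_small,
        CTLTYPE_UINT | CTLFLAG_RD, &sketch_small, 0,
        sysctl_handle_int, "IU", "illustrative int-sized counter");

    /* 64-bit export: quad handler plus the "Q" format for sysctl(1). */
    SYSCTL_PROC(_debug, OID_AUTO, sketch_big,
        CTLTYPE_QUAD | CTLFLAG_RD, &sketch_big, 0,
        sysctl_handle_quad, "Q", "illustrative 64-bit counter");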
Gleb Smirnoff
2775748750 Partially back out rev. 1.127 to restore the functionality it broke. This
should be redesigned, but it is better to enter RELENG_7 with a working ngctl(8).

Agreed by:	julian
2007-06-01 09:20:57 +00:00
Robert Watson
e1e8f51b85 Universally adopt most conventional spelling of acquire. 2007-05-27 20:50:23 +00:00
Wojciech A. Koszek
4abab3d593 We don't need spin locks here. Change them to adaptive mutexes. This
change should bring no performance decrease, as it did not in my tests.

Reviewed by:	julian, glebius
Approved by:	cognet (mentor)
2007-03-31 15:43:06 +00:00
Wojciech A. Koszek
2c8dda8d55 Instead of manipulating the queue and worklist mutexes directly, introduce
macros for doing this job. This change will make it easy to migrate from
spin locks to adaptive ones.

Reviewed by:	glebius, julian
Approved by:	cognet (mentor)
2007-03-30 14:34:34 +00:00
Robert Watson
64efc707cf Prefer more traditional spellings of some words in comments. 2007-03-18 16:49:50 +00:00
Julian Elischer
a54a69d7f9 oops committed the wrong patch.
try this one..
2007-03-10 01:02:40 +00:00
Julian Elischer
262dfd7fae ng_apply_item() should be void. It is called from the interrupt source or
from whoever has dequeued the item from the queue. Generally they have
no interest in the result, and even if it is called by the queuer, it
should still pretend that it was queued. The queuer should assume
that the call was queued; giving them the false confidence that they
are getting a status leads to hard-to-find bugs.

Make it void and remove all the code that tried to return a status through it.
2007-03-09 21:04:50 +00:00
Gleb Smirnoff
cf3254aac8 Do not leak hooks in ng_bypass().
Submitted by:	Alexander Motin <mav alkar.net>
2006-10-11 14:33:08 +00:00
Gleb Smirnoff
b96baf0a65 When counting nodes the second time, use the same criteria as
the first time.

PR:		kern/98529
Submitted by:	Michael Heyman
2006-06-07 12:42:15 +00:00
Gleb Smirnoff
27e216594b Use NET_CALLOUT_MPSAFE for netgraph callout initializer. 2006-06-06 08:05:27 +00:00
John Baldwin
b1e30c4c4e Conditionally acquire Giant in netgraph callouts to honor mpsafenet=0.
Reported by:	sekes <gexlie at gmail dot com>
MFC after:	1 week
2006-06-02 20:35:39 +00:00
Gleb Smirnoff
2955ee1802 - Also print the node ID in ktr(9) messages. [1]
- Use a fixed length for the function name, making ktrdump(8) output
  easier to read.

Suggested by:	julian [1]
2006-01-12 22:41:32 +00:00
Gleb Smirnoff
22b286280c Remove old debugging leftover.
Reviewed by:	julian
2006-01-12 21:03:09 +00:00
Gleb Smirnoff
1be0418cbc Fix wording in last commit.
Submitted by:	julian
2006-01-12 10:15:51 +00:00
Gleb Smirnoff
3b33fbe7d4 Add ktr(9) hooks to ease tracing of the netgraph item flow through
netgraph.
2006-01-11 15:29:48 +00:00
Gleb Smirnoff
4c9b591060 Some whitespace and style cleanup. 2005-11-15 10:54:20 +00:00
Gleb Smirnoff
ac5dd14182 Fix two races which happen when netgraph is restructuring:
- Introduce ng_topo_mtx, a mutex to protect topology changes.
  - In ng_destroy_node(), protect with ng_topo_mtx the process
    of checking and pointing at ng_deadnode. [1]
  - In ng_con_part2(), check that our peer is not ng_deadnode,
    and protect the check with ng_topo_mtx.
  - Add KASSERTs to ng_acquire_read/write to give a more
    understandable diagnostic if they are called on ng_deadnode.

Reported by:	Roselyn Lee [1]
2005-11-02 15:23:47 +00:00
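A hedged sketch of the check-under-the-topology-mutex pattern described above; the mutex, node type, and dead-node pointer are illustrative stand-ins:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>

    struct sketch_node;                             /* opaque for this sketch */

    /* Initialized elsewhere with mtx_init(&sketch_topo_mtx, "topo", NULL, MTX_DEF). */
    static struct mtx sketch_topo_mtx;              /* protects topology changes */
    static struct sketch_node *sketch_deadnode;     /* stand-in for ng_deadnode */

    /*
     * Checking a peer pointer and acting on it must happen under the
     * topology mutex, or the peer may be repointed at the dead node
     * between the check and the use.
     */
    static int
    sketch_peer_is_alive(struct sketch_node **peerp)
    {
            int alive;

            mtx_lock(&sketch_topo_mtx);
            alive = (*peerp != sketch_deadnode);
            mtx_unlock(&sketch_topo_mtx);
            return (alive);
    }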
Gleb Smirnoff
4be5933577 Rework the ng_item queueing on nodes:
- Introduce new flags NGQF_QREADER and NGQF_QWRITER,
    which tell how the item should actually be applied,
    overriding the NGQF_READER/NGQF_WRITER flags.
  - Do not distinguish between pending readers and writers. Use only
    one flag that is raised when there are pending items.
  - Schedule the netgraph ISR in ng_queue_rw(), so that callers
    do not need to do this job.
  - Fix several comments.

Submitted by:	julian
2005-11-02 14:27:24 +00:00
Gleb Smirnoff
eb2405dde8 - When flushing a node's input queue, check whether the item has a callback.
If it does, call it, supplying ENOENT as the error value.
- Add an assert that helped to catch the above error.
2005-10-13 11:55:50 +00:00
Gleb Smirnoff
32b33288f7 After rev. 1.103 the oitem and ierror are no longer needed; remove them. 2005-10-12 10:18:44 +00:00
Gleb Smirnoff
714fb86548 Fix a regression introduced in rev. 1.107. If an item once had writer
semantics and was then reused for the next node, it would still be applied
as a writer again.
  To fix the regression, the decision was made never to alter item->el_flags
after the item has been allocated. This requires checking for overrides
both in ng_dequeue() and in ng_snd_item().

  Details:
  - The caller of ng_apply_item() knows what the current access to the
    node is and specifies it to ng_apply_item(). The latter drops the
    given access after the item has been applied.
  - ng_dequeue() needs to be supplied with an int pointer, where it stores
    the access obtained on the node (see the sketch after this entry).
  - Check for node/hook access overrides in ng_dequeue().