It seems that the current code should pass the check.
This commit should not lead to any changes in compiled code.
From now on a warning will be produced if a kobj method implementation
function has a mismatching signature.
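For illustration, a generic C sketch (not the actual kobj macros) of the
kind of mismatch such a check is meant to surface: assigning a function
with a different signature to a typed method pointer draws a compiler
diagnostic.

    /* Generic sketch only; not the real kobj interface machinery. */
    typedef int method_fn_t(void *obj);

    static int
    impl_ok(void *obj)
    {
            (void)obj;
            return (0);
    }

    static int
    impl_bad(void *obj, int flags)
    {
            (void)obj;
            (void)flags;
            return (0);
    }

    static method_fn_t *m_ok  = impl_ok;   /* signatures match */
    static method_fn_t *m_bad = impl_bad;  /* mismatching signature: warns */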
Verified by: md5
Reviewed by: imp
Approved by: jhb (mentor)
This makes it possible to use almost anything that uses libufs(3) against a
file as an unprivileged user, e.g. tunefs(8) and dumpfs(8) against a
makefs(8)-created image.
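As a hedged illustration (the image name and printed field are chosen
arbitrarily), a minimal libufs(3) consumer that opens an image file
rather than a device node:

    /* Sketch: open a makefs(8)-created image with libufs(3) as a plain
     * file; no device node or root privileges required. */
    #include <sys/param.h>
    #include <sys/mount.h>
    #include <ufs/ufs/ufsmount.h>
    #include <ufs/ufs/dinode.h>
    #include <ufs/ffs/fs.h>

    #include <err.h>
    #include <libufs.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct uufsd disk;

            if (ufs_disk_fillout(&disk, "./image.ufs") == -1)
                    err(1, "ufs_disk_fillout");
            printf("fragment size: %d\n", disk.d_fs.fs_fsize);
            ufs_disk_close(&disk);
            return (0);
    }

Link against -lufs.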
Prodded by: kensmith
Big thanks to Christoph Mallon for the idea/code!
This construct has the benefit of adhering much more strictly to the C
standard and thus keeping more compilers happy: Clang doesn't like the
current construct because it doesn't treat FUNC != NULL as a compile-time
constant.
The checking version is still under 'notyet'.
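A minimal standalone example of the underlying issue (names are
illustrative): whether a function-pointer comparison is accepted where a
constant expression is required differs between compilers, so the
replacement construct avoids relying on it.

    static void
    func(void)
    {
    }

    /* GCC treats the comparison below as a compile-time constant and
     * accepts it in a static initializer; Clang (at the time) did not,
     * which is why the construct had to change. */
    static int have_func = (func != NULL) ? 1 : 0;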
Pointed out by: ed
Submitted by: Christoph Mallon <christoph.mallon@gmx.de>
Clang help by: rdivacky
Reviewed by: imp
Approved by: jhb
In the symtab_get method, the symtab parameter is made constant, as this
reflects the actual intention and usage of the method.
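An illustrative sketch only (not the actual interface definition):
const-qualifying the pointer documents that the method merely hands back
a read-only table.

    /* Illustrative prototype; the real method is described in a kobj
     * interface file, not a plain header like this. */
    struct symtab;
    typedef int symtab_get_t(void *file, const struct symtab **symtabp);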
Reviewed by: imp, current@
Approved by: jhb (mentor)
Introduce a mechanism for detecting calls from the outbound path of the
network stack when reentering the inbound path from netgraph, and force
queueing of mbufs at the outbound netgraph node.
The mechanism relies on two components.  First, in netgraph nodes
where the outbound path of the network stack calls into netgraph, the
current thread has to be appropriately marked using the new
NG_OUTBOUND_THREAD_REF() macro before proceeding to call further
into the netgraph topology, and unmarked using the
NG_OUTBOUND_THREAD_UNREF() macro before returning to the caller.
Second, netgraph nodes which can potentially reenter the network
stack in the inbound path have to mark their inbound hooks using the
NG_HOOK_SET_TO_INBOUND() macro.  The netgraph framework will then
detect when there is a danger of a call graph looping back from the
outbound to the inbound path via netgraph, and defer handing off the
mbufs to the "inbound" node to a worker thread with a clean stack.
In this first pass only the most obvious netgraph nodes have been
updated to ensure that no outbound-to-inbound calls can occur.  Nodes
such as ng_ipfw, ng_gif etc. should be examined further to determine
whether a potential for outbound-to-inbound call looping exists.
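A hedged sketch of how a node might use the two pieces described above;
the node functions and data flow here are illustrative, only the
NG_OUTBOUND_THREAD_REF()/NG_OUTBOUND_THREAD_UNREF() and
NG_HOOK_SET_TO_INBOUND() macros come from this change.

    /* Outbound side: mark the current thread around the call into the
     * netgraph topology (illustrative node code, not a real node). */
    static int
    example_transmit(hook_p hook, struct mbuf *m)
    {
            int error;

            NG_OUTBOUND_THREAD_REF();
            NG_SEND_DATA_ONLY(error, hook, m);
            NG_OUTBOUND_THREAD_UNREF();
            return (error);
    }

    /* Inbound side: tag the hook that may reenter the network stack so
     * the framework can defer delivery to a worker thread if needed. */
    static int
    example_newhook(node_p node, hook_p hook, const char *name)
    {
            if (strcmp(name, "inject") == 0)
                    NG_HOOK_SET_TO_INBOUND(hook);
            return (0);
    }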
This commit changes the layout of struct thread, but due to
__FreeBSD_version number shortage a version bump has been omitted at
this time; nevertheless, the kernel and modules have to be rebuilt.
Reviewed by: julian, rwatson, bz
Approved by: julian (mentor)
This allows appending custom rules at the end of the file without the
risk of confusion that can result when one misses the default !ppp line
and doesn't add another program specification, so that subsequent
selector(s) end up belonging to the ppp program block.
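For context, a hedged syslog.conf(5) fragment showing the failure mode
being avoided: without a reset after the ppp block, a rule appended at
the end would silently stay scoped to ppp.

    !ppp
    *.*                                     /var/log/ppp.log
    !*
    # Rules appended below apply to all programs again; without the
    # "!*" reset they would remain inside the ppp program block.
    local0.*                                /var/log/custom.log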
Requested by: marck
Submitted by: marck
Approved by: jhb (mentor)
The amd64-specific bits of msun use an undocumented constraint, which is
less likely to be supported by other compilers (such as Clang). Change
the code to use a more common machine constraint.
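As a hedged illustration of the direction of the change (not the actual
msun code): using a documented machine constraint such as "x" (any SSE
register) keeps the inline asm portable across compilers.

    /* Illustrative only; the real msun routines differ. */
    static inline double
    sqrt_sketch(double x)
    {
            double r;

            __asm__("sqrtsd %1, %0" : "=x" (r) : "x" (x));
            return (r);
    }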
Obtained from: /projects/clangbsd/
cksum related function calls, sometimes related to offload features
from what I could see.
It would be good if the offload functionality were properly #ifdefed,
but the other calls to cksum related functions are a more general
problem elsewhere in the network stack as well.
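A hedged sketch of the kind of conditional wrapping being suggested; the
flag test and helper below come from existing netinet code and are only
illustrative, not part of this change.

    /* Only touch checksum/offload state when an inet stack is
     * actually compiled in. */
    static void
    fixup_cksum(struct mbuf *m)
    {
    #if defined(INET) || defined(INET6)
            if (m->m_pkthdr.csum_flags & CSUM_DELAY_DATA)
                    in_delayed_cksum(m);
    #else
            (void)m;
    #endif
    }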
use IPv4/v6 for inter-node communication (according to my reading).
Properly wrap the carp callouts in INET || INET6 and reflect this
in sys/conf/files as well. While in theory this should be ok,
it might be a bit optimistic to think that carp could build with
inet6 only[1].
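A hedged sketch of what the sys/conf/files side of such a change could
look like (the exact line is illustrative, not the committed one):

    netinet/ip_carp.c	optional carp inet | carp inet6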
Discussed with: mlaier [1]
Add a field to struct bio to store classification information, and a hook
for classifier functions that can be called by g_io_request().
This code is from Fabio Checconi as part of his GSOC work.
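A hypothetical sketch of the shape of such a hook; the names below are
illustrative only and not the actual GEOM classifier API.

    /* Hypothetical names, for illustration only. */
    struct bio;                             /* from <geom/geom.h> */

    typedef int (*bio_classify_fn)(void *arg, struct bio *bp);

    struct bio_classifier {
            bio_classify_fn  cf_func;       /* invoked from g_io_request() */
            void            *cf_arg;        /* classifier private state */
    };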
we receive the AssocResp, so we can only set ni_txparms properly
at that point. To make this possible, make node_setuptxparms public
as ieee80211_node_setuptxparms.
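A hedged sketch of the intended call site; the handler below is
illustrative, only the exported ieee80211_node_setuptxparms() name comes
from this change.

    /* Illustrative station-mode association completion path. */
    static void
    sta_assoc_done(struct ieee80211_node *ni)
    {
            /*
             * The peer's capabilities are final only once the AssocResp
             * has been parsed, so derive the tx parameters here.
             */
            ieee80211_node_setuptxparms(ni);
    }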
- Preallocate some memory for ACPI tasks early enough. We cannot use
malloc(9) any more because a spin mutex may be held here. The reserved
memory can be tuned via the debug.acpi.max_tasks tunable or ACPI_MAX_TASKS
in the kernel configuration. The default is 32 tasks.
- Implement a custom taskqueue_fast to wrap the new memory allocation.
This implementation is not the fastest in the world, but we are being
conservative here.
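A hedged sketch of the preallocation idea (structure layout and
bookkeeping are illustrative; only the ACPI_MAX_TASKS name and the
default of 32 come from the text above):

    #include <sys/param.h>
    #include <sys/taskqueue.h>

    #ifndef ACPI_MAX_TASKS
    #define ACPI_MAX_TASKS  32      /* overridable via kernel config */
    #endif

    /* A fixed pool of task slots so that queueing an ACPI task never
     * calls malloc(9), which is unsafe with a spin mutex held.  Real
     * code would claim slots atomically rather than with a plain flag. */
    static struct acpi_task_slot {
            struct task     at_task;
            int             at_busy;
    } acpi_task_pool[ACPI_MAX_TASKS];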