using a udp6 socket without bind(2)ing.
- fbsd4/430, reported by the FreeBSD team.
- this fix is different from the fix reported in the above PR. I think
this is better, but it needs some testing.
Obtained from: KAME
MFC after: 3 weeks
semicolons from the end of macros:
#define FOO() bar(a,b,c);
becomes
#define FOO() bar(a,b,c)
This requires the semicolon in the invocation of FOO, which is much
cleaner syntax and more consistent with expectations when writing
function-like things in source.
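A minimal standalone illustration (the FOO_OLD/FOO_NEW, bar() and
example() names below are made up, not from the tree) of why the
trailing semicolon is harmful:

static int a, b, c;

static void
bar(int x, int y, int z)
{
        (void)x; (void)y; (void)z;
}

#define FOO_OLD()       bar(a, b, c);   /* old style: ';' baked in */
#define FOO_NEW()       bar(a, b, c)    /* new style: caller adds ';' */

void
example(int cond)
{
        if (cond)
                FOO_NEW();      /* expands to "bar(a, b, c);" -- fine */
        else
                bar(0, 0, 0);

        /*
         * Written with FOO_OLD() instead, the if/else expands to
         * "if (cond) bar(a, b, c);; else ..." and fails to compile,
         * because the stray ';' orphans the "else".
         */
}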
With both peril-sensitive sunglasses and flame-proof undies on, tighten
up some types, and work around some warnings generated by this. There
are some _horrible_ const/non-const issues in this code.
This represents the original standardization of the following functions
and headers:
popen()
<regex.h>: regcomp(), regexec(), regerror(), regfree()
<fnmatch.h>: fnmatch()
getopt(), optarg, optind, opterr, optopt
<glob.h>: glob()
<wordexp.h>: wordexp(), wordfree()
confstr()
and a cluster in one shot.
o Introduce MBP_PERSIST and MBP_PERSISTENT control bits to mb_alloc();
MBP_PERSIST means "if you can allocate, then keep the cache lock
held on exit," and MBP_PERSISTENT means "a cache lock is already held
on entry, so allocate from the specified (already locked) cache."
They may be used in combination.
o m_getcl() uses the MBP_PERSIST/MBP_PERSISTENT interface so that it
doesn't drop the cache lock in between the mbuf and cluster allocations.
o m_getm(), which takes a size and allocates an mbuf + cluster "best fit"
chain, has been moved from uipc_mbuf.c to subr_mbuf.c and shown how to
use MBP_PERSIST/MBP_PERSISTENT to attempt to do a grouped allocation
without dropping the cache lock in between.
Why this is good: far fewer bus-locked lock acquires/drops when they're
not needed. Also, prototype for m_getcl():
struct mbuf * m_getcl(int how, short type, int flags);
"how" and "type" are self-explanatory. "flags" may be M_PKTHDR, in
which case m_getcl() will make the mbuf a pkthdr-mbuf.
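For example (a quick sketch, not committed code; the wrapper name is
made up and error handling is minimal):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/* Allocate a packet-header mbuf with a cluster already attached. */
static struct mbuf *
example_getcl(void)
{
        struct mbuf *m;

        m = m_getcl(M_DONTWAIT, MT_DATA, M_PKTHDR);
        if (m == NULL)
                return (NULL);          /* don't block on failure */
        m->m_pkthdr.len = m->m_len = 0;
        return (m);
}

The single call replaces the usual MGETHDR()/MCLGET() pair and, with
MBP_PERSIST, does both allocations under one cache lock acquisition.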
While I'm in subr_mbuf.c:
o Every exported routine now has a nice comment with a description of
the expected arguments. Eventually, mbuf(9) needs to be revamped
but there's still more code to write/finalize before I get to that.
o Internal macros have been changed a bit.
o Consistently use 'short' for "type." This somehow slipped through
before (that 'type' was sometimes declared as int).
Alfred has been pushing for the MBP_PERSIST{,ENT} thing for almost a
year now. Luigi asked for m_getcl(), and will probably MFC that
part of this commit.
TODO [Related]: teach mb_free() about MBP_PERSIST{,ENT}.
because the previous interface handle gets freed when the config
number is set. This fixes a problem where memory could be accessed
after it was freed when the interface was ifconfig'd up.
Reviewed by: n_hibma
one out of a block cipher. This has 2 advantages:
1) The code is _much_ simpler
2) We aren't committing our security to one algorithm (much as we
may think we trust AES).
While I'm here, make an explicit reseed do a slow reseed instead
of a fast one; this is in line with what the original paper suggested.
just because you leave your session idle.
Also, put in a fix for 64-bit architectures (to be revised).
In detail:
ip_fw.h
* Reorder fields in struct ip_fw to avoid alignment problems on
64-bit machines. This only masks the problem; I am still not
sure whether I am doing something wrong in the code or there
is a problem elsewhere (e.g. different alignment of structures
between userland and kernel because of pragmas etc.)
* Added fields in dyn_rule to store ack numbers, so we can
generate keepalives when the dynamic rule is about to expire
ip_fw2.c
* use a local function, send_pkt(), to generate TCP RST for Reset rules;
* save about 250 bytes by cleaning up the various snprintf()
in ipfw_log() ...
* ... and use twice as many bytes to implement keepalives
(this seems to be working, but I have not tested it extensively).
Keepalives are generated once every 5 seconds for the last 20 seconds
of the lifetime of a dynamic rule for an established TCP flow. The
packets are sent to both sides, so if at least one of the endpoints
is responding, the timeout is refreshed and the rule will not expire.
You can disable this feature with
sysctl net.inet.ip.fw.dyn_keepalive=0
(the default is 1, to have them enabled).
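A sketch of the timing logic only (the struct, fields and helper below
are made-up illustrations, not the actual ip_fw2.c code):

#define KEEPALIVE_WINDOW        20      /* probe during the last 20 seconds */
#define KEEPALIVE_PERIOD        5       /* one probe pair every 5 seconds */

struct dyn_rule_example {
        int     expire;                 /* absolute expiry time, seconds */
        int     last_keepalive;         /* when the last probe pair went out */
};

/* Stand-in for building and sending one keepalive segment. */
static void
send_keepalive(struct dyn_rule_example *q, int to_dst)
{
        (void)q;
        (void)to_dst;
}

static void
check_keepalive(struct dyn_rule_example *q, int now)
{
        if (q->expire - now > KEEPALIVE_WINDOW)
                return;                 /* not close to expiry yet */
        if (now - q->last_keepalive < KEEPALIVE_PERIOD)
                return;                 /* probed recently enough */
        /*
         * Probe both directions; a reply from either endpoint matches
         * the dynamic rule and refreshes its timeout.
         */
        send_keepalive(q, 0);
        send_keepalive(q, 1);
        q->last_keepalive = now;
}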
MFC after: 1 day
(just kidding... I will supply an updated version of ipfw2 for
RELENG_4 tomorrow).
- always reinitialize the rx descriptors, even if the mbuf is kept.
This should fix the hangs on ifconfig that were observed.
- on an rx overflow, reinitialize the descriptor so that the interface
will not hang
- correct some bus_dmamap_sync() calls
- correct some debug messages
- some minor nits
1/ No need to set td_state to TDS_RUNNING in fork_return;
it's already set in choosethread().
2/ Set a child process state to "normal" as opposed to "new"
when we allow it to be put on the run queue.
This allows the child to receive signals from the parent if the parent
runs first and tries to immediately signal the child.
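An illustrative sketch of the ordering in item 2 (the types, fields and
helpers below are hypothetical, not the real fork path):

enum pstate_example { PSTATE_NEW, PSTATE_NORMAL };

struct proc_example {
        enum pstate_example p_state;
};

/* Hypothetical stand-in for setrunqueue() of the child's thread. */
static void
put_on_runqueue(struct proc_example *p)
{
        (void)p;
}

static void
make_child_runnable(struct proc_example *child)
{
        /*
         * Flip the child to "normal" before it becomes visible on the
         * run queue; if it were left marked "new", a parent that runs
         * first and signals it immediately could have the signal
         * dropped.
         */
        child->p_state = PSTATE_NORMAL;
        put_on_runqueue(child);
}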
Submitted by: (part 2) Thomas Moestl <tmoestl@gmx.net>
that's binary compatible for -stable. While binary compatibility doesn't
matter much in -current, it is critical for -stable. This change requires
pccardd/pccardc to be recompiled.
formulated. The correct states should be:
IDLE: On the idle KSE list for that KSEG
RUNQ: Linked onto the system run queue.
THREAD: Attached to a thread and slaved to whatever state the thread is in.
This means that most places where we were adjusting KSE state can go away,
as the state was only changing because the thread was moving around.
The only places we need to adjust the KSE state are in the transitions
to and from the idle and run queues.
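Roughly, as an illustration (the enum below is a sketch, not the
kernel's actual definitions):

enum kse_state_example {
        KSE_IDLE,       /* on the idle KSE list for its KSEG */
        KSE_RUNQ,       /* linked onto the system run queue */
        KSE_THREAD      /* attached to a thread; slaved to its state */
};

/*
 * The only transitions worth coding explicitly are onto/off the idle
 * list and the run queue; while in KSE_THREAD the KSE just follows
 * whatever the thread does.
 */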
Reviewed by: jhb@freebsd.org
-finstrument-functions instead of -mprofiler-epilogue. The former
works essentially the same as the latter but has a higher overhead
(about 22 more bytes per function for passing unused args to the
profiling functions).
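For reference, -finstrument-functions makes gcc emit a call pair into
every function; the hooks look like this (stub bodies only, shown as a
standalone sketch):

#define NO_INSTRUMENT __attribute__((no_instrument_function))

void NO_INSTRUMENT
__cyg_profile_func_enter(void *this_fn, void *call_site)
{
        (void)this_fn;
        (void)call_site;
        /* entry hook: mcount-style work goes here */
}

void NO_INSTRUMENT
__cyg_profile_func_exit(void *this_fn, void *call_site)
{
        (void)this_fn;
        (void)call_site;
        /* exit hook: mexitcount-style work goes here */
}

The two pointer arguments are the "unused args" mentioned above; each
instrumented function still has to set them up, which is where the
extra per-function bytes come from.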
Removed all traces of the IDENT Makefile variable, which had been
reduced to just a place for holding profiling's contribution to CFLAGS
(the IDENT that gives the kernel identity was renamed to KERN_IDENT).
o Assert that the page queues lock is held in vm_page_unwire().
o Make vm_page_lock_queues() and vm_page_unlock_queues() visible
to kernel loadable modules.
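A sketch of what a module caller is now expected to do (the wrapper
name is made up; the second vm_page_unwire() argument selects which
queue the page goes to):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

static void
example_unwire(vm_page_t m)
{
        vm_page_lock_queues();
        vm_page_unwire(m, 0);
        vm_page_unlock_queues();
}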
(PROFLEVEL) to kern.pre.mk so that it is easier to manage. Bumped config
version to match.
Moved the check for cputype being configured to a less bogus place in
mkmakefile.c.
filedesc is already locked rather than having chroot() unlock the
filedesc so chroot_refuse_vdir_fds() can immediately relock it.
- Reorder chroot() a bit so that we do the namei lookup before checking
the process's struct filedesc. This closes at least one potential race
and allows us to acquire the filedesc lock only once in chroot().
- Push down Giant slightly into chroot().
_vm_map_lock_read(), and _vm_map_trylock(). Submitted by: tegge
o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
(This clears the way for exec_map accesses to move outside of Giant.
The exec_map is not a system map.)
o Remove some premature MPSAFE comments.
Reviewed by: tegge
This was always broken in HEAD (the offending statement was introduced
in rev. 1.123 for HEAD), while RELENG_4 included this fix (in rev.
1.99.2.12 for RELENG_4) and I inadvertently deleted it in 1.99.2.30.
So I am also restoring these two lines in RELENG_4 now.
We might need another few things from 1.99.2.30.
page-zeroing code as well as from the general page-zeroing code and use a
lazy tlb page invalidation scheme based on a callback made at the end
of mi_switch.
A number of people came up with this idea at the same time so credit
belongs to Peter, John, and Jake as well.
Two-way SMP buildworld -j 5 tests (second run, after stabilization)
2282.76 real 2515.17 user 704.22 sys before peter's IPI commit
2266.69 real 2467.50 user 633.77 sys after peter's commit
2232.80 real 2468.99 user 615.89 sys after this commit
Reviewed by: peter, jhb
Approved by: peter
choosethread() in MI C code instead of doing it in assembly in all the
various cpu_switch() functions. This fixes problems on ia64 and sparc64.
Reviewed by: julian, peter, benno
Tested on: i386, alpha, sparc64
itself; this causes undefined behaviour on UltraSPARCs. In particular,
the interrupt packet data words will not necessarily be delivered
correctly, which would result in a crash.
This bug also caused the cache-flushing work to be done twice on the
triggering CPU (when it did not cause crashes).
Reviewed by: jake
- It actually works this time, honest!
- Fine-grained TLB shootdowns for SMP on i386. IPIs are very expensive,
so try to optimize things where possible.
- Introduce ranged shootdowns that can be done as a single IPI.
- PG_G support for i386
- Specific-cpu targeted shootdowns. For example, there is no sense in
globally purging the TLB cache when we are stealing a page from an
unshared process on the local cpu. Use pm_active to track this.
- Add some instrumentation for the tlb shootdown code.
- Rip out SMP code from <machine/cpufunc.h>
- Try to fix some very bogus PG_G and PG_PS interactions that were bad
enough to cause vm86 bios calls to break. vm86 depended on our existing
bugs and this was the cause of the VESA panics last time.
- Fix the silly one-line error that caused the 'panic: bad pte' last time.
- Fix a couple of other silly one-line errors that should have caused more
pain than they did.
Some more work is needed:
- pmap_{zero,copy}_page[_idle]. These can be done without IPIs if we
have a hook in cpu_switch.
- The IPI handlers need some cleanup. I have a bogus %ds load that can
be avoided.
- APTD handling is rather bogus and appears to be a large source of
global TLB IPI shootdowns for no really good reason.
I see speedups of between 1.5% and ~4% on buildworlds in a while 1 loop.
I expect to see a bigger difference when there is significant pageout
activity or the system otherwise has memory shortages.
I have backed out a few optimizations that I had been using over the last
few days in order to be a little more conservative. I'll revisit these
again over the next few days as the dust settles.
New option: DISABLE_PG_G - In case I missed something.
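A sketch of the pm_active idea from above (the types and helpers here
are invented for illustration; this is not the committed pmap code):

typedef unsigned int cpumask_example_t;

struct pmap_example {
        cpumask_example_t pm_active;    /* cpus currently using this pmap */
};

/* Stand-ins for the local invlpg and the targeted shootdown IPI. */
static void
invlpg_local(void *va)
{
        (void)va;
}

static void
invlpg_ipi(cpumask_example_t cpus, void *va)
{
        (void)cpus;
        (void)va;
}

static void
pmap_invalidate_page_example(struct pmap_example *pmap, void *va,
    cpumask_example_t self)
{
        cpumask_example_t others;

        if (pmap->pm_active & self)
                invlpg_local(va);       /* flush our own TLB entry */
        others = pmap->pm_active & ~self;
        if (others != 0)
                invlpg_ipi(others, va); /* IPI only cpus with the pmap active */
        /* nobody else has the pmap active: no IPI, no global flush */
}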
keyword and in the description of rp's hints.
Didn't fix the problem that rp's hints are mostly in comments, which makes
them harder to use (they don't get linted either way because makeLINT.sh
strips them and there is no compile-time syntax checking of hints anyway).
available from bsd.obj.mk.
The native version was identical (and pretty much unused except in
the -DMODULES_WITH_WORLD case, which does not apply to "make release"),
except that the "bin" -> "base" change of the default DISTRIBUTION
name did not propagate here.
NOTES. Add some comments about the potential problems associated with NIC
driver modules and changing these options.
Fix sorting problems in sys/conf/options with the MSIZE and MCLSHIFT
options.
Reviewed by: bde
This code does not imply that SBus cards work yet. They hang for me.
But I can't netboot the latest snapshot on my ultra1e, and things
hang at bus_setup_intr time.
Since I'm offline for a while, I thought I'd toss this in, in case somebody
else who has a bit better luck wants to fart around with it. Please try
to wait until I get back to check things in.
warn*(), and setproctitle() functions) to make buildworld work again. This
can be cleaned up later if/when a new GCC supports the feature (but personally
I think it's a waste of time to keep mod'ing imported GCC sources for this
since only three procedures are involved).
Suggested by: peter
and kmem_free_wakeup(). Previously, kmem_free_wakeup() always
called wakeup(). In general, no one was sleeping.
o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c
for use in vm_kern.c.
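The pattern, as a generic sketch (names are illustrative, not the
vm_map code itself):

#include <stdbool.h>

struct waitmap_example {
        bool    needs_wakeup;           /* set by a sleeper before sleeping */
};

static void
map_unlock_and_wait(struct waitmap_example *map)
{
        map->needs_wakeup = true;
        /* drop the map lock and sleep on the map here */
}

static void
map_wakeup(struct waitmap_example *map)
{
        /* skip the wakeup() entirely when nobody ever went to sleep */
        if (map->needs_wakeup) {
                map->needs_wakeup = false;
                /* wakeup(map) would go here */
        }
}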
the default) is now the only method for i386.
Remove the paraphernalia that supported critmode. Remove td_critnest, clean
up the assembly, and clean up (mostly remove) the old junk from
cpu_critical_enter() and cpu_critical_exit().
This allows accton(1) to be used with an append-only file.
PR: 7169
Reported by: Joao Carlos Mendes Luis <jonny@jonny.eng.br>
Reviewed by: bde
Approved by: sheldonh (mentor)
MFC after: 2 weeks
methodology similar to the vm_map_entry splay and the VM splay that Alan
Cox is working on. Extensive testing appears to have shown no
increase in overhead.
Disadvantages
Dirties more cache lines during lookups.
Not as fast as a hash table lookup (but still O(log N) amortized, and
better when there is locality of reference).
Advantages
vnode->v_dirtyblkhd is now perfectly sorted, making fsync/sync/filesystem
syncer operate more efficiently.
I get to rip out all the old hacks (some of which were mine) that tried
to keep the v_dirtyblkhd tailq sorted.
The per-vnode splay tree should be easier to lock, and the SMPng pushdown
on vnodes will be easier.
This commit along with another that Alan is working on for the VM page
global hash table will allow me to implement ranged fsync(), optimize
server-side nfs commit rpcs, and implement partial syncs by the
filesystem syncer (i.e. the filesystem syncer would detect that someone
is trying to get the vnode lock, remember its place, and skip to the
next vnode).
Note that the buffer cache splay is somewhat more complex than other splays
due to special handling of background bitmap writes (multiple buffers with
the same lblkno in the same vnode), and B_INVAL discontinuities between the
old hash table and the existence of the buffer on the v_cleanblkhd list.
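For reference, a standalone sketch of the kind of top-down splay
involved, keyed on the logical block number (names are illustrative,
not the committed code); after a call, the buffer with that lblkno, or
a neighbor of it, ends up at the root:

struct buf_example {
        long    b_lblkno;               /* logical block number (the key) */
        struct buf_example *b_left;
        struct buf_example *b_right;
};

static struct buf_example *
buf_splay_example(long lblkno, struct buf_example *root)
{
        struct buf_example dummy, *lefttree, *righttree, *y;

        if (root == NULL)
                return (NULL);
        dummy.b_left = dummy.b_right = NULL;
        lefttree = righttree = &dummy;
        for (;;) {
                if (lblkno < root->b_lblkno) {
                        if ((y = root->b_left) == NULL)
                                break;
                        if (lblkno < y->b_lblkno) {
                                /* rotate right */
                                root->b_left = y->b_right;
                                y->b_right = root;
                                root = y;
                                if (root->b_left == NULL)
                                        break;
                        }
                        /* link into the right tree */
                        righttree->b_left = root;
                        righttree = root;
                        root = root->b_left;
                } else if (lblkno > root->b_lblkno) {
                        if ((y = root->b_right) == NULL)
                                break;
                        if (lblkno > y->b_lblkno) {
                                /* rotate left */
                                root->b_right = y->b_left;
                                y->b_left = root;
                                root = y;
                                if (root->b_right == NULL)
                                        break;
                        }
                        /* link into the left tree */
                        lefttree->b_right = root;
                        lefttree = root;
                        root = root->b_right;
                } else
                        break;
        }
        /* reassemble */
        lefttree->b_right = root->b_left;
        righttree->b_left = root->b_right;
        root->b_left = dummy.b_right;
        root->b_right = dummy.b_left;
        return (root);
}

A gbincore()-style lookup is then just a splay followed by a check that
the root's b_lblkno matches, and an in-order walk of the tree yields the
sorted ordering that v_dirtyblkhd now relies on.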
Suggested by: alc