it is marked as RTF_UP. This appears to fix a crash that was sometimes
triggered when dhclient(8) tried to send a packet after an interface
had been detached.
Reviewed by: sam
a resource leak. Move the resource deallocation code from fifo_close()
to a new function, fifo_cleanup(), and call fifo_cleanup() from
fifo_close() and the appropriate places in fifo_open().
Tested by: Lukas Ertl
Pointy hat to: truckman
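A minimal sketch of the pattern, assuming the usual struct fifoinfo
fields (fi_readers, fi_writers, fi_readsock, fi_writesock); this
paraphrases rather than quotes fifo_vnops.c:

    static void
    fifo_cleanup(struct vnode *vp)
    {
            struct fifoinfo *fip = vp->v_fifoinfo;

            /* Deallocate only once the last reader and writer are gone. */
            if (fip->fi_readers == 0 && fip->fi_writers == 0) {
                    vp->v_fifoinfo = NULL;
                    (void)soclose(fip->fi_readsock);
                    (void)soclose(fip->fi_writesock);
                    free(fip, M_VNODE);
            }
    }

fifo_open() calls this on its error paths before returning, and
fifo_close() calls it after dropping its reference counts, so the
deallocation logic lives in exactly one place.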
because RFNOWAIT was being passed to kproc_create.
The result was that shutdown took quite a bit longer because this
errant "child" would not respond to termination signals from init
at system shutdown.
RFNOWAIT disassociates the new process from the caller by attaching it
to init as its parent. We could have had the taskqueue proc listen for
SIGKILL, but being able to SIGKILL a potentially critical system
process doesn't seem like a good idea.
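Sketch of the change (taskqueue_kthread and the argument values are
illustrative, not the literal call):

    /* Before: RFNOWAIT reparented the new proc to init, so it
     * would not go away via the normal shutdown protocol. */
    kproc_create(taskqueue_kthread, tq, &p, RFNOWAIT, 0, "taskqueue");

    /* After: no RFNOWAIT; the proc remains a child of its creator
     * and terminates normally at system shutdown. */
    kproc_create(taskqueue_kthread, tq, &p, 0, 0, "taskqueue");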
comment about this flag in rev.1.61. It is not historical like the
comment said; it is the flag that says that most of what is laboriously
put in the bootinfo struct is actually there. Newer kernels were
bootable by even the broken boot2 without losing anything except the
symbol table, but older kernels need at least the memory sizes.
Restoring the "|" with RB_BOOTINFO that was lost in rev.1.43 costs 5
bytes. The fix can be done in only 4 bytes by fixing some code that
was removed in rev.1.61 (put RB_BOOTINFO back into the initial value
of "opts" and fix RBX_MASK to not clobber it.)
crashes that reported sn0 as the irq being serviced when there was
no sn0 in the system. This seems to prevent them. Also, we want to
wait until after we've registered with the network layer before we
turn on the interrupt spigot to avoid races.
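The attach-time ordering, as a sketch (softc field names are
illustrative, and the era's 5-argument bus_setup_intr() is assumed):

    /* Register with the network layer first... */
    ether_ifattach(ifp, sc->sn_enaddr);

    /* ...and only then enable the interrupt, so a stray shared
     * interrupt cannot invoke sn_intr() before attach completes. */
    error = bus_setup_intr(dev, sc->irq_res, INTR_TYPE_NET,
        sn_intr, sc, &sc->intrhand);
    if (error != 0) {
            ether_ifdetach(ifp);
            return (error);
    }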
- redo updating.
rijndael-api-fst.[ch]:
- switch to use new low level rijndael api.
- stop using u8, u16 and u32.
- space cleanup.
Tested by: gbde(8) and phk's test program
free one sem_undo with un_cnt == 0 instead of all of them. This is a
temporary workaround until the SLIST_FOREACH_PREVPTR loop gets fixed so
that it doesn't cause cycles in semu_list when removing multiple adjacent
items. It might be easier to just use (doubly-linked) LISTs here instead
of complicated SLIST code to achieve O(1) removals.
This bug manifested itself as a complete lockup under heavy semaphore use
by multiple processes with the SEM_UNDO flag set.
PR: 58984
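To illustrate the LIST suggestion: with the <sys/queue.h> macros, an
SLIST removal has to locate the previous element, while a doubly-linked
LIST removes in O(1) (sketch reusing the sysv_sem names):

    /* SLIST: O(n), unless the previous element's next pointer is
     * already in hand (what SLIST_FOREACH_PREVPTR is used for). */
    SLIST_REMOVE(&semu_list, suptr, sem_undo, un_next);

    /* LIST: the entry carries a back pointer, so removal is O(1). */
    LIST_REMOVE(suptr, un_next);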
Only update them in the newly created context to reflect the state
after copying the dirty registers onto the user stack. If we were to
update the trapframe, we would lose the state at entry into the kernel. We
may need that after we create the context, such as for KSE upcalls.
We have to update the trapframe after writing the dirty registers to
the user stack for signal delivery to work. But this is best done in
sendsig() itself where it applies, not in get_mcontext() where it's
done unconditionally.
with a debugger, so testing P_TRACED in SIGPENDING is useless. This test
is also the culprit that causes lots of 'failed to set signal flags properly
for ast()' messages to be printed on the console, which are just false
complaints.
must return EINVAL if size is zero.
Submitted by: tegge
- In order to avoid a race condition in multithreaded applications, the
check and removal operations by munmap(2) must be in the same critical
section. To accommodate this, vm_map_check_protection() is modified to
require its caller to obtain at least a read lock on the map.
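The munmap(2) path then looks roughly like this (simplified sketch):

    vm_map_lock(map);
    /* Check that the entire range is mapped... */
    if (!vm_map_check_protection(map, addr, addr + size, VM_PROT_NONE)) {
            vm_map_unlock(map);
            return (EINVAL);
    }
    /* ...and remove it before dropping the lock, so another thread
     * cannot change the map between the two operations. */
    (void)vm_map_delete(map, addr, addr + size);
    vm_map_unlock(map);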
revision 1.142
date: 2003/10/11 03:04:26; author: toshii;
Fix a done list handling bug which shows up under a high shared
interrupt rate and bus traffic. As the interrupt register is
read after checking hcca_done_head, there was a small chance
of dropping a done list. Ignore OHCI_WDH interrupt bit if
hcca_done_head is zero so that OHCI_WDH is processed later.
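Paraphrased, the fix in the interrupt handler amounts to:

    intrs = OREAD4(sc, OHCI_INTERRUPT_STATUS);
    if ((intrs & OHCI_WDH) != 0 &&
        le32toh(sc->sc_hcca->hcca_done_head) == 0) {
            /* Done head already consumed (or not yet written);
             * defer OHCI_WDH to a later interrupt instead of
             * dropping the done list. */
            intrs &= ~OHCI_WDH;
    }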
revision 1.141
date: 2003/09/10 20:08:29; author: mycroft;
Update actlen even in the case where a TD returns an error --
this is critical for the umass bulk-only STALL case.
revision 1.176
date: 2003/11/04 19:11:21; author: mycroft;
Ignore a CRCTO error on a SETUP transaction in combination with
STALLED or NAK. This fixes problems with the GL641.
- remove the unnecessary elm arg from SIMPLEQ_REMOVE_HEAD().
this mirrors the functionality of SLIST_REMOVE_HEAD() (the other
singly-linked list type) and FreeBSD's STAILQ_REMOVE_HEAD().
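At a call site this looks like (head and field names illustrative):

    /* old: the elm argument duplicated what the head already knows */
    SIMPLEQ_REMOVE_HEAD(&sc->sc_free, elm, next);

    /* new: same shape as SLIST_REMOVE_HEAD()/STAILQ_REMOVE_HEAD() */
    SIMPLEQ_REMOVE_HEAD(&sc->sc_free, next);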
use set_mcontext() to restore the context in sigreturn(). Since we
put the syscall number and the syscall arguments in the trapframe
(we don't save the scratch registers for syscalls, which allows us
to reuse the space to our advantage), create a MD specific flag so
that we save the scratch registers even for syscalls. We would not
be able to restart a syscall otherwise.
The signal trampoline does not need to flush the registers anymore,
because get_mcontext() already handles that. In fact, if we set up
the context correctly, we do not need to have a trampoline at all.
This change however only minimally changes the trampoline code. In
follow-up commits this can be further optimized.
Note that normally we preserve cfm and iip in the trapframe created
by the EPC syscall path when we restore a context in set_mcontext()
because those fields are not normally set for a synchronous context.
The kernel puts the return address and frame info of the syscall
stub in there. By preserving these fields we hide this detail from
userland which allows us to use setcontext(2) for user created
contexts. However, sigreturn() is commonly called from the trampoline,
which means that if we preserved cfm and iip in all cases, we would
return to the trampoline after the sigreturn() and hit the safety
net: we call exit(2). So, we do not preserve cfm and iip
when we have a synchronous context that also has scratch registers
(the uncommon context created by sendsig() only), under the assumption
that if such a context is created in userland, something special is
going on and the use of cfm and iip is then just another quirk. All
this is invisible in the common case.
if we drop into the pmap or vnode layers.
- Migrate the handling of zero-length msync(2)s into vm_map_sync() so that
multithreaded applications can't change the map between implementing the
zero-length hack in msync(2) and reacquiring the map lock in
vm_map_sync().
Reviewed by: tegge
Since all callers either passed 0 or 1 for clear_ret, define bit 0 in
the flags for use as clear_ret. Reserve bits 1, 2 and 3 for use by MI
code for possible (but unlikely) future use. The remaining bits are for
use by MD code.
This change is triggered by a need on ia64 to have another knob for
get_mcontext().
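A sketch of the resulting flag layout (the macro names are
illustrative, not necessarily those in the header):

    /* bit 0 (MI): replaces the old clear_ret argument */
    #define GET_MC_CLEAR_RET        0x0001
    /* bits 1-3 (MI): reserved for possible future use */
    /* bits 4 and up: free for MD code, e.g. the new ia64 knob */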
o When we're resetting the board, make sure that we error out the pending
CCBs first. Otherwise aha_cmd() won't accept further commands, such
as those that are used to reset the card (AOP_INITIALIZE_MBOX). This
appears to cause a cascade failure where no more commands can be
issued to the card (see the sketch below).
o Reduce from 10s to 1s the amount of time we're willing to tolerate
the card being AWOL. This helps the above case.
o Add some error checking to two commands issued in the probe.
I have a dim memory of gibbs@ trying to tell me about this problem a
few years ago, so pointy hat to imp@ for sitting on it so long.
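The reset path sketched in CAM idiom (aha_dequeue_pending() is a
hypothetical helper standing in for the driver's pending-CCB walk):

    union ccb *ccb;

    /* Error out every pending CCB before reinitializing the
     * mailboxes, so that aha_cmd() will accept the
     * AOP_INITIALIZE_MBOX command afterwards. */
    while ((ccb = aha_dequeue_pending(sc)) != NULL) {
            ccb->ccb_h.status = CAM_SCSI_BUS_RESET;
            xpt_done(ccb);
    }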
in the log message for kern_sched.c 1.83 (which should have been
repo-copied to preserve history for this file), the (4BSD) scheduler
algorithm only works right if stathz is nearly 128 Hz. The old
commit log said 64 Hz; the scheduler actually wants nearly 16 Hz
but there was a scale factor of 4 to give the requirement of 64 Hz,
and rev.1.83 changed the scale factor so that the requirement became
128 Hz. The change of the scale factor was incomplete in the SMP
case. There, scheduling ticks are provided by smp_ncpu CPUs, and the
scheduler cannot tell the difference between this and 1 CPU providing
scheduling ticks smp_ncpu times faster, so we need another scale
factor of smp_ncpu or an algorithm change.
This quick fix uses the scale factor without even trying to optimize
the runtime divisions required for this as is done for the other
scale factor.
The main algorithmic problem is the clamp on the scheduling tick counts.
This was 295; it is now approximately 295 * smp_ncpu. When the limit
is reached, threads get free timeslices and scheduling becomes very
unfair to the threads that don't hit the limit. The limit can be
reached and maintained in the worst case if the load average is larger
than (limit / effective_stathz - 1) / 2 = 0.65 now (was just 0.08 with
2 CPUs before this change), so there are algorithmic problems even for
a load average of 1. Fortunately, the worst case isn't common enough
for the problem to be very noticeable (it is mainly for niced CPU hogs
competing with less nice CPU hogs).
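Working through the quoted numbers for 2 CPUs (stathz ~= 128 Hz, so
effective_stathz = 256 Hz):

    new limit:  295 * 2 = 590   (590 / 256 - 1) / 2 ~= 0.65
    old limit:  295             (295 / 256 - 1) / 2 ~= 0.08

so before this change a sustained load average of only about 0.08 was
enough to pin the tick counts at the clamp on a 2-CPU system.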
thread being woken up. The woken-up thread can run at a priority as
high as after tsleep().
- Replace selwakeup()s with selwakeuppri()s and pass appropriate
priorities.
- Add cv_broadcastpri() which raises the priority of the broadcast
threads. Used by selwakeuppri() if a collision occurs.
Not objected to in: -arch, -current
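A representative call-site conversion (the chosen priority constant is
illustrative):

    /* before */
    selwakeup(&sc->sc_rsel);

    /* after: pass the priority the woken-up thread should get */
    selwakeuppri(&sc->sc_rsel, PSOCK);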
that msync(2) is its only caller.
- Migrate the parts of the old vm_map_clean() that examined the internals
of a vm object to a new function vm_object_sync() that is implemented in
vm_object.c. At the same time, introduce the necessary vm object locking so
that vm_map_sync() and vm_object_sync() can be called without Giant.
Reviewed by: tegge