the in-kernel linker to access the _DYNAMIC data for doing loadable ELF
modules. The Alpha kernel is already done this way; I've borrowed some of
the hacks from there.
This is primarily aimed at the 3-stage boot process, which is intended to
be able to pre-load kernel modules.
Note that the entry point isn't 0xf0100000 any more; it'll be a little
further on, but this value is stored in the headers. I don't think this
will be a problem, but I'm sure somebody will tell me if it is. :-)
I'm not sure if btxboot is going to like this; it doesn't do proper ELF
header checking and assumes that there are exactly two program header
entries, both of them PT_LOAD - a bad assumption.
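For comparison, here is a minimal sketch of the program-header walk a
loader ought to do instead, assuming a 32-bit ELF image already read into
memory. The structure layout and the load_segments() helper are
illustrative only, not btxboot's actual code:

    #include <stdint.h>
    #include <string.h>

    #define PT_LOAD 1                        /* loadable segment type */

    typedef struct {                         /* 32-bit ELF program header */
            uint32_t p_type;
            uint32_t p_offset;
            uint32_t p_vaddr;
            uint32_t p_paddr;
            uint32_t p_filesz;
            uint32_t p_memsz;
            uint32_t p_flags;
            uint32_t p_align;
    } Elf32_Phdr;

    /*
     * Copy every PT_LOAD segment into place and zero its BSS tail,
     * skipping PT_DYNAMIC, PT_NOTE and anything else the image carries.
     */
    static void
    load_segments(const char *image, const Elf32_Phdr *phdr, int phnum)
    {
            int i;

            for (i = 0; i < phnum; i++) {
                    if (phdr[i].p_type != PT_LOAD)
                            continue;
                    memcpy((void *)(uintptr_t)phdr[i].p_paddr,
                        image + phdr[i].p_offset, phdr[i].p_filesz);
                    memset((void *)(uintptr_t)(phdr[i].p_paddr +
                        phdr[i].p_filesz), 0,
                        phdr[i].p_memsz - phdr[i].p_filesz);
            }
    }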
the thread kernel into a garbage collector thread which is started when
the first thread is created (other than the initial thread). This
removes the window of opportunity where a context switch would cause a
thread that has locked the malloc spinlock to enter the thread kernel,
find a dead thread, and try to free memory, thereby trying to lock the
malloc spinlock against itself.
The garbage collector thread acts just like any other thread, so
instead of having a spinlock to control accesses to the dead thread
list, it uses a mutex and a condition variable so that it can happily
wait to be signalled when a thread exits.
launching an application into space when someone tries to debug it.
The dead thread list now has its own link pointer, so use that when
reporting the grateful dead.
- Add support for a thread being listed in the dead thread list as well
as the thread list.
- Add a new thread state to make sigwait work properly. (Submitted by
Daniel M. Eischen <eischen@vigrid.com>)
- Add global variables for the garbage collector mutex and condition
variable.
- Delete a couple of prototypes that are no longer required.
- Add a prototype for the garbage collector thread.
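To make the pattern concrete, here is a minimal sketch of a garbage
collector thread driven by a mutex and condition variable, as described
above. The struct dead_thread, report_dead() and garbage_collector()
names are hypothetical stand-ins, not the actual libc_r internals:

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical per-thread record; the real one lives inside libc_r. */
    struct dead_thread {
            struct dead_thread *dle_next;   /* dead-thread-list link pointer */
            void               *stack;      /* storage to reclaim */
    };

    static pthread_mutex_t      dead_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t       dead_cond = PTHREAD_COND_INITIALIZER;
    static struct dead_thread  *dead_list;

    /* Called on thread exit: queue the carcass and wake the collector. */
    static void
    report_dead(struct dead_thread *dt)
    {
            pthread_mutex_lock(&dead_lock);
            dt->dle_next = dead_list;
            dead_list = dt;
            pthread_cond_signal(&dead_cond);
            pthread_mutex_unlock(&dead_lock);
    }

    /* Runs as an ordinary thread, so it may take the malloc lock safely. */
    static void *
    garbage_collector(void *arg)
    {
            struct dead_thread *dt;

            (void)arg;
            for (;;) {
                    pthread_mutex_lock(&dead_lock);
                    while (dead_list == NULL)
                            pthread_cond_wait(&dead_cond, &dead_lock);
                    dt = dead_list;
                    dead_list = dt->dle_next;
                    pthread_mutex_unlock(&dead_lock);
                    free(dt->stack);        /* no spinlock held here */
                    free(dt);
            }
            return (NULL);
    }

Because reclamation happens in an ordinary thread context, the exiting
thread never has to call free() while it holds the scheduler or malloc
spinlock.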
realloc functions check for recursion within the malloc code itself. In
a thread-safe library, the single spinlock ensures that no two threads
go inside the protected code at the same time. The thread implementation
is responsible for ensuring that the spinlock does in fact protect malloc.
There was a window of opportunity in which this was not the case. I'll fix
that with a commit RSN.
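As a rough illustration of what "protecting malloc" means here, the
sketch below wraps the allocator in a single spinlock and keeps the
recursion check mentioned above; the lock primitive and the names are
stand-ins, not the actual libc/libc_r symbols:

    #include <stdlib.h>

    static volatile int     malloc_lock;    /* 0 = free, 1 = held */
    static int              malloc_active;  /* recursion check in the allocator */

    void *
    locked_malloc(size_t size)
    {
            void *p;

            /* The thread library must guarantee this spinlock is honoured. */
            while (__sync_lock_test_and_set(&malloc_lock, 1))
                    ;                       /* spin until the lock is free */
            if (malloc_active)
                    abort();                /* recursion: the lock failed us */
            malloc_active = 1;
            p = malloc(size);               /* the protected allocator proper */
            malloc_active = 0;
            __sync_lock_release(&malloc_lock);
            return (p);
    }

If a thread is context-switched away while holding this lock and the
thread kernel itself then tries to free memory, the recursion check
fires; that is exactly the window the garbage collector thread above
closes.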
happen when an NFS-exported filesystem tries to remove a locally
mounted-on directory.
PR: kern/7272
Submitted by: Andre Albsmeier <andre.albsmeier@mchp.siemens.de>
for some of the fsinfo RPC fields. It is, strictly speaking, not
wrong to do this, as the spec says that "it is expected that a
server will make a best effort at supporting all the attributes",
but it is pretty unusual. You guessed it, it's NT servers that do it.
Obtained from: Frank van der Linden <frank@wins.uva.nl>
is being deleted due to a forcible unmount. The problem is
that vgone() calls vclean(), which then calls nfs_inactive()
with VXLOCK set on the vnode. Nfs_inactive() was calling vget()
to get a reference on the vnode, which in turn hung on VXLOCK.
Nfs_inactive() now checks v_usecount to make sure that the vnode
is not coming from vclean() before it does a vget().
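The check itself is tiny; a self-contained sketch of the idea (with a
simplified stand-in structure rather than the real struct vnode) looks
roughly like this:

    /* Simplified stand-in for the only field that matters here. */
    struct vnode_sketch {
            int v_usecount;                 /* reference count */
    };

    /*
     * vclean() calls the inactive routine with the usecount already at
     * zero and VXLOCK set, so a vget() there would sleep on VXLOCK
     * forever.  Only take an extra reference when one still exists.
     */
    static int
    safe_to_vget(const struct vnode_sketch *vp)
    {
            return (vp->v_usecount > 0);
    }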
is less than NFS_MINPACKET or greater than NFS_MAXPACKET in size, it
barfs and, I think, drops the connection.
However, there's no guarantee that in a multi-fragment RPC, all the
fragments will be at least as large as NFS_MINPACKET.
In fact, with the version of "tclnfs" we have here, which supports NFS
over TCP, at least when built under SunOS 4.1.3 (i.e., with 4.1.3's
user-mode ONC RPC library), I can *repeatably* cause "tclnfs" to send a
request with more than one fragment, one of which is only 8 bytes long.
I just do a 3877-byte write to a file, at an offset of 0.
The check that "slp->ns_reclen" is greater than or equal to
NFS_MINPACKET serves no useful purpose - if the NFS server code can't
handle packets < NFS_MINPACKET bytes, it can't handle them over *any*
protocol, so the check has to be done above the RPC-over-TCP layer - and
should be removed.
Obtained from: Fix from Guy Harris, forwarded by Rick Macklem.
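In code terms, the change amounts to dropping the lower bound from the
fragment-length test; a hedged before/after sketch (not the actual
nfs_socket.c diff, and with an illustrative NFS_MAXPACKET value) follows:

    #include <stdint.h>

    #define NFS_MAXPACKET   (64 * 1024)     /* illustrative value only */

    /*
     * Before: a short fragment of a multi-fragment RPC got the whole
     * connection dropped:
     *
     *      if (reclen < NFS_MINPACKET || reclen > NFS_MAXPACKET)
     *              ... drop the connection ...
     *
     * After: only the upper bound is meaningful at this layer; whole-
     * request sanity checks belong above the RPC-over-TCP code.
     */
    static int
    fragment_len_ok(uint32_t reclen)
    {
            return (reclen <= NFS_MAXPACKET);
    }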