MI API with empty cpu_pause() functions on other arches, but this
functionality is definitely unique to IA-32, so I decided to leave it
as i386-only and wrap it in #ifdefs. I should have dropped the cpu_
prefix when I made that decision.
Requested by: bde
byte offset of the directory entry for the inode number for all types
of files except directories, although this breaks hard links for
non-directories even if it doesn't cause overflow. Just ignore this
broken inode number for stat() and readdir() and return a less broken
one (the block offset of the file), so that applications normally can't
see the brokenness.
This leaves at least the following brokenness:
- extra inodes, vnodes and caching for hard links.
- various overflow bugs. cd9660 supports 64-bit block numbers, but we
silently ignore the top 32 bits in isonum_733() and then drop another
10 bits for our broken inode numbers. We may also have sign extension
bugs from storing 32-bit extents in ints and longs even if ints are
32 bits. These bugs affect DVDs. mkisofs apparently limits them
by writing directory entries first.
Inode numbers were broken mainly in 4.4BSD-Lite2. FreeBSD-1.1.5 seems
to have a correct implementation modulo the overflow bugs. We need
to look up directory entries from inodes for symlinks only. FreeBSD-1.1.5
uses separate fields (iso_parent_extent, iso_parent) to point to the
directory entry. 4.4BSD-Lite doesn't have these, and abuses i_ino to
point to the directory entry. Correct pointers are impossible for
hard links, but symlinks can't be hard links.
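The less broken inode number is essentially what cd9660's isodirino()
computes: the byte offset of the start of the file's data, which every
hard link to the file shares. A hedged sketch, assuming the usual
iso_directory_record/iso_mnt field names:

    /*
     * Derive the inode number from the block offset of the file's
     * data rather than from its directory entry, so hard links to
     * the same file agree on the inode number.
     */
    static ino_t
    isodirino(struct iso_directory_record *isodir, struct iso_mnt *imp)
    {
            ino_t ino;

            ino = (isonum_733(isodir->extent) +
                isonum_711(isodir->ext_attr_length)) << imp->im_bshift;
            return (ino);
    }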
Pentium 4s and newer IA32 processors. The "pause" instruction has been
verified by Intel to be a NOP on all currently existing IA32 processors
prior to the Pentium 4.
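A minimal sketch of such a wrapper, assuming the name ia32_pause() and
GCC-style inline asm ("pause" encodes as "rep; nop", which is why it is
harmless on older processors):

    static __inline void
    ia32_pause(void)
    {
            __asm __volatile("pause");
    }

Calling this between reads of the lock word in a spin loop tells the
processor it is spin-waiting, which reduces memory-order violations on
loop exit and power use on the Pentium 4.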
option is used (not on by default).
- In the case of trying to lock a mutex, if the MTX_CONTESTED flag is set,
then we can safely read the thread pointer from the mtx_lock member while
holding sched_lock. We then examine the thread to see if it is currently
executing on another CPU. If it is, then we keep looping instead of
blocking (see the sketch after this list).
- In the case of trying to unlock a mutex, it is now possible for a mutex
  to have MTX_CONTESTED set in mtx_lock but to not have any threads
  actually blocked on it, so we need to handle that case; we just
  release the lock as if MTX_CONTESTED were not set and return.
- We do not adaptively spin on Giant, as Giant is held for long times and
  adaptive spinning slows SMP systems down to a crawl (it was taking
  several minutes, like 5-10 or so, for my test alpha and sparc64 SMP
  boxes to boot up when they adaptively spun on Giant).
- We only compile in the code to do this for SMP kernels; it doesn't make
  sense for UP kernels.
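A hedged sketch of the adaptive spin; names and the owner-extraction
masking approximate _mtx_lock_sleep() rather than quoting it (v is the
raw mtx_lock word read earlier in the lock loop):

    #if defined(SMP) && defined(ADAPTIVE_MUTEXES)
            if (v & MTX_CONTESTED) {
                    struct thread *owner;

                    /* sched_lock makes it safe to examine the owner. */
                    mtx_lock_spin(&sched_lock);
                    owner = (struct thread *)(v & ~MTX_FLAGMASK);
                    if (owner != NULL && TD_IS_RUNNING(owner)) {
                            /* Running on another CPU: keep looping. */
                            mtx_unlock_spin(&sched_lock);
                            continue;
                    }
                    mtx_unlock_spin(&sched_lock);
            }
    #endif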
Tested on: i386, alpha, sparc64
the relevant classes.
Some methods may implement various "magic spaces": reserved or magic
areas on the disk, set aside for various and sundry purposes. A good
example is the BSD disklabel and boot code on i386, which occupies a
total of four magic spaces: boot1, the disklabel, the padding behind
the disklabel, and boot2 (a purely illustrative layout is sketched
below). The reason we don't simply tell people to write the
appropriate stuff on the underlying device is that (some of) the magic
spaces might be real-time modifiable. It is for instance possible to
change a disklabel while partitions are open, provided the open
partitions do not get trampled in the process.
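Purely as an illustration of the i386 example (the struct and the
offsets are assumptions based on the classic BSD boot area layout of
BBSIZE = 8192 bytes, not a GEOM interface):

    struct magicspace {
            const char      *ms_name;
            off_t            ms_offset;    /* byte offset on the device */
            off_t            ms_length;    /* size of the space */
    };

    static const struct magicspace i386_magicspaces[] = {
            { "boot1",      0,    512  },  /* primary bootstrap, sector 0 */
            { "disklabel",  512,  276  },  /* struct disklabel, sector 1 */
            { "padding",    788,  236  },  /* rest of sector 1 */
            { "boot2",      1024, 7168 },  /* secondary bootstrap */
    };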
Sponsored by: DARPA & NAI Labs.
value of the tag or data field.
Add macros for getting the page shift, size and mask for the physical page
that a tte maps (which may be one of several sizes).
Use the new cache functions for invalidating single pages.
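A hedged sketch of what the page-size macros can look like: UltraSPARC
pages come in four sizes (8K, 64K, 512K, 4M), each 8 times the previous,
so the page shift is 13 plus 3 times the TTE's 2-bit size field. The
field position and names below are assumptions, not the actual tte.h:

    #define TD_SIZE_SHIFT           61      /* assumed field position */
    #define TD_SIZE_MASK            3UL

    #define TTE_GET_SIZE(d)         (((d) >> TD_SIZE_SHIFT) & TD_SIZE_MASK)
    /* 0 => 8K, 1 => 64K, 2 => 512K, 3 => 4M */
    #define TTE_GET_PAGE_SHIFT(d)   (13 + 3 * (int)TTE_GET_SIZE(d))
    #define TTE_GET_PAGE_SIZE(d)    (1UL << TTE_GET_PAGE_SHIFT(d))
    #define TTE_GET_PAGE_MASK(d)    (TTE_GET_PAGE_SIZE(d) - 1)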
that td_intr_nesting_level is 0 (like malloc() does). Since malloc()
calls uma, we can probably remove the check in malloc() for this now.
Also, perform an extra witness check in that case to make sure we don't
hold any locks when performing an M_WAITOK allocation (both checks are
sketched below).
yet. We just return without performing any checks.
- Don't explicitly enter and exit critical sections when walking lock
lists. We don't need a critical section to walk the list of sleep
locks for a thread. For spin lock lists, we check whether the list is
empty before walking it: if it is empty there is nothing to walk, and
if it isn't then we already hold at least one spin lock and are thus
already in a critical section, so we don't need our own explicit one.
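A hedged sketch of the M_WAITOK checks described above, in the style of
uma_zalloc_arg(); the exact witness call and flag plumbing are
assumptions, not the committed code:

    if (flags & M_WAITOK) {
            KASSERT(curthread->td_intr_nesting_level == 0,
                ("uma_zalloc: M_WAITOK allocation in interrupt context"));
            /* Extra witness check: no locks held across a potential sleep. */
            WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
                "uma_zalloc: M_WAITOK with locks held");
    }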
initialized socket with no qlimit was being passed in. To handle this
case properly, we must not use >= when comparing queue sizes to qlimit;
the old comparison could result in a panic in certain cases.
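A hypothetical illustration of the comparison (names approximate the
listen queue logic, not the committed diff): with qlimit == 0, ">="
rejects even the first pending connection, so the test must be strict:

    if (head->so_qlen > head->so_qlimit)  /* was ">=": broken for qlimit 0 */
            return (NULL);                /* queue full, drop connection */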
PR: 38325
MFC after: 3 days
o Add a mutex (sb_mtx) to struct sockbuf. This protects the data in a
socket buffer. The mutex in the receive buffer also protects the data
in struct socket.
o Determine the lock strategy for each member in struct socket.
o Lock down the following members:
- so_count
- so_options
- so_linger
- so_state
o Remove the *_locked() socket APIs. The following socket APIs, which
  touch the members above, now require a locked socket (see the usage
  sketch after this list):
- sodisconnect()
- soisconnected()
- soisconnecting()
- soisdisconnected()
- soisdisconnecting()
- sofree()
- soref()
- sorele()
- sorwakeup()
- sotryfree()
- sowakeup()
- sowwakeup()
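A hedged usage sketch, assuming SOCKBUF_LOCK()-style wrappers around
the new sb_mtx; since the receive buffer's mutex also protects struct
socket, callers take it before using the APIs listed above:

    SOCKBUF_LOCK(&so->so_rcv);      /* sb_mtx also covers struct socket */
    soisconnected(so);              /* now requires a locked socket */
    SOCKBUF_UNLOCK(&so->so_rcv);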
Reviewed by: alfred