or any other bio-chopping geom a reasonably sized unit of work.
Check for delivered signals between chunks, because the request size
and service time are unbounded.
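As a rough illustration of that pattern (not the actual kernel code),
the sketch below splits an unbounded transfer into bounded chunks and
checks for a pending signal between chunks; issue_chunk() and
signal_pending() are hypothetical stand-ins for the real submission
and signal-check primitives.

#include <sys/types.h>
#include <errno.h>
#include <stddef.h>

#define CHUNK_MAX	(128 * 1024)	/* bound the work handed down per pass */

int issue_chunk(void *buf, size_t len, off_t off);	/* hypothetical */
int signal_pending(void);				/* hypothetical */

int
transfer(void *buf, size_t resid, off_t off)
{
	size_t len;
	int error;

	while (resid > 0) {
		/* Hand the lower layers a bounded piece of work. */
		len = resid > CHUNK_MAX ? CHUNK_MAX : resid;
		error = issue_chunk(buf, len, off);
		if (error != 0)
			return (error);
		buf = (char *)buf + len;
		off += len;
		resid -= len;
		/*
		 * The total request size and service time are unbounded,
		 * so give delivered signals a chance to interrupt the
		 * transfer between chunks.
		 */
		if (resid > 0 && signal_pending())
			return (EINTR);
	}
	return (0);
}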
with the same name exists, delete that directory before performing
the copy. This ensures that mv(1) across devices follows the semantics
of rename(2), as required by POSIX.
This change could introduce the potential for data loss if the copy
fails, violating the atomicity properties of rename(2). This is
(mostly) mitigated by first renaming the destination out of the way
and obliterating it only after a successful copy.
The above logic also led to the introduction of code that will clean
up the results of a partial copy if a cross-device copy fails.
PR: bin/118367
MFC after: 1 month
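A sketch of the rename-aside strategy described above, assuming
hypothetical copy_tree() and remove_tree() helpers for the recursive
copy and removal steps (this is not the actual mv(1) code, and the
temporary-name scheme is illustrative):

#include <errno.h>
#include <limits.h>
#include <stdio.h>

int copy_tree(const char *from, const char *to);	/* hypothetical */
int remove_tree(const char *path);			/* hypothetical */

static int
cross_device_move(const char *src, const char *dst)
{
	char saved[PATH_MAX];
	int had_dst = 1;

	/* Move any existing destination aside instead of deleting it. */
	(void)snprintf(saved, sizeof(saved), "%s.saved", dst);
	if (rename(dst, saved) == -1) {
		if (errno != ENOENT)
			return (-1);
		had_dst = 0;		/* nothing to preserve */
	}

	if (copy_tree(src, dst) != 0) {
		/* Clean up the partial copy and restore the original. */
		remove_tree(dst);
		if (had_dst)
			(void)rename(saved, dst);
		return (-1);
	}

	/* Obliterate the saved destination only after a successful copy. */
	if (had_dst)
		remove_tree(saved);
	remove_tree(src);
	return (0);
}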
details from consumers.
- Track individual selectors on a per-descriptor basis such that there
are no longer collisions and, after sleeping for events, only those
descriptors which triggered events must be rescanned.
- Protect the selinfo (per-descriptor) structure with an mtx pool mutex.
mtx pool mutexes were chosen to preserve API compatibility with
existing code which does nothing but bzero() to set up selinfo
structures (see the consumer-side sketch below).
- Use a per-thread wait channel rather than a global wait channel.
- Hide select implementation details in a seltd structure which is
opaque to the rest of the kernel.
- Provide a 'selsocket' interface for those kernel consumers who wish to
select on a socket when they have no fd so they no longer have to
be aware of select implementation details.
Tested by: kris
Reviewed on: arch
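For illustration, a consumer-side sketch of the selinfo interface
whose internals the work above hides; foo_softc, foo_poll(), and
foo_wakeup() are hypothetical driver pieces, while selrecord() and
selwakeup() are the existing consumer-facing calls (a real driver
would also hold its own lock around this state):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/poll.h>
#include <sys/proc.h>
#include <sys/selinfo.h>

struct foo_softc {			/* hypothetical driver state */
	struct selinfo	sc_rsel;	/* zero-initialized, as before */
	int		sc_ready;	/* data available to read? */
};

static int
foo_poll(struct foo_softc *sc, int events, struct thread *td)
{
	int revents = 0;

	if ((events & (POLLIN | POLLRDNORM)) != 0) {
		if (sc->sc_ready)
			revents |= events & (POLLIN | POLLRDNORM);
		else
			/* Record this thread; select/poll does the sleep. */
			selrecord(td, &sc->sc_rsel);
	}
	return (revents);
}

/* Called when data arrives, e.g. from an interrupt handler. */
static void
foo_wakeup(struct foo_softc *sc)
{
	sc->sc_ready = 1;
	selwakeup(&sc->sc_rsel);
}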
processors (it's the PowerPC Operating Environment Architecture).
AIM designates the processors made by the Apple-IBM-Motorola
alliance, which are the ones we typically support.
While here, remove the NetBSD option IPKDB; it's not an option we
use. Also remove PPC_HAVE_FPU, since we don't use it either.
Obtained from: Juniper, Semihalf
the ABI when enabled. There is no longer an embedded lock_profile_object
in each lock. Instead, a per-thread list of lock_profile_objects is
kept, one for each lock the thread may own. The cnt_hold statistic is
now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and
statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a
per-cpu static lock_prof allocator. This removes the need for an array
of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a
critical_enter() is required, and not a spinlock_enter(), to modify the
per-cpu tables (see the sketch below).
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers
since we track owners now.
In collaboration with: Kip Macy
Sponsored by: Nokia
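A simplified sketch of the per-cpu table layout described in the list
above; the structure layout, names, and sizes (lp_hash, lp_pool,
lock_prof_record(), LP_HASH_SIZE, LP_PER_CPU) are illustrative rather
than the real lock_prof code, while critical_enter()/critical_exit(),
PCPU_GET(), and the sys/queue.h SLIST macros are existing kernel
primitives:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>
#include <sys/queue.h>

#define LP_HASH_SIZE	256
#define LP_PER_CPU	4096

struct lock_prof {			/* one statistics record */
	SLIST_ENTRY(lock_prof)	link;
	const char		*name;	/* name pointer serves as the key here */
	uint64_t		cnt_cur;
	uint64_t		cnt_spin;	/* time spent spinning */
};

SLIST_HEAD(lp_hash_head, lock_prof);

/* Per-cpu hash table and static allocator: no array of spinlocks. */
static struct lp_hash_head	lp_hash[MAXCPU][LP_HASH_SIZE];
static struct lock_prof		lp_pool[MAXCPU][LP_PER_CPU];
static int			lp_pool_next[MAXCPU];

static void
lock_prof_record(const char *name, u_int hash, uint64_t spincnt)
{
	struct lp_hash_head *head;
	struct lock_prof *lp;
	int cpu;

	critical_enter();		/* pin to this CPU; no spinlock needed */
	cpu = PCPU_GET(cpuid);
	head = &lp_hash[cpu][hash % LP_HASH_SIZE];
	SLIST_FOREACH(lp, head, link)
		if (lp->name == name)
			break;
	if (lp == NULL && lp_pool_next[cpu] < LP_PER_CPU) {
		/* Grab a fresh record from this CPU's static pool. */
		lp = &lp_pool[cpu][lp_pool_next[cpu]++];
		lp->name = name;
		SLIST_INSERT_HEAD(head, lp, link);
	}
	if (lp != NULL) {
		lp->cnt_cur++;
		lp->cnt_spin += spincnt;
	}
	critical_exit();
}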