least one execute bit set, otherwise execve(2) will return EACCES even
for a user with the PRIV_VFS_EXEC privilege.
Add the check also to vaccess(9), vaccess_acl_nfs4(9) and
vaccess_acl_posix1e(9). This makes access(2) agree better with
execve(2). Because ZFS doesn't use vaccess(9) for VEXEC, add the check
to zfs_freebsd_access() too. There may be other file systems which do
not use the vaccess*() functions and need to be handled separately.
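A minimal sketch of the kind of check being added (illustrative only, not
the committed code; the argument names follow vaccess(9)):

    #include <sys/types.h>
    #include <sys/errno.h>
    #include <sys/stat.h>
    #include <sys/vnode.h>

    static int
    exec_bit_check(enum vtype type, mode_t file_mode, accmode_t accmode)
    {

            /* Deny VEXEC on non-directories that have no execute bit set. */
            if ((accmode & VEXEC) != 0 && type != VDIR &&
                (file_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) == 0)
                    return (EACCES);
            return (0);
    }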
PR: kern/125009
Reviewed by: bde, trasz
Approved by: pjd (ZFS part)
For a socket, when POLLIN|POLLOUT is specified in events, there would be
one selfd registered for the receive socket buffer and one for the send
buffer. Now, if both events are not ready to fire at the time of the
initial scan, but are simultaneously ready after the sleep, pollrescan()
would iterate over the pollfd struct twice. Since revents is non-zero
both times, the returned value would be off by one.
Fix this by recalculating the return value in pollout().
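A sketch of the idea (the shape of pollout() is assumed, not the exact
committed diff): count descriptors with non-zero revents while copying
them out and use that count as the value returned to the caller.

    static int
    pollout(struct thread *td, struct pollfd *fds, struct pollfd *ufds,
        u_int nfd)
    {
            u_int i, n;
            int error;

            n = 0;
            for (i = 0; i < nfd; i++) {
                    error = copyout(&fds->revents, &ufds->revents,
                        sizeof(ufds->revents));
                    if (error)
                            return (error);
                    if (fds->revents != 0)
                            n++;            /* recount ready descriptors */
                    fds++;
                    ufds++;
            }
            td->td_retval[0] = n;           /* value seen by poll(2) */
            return (0);
    }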
PR: kern/143029
MFC after: 2 weeks
Actually it is hard to properly handle such a failure, especially in the
MNT_UPDATE case. The only reason for the vfs_allocate_syncvnode() function
to fail is a getnewvnode() failure. Fortunately it is impossible for the
current implementation of getnewvnode() to fail, so we can assert this and
make vfs_allocate_syncvnode() void. This in turn frees us from handling
its failures in the mount code.
Reviewed by: kib
MFC after: 1 month
rather than forging ahead and interpreting garbage buffer content
and dirent structures.
This change backs out r211684 which was essentially a no-op.
MFC after: 1 week
standard kill(). On other systems, SI_LWP is generated by lwp_kill().
This will allow conforming applications to differentiate between
signals generated by standard events and those generated by other
implementation events in a manner compatible with existing practice.
- Bump __FreeBSD_version
the uio_offset adjustment instead to calculate a correct *len.
Without this change, we run off the end of the directory data
we're reading and panic horribly for NFS filesystems.
MFC after: 1 week
use '-' in probe names, matching the probe names in Solaris.[1]
Add userland SDT probe definitions to sys/sdt.h.
Sponsored by: The FreeBSD Foundation
Discussed with: rwatson [1]
LK_CANRECURSE after a lock is created. Use them to implement macros that
otherwise manipulated the flags directly. Assert that the associated
lockmgr lock is exclusively locked by the current thread when manipulating
these flags to ensure the flag updates are safe. This last change required
some minor shuffling in a few filesystems to exclusively lock a brand new
vnode slightly earlier.
Reviewed by: kib
MFC after: 3 days
of p_traceflag that is stored in the kinfo_proc structure. It is still
racy even with the lock and the code will read a consistent snapshot of
the flag without the lock.
In particular, provide the pagesize and pagesizes array, the canary value
for SSP use, the number of host CPUs, and osreldate.
Tested by: marius (sparc64)
MFC after: 1 month
Interrupt driven configuration hooks serve two purposes: they are a
mechanism for registering for a callback that is invoked once interrupt
services are available, and they hold off root device selection so long
as any configuration hooks are still active. Before this change, it was
not possible to safely register additional hooks from the context of a
configuration hook callback. The need for this feature arises when
interrupts are required to discover new devices (e.g. access to the XenStore
to find para-virtualized devices) which in turn also require the ability
to hold off root device selection until some lengthy, interrupt driven,
configuration task has completed (e.g. Xen front/back device driver
negotiation).
More specifically, the mutex protecting the list of active configuration
hooks is never held during a callback, and static information is used
to ensure proper ordering and only a single callback to each hook even
when faced with registration or removal of a hook during an active run.
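A sketch of the now-supported pattern (the names are made up; the
config_intrhook(9) interface itself is unchanged): a hook callback that
registers a second hook before disestablishing itself.

    #include <sys/param.h>
    #include <sys/kernel.h>

    static struct intr_config_hook early_hook, late_hook;

    static void
    late_task(void *arg)
    {

            /* ...lengthy, interrupt-driven configuration work... */
            config_intrhook_disestablish(&late_hook);
    }

    static void
    early_task(void *arg)
    {

            /* Registering another hook from hook context is now safe. */
            late_hook.ich_func = late_task;
            late_hook.ich_arg = NULL;
            config_intrhook_establish(&late_hook);
            config_intrhook_disestablish(&early_hook);
    }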
Sponsored by: Spectra Logic Corporation
MFC after: 1 week.
time-of-day clock or vice versa. For x86 systems, the RTC resolution is
one second and we used to lose up to one second whenever we initialized
system time from the RTC or wrote system time back to the RTC. With this
change, the margin of error per conversion is roughly between -0.5 and
+0.5 second rather than between -1 and 0 second. Note that it does not
take care of errors from getnanotime(9) (which is up to 1/hz second) or
CLOCK_GETTIME() latency. These are just too expensive to correct and it
is not worth the cost.
bufobj lock. If b_bufobj is not NULL, then the bufobj lock should be
held when manipulating the flags. Not doing this sometimes leaves
BV_BKGRDINPROG erroneously set, causing softdep's getdirtybuf() to get
stuck indefinitely in the "getbuf" sleep, waiting for a background write
to finish that is not actually being performed.
Add BO_LOCK() in the cases where it was missed.
In collaboration with: pho
Tested by: bz
Reviewed by: jeff
MFC after: 1 month
use-after-free over a longer time. Also release the backing pages of
a guarded allocation at free(9) time to reduce the overhead of using
memguard(9). Allow setting and varying the malloc type at run-time.
Add knobs to allow:
- randomly guarding memory
- adding un-backed KVA guard pages to detect underflow and overflow
- a lower limit on the size of allocations that are guarded
Reviewed by: alc
Reviewed by: brueffer, Ulrich Spörlein <uqs spoerlein net> (man page)
Silence from: -arch
Approved by: zml (mentor)
MFC after: 1 month
most systems, based on benchmark results on a low-end fibre channel SAN
under VMWare:
vfs.read_max            read performance
 8 (historical default)  83 MB/s
16 (recent bump)        131 MB/s
32 (this version)       152 MB/s
64                      157 MB/s
(results are +/- 3 MB/s)
As read-ahead is heuristic, based on past IO requests, it shouldn't be
problematic. The new default is still smaller than in other OSes.
number of CPUs detection.
However, that was not mentioned at all, the problem was not reported, the
patch has not been MFCed and the fix is mostly improper.
Fix the original overflow (caused when 32 CPUs must be detected) by
just using a different mathematical computation (it also makes the size
of the operands involved more explicit, which is good while waiting for
more complete support for a large number of CPUs).
PR: kern/148698
Submitted by: Joe Landers <jlanders at vmware dot com>
Tested by: gianni
MFC after: 10 days
the vfs.read_max default. For most systems this means going from 128 KiB
to 256 KiB, which is still very conservative and lower than what most
other operating systems use, but as a sane default should not
interfere much with existing systems.
For systems with RAID volumes and/or virtualization environments, where
read performance is very important, increasing this sysctl tunable to 32
or even more will demonstrably yield additional performance benefits.
If MAXPHYS ever gets bumped up, it will probably be a good idea to slave
read_max to it.
SOCK_DGRAM socket. MSG_TRUNC was only returned when some mbufs could
not be copied to the application. If some data was left in the last
mbuf, it was correctly discarded, but MSG_TRUNC was not set.
Reviewed by: bz
MFC after: 3 weeks
IPI to a specific CPU by its cpuid. Replace calls to ipi_selected() that
constructed a mask for a single CPU with calls to ipi_cpu() instead. This
will matter more in the future when we transition from cpumask_t to
cpuset_t for CPU masks in which case building a CPU mask is more expensive.
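For example (sketch; IPI_PREEMPT is just one of the existing x86 IPI
types):

    /* Old: build a single-CPU mask by hand. */
    ipi_selected(1 << cpu, IPI_PREEMPT);

    /* New: address the CPU directly by its cpuid. */
    ipi_cpu(cpu, IPI_PREEMPT);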
Submitted by: peter, sbruno
Reviewed by: rookie
Obtained from: Yahoo! (x86)
MFC after: 1 month
the virtualization detection successfully disabling the clflush instruction.
This fixes insta-panics for Xen HVM users when the hw.clflush_disable
tunable is -1 or 0 (-1 by default).
Discussed with: jhb
cdev will never be destroyed. Propagate the flag to devfs vnodes as
VV_ETERNALDEV. Use the flags to avoid acquiring devmtx and taking a
thread reference on such nodes.
In collaboration with: pho
MFC after: 1 month
the calculation that is based on the kernel's heap size more conservative.
Hopefully, this will eliminate the need for MAXVNODES_MAX, but for the
time being set MAXVNODES_MAX to a large value.
Reviewed by: jhb@
MFC after: 6 weeks
zones for each malloc bucket size. The purpose is to isolate
different malloc types into hash classes, so that any buffer overruns
or use-after-free will usually only affect memory from malloc types in
that hash class. This is purely a debugging tool; by varying the hash
function and tracking which hash class was corrupted, the intersection
of the hash classes from each instance will point to a single malloc
type that is being misused. At this point inspection or memguard(9)
can be used to catch the offending code.
Add MALLOC_DEBUG_MAXZONES=8 to -current GENERIC configuration files.
The suggestion to have this on by default came from Kostik Belousov on
-arch.
This code is based on work by Ron Steinke at Isilon Systems.
Reviewed by: -arch (mostly silence)
Reviewed by: zml
Approved by: zml (mentor)
details of the string buffer allocation in one place.
Eliminate the portion of the string buffer that was dedicated to storing
the interpreter name. The pointer to the interpreter name can simply be
made to point to the appropriate argument string.
Reviewed by: kib
shell command are stored in exec*()'s demand-paged string buffer. For
a "buildworld" on an 8GB amd64 multiprocessor, the new order reduces
the number of global TLB shootdowns by 31%. It also eliminates about
330k page faults on the kernel address space.
Change exec_shell_imgact() to use "args->begin_argv" consistently as
the start of the argument and environment strings. Previously, it
would sometimes use "args->buf", which is the start of the overall
buffer, but no longer the start of the argument and environment
strings. While I'm here, eliminate unnecessary passing of "&length"
to copystr(), where we don't actually care about the length of the
copied string.
Clean up the initialization of the exec map. In particular, use the
correct size for an entry, and express that size in the same way that
is used when an entry is allocated. The old size was one page too
large. (This discrepancy originated in 2004 when I rewrote
exec_map_first_page() to use sf_buf_alloc() instead of the exec map
for mapping the first page of the executable.)
Reviewed by: kib
The current code doesn't check the size of ELF sections and may perform
needless actions such as zero-sized memory allocations.
The bigger issue is that the alignment requirement of a zero-sized section
effectively gets applied to the next section if that section has a smaller
alignment requirement. But other tools, like gdb and consequently kgdb,
completely ignore zero-sized sections and thus may map symbols to
addresses differently.
Zero-sized sections are not typical in general.
Their typical (only, even) cause in FreeBSD modules is inline assembly
that creates custom sections, as found in pcpu.h and vnet.h. Mere
inclusion of one of those header files produces a custom section in the
ELF output.
If there is no actual use for the section in a given module, then the
section remains empty.
A better solution would be to avoid creating zero-sized sections
altogether; that is planned.
Preloaded modules are handled in boot code (load_elf_obj.c), while
dynamically loaded modules are handled by kernel (link_elf_obj.c).
Based on code by: np
MFC after: 3 weeks
At present the cpufreq sysctl handler for the current level setting
allocates and deallocates a temporary 24KB buffer even to handle a
read-only query. This puts unnecessary load on the memory subsystem when
the current level is checked frequently, e.g. when the likes of powerd
and system monitoring software are running.
Change the strategy to allocating a long-lived buffer for handling the
requests.
Reviewed by: njl
MFC after: 2 weeks
taskqueues, more than one task can be running simultaneously.
Also make taskqueue_run(9) static to the file, since there are no
consumers in the base kernel and the function signature needs to change
with this fix.
Remove mention of taskqueue_run(9) and taskqueue_run_fast(9) from the
taskqueue(9) man page.
Reviewed by: jhb
Approved by: zml (mentor)
name of the 32-bit sibling architecture instead of the host one. Do the
same for hw.machine on amd64.
Add a safety-belt debug.adaptive_machine_arch sysctl to turn the
substitution off.
Reviewed by: jhb, nwhitehorn
MFC after: 2 weeks
is not defined at all because KTR_GEN is still a valid class and some
traces may fit in it. Default to 0, instead, and block any tracing.
Since this is a POLA violation (some third-party code, even if that may
be a questionable choice, could rely on that feature), an MFC possibility
needs to be evaluated carefully.
Sponsored by: Sandvine Incorporated
lengths. Make the MI wrapper code validate the periods in requests. Make
the kernel clock management code honor these hardware limitations while
choosing hz, stathz and profhz values.
machines which can clearly afford the memory.
This is a somewhat conservative version of the patch - more fine tuning may be
necessary.
Idea from: Thread on hackers@
Discussed with: alc
- Simplify ktrstruct() calling convention by having ktrstruct() use
strlen() rather than requiring the caller to hand-code the length of
constant strings.
MFC after: 1 month
most one call to pmap_qremove(), and thus one TLB shootdown, instead of one
call and TLB shootdown per page.
Simplify the interface to vm_hold_free_pages().
MFC after: 3 weeks
was needed in a preliminary version of the patch, where the number of CPU
ticks was divided strictly by 16 seconds. The final code instead uses the
real interval duration, so a precise interval should not be important. At
the same time, aliasing issues around the second boundary cause false
positives, periodically logging useless "t_delta ... too long/short"
messages when HZ is set below 256.
the maintenance of vm_pageout_deficit can be localized to just two places:
vm_page_alloc() and vm_pageout_scan().
This change also corrects an off-by-one error in the maintenance of
vm_pageout_deficit. Historically, the buffer cache functions, allocbuf()
and vm_hold_load_pages(), have not taken into account that vm_page_alloc()
already increments vm_pageout_deficit by one.
Reviewed by: kib
- Fix a bug where a thread may be in a sleeping state but the wchan is
not set, leading to an empty container for sleepq_type(). [0]
Sponsored by: Sandvine Incorporated
[0] Submitted by: Bryan Venteicher
<bryanv at daemoninthecloset dot org>
MFC after: 3 days
X-MFC: 209577
There are special cases where tty_rel_free() can be called twice in a
row, namely when closing and revoking the TTY at the same moment. Only
call destroy_dev_sched_cb() once.
Reported by: Jeremie Le Hen
MFC after: 1 week
specify the increment of vm_pageout_deficit when sleeping due to page
shortage. Then, in allocbuf(), the code to allocate pages when extending
a vmio buffer can be replaced by a call to vm_page_grab().
Suggested and reviewed by: alc
MFC after: 2 weeks
numbers. This change adds a new function, alloc_unr_specific(), which
returns the requested unit number if it is free. If the number is
already allocated or out of range, -1 is returned.
Update the alloc_unr(9) manual page accordingly and add an MLINK for
alloc_unr_specific(9).
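A hypothetical usage sketch (the unit space bounds and the requested
number are made up for illustration):

    #include <sys/param.h>
    #include <sys/systm.h>

    static struct unrhdr *foo_units;

    static void
    foo_init(void)
    {

            foo_units = new_unrhdr(0, 255, NULL);   /* unit space 0..255 */
    }

    static int
    foo_get_unit(u_int wanted)
    {
            int unit;

            unit = alloc_unr_specific(foo_units, wanted);
            if (unit == -1)
                    unit = alloc_unr(foo_units);    /* any free unit instead */
            return (unit);
    }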
Discussed on: freebsd-hackers
one or more mappings to the bogus page must be replaced, call pmap_qenter()
just once. Previously, pmap_qenter() was called for each mapping to the
bogus page.
MFC after: 3 weeks
- Rename tdsignal() to tdsendsignal() and make it private to kern_sig.c.
- Add tdsignal() and tdksignal() routines that mirror psignal() and
pksignal() except that they accept a thread as an argument instead of
a process. They send a signal to a specific thread rather than to an
individual process.
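A minimal sketch of the new thread-directed interface (the exact
signature is assumed from the description above):

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/signalvar.h>

    /* Deliver SIGIO to one thread rather than the whole process. */
    static void
    wake_one_thread(struct thread *td)
    {

            PROC_LOCK(td->td_proc);
            tdsignal(td, SIGIO);
            PROC_UNLOCK(td->td_proc);
    }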
Reviewed by: kib
shm syscalls, and the initial check for the number of allocated segments
in the module deinitialization code, the following might happen:
after the check for active segments, while waiting for threads to
leave some other syscall, shmget(2) is called. Then, we can end
up with a shared segment that cannot be detached since the sysvshm
module is unloaded.
Prevent the leak by rechecking and disclaiming a reference to the vm
object owned by the sysvshm module, which might have grown during the
drain.
Tested by: pho
Reviewed by: jhb
MFC after: 1 month
syscalls. On dynamic syscall deregistration, wait until all threads
leave the syscall code. This somewhat increases the safety of unloading
loadable modules.
Reviewed by: jhb
Tested by: pho
MFC after: 1 month
not providing a destination address and using ktrace.
* Do not copy out kernel memory when providing sinfo for sctp_recvmsg().
Both bugs were reported by Valentin Nechayev.
The first bug results in a kernel panic.
MFC after: 3 days.
believed that all 486-class CPUs that FreeBSD is capable of running on
either have no FPU and cannot use an external coprocessor, or have the
FPU on the package and can use #MF.
Reviewed by: bde
Tested by: pho (previous version)
writing event timer drivers, for choosing the best possible drivers by
machine-independent code, and for operating them to supply the kernel
with hardclock(), statclock() and profclock() events in a unified fashion
on various hardware.
The infrastructure provides support for both per-CPU (independent for
every CPU core) and global timers in periodic and one-shot modes. The MI
management code at this moment uses only periodic mode, but one-shot mode
use is planned for later, as part of the tickless kernel project.
At the moment the infrastructure is used on the i386 and amd64
architectures. Other archs are welcome to follow, while their current
operation should not be affected.
This patch updates the existing drivers (i8254, RTC and LAPIC) for the
new order, and adds event timer support to the HPET driver. These drivers
have different capabilities:
LAPIC - per-CPU timer, supports periodic and one-shot operation, may
freeze in C3 state, calibrated on first use, so may not be exactly
precise.
HPET - depending on hardware can work as per-CPU or global, supports
periodic and one-shot operation, usually provides several event timers.
i8254 - global, limited to periodic mode, because the same hardware is
also used as the time counter.
RTC - global, supports only periodic mode, set of frequencies in Hz
limited to powers of 2.
Depending on hardware capabilities, drivers are preferred in one of the
following orders: LAPIC, HPETs, i8254, RTC or HPETs, LAPIC, i8254, RTC.
The user may explicitly specify the wanted timers via the loader tunables
or sysctls kern.eventtimer.timer1 and kern.eventtimer.timer2.
If the requested driver is unavailable or non-operational, the system
will try to replace it. If no more timers are available or "NONE" is
specified for the second one, the system will operate using only one
timer, multiplying its frequency by a few times and using respective
dividers to honor the hz, stathz and profhz values set during initial
setup.
New code that creates character devices shouldn't use device unit
numbers, but only si_drv[12] to hold a pointer to per-device data. Make
this function more future-proof by removing the unit number argument.
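The preferred pattern looks roughly like this (foo_softc and foo_cdevsw
are illustrative):

    #include <sys/param.h>
    #include <sys/conf.h>
    #include <sys/malloc.h>

    static struct cdev *
    foo_create(void)
    {
            struct foo_softc *sc;
            struct cdev *dev;

            sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);
            dev = make_dev(&foo_cdevsw, 0, UID_ROOT, GID_WHEEL, 0600, "foo");
            dev->si_drv1 = sc;      /* cdevsw methods recover it from here */
            return (dev);
    }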
Discussed with: kib
- Allow setting format, resolution and accuracy of BPF time stamps per
listener. Previously, we were only able to use microtime(9). Now we can
set various resolutions and accuracies with ioctl(2) BIOCSTSTAMP command.
Similarly, we can get the current resolution and accuracy with BIOCGTSTAMP
command. Document all supported options in bpf(4) and their uses.
- Introduce new time stamp 'struct bpf_ts' and header 'struct bpf_xhdr'.
The new time stamp has both 64-bit second and fractional parts. bpf_xhdr
has this time stamp instead of 'struct timeval' for bh_tstamp. The new
structures let us use a bh_tstamp of the same size on both 32-bit and 64-bit
platforms without adding additional shims for 32-bit binaries. On 64-bit
platforms, size of BPF header does not change compared to bpf_hdr as its
members are already all 64-bit long. On 32-bit platforms, the size may
increase by 8 bytes. For backward compatibility, struct bpf_hdr with
struct timeval is still the default header unless the new time stamp
format is explicitly requested. However, the behaviour may change in the
future and
all relevant code is wrapped around "#ifdef BURN_BRIDGES" for now.
- Add experimental support for tagging mbufs with time stamps from a lower
layer, e.g., device driver. Currently, mbuf_tags(9) is used to tag mbufs.
The time stamps must be uptime in 'struct bintime' format as binuptime(9)
and getbinuptime(9) do.
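For example, a listener could switch a descriptor to bintime-resolution
stamps roughly as follows (the BPF_T_BINTIME constant name is an
assumption about the available modes):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>

    static int
    use_bintime_stamps(int bpf_fd)
    {
            u_int fmt;

            if (ioctl(bpf_fd, BIOCGTSTAMP, &fmt) == -1)     /* current mode */
                    return (-1);
            fmt = BPF_T_BINTIME;                            /* assumed name */
            return (ioctl(bpf_fd, BIOCSTSTAMP, &fmt));
    }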
Reviewed by: net@
flags to specify M_WAITOK/M_NOWAIT. M_WAITOK allows devctl to sleep for
the memory allocation.
As Warner noted, allowing the functions to sleep might cause
reordering of the queued notifications.
Reviewed by: imp, jh
MFC after: 3 weeks
via %s
Most of the cases looked harmless, but this is done for the sake of
correctness. In one case it even allowed dropping an intermediate buffer.
Found by: clang
MFC after: 2 weeks
per-buf flag to catch if a buf is double-counted in the free count.
This code was useful to debug an instance where a local patch at Isilon
was incorrectly managing numfreebufs for a new buf state.
Reviewed by: jeff
Approved by: zml (mentor)
shared resources defaults beyond absolute minimums.
The new values are chosen mostly by magic. They are still fairly
small and will need increasing for large installations (especially
SHMMAX). However, they are now enough to e.g. start PostgreSQL
installations with ~300 users and nearly 512 MB of shared buffers.
Reviewed by: A short discussion on hackers@
to obtain both the trap frame and an opaque argument submitted on
registration. After the kernel and all drivers are converted to it, the
legacy hack can be removed.
Reviewed by: jhb@
from the buffer pages to the buffer. Combine the code to set the buffer
dirty range (previously in vfs_setdirty()) and to clean the pages
(vfs_clean_pages()) into a new function, vfs_clean_pages_dirty_buf(). Now
the vm object lock is acquired only once.
Drain the VPO_BUSY bit of the buffer pages before setting valid
and clean bits in vfs_clean_pages_dirty_buf() with new helper
vfs_drain_busy_pages(). pmap_clear_modify() asserts that page is not
busy.
In vfs_busy_pages(), move the wait for draining of VPO_BUSY before
the dirtiness handling, to follow the structure of
vfs_clean_pages_dirty_buf().
Reported and tested by: pho
Suggested and reviewed by: alc
MFC after: 2 weeks
different mount points, e.g. the nullfs vnode and the covered vnode
from the lower filesystem. In this case, existing assertion in
vop_rename_pre() may be triggered.
Check for vnode lock equality instead of comparing the vnodes themselves
to avoid tripping over this situation.
Submitted by: Mikolaj Golub <to.my.trociny@gmail.com>
Tested by: pho
MFC after: 2 weeks
Use it to allow tuning sem_nsem_max at runtime, but only when the sem.ko
module is present in the kernel.
Requested and tested by: amdmi3
Reviewed by: jhb
MFC after: 3 days
ta_func may free the task structure, so no references to its members
are valid after the handler has been called. Using a per-queue member
and having waits longer than strictly necessary was suggested by jhb.
Submitted by: Matthew Fleming <matthew.fleming@isilon.com>
Reviewed by: zml, jhb
set but be cleared before the call to sodisconnect(). In this case,
ENOTCONN is returned: suppress this error rather than returning it to
userspace so that close() doesn't report an error improperly.
PR: kern/144061
Reported by: Matt Reimer <mreimer at vpop.net>,
Nikolay Denev <ndenev at gmail.com>,
Mikolaj Golub <to.my.trociny at gmail.com>
MFC after: 3 days
independent code. Move this code into mincore(), and eliminate the
page queues lock from pmap_mincore().
Push down the page queues lock into pmap_clear_modify(),
pmap_clear_reference(), and pmap_is_modified(). Assert that these
functions are never passed an unmanaged page.
Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m:
Contrary to what the comment says, pmap_mincore() is not simply an
optimization. Without a complete pmap_mincore() implementation,
mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED
because only the pmap can provide this information.
Eliminate the page queues lock from vfs_setdirty_locked_object(),
vm_pageout_clean(), vm_object_page_collect_flush(), and
vm_object_page_clean(). Generally speaking, these are all accesses
to the page's dirty field, which are synchronized by the containing
vm object's lock.
Reduce the scope of the page queues lock in vm_object_madvise() and
vm_page_dontneed().
Reviewed by: kib (an earlier version)
arbitrary frequencies into hardclock(), statclock() and profclock() calls.
The same code with minor variations was duplicated several times over the
tree for different timer drivers and architectures.
- Switch all x86 archs to the new functions, simplifying the code and
removing extra logic from timer drivers. Other archs are also welcome.
on exit, which is done once in thread_exit() and a second time in
proc_reap(), by clearing td_incruntime.
Use the opportunity to revert to the pre-RUSAGE_THREAD exporting of ruxagg()
instead of ruxagg_locked() and use it from thread_exit().
Diagnosed and tested by: neel
MFC after: 3 days
Extend struct sysvec with three new elements:
sv_fetch_syscall_args - the method to fetch syscall arguments from
usermode into struct syscall_args. The structure is machine-dependent
(this might be reconsidered after all architectures are converted).
sv_set_syscall_retval - the method to set a return value for usermode
from the syscall. It is a generalization of
cpu_set_syscall_retval(9) to allow ABIs to override the way to set a
return value.
sv_syscallnames - the table of syscall names.
Use sv_set_syscall_retval in kern_sigsuspend() instead of hardcoding
the call to cpu_set_syscall_retval().
The new functions syscallenter(9) and syscallret(9) are provided that
use sv_*syscall* pointers and contain the common repeated code from
the syscall() implementations for the architecture-specific syscall
trap handlers.
Syscallenter() fetches the arguments, calls the syscall implementation
from the ABI sysent table, and sets up the return frame. The end of
syscall bookkeeping is done by syscallret().
Take advantage of single place for MI syscall handling code and
implement ptrace_lwpinfo pl_flags PL_FLAG_SCE, PL_FLAG_SCX and
PL_FLAG_EXEC. The SCE and SCX flags notify the debugger that the
thread is stopped at syscall entry or return point respectively. The
EXEC flag augments SCX and notifies debugger that the process address
space was changed by one of exec(2)-family syscalls.
The i386, amd64, sparc64, sun4v, powerpc and ia64 syscall()s are
changed to use syscallenter()/syscallret(). MIPS and arm are not
converted and use the mostly unchanged syscall() implementation.
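From the debugger side the new flags can be inspected roughly as follows
(error handling omitted; pid is assumed to be a stopped, traced child):

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <stdio.h>

    static void
    report_stop(pid_t pid)
    {
            struct ptrace_lwpinfo pl;

            ptrace(PT_LWPINFO, pid, (caddr_t)&pl, sizeof(pl));
            if (pl.pl_flags & PL_FLAG_SCE)
                    printf("stopped at syscall entry\n");
            else if (pl.pl_flags & PL_FLAG_SCX)
                    printf("stopped at syscall return%s\n",
                        (pl.pl_flags & PL_FLAG_EXEC) ? " (after exec)" : "");
    }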
Reviewed by: jhb, marcel, marius, nwhitehorn, stas
Tested by: marcel (ia64), marius (sparc64), nwhitehorn (powerpc),
stas (mips)
MFC after: 1 month
DDB so that all the fields line up.
- Print out the tid of the per-CPU idlethread instead of the pid since
the idle process is now shared across all idle threads.
MFC after: 1 month
fd_set bits in select(2). It seems that the historical behaviour is to
not report an exception on EOF, and several applications are broken if
one is reported.
Discussed with: bde
PR: ports/140934
MFC after: 2 weeks
eliminate it.
Assert that the object containing the page is locked in
vm_page_test_dirty(). Perform some style clean up while I'm here.
Reviewed by: kib
am now able to run 32 cores OK, but I will still hang
on buildworld with an NFS problem. I suspect I am missing
a patch for the netlogic rge driver.
JC, check and see if I am missing anything except your
core-mask changes.
Obtained from: JC
We cannot expect that modspace is the last entry in the linker
set and thus that modspace + possible extra space up to PAGE_SIZE
would be contiguous. For the moment do not support more than
*_MODMIN space and ignore the extra space (*).
(*) We know how to get it back but it'll need testing.
Discussed with: jeff, rwatson (briefly)
Reviewed by: jeff
Sponsored by: The FreeBSD Foundation
Sponsored by: CK Software GmbH
MFC after: 4 days
(TOCONS | TOLOG) mask even when called from DDB points.
That breaks several outputs, the most notable being textdump output.
Fix this by having configurable callbacks passed to witness_list_locks()
and witness_display_spinlock() for printing out the data.
Reported by: several broken textdump outputs
Tested by: Giovanni Trematerra
<giovanni dot trematerra at gmail dot com>
MFC after: 7 days
X-MFC: r207922
vm_page_try_to_free(). Consequently, push down the page queues lock into
pmap_enter_quick(), pmap_page_wired_mappings(), pmap_remove_all(), and
pmap_remove_write().
Push down the page queues lock into Xen's pmap_page_is_mapped(). (I
overlooked the Xen pmap in r207702.)
Switch to a per-processor counter for the total number of pages cached.
in a no-sleep context. If resource allocation cannot be done without
sleeping, make_dev_credf() fails and returns NULL.
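Usage from a non-sleepable context would look roughly like this
(foo_cdevsw and the device name are illustrative):

    struct cdev *dev;

    dev = make_dev_credf(MAKEDEV_NOWAIT, &foo_cdevsw, 0, NULL,
        UID_ROOT, GID_WHEEL, 0600, "foo%d", 0);
    if (dev == NULL)
            return (ENOMEM);        /* could not allocate without sleeping */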
Reviewed by: jh
MFC after: 2 weeks
kern_sendfile() is running, the file's vm object can't be destroyed
because kern_sendfile() increments the vm object's reference count. (Once
kern_sendfile() decrements the reference count and returns, the vm object
can, however, be destroyed. So, sf_buf_mext() must handle the case where
the vm object is destroyed.)
Reviewed by: kib
managed pages that didn't already have that lock held. (Freeing an
unmanaged page, such as the various pmaps use, doesn't require the page
lock.)
This allows a change in vm_page_remove()'s locking requirements. It now
expects the page lock to be held instead of the page queues lock.
Consequently, the page queues lock is no longer required at all by callers
to vm_page_rename().
Discussed with: kib
information for the thread to allow calcru1() (re)use.
Rename ruxagg()->ruxagg_locked(), ruxagg_tlock()->ruxagg() [1].
The ruxagg_locked() function no longer clears thread ticks nor
td_incruntime.
Requested by: attilio [1]
Discussed with: attilio, bde
Reviewed by: bde
Based on submission by: Alexander Krizhanovsky <ak natsys-lab com>
MFC after: 1 week
X-MFC-Note: td_rux shall be moved to the end of struct thread
to unconditionally set PG_REFERENCED on a page before sleeping. In many
cases, it's perfectly ok for the page to disappear, i.e., be reclaimed by
the page daemon, before the caller to vm_page_sleep() is reawakened.
Instead, we now explicitly set PG_REFERENCED in those cases where having
the page persist until the caller is awakened is clearly desirable. Note,
however, that setting PG_REFERENCED on the page is still only a hint,
and not a guarantee that the page should persist.
function ruxagg_tlock().
Convert the definition of kern_getrusage() to ANSI C.
Submitted by: Alexander Krizhanovsky <ak natsys-lab com>
MFC after: 1 week
taskqueue_drain(9) will not correctly detect whether a task is
currently running. The check is against a field in the taskqueue
struct, but for a threaded queue with more than one thread, multiple
threads can simultaneously be running a task, thus stomping over the
tq_running field.
Submitted by: Matthew Fleming <matthew.fleming@isilon.com>
Reviewed by: jhb
Approved by: dfr (mentor)
architecture from page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under page queue mutex to page lock. This changes
pmap_extract_and_hold on all pmaps.
Supported by: Bitgravity Inc.
Discussed with: alc, jeffr, and kib