processes. It did not return an error but instead
just let garbage be passed back. Fix it so that it
properly translates the process's current priority into
the POSIX convention, where a higher number means higher
priority. Also fix it so that if the ULE scheduler has
bumped the process up to realtime, a sane value is
returned, i.e. the highest priority (63 for time-share).
sched_setscheduler() had the setting of the time-share
class priority disabled, with some notes to the effect
that POSIX's "higher number means higher priority" should
be rejected and nice used instead. This change makes it
work, with the caveat that a time-share process may still
get bumped up or down, i.e. sched_setscheduler() will NOT
change the nice value, only the current priority. I think
this is reasonable: if the user wants to play with nice,
he can. At least all the POSIX-ish interfaces now
respond sanely.
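As a purely illustrative sketch of the intended user-visible behaviour
(the priority value chosen below is arbitrary and not part of this
change), a time-share process can now do something like:

#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct sched_param sp;
	int maxpri, policy;

	/* Higher POSIX number means higher priority. */
	maxpri = sched_get_priority_max(SCHED_OTHER);

	sp.sched_priority = maxpri / 2;		/* arbitrary example value */
	if (sched_setscheduler(getpid(), SCHED_OTHER, &sp) != 0)
		perror("sched_setscheduler");

	/* Both the policy and the translated priority should now be sane. */
	policy = sched_getscheduler(getpid());
	if (sched_getparam(getpid(), &sp) == 0)
		printf("policy %d, priority %d (max %d)\n",
		    policy, sp.sched_priority, maxpri);
	return (0);
}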
MFC after: 3 weeks
Enhanced process coredump routines.
This brings in the following features:
1) Limit number of cores per process via the %I coredump formatter.
Example:
If corefilename is set to %N.%I.core AND num_cores = 3, then
if a process "rpd" cores, the corefile will be named
"rpd.0.core"; if it cores again, the kernel will generate
"rpd.1.core", and so on until the limit of "num_cores" is hit.
This is useful for collecting several corefiles while also
preventing the machine from filling up with corefiles.
2) Encode machine hostname in core dump name via %H.
3) Compress coredumps, useful for embedded platforms with limited space.
A sysctl kern.compress_user_cores is made available if turned on.
To enable compressed coredumps, the following config options need to be set:
options COMPRESS_USER_CORES
device zlib # brings in the zlib requirements.
device gzio # brings in the kernel vnode gzip output module.
4) Eventhandlers are fired to indicate coredumps in progress.
5) The imgact sv_coredump routine has grown a flag to pass in more
state; currently it is used only to indicate whether or not the
coredump should be compressed.
Note that the gzio facility can be used for generic output of gzip'd
streams via vnodes.
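As an illustration only (kern.corefile is a pre-existing sysctl assumed
here for the name pattern; it is not introduced by this change), the new
knobs can be poked from user space roughly as follows, run as root:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	int on = 1;
	char pattern[] = "%N.%I.core";	/* use the new %I formatter */

	/* Enable compressed coredumps (needs the config options above). */
	if (sysctlbyname("kern.compress_user_cores", NULL, NULL,
	    &on, sizeof(on)) != 0)
		perror("kern.compress_user_cores");

	/* Point the corefile name pattern at the %N.%I.core scheme. */
	if (sysctlbyname("kern.corefile", NULL, NULL,
	    pattern, strlen(pattern) + 1) != 0)
		perror("kern.corefile");
	return (0);
}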
Obtained from: Juniper Networks
Reviewed by: kan
It is the case, however, that the uidinfo of the temporary credential
set up for access(2) is not properly updated when its effective uid is
changed.
MFC after: 3 days
that the virtual machine monitor has enabled machine check exceptions.
Unfortunately, on AMD Family 10h processors the machine check hardware
has a bug (Erratum 383) that can result in a false machine check exception
when a superpage promotion occurs. Thus, I am disabling superpage
promotion when the FreeBSD kernel is running as a guest operating system
on an AMD Family 10h processor.
Reviewed by: jhb, kib
MFC after: 3 days
in the process queue when gathering information for the process, and the
set of signals pending for the thread when gathering information for the
thread. Previously, the sysctl returned the union of the process pending
set and some arbitrary thread pending set for the process, and the union
of the process and the thread pending sets for the thread.
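For reference, a minimal sketch of how a consumer observes the fixed
behaviour, reading the process pending set out of kinfo_proc via the
existing kern.proc sysctl (no new API is assumed here):

#include <sys/param.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	struct kinfo_proc kp;
	size_t len = sizeof(kp);
	int mib[4] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid() };

	if (sysctl(mib, 4, &kp, &len, NULL, 0) != 0) {
		perror("sysctl");
		return (1);
	}
	/* ki_siglist now reflects only the process-queue pending signals. */
	printf("SIGUSR1 pending on the process queue: %d\n",
	    sigismember(&kp.ki_siglist, SIGUSR1));
	return (0);
}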
MFC after: 1 week
too high due to several overflows. The actual limit is somewhere in the
neighborhood of INT_MAX/4 on 64-bit machines, but most systems could not
support such a limit due to a lack of memory and the cost of duplicate
credentials.
Reported by: bde
Although this file has historically been used as a dumping ground for
random functions, nowadays it only contains functions related to copying
bits {from,to} userspace and hash table utility functions.
Behold, subr_uio.c and subr_hash.c.
o Incorporate review comments:
- Properly reference and lock the map
- Take into account that the VM map can change between requests
- Add the fileid and fsid attributes
Credits: kib@
Reviewed by: kib@
obtain the memory map of the traced process. PT_VM_TIMESTAMP can be
used to check if the memory map changed since the last time to avoid
iterating over all the VM entries unnecessarily.
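A hedged sketch of walking an attached, stopped child's map with
PT_VM_ENTRY (the struct ptrace_vm_entry field names used here are to the
best of my knowledge and should be treated as illustrative):

#include <sys/types.h>
#include <sys/ptrace.h>
#include <stdio.h>
#include <string.h>

/* pve_entry = 0 starts the walk; the kernel advances it on each call. */
static void
dump_map(pid_t pid)
{
	struct ptrace_vm_entry pve;
	char path[1024];

	memset(&pve, 0, sizeof(pve));
	pve.pve_entry = 0;
	for (;;) {
		pve.pve_path = path;
		pve.pve_pathlen = sizeof(path);
		if (ptrace(PT_VM_ENTRY, pid, (caddr_t)&pve, 0) != 0)
			break;		/* ENOENT marks the end of the map */
		printf("%#lx-%#lx %s\n", (u_long)pve.pve_start,
		    (u_long)pve.pve_end, pve.pve_pathlen != 0 ? path : "-");
	}
}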
MFC after: 1 month
I added counters to see how often fast copying to userspace was actually
performed, which was only useful during development. Remove these
statistics now that we know it to be effective.
Make CAM stop all attached devices on system shutdown.
This allows devices to park their heads, reducing stress on power loss.
Add a `kern.cam.power_down` tunable and sysctl to control it.
successfully. Continue to do this before the empty path check so that the
ENOENT returned in that case gets an empty string token in the BSM record.
MFC after: 3 days
In the current code, the locking is completely broken and may easily
lead to deadlocks. Fix it by using the proc_mtx, linked to the
suspending thread, as the lock for the operation. Keep using the
thread_lock for setting and reading the flag even if it is not entirely
necessary (atomic ops could do it as well, but this way the code is more
readable).
- Fix a deadlock within kthread_suspend().
The suspender should not sleep on a different channel than the suspended
thread; otherwise, the waker would have to wake up both. Make the
interface uniform with what the kproc_* counterparts do (sleeping on the
same channel).
- Change the kthread_suspend_check() prototype.
kthread_suspend_check() always assumes curthread and must only refer to
it, so drop the thread pointer argument, which could easily be mistaken.
If curthread is not a kthread, the system will panic. (A minimal usage
sketch follows below.)
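A minimal sketch of the new calling convention inside a kernel thread's
main loop (the thread body and wait message below are made up for
illustration):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/kthread.h>

static void
example_kthread(void *arg)
{
	for (;;) {
		/*
		 * New prototype: no thread argument, curthread is implied.
		 * Panics if curthread is not a kthread.
		 */
		kthread_suspend_check();

		/* ... do this thread's periodic work here ... */
		pause("xmpl", hz);
	}
}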
In collaboration with: jhb
Tested by: Giovanni Trematerra
<giovanni dot trematerra at gmail dot com>
MFC after: 2 weeks
In the case of the thread being on a sleepqueue or a turnstile, the
sched_lock was acquired (without the aid of the td_lock interface) and
the td_lock was dropped. This was going to break locking rules on other
threads willing to access the thread (via the td_lock interface) and
modify its flags (allowed as long as the container lock was different
from the one used in sched_switch).
In order to prevent this situation, while sched_lock is acquired there,
the td_lock gets blocked. [0]
- Merge ULE's internal function thread_block_switch() into the global
thread_lock_block() and make the former's semantic the default for
thread_lock_block(). This means that thread_lock_block() will not
disable interrupts when called (and consequently thread_unlock_block()
will not re-enable them when called). This should be done manually
when necessary.
Note, however, that ULE's thread_unblock_switch() is not removed
because it reflects a semantic difference within ULE (the
td_lock may not necessarily still be blocked_lock when calling it).
While asymmetric, it does describe a notable semantic difference
that is good to keep in mind.
[0] Reported by: Kohji Okuno
<okuno dot kohji at jp dot panasonic dot com>
Tested by: Giovanni Trematerra
<giovanni dot trematerra at gmail dot com>
MFC after: 2 weeks
syscall arguments are collected before ptracestop() is called. As a
consequence, the debugger cannot modify the syscall or its arguments.
For the i386, amd64 and ia32-on-amd64 MD syscall(), reread the syscall
number and arguments after ptracestop() if the debugger modified
anything in the process environment. Since the procfs stop event
requires the number of syscall arguments in p_xstat, this cannot be
solved by moving the stop/trace point before argument fetching.
Move the code that reads the arguments into a separate function,
fetch_syscall_args(), to avoid code duplication. Note that the ktrace
point for a modified syscall is intentionally recorded twice, once with
the original arguments and a second time with the arguments set by the
debugger. The PT_TO_SCX stop is executed after cpu_syscall_set_retval()
has already been called.
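A hedged sketch of what a debugger can now do at the syscall-entry stop
(amd64-specific register usage shown; the helper and its arguments are
purely illustrative):

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <machine/reg.h>

/*
 * Stop an attached amd64 debuggee at syscall entry and rewrite the
 * syscall number; the kernel now re-reads it after the ptrace stop.
 */
static void
rewrite_syscall(pid_t pid, long new_sysno)
{
	struct reg r;
	int status;

	ptrace(PT_TO_SCE, pid, (caddr_t)1, 0);	/* run to syscall entry */
	waitpid(pid, &status, 0);

	ptrace(PT_GETREGS, pid, (caddr_t)&r, 0);
	r.r_rax = new_sysno;			/* amd64 syscall number */
	ptrace(PT_SETREGS, pid, (caddr_t)&r, 0);

	ptrace(PT_TO_SCX, pid, (caddr_t)1, 0);	/* run to syscall exit */
	waitpid(pid, &status, 0);
}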
Reported by: Ali Polatel <alip exherbo org>
Briefly discussed with: jhb
MFC after: 3 weeks
stopped and the debugger may modify or drop the signal. After the changes
to keep process-targeted signals on the process sigqueue, another thread
may notice the old signal on the queue and act on it before the first
thread removes the changed or dropped signal from the process queue.
Since the process is traced, it usually gets stopped. Or, if the same
signal is delivered while the process was stopped, the thread may
erroneously remove it, intending to remove the original signal.
Remove the signal from the queue before notifying the debugger. Restore
the siginfo to the head of the sigqueue when the signal is allowed to be
delivered to the debuggee, using the newly introduced KSI_HEAD ksiginfo_t
flag. This preserves the required order of delivery. Always restore the
unchanged signal on the curthread sigqueue, not on the process queue,
since the thread is about to get it anyway, because the sigmask cannot
be changed.
Handle failure to reinsert the siginfo into the queue by falling
back to the sq_kill method, calling sigqueue_add() with a NULL ksi.
If the debugger changed the signal to be delivered, use sigqueue_add()
with a NULL ksi instead of only setting the sq_signals bit.
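A rough kernel-side sketch of the reinsertion logic described above
(variable names are illustrative, not lifted from the actual code):

	ksi->ksi_flags |= KSI_HEAD;	/* restore at the head to keep order */
	if (sigqueue_add(&td->td_sigqueue, sig, ksi) != 0) {
		/* No ksiginfo resources: fall back to the sq_kill method. */
		sigqueue_add(&td->td_sigqueue, sig, NULL);
	}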
Reported by: Gardner Bell <gbell72 rogers com>
Analyzed and first version of fix by: Tijl Coosemans <tijl coosemans org>
PR: 142757
Reviewed by: davidxu
MFC after: 2 weeks
count) while vnode is exclusively locked.
The code for vput(9), vrele(9) and vunref(9) is merged.
In collaboration with: pho
Reviewed by: alc
MFC after: 3 weeks
whether to use source address selection (default) or the primary
jail address for unbound outgoing connections.
This is intended for people upgrading from single-IP jails to
multi-IP jails who do not want to change firewall rules,
application ACLs, ... but want to force their connections (unless
otherwise changed) to the primary jail IP they had been using for
years, as well as for people preferring to implement similar policies.
Note that for IPv6, if configured incorrectly, this might lead to
scope violations, which single-IPv6 jails could cause as well, by
the design of jails. [1]
Reviewed by: jamie, hrs (ipv6 part)
Pointed out by: hrs [1]
MFC after: 2 weeks
Asked for by: Jase Thew (bazerka beardz.net)
of allocating whatever the user asks for, up to "ngroups_max + 1". On
systems with large values of kern.ngroups this will be more efficient.
The now-redundant check in kern_getgroups() that the array is large
enough is deliberately retained to allow this change to be merged to
stable/8 without breaking potential third-party consumers of the API.
Reported by: bde
MFC after: 28 days
kern.ngroups+1. kern.ngroups can range from NGROUPS_MAX=1023 to
INT_MAX-1. Given that the Windows group limit is 1024, this range
should be sufficient for most applications.
MFC after: 1 month