Commit Graph

10710 Commits

Author SHA1 Message Date
kmacy
565bc001a5 Add accessor functions for socket fields.
MFC after:	1 week
2008-07-21 00:49:34 +00:00
alc
08181df483 Eliminate dead code. (The commit message for revision 1.287 explains why
this code is dead.)
2008-07-20 04:13:51 +00:00
rwatson
b53b96f01c Rather than simply waiting silently and indefinitely for all
interrupt-driven configuration handlers to complete, print out a
diagnostic message every 60 seconds indicating which handlers are
still running.  Do this at most 5 times per run so as to avoid
scrolling out any useful information from the kernel message
buffer.

The interval of 60 seconds was selected based on a best guess as
to the nature of "long enough" and may want to be tuned higher
or lower depending on real-world tolerances.

MFC after:	3 days
Discussed with:	scottl
2008-07-19 19:08:35 +00:00
rwatson
2df3fcd0c6 witness_addgraph() is required even if DDB isn't compiled into the kernel,
so exclude it from #ifdef DDB.

Submitted by:	attilio
2008-07-19 17:47:23 +00:00
rwatson
8fd5cf995c Add DDB "show conifhk" command, which lists hooks currently waiting
for completion in run_interrupt_driven_config_hooks().  This is
helpful when trying to figure out which device drivers have gone
into la-la land during boot-time autoconfiguration.

MFC after:	3 days
2008-07-19 12:12:54 +00:00
jeff
7ff6e9903f Fix a race which could result in some timeout buckets being skipped.
- When a tick occurs on a cpu, iterate from cs_softticks until ticks.
   The per-cpu tick processing happens asynchronously with the actual
   adjustment of the 'ticks' variable.  Sometimes the results may
   be visible before the local call and sometimes after.  Previously this
   could cause a one tick window where we didn't evaluate the bucket.
 - In softclock fetch curticks before incrementing cc_softticks so we
   don't skip insertions which were made for the current time.

Sponsored by:	Nokia
2008-07-19 05:18:29 +00:00
jeff
b2f69d1b1e - Check whether we've recorded this tick in ts_ticks on another cpu in
sched_tick() to prevent multiple increments for one tick.  This pushes
   the value out of range and breaks priority calculation.

Reviewed by:	kib
Found by:	pho/nokia
Sponsored by:	Nokia
MFC after:	3 days
2008-07-19 05:13:47 +00:00
kmacy
6dfc39c2b6 revert local change 2008-07-18 07:10:33 +00:00
kmacy
eacfaa0e61 revert change from local tree 2008-07-18 07:07:57 +00:00
kmacy
c01ed5ad9b import vendor fixes to cxgb 2008-07-18 06:12:31 +00:00
kib
eff9ee09b4 Pair the VOP_OPEN call from do_execve() with the reciprocal VOP_CLOSE.
This was unnoticed because local filesystems usually do nothing
non-trivial in the close vop.

Reported and tested by:	Rick Macklem
MFC after:	2 weeks
2008-07-17 16:44:07 +00:00
antoine
89ca3c5933 Staticize M_STACK.
Approved by:	rwatson (mentor)
MFC after:	1 month
2008-07-13 17:15:05 +00:00
rodrigc
f280e5ed8f In nmount(), if we see "update" in the mount options,
set MNT_UPDATE in fsflags, and delete the
"update" option from the global mount options.

MNT_UPDATE is a command, and not a property of a mount
that should persist after the command is executed.

We need to do similar things for MNT_FORCE and MNT_RELOAD.

All mount flags are prefixed by MNT_..... it would
be nice if flags which were commands were named differently
from flags which are persistent properties of a mount.
This was not such a big deal in the pre-nmount() days,
but with nmount() it is more important.

Requested by:	yar
MFC after:	2 weeks
2008-07-12 20:12:40 +00:00
obrien
fa9172e3f7 Improve readability and cscope searches a little bit by not using the
same variable name in closely related (but not conflicting) contexts.
2008-07-11 14:48:28 +00:00
kib
da671c0533 Make the setting of the SI_ALIAS flag and the initialization of si_parent
atomic with respect to devfs_populate_loop() when an alias is created.
Assert that the supplied parent device is not NULL.

Both situations could cause NULL dereference in the
devfs_populate_loop() when creating a symlink for SI_ALIAS'ed device.
Namely, cdp->cdp_c.si_parent may be NULL.

Reported by:	mav
MFC after:	2 weeks
2008-07-11 11:22:19 +00:00
obrien
3b9db50b75 Revert r180431.
r180431 broke the AMD64 build (the only arch using kern/link_elf_obj.c)
2008-07-11 01:10:40 +00:00
obrien
0bc4bc025d Allow 'elf_file_t' to be used in a wider scope. 2008-07-10 16:35:57 +00:00
edwin
e80b338f3b Improve the output of kldload(8) to show which module can't be loaded.
Was:		kldload: Unsupported file type
Is now:		kldload: /boot/modules/test.ko: Unsupported file type

PR:		kern/121276
Submitted by:	Edwin Groothuis <edwin@mavetju.org>
Approved by:	bde (mentor)
MFC after:	1 week
2008-07-08 23:51:38 +00:00
bz
f93b85c0df Add a `show cpusets' DDB command to print numbered root and
assigned CPU affinity sets.

Reviewed by:	brooks
2008-07-07 21:32:02 +00:00
bz
6988e35234 MFp4 144659:
Plug a memory leak with jail services.

PR:		125257
Submitted by:	Mateusz Guzik <mjguzik gmail.com>
MFC after:	6 days
2008-07-07 20:53:49 +00:00
bz
cf63123d06 Move cpuset_refroot and cpuset_refbase functions up, grouping the
cpuset_ref* functions together. Will make it easier to read and
add code without forward declarations.
No functional changes.
2008-07-07 20:45:55 +00:00
kib
d39c6bcffb The kqueue_register() function assumes that it is called from the top of
the syscall code and acquires various event subsystem locks as needed.
The handling of the NOTE_TRACK for EVFILT_PROC is currently done by
calling kqueue_register() from the filt_proc() filter, causing recursive
entry into the kqueue code. This results in LORs and recursive
acquisition of the locks.

Implement a variant of the knote() function designed to handle only
the fork() event. It mostly copies the knote() body, but also handles
NOTE_TRACK, removing that handling from filt_proc(), where it
causes the problems described above. The function is called from fork1()
instead of knote().

When encountering a NOTE_TRACK knote, it marks the knote as in flux
and drops the knlist and kqueue locks. In this context the call to
kqueue_register() is safe.

An error from the kqueue_register() is reported to the observer as
NOTE_TRACKERR fflag.

PR:	108201
Reviewed by:	jhb, Pramod Srinivasan <pramod juniper net> (previous version)
Discussed with:	jmg
Tested by:	pho
MFC after:	2 weeks
2008-07-07 09:30:11 +00:00
kib
ea1979e3d2 In r178914 I erroneously put the setting of the KQ_FLUXWAIT flag before
KQ_FLUX_WAKEUP(). Since the latter macro clears KQ_FLUXWAIT, the
kqueue_scan() thread may not be woken up.

Move the setting of KQ_FLUXWAIT after wakeup to correct the issue.

Reported and tested by:	pho
MFC after:	3 days
2008-07-07 09:15:29 +00:00
alc
c016906f4e Enable the creation of a kmem map larger than 4GB.
Submitted by: Tz-Huan Huang

Make several variables related to kmem map auto-sizing static.
Found by: CScout
2008-07-05 19:34:33 +00:00
rwatson
051819b847 Introduce a new lock, hostname_mtx, and use it to synchronize access
to global hostname and domainname variables.  Where necessary, copy
to or from a stack-local buffer before performing copyin() or
copyout().  A few uses, such as in cd9660 and daemon_saver, remain
under-synchronized and will require further updates.

Correct a bug in which a failed copyin() of domainname would leave
domainname potentially corrupted.

MFC after:	3 weeks
2008-07-05 13:10:10 +00:00
alc
b7d6153751 Correct an error in the comments for init_param3().
Discussed with: silby
2008-07-04 19:36:58 +00:00
rwatson
482bfeab47 Remove NETISR_MPSAFE, which allows specific netisr handlers to be directly
dispatched without Giant, and add NETISR_FORCEQUEUE, which allows specific
netisr handlers to always be dispatched via a queue (deferred).  Mark the
usb and if_ppp netisr handlers as NETISR_FORCEQUEUE, and explicitly
acquire Giant in those handlers.

Previously, any netisr handler not marked NETISR_MPSAFE would necessarily
run deferred and with Giant acquired.  This change removes Giant
scaffolding from the netisr infrastructure, but NETISR_FORCEQUEUE allows
non-MPSAFE handlers to continue to force deferred dispatch so as to avoid
lock order reversals between their acquisition of Giant and any calling
context.

It is likely we will be able to remove NETISR_FORCEQUEUE once
IFF_NEEDSGIANT is removed, as non-MPSAFE usb and if_ppp drivers will no
longer be supported.

Reviewed by:	bz
MFC after:	1 month
X-MFC note:	We can't remove NETISR_MPSAFE from stable/7 for KPI reasons,
		but the rest can go back.
2008-07-04 00:21:38 +00:00
emaste
240825654b Use bcopy instead of strlcpy in uipc_bind and unp_connect, since
soun->sun_path isn't a null-terminated string.  As UNIX(4) states, "the
terminating NUL is not part of the address."  Since strlcpy has to return
"the total length of the string [it] tried to create," it walks off the end
of soun->sun_path looking for a \0.

This reverts r105332.

Reported by:    Ryan Stone
2008-07-03 23:26:10 +00:00
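A minimal userland sketch of the difference described above: the copy must be bounded by the address length rather than by a NUL scan. The helper name name_from_sun() and the sample path are illustrative, not part of the commit.

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /*
     * Copy the path out of a sockaddr_un without assuming NUL termination;
     * the usable length comes from sun_len, since "the terminating NUL is
     * not part of the address".
     */
    static void
    name_from_sun(const struct sockaddr_un *sun, char *buf, size_t bufsz)
    {
            size_t len;

            len = sun->sun_len - offsetof(struct sockaddr_un, sun_path);
            if (len >= bufsz)
                    len = bufsz - 1;
            bcopy(sun->sun_path, buf, len);    /* bounded copy, no NUL scan */
            buf[len] = '\0';                   /* terminate the local copy only */
    }

    int
    main(void)
    {
            struct sockaddr_un sun;
            char buf[sizeof(sun.sun_path) + 1];

            memset(&sun, 0, sizeof(sun));
            sun.sun_family = AF_UNIX;
            memcpy(sun.sun_path, "/tmp/demo.sock", 14);   /* no NUL stored */
            sun.sun_len = offsetof(struct sockaddr_un, sun_path) + 14;

            name_from_sun(&sun, buf, sizeof(buf));
            printf("%s\n", buf);
            return (0);
    }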
julian
7b11deb4f4 Change a variable name to not shadow a global
Obtained from:	vimage
2008-07-03 08:35:59 +00:00
rwatson
108da791bb Update copyright date in light of soreceive_dgram(9). 2008-07-03 06:47:45 +00:00
rwatson
0c50a62527 Add soreceive_dgram(9), an optimized socket receive function for use by
datagram-only protocols, such as UDP.  This version removes use of
sblock(), which is not required because data cannot be improperly
interlaced across datagrams, as well as avoiding some of the larger loops
and state management that don't apply on datagram sockets.

This is experimental code, so hook it up only for UDPv4 for testing; if
there are problems we may need to revise it or turn it off by default,
but it offers *significant* performance improvements for threaded UDP
applications such as BIND9, nsd, and memcached using UDP.

Tested by:	kris, ps
2008-07-02 23:23:27 +00:00
rdivacky
d3e39bd522 Use msleep_spin() instead of unlock/tsleep/lock. This was
already committed but with a wrong msleep variant and then
backed out. Note that this changes the semantics a little,
as msleep_spin does not let us specify the priority after
wakeup.

Approved by:	wkoszek, cognet
Approved by:	kib (mentor)
2008-07-02 20:44:33 +00:00
bz
30064ea555 Remove an unneeded error variable to make it clear that when we reach
the end of the function we never return an error.
2008-06-29 18:26:07 +00:00
bz
103613ceb8 Add a new priv 'PRIV_SCHED_CPUSET' to check if manipulating cpusets is
allowed and replace the suser() call. Do not allow it in jails.

Reviewed by:	rwatson
2008-06-29 17:58:16 +00:00
jhb
411d068395 Rework the lifetime management of the kernel implementation of POSIX
semaphores.  Specifically, semaphores are now represented as new file
descriptor type that is set to close on exec.  This removes the need for
all of the manual process reference counting (and fork, exec, and exit
event handlers) as the normal file descriptor operations handle all of
that for us nicely.  It is also suggested as one possible implementation
in the spec and at least one other OS (OS X) uses this approach.

Some bugs that were fixed as a result include:
- References to a named semaphore whose name is removed still work after
  the sem_unlink() operation.  Prior to this patch, if a semaphore's name
  was removed, valid handles from sem_open() would get EINVAL errors from
  sem_getvalue(), sem_post(), etc.  This fixes that.
- Unnamed semaphores created with sem_init() were not cleaned up when a
  process exited or exec'd.  They were only cleaned up if the process
  did an explicit sem_destroy().  This could result in a leak of semaphore
  objects that could never be cleaned up.
- On the other hand, if another process guessed the id (the kernel pointer
  to 'struct ksem') of an unnamed semaphore (created via sem_init) and had
  write access to the semaphore based on UID/GID checks, then that other
  process could manipulate the semaphore via sem_destroy(), sem_post(),
  sem_wait(), etc.
- As part of the permission check (UID/GID), the umask of the process
  creating the semaphore was not honored.  Thus if your umask denied group
  read/write access but the explicit mode in the sem_init() call allowed
  it, the semaphore would be readable/writable by other users in the
  same group, for example.  This includes access via the previous bug.
- If the module refused to unload because there were active semaphores,
  then it might have deregistered one or more of the semaphore system
  calls before it noticed that there was a problem.  I'm not sure if
  this actually happened as the order that modules are discovered by the
  kernel linker depends on how the actual .ko file is linked.  One can
  make the order deterministic by using a single module with a mod_event
  handler that explicitly registers syscalls (and deregisters during
  unload after any checks).  This also fixes a race where even if the
  sem_module unloaded first it would have destroyed locks that the
  syscalls might be trying to access if they are still executing when
  they are unloaded.

  XXX: By the way, deregistering system calls doesn't do any blocking
  to drain any threads from the calls.
- Some minor fixes to errno values on error.  For example, sem_init()
  isn't documented to return ENFILE or EMFILE if we run out of semaphores
  the way that sem_open() can.  Instead, it should return ENOSPC in that
  case.

Other changes:
- Kernel semaphores now use a hash table to manage the namespace of
  named semaphores, in a fashion similar to the POSIX shared memory
  object file descriptors.  Kernel semaphores can now also have names
  longer than 14 chars (up to MAXPATHLEN) and can include subdirectories
  in their pathname.
- The UID/GID permission checks for access to a named semaphore are now
  done via vaccess() rather than a home-rolled set of checks.
- Now that kernel semaphores have an associated file object, the various
  MAC checks for POSIX semaphores accept both a file credential and an
  active credential.  There is also a new posixsem_check_stat() since it
  is possible to fstat() a semaphore file descriptor.
- A small set of regression tests (using the ksem API directly) is present
  in src/tools/regression/posixsem.

Reported by:	kris (1)
Tested by:	kris
Reviewed by:	rwatson (lightly)
MFC after:	1 month
2008-06-27 05:39:04 +00:00
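A small userland sketch exercising the first bug listed above: a handle from sem_open() should remain usable after sem_unlink() removes the name. The semaphore name "/demo.sem" is made up for the example.

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    int
    main(void)
    {
            sem_t *s;
            int val;

            s = sem_open("/demo.sem", O_CREAT, 0600, 1);
            if (s == SEM_FAILED) {
                    perror("sem_open");
                    return (1);
            }
            sem_unlink("/demo.sem");   /* name is gone; handle must stay valid */

            /* Before the rework these failed with EINVAL once the name was removed. */
            if (sem_wait(s) == -1 || sem_getvalue(s, &val) == -1)
                    perror("operation after sem_unlink");
            else
                    printf("value after wait: %d\n", val);

            sem_post(s);
            sem_close(s);
            return (0);
    }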
julian
e62e072121 Someone cut and pasted a bunch of stuff here so lots of
indents were spaces when they should have been tabs,
screwing up diffs and patches..

Whitespace commit as my first SVN commit. (yay)

MFC after:	1 week
2008-06-26 22:45:04 +00:00
dfr
41cea6d5ca Re-implement the client side of rpc.lockd in the kernel. This implementation
provides the correct semantics for flock(2) style locks which are used by the
lockf(1) command line tool and the pidfile(3) library. It also implements
recovery from server restarts and ensures that dirty cache blocks are written
to the server before obtaining locks (allowing multiple clients to use file
locking to safely share data).

Sponsored by:	Isilon Systems
PR:		94256
MFC after:	2 weeks
2008-06-26 10:21:54 +00:00
ru
c878414354 Fix a chicken-and-egg problem: this file implements SSP support,
so we cannot compile it with -fstack-protector[-all] flags (or
it will self-recurse); this is ensured in sys/conf/files.  This
OTOH means that checking for defines __SSP__ and __SSP_ALL__ to
determine if we should be compiling the support is impossible
(which it was trying, resulting in an empty object file).  Fix
this by always compiling the symbols in this file.  It's good
because it allows us to always have SSP support, and then compile
with SSP selectively.

Reported by:	tinderbox
2008-06-26 07:52:45 +00:00
ru
8735fdbd4c Enable GCC stack protection (aka Propolice) for userland:
- It is opt-out for now so as to give it maximum testing, but it may be
  turned opt-in for stable branches depending on the consensus.  You
  can turn it off with WITHOUT_SSP.
- WITHOUT_SSP was previously used to disable the build of GNU libssp.
  It is harmless to steal the knob as SSP symbols have been provided
  by libc for a long time, GNU libssp should not have been much used.
- SSP is disabled in a few corners such as system bootstrap programs
  (sys/boot), process bootstrap code (rtld, csu) and SSP symbols themselves.
- It should be safe to use -fstack-protector-all to build world, however
  libc will be automatically downgraded to -fstack-protector because it
  breaks rtld otherwise.
- This option is unavailable on ia64.

Enable GCC stack protection (aka Propolice) for kernel:
- It is opt-out for now so as to give it maximum testing.
- Do not compile your kernel with -fstack-protector-all, it won't work.

Submitted by:	Jeremie Le Hen <jeremie@le-hen.org>
2008-06-25 21:33:28 +00:00
davidxu
70dd244f26 Add two commands to the _umtx_op system call to allow a simple mutex to be
locked and unlocked completely in userland. By locking and unlocking the mutex
in userland, it reduces the total time a mutex is locked by a thread.
In some application code a mutex only protects a small piece of code whose
execution time is less than that of a simple system call; if lock contention
happens, however, in the current implementation the lock holder has to extend
its locking time and enter the kernel to unlock it. The change avoids this
disadvantage: it first sets the mutex to the free state and then enters the
kernel to wake one waiter up. This improves performance dramatically in some
sysbench mutex tests.

Tested by: kris
Sounds great: jeff
2008-06-24 07:32:12 +00:00
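A hedged sketch of the fast-path idea described above: an uncontended lock and unlock are a single atomic operation in userland, and the kernel is entered only when a waiter exists. The lock-word states and the kern_wait()/kern_wake() declarations are placeholders standing in for the new _umtx_op commands; the real lock word encodes owner thread IDs.

    #include <stdint.h>

    /* Illustrative lock-word states; the real umtx word encodes owner TIDs. */
    #define MTX_UNLOCKED    0
    #define MTX_LOCKED      1
    #define MTX_CONTESTED   2

    /* Placeholders for the kernel sleep/wakeup primitives (the new _umtx_op ops). */
    void    kern_wait(volatile uint32_t *word, uint32_t expect);
    void    kern_wake(volatile uint32_t *word, int nwake);

    static void
    mtx_lock(volatile uint32_t *m)
    {
            /* Uncontended case: one CAS, no system call at all. */
            if (__sync_bool_compare_and_swap(m, MTX_UNLOCKED, MTX_LOCKED))
                    return;
            /* Contended: mark the word contested (atomic exchange), then sleep. */
            while (__sync_lock_test_and_set(m, MTX_CONTESTED) != MTX_UNLOCKED)
                    kern_wait(m, MTX_CONTESTED);
    }

    static void
    mtx_unlock(volatile uint32_t *m)
    {
            /*
             * Mark the mutex free first, then wake one waiter.  This is the
             * point of the change: the lock is already available to other
             * threads while the former owner is still entering the kernel.
             */
            if (__sync_lock_test_and_set(m, MTX_UNLOCKED) == MTX_CONTESTED)
                    kern_wake(m, 1);
    }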
jhb
437891381c Remove the posixsem_check_destroy() MAC check. It is semantically identical
to doing a MAC check for close(), but no other types of close() (including
close(2) and ksem_close(2)) have MAC checks.

Discussed with:	rwatson
2008-06-23 21:37:53 +00:00
rwatson
1e17e3cd45 If S_IFIFO is passed to mknod(2), invoke kern_mkfifoat(9) to create a
FIFO, as required by SUSv3.  No specific privilege check is performed
in this case, as FIFOs may be created by unprivileged processes
(subject to the normal file system name space restrictions that may be
in place).

Unlike the Apple implementation, we reject requests to create a FIFO
using mknod(2) if there is a non-zero dev argument to the system call,
which is permitted by the Open Group specification ("... undefined
...").  We might want to revise this if we find it causes
compatibility problems for applications in practice.

PR:		kern/74242, kern/68459
Obtained from:	Apple, Inc.
MFC after:	3 weeks
2008-06-22 21:51:32 +00:00
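A small userland sketch of the newly supported call: creating a FIFO through mknod(2) with a zero dev argument, equivalent to mkfifo(2). The path is made up for the example; a non-zero dev is rejected, as noted above.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* Equivalent to mkfifo("/tmp/demo.fifo", 0600); dev must be 0. */
            if (mknod("/tmp/demo.fifo", S_IFIFO | 0600, 0) == -1) {
                    perror("mknod");
                    return (1);
            }
            return (0);
    }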
gonzo
f0ffee5444 Use the minimum of max_aio_procs and target_aio_procs when spawning a new
aiod, since there should be no more than max_aio_procs processes.
2008-06-21 11:34:34 +00:00
imp
bf94b8a5bf Split out the probing magic of device_probe_and_attach into
device_probe() so that it can be used by busses that may wish to do
additional processing between probe and attach.

Reviewed by:	dfr@
2008-06-20 16:58:15 +00:00
alc
c5556f0762 Enforce the mapping of kernel loadable modules in the uppermost 2GB of the
kernel virtual address space on amd64.
2008-06-20 06:24:34 +00:00
delphij
4f152d47fa Revert rev. 178124 as requested by kris@.  Having jail IDs not be
reused too frequently is useful for script-controlled environments.
2008-06-19 21:41:57 +00:00
gonzo
c5bc6314e2 Renew the semaphore's pointer after wakeup, since during msleep
sem_base may have been modified by the destruction of one of the semaphores,
in which case semptr would no longer be valid.

PR: kern/123731
2008-06-19 18:08:42 +00:00
kib
eecc60305f Struct cdev is always a member of struct cdev_priv.  When devfs
needed to promote a cdev to cdev_priv, the si_priv pointer was followed.

Use member2struct() to calculate address of the wrapping cdev_priv.
Rename si_priv to __si_reserved.

Tested by:	pho
Reviewed by:	ed
MFC after:	2 weeks
2008-06-16 17:34:59 +00:00
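A userland sketch of the pattern the commit relies on: recovering the wrapping structure from a pointer to an embedded member, so the si_priv back-pointer is unnecessary. The struct layouts below are simplified stand-ins, and the member2struct() definition shown is an assumption about its offsetof-based shape.

    #include <stddef.h>
    #include <stdio.h>

    /* Assumed shape of member2struct(): container-of via offsetof. */
    #define member2struct(s, m, x)                                          \
            ((struct s *)(void *)((char *)(x) - offsetof(struct s, m)))

    /* Simplified stand-ins for struct cdev embedded in struct cdev_priv. */
    struct cdev {
            int     si_flags;
    };

    struct cdev_priv {
            int             cdp_inode;
            struct cdev     cdp_c;          /* embedded, not pointed to */
    };

    int
    main(void)
    {
            struct cdev_priv cdp = { .cdp_inode = 42 };
            struct cdev *dev = &cdp.cdp_c;
            struct cdev_priv *back;

            /* Recover the wrapper without storing a back-pointer in struct cdev. */
            back = member2struct(cdev_priv, cdp_c, dev);
            printf("%d\n", back->cdp_inode);
            return (0);
    }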
jb
567c5d727e Remove code that isn't required.  It actually breaks the case where KDTRACE_HOOKS
is defined and KDB isn't, which is the very case it was intended for.
2008-06-16 04:44:29 +00:00
ed
4327eebef0 Turn dev2unit(), minor(), unit2minor() and minor2unit() into macros.
Now that we got rid of the minor-to-unit conversion and the constraints
on device minor numbers, we can convert the functions that operate on
minor and unit numbers to simple macros. The unit2minor() and
minor2unit() macros are now no-ops.

The ZFS code also defined a macro named `minor'. Change the ZFS code to
use umajor() and uminor() here, as it is the correct approach to do
this. Also add $FreeBSD$ to keep SVN happy.

Approved by:	philip (mentor), pjd
2008-06-12 08:30:54 +00:00
ed
1bfc292986 Don't enforce unique device minor number policy anymore.
Except for the case where we use the cloner library (clone_create() and
friends), there is no reason to enforce a unique device minor number
policy. There are various drivers in the source tree that allocate unr
pools and such to provide minor numbers, without using them themselves.

Because we still need to support unique device minor numbers for the
cloner library, introduce a new flag called D_NEEDMINOR. All cdevsw's
that are used in combination with the cloner library should be marked
with this flag to make the cloning work.

This means drivers can now freely use si_drv0 to store their own flags
and state, making it effectively the same as si_drv1 and si_drv2. We
still keep the minor() and dev2unit() routines around to make drivers
happy.

The NTFS code also used the minor number in its hash table. We should
not do this anymore. If the si_drv0 field would be changed, it would no
longer end up in the same list.

Approved by:	philip (mentor)
2008-06-11 18:55:19 +00:00
gonzo
4f61d04fd8 Keep proper track of the nsegs counter: sem_free is called for all
allocated semaphores, so it's wrong to increase it conditionally;
  in this case, for every over-the-limit semaphore, nsegs is decreased
  without having been previously increased.

  PR:	kern/123685
  Approved by:	cognet (mentor)
2008-06-10 20:55:10 +00:00
kib
926d12d0ea Provide the mutual exclusion between the nfs export list modifications
and nfs requests processing. Lockmgr lock provides the shared locking for
nfs requests, while exclusive mode is used for modifications. The writer
starvation is handled by lockmgr too.

Reported by:	kris, pho, many
Based on the submission by:	mohan
Tested by:	pho
MFC after:	2 weeks
2008-06-09 10:31:38 +00:00
wkoszek
3183578270 Remove checks against DDB, which isn't used in this file.
My intention is to bring no functional change.

Discussion on:	IRC
Reviewed by:	ed, kan, rink,
2008-06-08 20:43:27 +00:00
ed
be822a5885 Remove unneeded Giant locking of /dev/tty.
The Giant lock is acquired in two places in tty_tty.c. In both places,
it is unneeded.

There is no reason to specify D_NEEDGIANT on this device node. The
device node has only been designed to return ENXIO when opened. It
doesn't make any sense to lock/unlock Giant, just to return this error.
D_TTY is also unneeded. The unimplemented functions don't need to be
patched by devfs.

We don't need to lock Giant when we want to lookup the proper TTY vnode.
s_ttyvp is already protected by proctree_lock (see devfs_vnops.c).

Approved by:	philip (mentor)
2008-06-03 12:38:00 +00:00
davidxu
d4f2094515 Use a separate hash table for mutexes and rwlocks, avoiding wasting time
walking through idle threads sleeping on condition variables.
2008-05-30 02:18:54 +00:00
ed
5de6a45e07 Remove the distinction between device minor and unit numbers.
Even though we got rid of device major numbers some time ago, device
drivers still need to provide unique device minor numbers to make_dev().
These numbers are only used inside the kernel. They are not related to
device major and minor numbers which are visible in devfs. These are
actually based on the inode number of the device.

It would eventually be nice to remove minor numbers entirely, but we
don't want to be too aggressive here.

Because the 8-15 bits of the device number field (si_drv0) are still
reserved for the major number, there is no 1:1 mapping of the device
minor and unit numbers. Because this is now unused, remove the
restrictions on these numbers.

The MAXMAJOR definition was actually used for two purposes. It was used
to convert both the userspace and kernelspace device numbers to their
major/minor pair, which is why it is now named UMINORMASK.

minor2unit() and unit2minor() have now become useless. Both minor() and
dev2unit() now serve the same purpose. We should eventually remove some
of them, at least turning them into macros. If devfs were to become
completely minor number unaware, we could consider using si_drv0 directly,
just like si_drv1 and si_drv2.

Approved by:	philip (mentor)
2008-05-29 12:50:46 +00:00
ed
83304da0e8 Remove redundant checks from fcntl()'s F_DUPFD.
Right now we perform some of the checks inside the fcntl()'s F_DUPFD
operation twice. We first validate the `fd' argument. When finished,
we validate the `arg' argument. These checks are also performed inside
do_dup().

The reason we need to do this, is because fcntl() should return different
errno's when the `arg' argument is out of bounds (EINVAL instead of
EBADF). To prevent the redundant locking of the PROC_LOCK and
FILEDESC_SLOCK, patch do_dup() to support the error semantics required
by fcntl().

Approved by:	philip (mentor)
2008-05-28 20:25:19 +00:00
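A small userland sketch of the error semantics the commit preserves while removing the duplicated checks: a bad fd argument yields EBADF, while an out-of-range arg yields EINVAL. The huge arg value is just an illustration of "beyond the fd limit".

    #include <fcntl.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            /* Bad `fd' argument: expected to fail with EBADF. */
            errno = 0;
            if (fcntl(-1, F_DUPFD, 0) == -1)
                    printf("bad fd:  %s\n", strerror(errno));

            /* Out-of-range `arg' argument: expected to fail with EINVAL. */
            errno = 0;
            if (fcntl(0, F_DUPFD, 1 << 30) == -1)
                    printf("bad arg: %s\n", strerror(errno));

            return (0);
    }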
ed
00336df1bc Rename `tty_subr.c' to `subr_clist.c'.
Because clists are also used outside the TTY layer, rename the file
containing the clist routines to something more accurate.

The mpsafetty TTY layer doesn't use clists. It uses its own buffers,
which also implement the unbuffered copying to userspace. We cannot
simply remove the clist routines then, because this would break various
drivers that are present within the kernel.

Approved by:	philip (mentor)
2008-05-27 06:41:50 +00:00
attilio
e089ccfc1b Improve a comment which, in the current CVS version, doesn't completely
explain the logic of the code chunk.
2008-05-27 00:27:50 +00:00
kib
5941eb2619 Take into account possible overflow when multiplying. The casualty is
the malloc call later, panicking the kernel due to the oversized allocation.

Reported by:	pho
Reviewed by:	jeff
2008-05-26 10:01:13 +00:00
rwatson
a3623cb733 Remove netatm from HEAD as it is not MPSAFE and relies on the now removed
NET_NEEDS_GIANT.  netatm has been disconnected from the build for ten
months in HEAD/RELENG_7.  Specifics:

- netatm include files
- netatm command line management tools
- libatm
- ATM parts in rescue and sysinstall
- sample configuration files and documents
- kernel support as a module or in NOTES
- netgraph wrapper nodes for netatm
- ctags data for netatm.
- netatm-specific device drivers.

MFC after:	3 weeks
Reviewed by:	bz
Discussed with:	bms, bz, harti
2008-05-25 22:11:40 +00:00
attilio
4755d96541 The "if" semantic is not needed, just fix this. 2008-05-25 16:11:27 +00:00
attilio
4d240aa98e Replace the direct atomic operations for the file refcount with the
refcount interface.
It also introduces the correct usage of memory barriers, as sometimes
fdrop() and fhold() are used with shared locks, which don't use any
release barrier.
2008-05-25 14:57:43 +00:00
jb
1c6ecc547f Add the vtime (virtual time) hooks for DTrace. 2008-05-25 01:44:58 +00:00
jb
c4443570b6 Add DTrace 'proc' provider probes using the Statically Defined Trace
(sdt) mechanism.
2008-05-24 06:22:16 +00:00
rodrigc
a9cd468083 Do not convert the "snapshot" string to the MNT_SNAPSHOT flag here, since
we do it further down in ffs_vfsops.c

MFC after:	1 month
2008-05-23 23:33:07 +00:00
kib
797c3188c0 Rev. 1.274 put the ttyrel() call before the destroy_dev() in
ttyfree(), freeing the tty. Since destroy_dev() may call the d_purge()
cdevsw method, which is ttypurge() for the tty, the code ends up
accessing the freed tty structure.

Put the ttyrel() after destroy_dev() in ttyfree(). To prevent the
panic that rev. 1.274 provided a fix for, check TS_GONE in the sysctl
handler and refuse to provide information on such a tty.

Reported, debugging help and tested by:	pho
Discussed with and reviewed by:	jhb
MFC after:	1 week
2008-05-23 16:47:55 +00:00
kib
90775e30db The dev_refthread() in the tty_gettp() may fail, because Giant is taken
in the giant_trick routines after the dev_refthread increments the
si_threadcount. Remove assert, do not perform dev_relthread() for failed
dev_refthread(), and handle failure in the tty_gettp() callers (cdevsw
tty methods).

Before kern_conf.c 1.210 and 1.211, the kernel usually panicked in the
giant_trick routines dereferencing a NULL cdevsw, not taking this fault.

Reported by:	Vince Hoffman <jhary unsane co uk>
Debugging help and tested by:	pho
Reviewed by:	jhb
MFC after:	1 week
2008-05-23 16:46:27 +00:00
kib
a0dac34fa6 Use the t_state for the TS_GONE test.
Submitted by:   jhb
MFC after:	3 days
2008-05-23 16:43:59 +00:00
kib
c1c2996ed2 Assert that si_threadcount > 0 before decrementing it. This helps catch
improper use of dev_refthread()/dev_relthread().

Tested by:	pho
MFC after:	1 week
2008-05-23 16:38:38 +00:00
ed
bdc5be605f Move TTY unrelated bits out of <sys/tty.h>.
For some reason, the <sys/tty.h> header file also contains routines of the
clists and console that are used inside the TTY layer. Because the clists
are not only used by the TTY layer (example: various input drivers), we'd
better move the entire clist programming interface into <sys/clist.h>. Also
remove a declaration of a nonexistent variable.

The <sys/tty.h> header also contains various definitions for the console
code (tty_cons.c). Also move these to <sys/cons.h>, because they are
not implemented inside the TTY layer.

While there, create separate malloc pools for the clist and console code.

Approved by:	philip (mentor)
2008-05-23 16:06:35 +00:00
kib
bb95365b8c Another problem caused by the knlist_cleardel() potentially dropping
PIPE_MTX().

Since the pipe_present is cleared before (potentially) sleeping, the
second thread may enter the pipeclose() for the reciprocal pipe end.
The test at the end of the pipeclose() for the pipe_present == 0 would
succeed, allowing the second thread to free the pipe memory. The first
thread then accesses the freed memory after being woken up.

Properly track the closing state of the pipe in the pipe_present.
Introduce the intermediate state that marks the pipe as mostly
dismantled but might be sleeping waiting for the knote list to be
cleared. Free the pipe pair memory only when both ends pass that point.

Debugging help and tested by:	pho
Discussed with:	jmg
MFC after:	2 weeks
2008-05-23 11:14:03 +00:00
kib
c106911b42 Destruction of the pipe calls knlist_cleardel() to remove the knotes
monitoring the pipe. The code sets pipe_present = 0 and enters
knlist_cleardel(), where the PIPE_MTX might be dropped when knl->kl_list
cannot be cleared due to influx knotes.

If the following often encountered code fragment
                if (!(kn->kn_status & KN_DETACHED))
                        kn->kn_fop->f_detach(kn);
                knote_drop(kn, td); [1]
is executed while the knlist lock is dropped, then the knote memory is freed
by the knote_drop() without knote being removed from the knlist, since
the filt_pipedetach() contains the following:
        if (kn->kn_filter == EVFILT_WRITE) {
                if (!cpipe->pipe_peer->pipe_present) {
                        PIPE_UNLOCK(cpipe);
                        return;

Now, the memory may be reused in the zone, causing access to the
freed memory. I got panics caused by the marker knote appearing on
the knlist, which is, I believe, a manifestation of the issue. In the Peter
Holm test scenarios, we got unkillable processes too.

The pipe_peer that has the knote for write shall be present. Ignore the
pipe_present value for EVFILT_WRITE in filt_pipedetach().

Debugging help and tested by:	pho
Discussed with:	jmg
MFC after:	2 weeks
2008-05-23 11:09:50 +00:00
jb
6a077c58b8 Add the ctf_get function and update the args to linker_file_function_listall. 2008-05-23 07:08:59 +00:00
jb
1ebf94be7d Add the ctf_get method. 2008-05-23 04:06:49 +00:00
jb
e922b9b976 Allow a rendezvous with just a specified CPU too.
Make the API work in the non-SMP case as well, so that a kernel module
can work the same regardless of whether or not it is loaded on an SMP
kernel.
2008-05-23 04:05:26 +00:00
jb
858f2ace1b Add the CTF source file which gets shared with link_elf.c and link_elf_obj.c. 2008-05-23 03:04:27 +00:00
jb
090fe643c2 Add hooks for the Compact C Type Format (CTF) data to be attached to
the elf files. This is complicated by the fact that the actual CTF
parsing has to be done in CDDL'd code, so the BSD licensed code only
knows about the opaque data which it must be able to free.
2008-05-23 00:49:39 +00:00
jb
8c4eed9aad Add support for the DTrace malloc provider which can enable probes
on a per-malloc type basis.
2008-05-23 00:43:36 +00:00
rwatson
60b4eaf522 When sendto(2) is called with an explicit destination address
argument, call mac_socket_check_connect() on that address before
proceeding with the send.  Otherwise policies instrumenting the
connect entry point for the purposes of checking destination
addresses will not have the opportunity to check implicit
connect requests.

MFC after:	3 weeks
Sponsored by:	nCircle Network Security, Inc.
2008-05-22 07:18:54 +00:00
kib
5971791c18 Implement the per-open file data for the cdev.
The patch does not change the cdevsw KBI. Management of the data is
provided by the functions
int	devfs_set_cdevpriv(void *priv, cdevpriv_dtr_t dtr);
int	devfs_get_cdevpriv(void **datap);
void	devfs_clear_cdevpriv(void);
All of the functions are supposed to be called from the cdevsw method
contexts.

- devfs_set_cdevpriv assigns the priv as private data for the file
  descriptor which is used to initiate currently performed driver
  operation. dtr is the function that will be called when either the
  last reference to the file goes away, the device is destroyed, or
  devfs_clear_cdevpriv is called.
- devfs_get_cdevpriv is the obvious accessor.
- devfs_clear_cdevpriv allows clearing the private data for a still-open
  file.

Implementation keeps the driver-supplied pointers in the struct
cdev_privdata, which is referenced both from the struct file and the struct
cdev, and cannot outlive either of them.

Man pages will be provided after the KPI stabilizes.

Reviewed by:	jhb
Useful suggestions from:	jeff, antoine
Debugging help and tested by:	pho
MFC after:	1 month
2008-05-21 09:31:44 +00:00
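A kernel-side sketch (not a complete module) of how a driver might use the three functions listed above. The "echo" device, its per-open structure, and the malloc type are invented for illustration.

    #include <sys/param.h>
    #include <sys/conf.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>
    #include <sys/uio.h>

    static MALLOC_DEFINE(M_ECHOPRIV, "echopriv", "per-open echo state");

    struct echo_priv {
            size_t  bytes;                  /* per-open byte counter */
    };

    static void
    echo_priv_dtr(void *data)
    {
            /* Called when the last reference to this open file goes away. */
            free(data, M_ECHOPRIV);
    }

    static int
    echo_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
    {
            struct echo_priv *p;

            p = malloc(sizeof(*p), M_ECHOPRIV, M_WAITOK | M_ZERO);
            /* Attach p to the file descriptor performing this open. */
            return (devfs_set_cdevpriv(p, echo_priv_dtr));
    }

    static int
    echo_write(struct cdev *dev, struct uio *uio, int ioflag)
    {
            struct echo_priv *p;
            int error;

            /* Fetch the state belonging to the file behind this request. */
            error = devfs_get_cdevpriv((void **)&p);
            if (error != 0)
                    return (error);
            p->bytes += uio->uio_resid;
            uio->uio_resid = 0;             /* swallow the data */
            return (0);
    }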
pjd
a1af6d977b Be more friendly to the DDB pager.
Educated by:	jhb's BSDCan presentation
2008-05-18 21:08:12 +00:00
jb
52f46ad538 Add support for the DTrace struct proc and struct thread extended
data via ctor and dtor event handlers.

The size of the extra data is allocated opaquely and this file
contains a function which the dtrace module can call to check
that the kernel supports at least the amount of data that it needs.

This file is optionally compiled into the kernel if the KDTRACE_HOOKS
kernel option is defined.
2008-05-18 19:43:52 +00:00
jb
456cdd0179 Add kernel support for the Statically Defined Trace provider.
This is BSD licensed code written specifically for FreeBSD.

It initialises using SYSINIT so that the SDT provider, probe and
argument description linkage is done whenever a module is loaded,
regardless of whether the DTrace modules are loaded or not.

This file is optionally compiled into the kernel if the KDTRACE_HOOKS
option is defined.
2008-05-18 19:32:36 +00:00
rpaulo
2670a520c1 devctl_process_running(): Check for devsoftc.inuse == 1 instead of
devsoftc.async_proc != NULL because the latter might not be true
sometimes.
This way /etc/rc.suspend gets executed.

Reviewed by:	njl
Submitted by:	Mitsuru IWASAKI <iwasaki at jp.FreeBSD.org>
Tested also by:	Andreas Wetzel <mickey242 at gmx.net>
MFC after:	1 week
2008-05-18 13:55:51 +00:00
rwatson
14ceaad756 Attempt to improve convergence of POSIX semaphore code with style(9).
MFC after:	3 days
2008-05-16 18:10:07 +00:00
gnn
368bdf05e9 Update the kernel to count the number of mbufs and clusters
(all types) used per socket buffer.

Add support to netstat to print out all of the socket buffer
statistics.

Update the netstat manual page to describe the new -x flag
which gives the extended output.

Reviewed by:	rwatson, julian
2008-05-15 20:18:44 +00:00
attilio
f7f31164f1 - Embed the recursion counter for any locking primitive directly in the
lock_object, using a unified field called lo_data.
- Replace lo_type usage with the w_name usage and at init time pass the
  lock "type" directly to witness_init() from the parent lock init
  function.  Handle delayed initialization before witness_initialize()
  is called through the witness_pendhelp structure.
- Axe out LO_ENROLLPEND as it is not really needed.  The case where a
  mutex with delayed init wants to be destroyed can't happen because
  witness_destroy() checks for witness_cold and panics in that case.
- In enroll(), if we cannot allocate a new object from the freelist,
  notify that to userspace through a printf().
- Modify the depart function in order to return nothing as in the current
  CVS version it always returns true and adjust callers accordingly.
- Fix the witness_addgraph() argument name prototype.
- Remove useless code from itismychild().

This commit leads to a smaller struct lock_object and so smaller locks,
in particular on amd64, where 2 uintptr_t (16 bytes per primitive) are
saved.

Reviewed by:	jhb
2008-05-15 20:10:06 +00:00
jhb
ec0d9f9d00 Go back to using the process command name (p_comm) for the file name and
command line arguments stored in the note at the beginning of a core dump
instead of the current thread name.

Reviewed by:	julian
2008-05-15 03:07:34 +00:00
kib
592c22cb14 Add the devctl notifications for the cdev create/destroy events.
Based on the submission by: Andriy Gapon <avg icyb net ua>
MFC after:	2 weeks
2008-05-14 14:29:54 +00:00
julian
27367de06f Fix typo in runz_fuzz.
Noticed by:	Elijah Buck
2008-05-12 06:42:06 +00:00
alc
c251140c26 Introduce a new parameter "superpage_align" to kmem_suballoc() that is
used to request superpage alignment for the submap.

Request superpage alignment for the kmem_map.

Pass VMFS_ANY_SPACE instead of TRUE to vm_map_find().  (They are currently
equivalent but VMFS_ANY_SPACE is the new preferred spelling.)

Remove a stale comment from kmem_malloc().
2008-05-10 21:46:20 +00:00
kib
9a39931e9b Kqueue_scan() may sleep when it encounters in-flux knotes. On the other
hand, it may cause other threads to sleep, since kqueue_scan() may mark
some knotes as in-flux. This could lead to a deadlock.

Before kqueue_scan() sleeps, wakeup the threads that are waiting for the
influx knotes produced by this thread.

Tested by:	pho (previous version)
Reviewed by:	jmg
MFC after:	2 weeks
2008-05-10 11:37:05 +00:00
kib
0f388c4977 The kqueue_close() encountering KN_INFLUX knotes on the kq being
closed is a legitimate situation. For instance, a file descriptor with
registered events may be closed in parallel with the kqueue itself.
Properly handle the case instead of asserting that this cannot happen.

Reported and tested by:	pho
Reviewed by:	jmg
MFC after:	2 weeks
2008-05-10 11:35:32 +00:00
julian
1dfc5c98a4 Add code to allow the system to handle multiple routing tables.
This particular implementation is designed to be fully backwards compatible
and to be MFC-able to 7.x (and 6.x)

Currently the only protocol that can make use of the multiple tables is IPv4
Similar functionality exists in OpenBSD and Linux.

From my notes:

-----

  One thing where FreeBSD has been falling behind, and which by chance I
  have some time to work on is "policy based routing", which allows
  different
  packet streams to be routed by more than just the destination address.

  Constraints:
  ------------

  I want to make some form of this available in the 6.x tree
  (and by extension 7.x) , but FreeBSD in general needs it so I might as
  well do it in -current and back port the portions I need.

  One of the ways that this can be done is to have the ability to
  instantiate multiple kernel routing tables (which I will now
  refer to as "Forwarding Information Bases" or "FIBs" for political
  correctness reasons). Which FIB a particular packet uses to make
  the next hop decision can be decided by a number of mechanisms.
  The policies these mechanisms implement are the "Policies" referred
  to in "Policy based routing".

  One of the constraints I have if I try to back port this work to
  6.x is that it must be implemented as a EXTENSION to the existing
  ABIs in 6.x so that third party applications do not need to be
  recompiled in timespan of the branch.

  This first version will not have some of the bells and whistles that
  will come with later versions. It will, for example, be limited to 16
  tables in the first commit.
  Implementation method, Compatible version. (part 1)
  -------------------------------
  For this reason I have implemented a "sufficient subset" of a
  multiple routing table solution in Perforce, and back-ported it
  to 6.x. (also in Perforce though not  always caught up with what I
  have done in -current/P4). The subset allows a number of FIBs
  to be defined at compile time (8 is sufficient for my purposes in 6.x)
  and implements the changes needed to allow IPV4 to use them. I have not
  done the changes for ipv6 simply because I do not need it, and I do not
  have enough knowledge of ipv6 (e.g. neighbor discovery) needed to do it.

  Other protocol families are left untouched and should there be
  users with proprietary protocol families, they should continue to work
  and be oblivious to the existence of the extra FIBs.

  To understand how this is done, one must know that the current FIB
  code starts everything off with a single dimensional array of
  pointers to FIB head structures (One per protocol family), each of
  which in turn points to the trie of routes available to that family.

  The basic change in the ABI compatible version of the change is to
  extent that array to be a 2 dimensional array, so that
  instead of protocol family X looking at rt_tables[X] for the
  table it needs, it looks at rt_tables[Y][X] when for all
  protocol families except ipv4 Y is always 0.
  Code that is unaware of the change always just sees the first row
  of the table, which of course looks just like the one dimensional
  array that existed before.

  The entry points rtrequest(), rtalloc(), rtalloc1(), rtalloc_ign()
  are all maintained, but refer only to the first row of the array,
  so that existing callers in proprietary protocols can continue to
  do the "right thing".
  Some new entry points are added, for the exclusive use of ipv4 code
  called in_rtrequest(), in_rtalloc(), in_rtalloc1() and in_rtalloc_ign(),
  which have an extra argument which refers the code to the correct row.

  In addition, there are some new entry points (currently called
  rtalloc_fib() and friends) that check the Address family being
  looked up and call either rtalloc() (and friends) if the protocol
  is not IPv4 forcing the action to row 0 or to the appropriate row
  if it IS IPv4 (and that info is available). These are for calling
  from code that is not specific to any particular protocol. The way
  these are implemented would change in the non ABI preserving code
  to be added later.

  One feature of the first version of the code is that for ipv4,
  the interface routes show up automatically on all the FIBs, so
  that no matter what FIB you select you always have the basic
  direct attached hosts available to you. (rtinit() does this
  automatically).

  You CAN delete an interface route from one FIB should you want
  to but by default it's there. ARP information is also available
  in each FIB. It's assumed that the same machine would have the
  same MAC address, regardless of which FIB you are using to get
  to it.

  This brings us as to how the correct FIB is selected for an outgoing
  IPV4 packet.

  Firstly, all packets have a FIB associated with them. if nothing
  has been done to change it, it will be FIB 0. The FIB is changed
  in the following ways.

  Packets fall into one of a number of classes.

  1/ locally generated packets, coming from a socket/PCB.
     Such packets select a FIB from a number associated with the
     socket/PCB. This in turn is inherited from the process,
     but can be changed by a socket option. The process in turn
     inherits it on fork. I have written a utility call setfib
     that acts a bit like nice..

         setfib -3 ping target.example.com # will use fib 3 for ping.

     It is an obvious extension to make it a property of a jail
     but I have not done so. It can be achieved by combining the setfib and
     jail commands.

  2/ packets received on an interface for forwarding.
     By default these packets would use table 0,
     (or possibly a number settable in a sysctl(not yet)).
     but prior to routing the firewall can inspect them (see below).
     (possibly in the future you may be able to associate a FIB
     with packets received on an interface..  An ifconfig arg, but not yet.)

  3/ packets inspected by a packet classifier, which can arbitrarily
     associate a fib with it on a packet by packet basis.
     A fib assigned to a packet by a packet classifier
     (such as ipfw) would over-ride a fib associated by
     a more default source. (such as cases 1 or 2).

  4/ a tcp listen socket associated with a fib will generate
     accept sockets that are associated with that same fib.

  5/ Packets generated in response to some other packet (e.g. reset
     or icmp packets). These should use the FIB associated with the
     packet being responded to.

  6/ Packets generated during encapsulation.
     gif, tun and other tunnel interfaces will encapsulate using the FIB
     that was in effect with the process that set up the tunnel.
     thus setfib 1 ifconfig gif0 [tunnel instructions]
     will set the fib for the tunnel to use to be fib 1.

  Routing messages would be associated with their
  process, and thus select one FIB or another.
  messages from the kernel would be associated with the fib they
  refer to and would only be received by a routing socket associated
  with that fib. (not yet implemented)

  In addition Netstat has been edited to be able to cope with the
  fact that the array is now 2 dimensional. (It looks in system
  memory using libkvm (!)). Old versions of netstat see only the first FIB.

  In addition two sysctls are added to give:
  a) the number of FIBs compiled in (active)
  b) the default FIB of the calling process.

  Early testing experience:
  -------------------------

  Basically our (IronPort's) appliance does this functionality already
  using ipfw fwd but that method has some drawbacks.

  For example,
  It can't fully simulate a routing table because it can't influence the
  socket's choice of local address when a connect() is done.

  Testing during the generating of these changes has been
  remarkably smooth so far. Multiple tables have co-existed
  with no notable side effects, and packets have been routed
  accordingly.

  ipfw has grown 2 new keywords:

  setfib N ip from any to any
  count ip from any to any fib N

  In pf there seems to be a requirement to be able to give symbolic names to the
  fibs but I do not have that capacity. I am not sure if it is required.

  SCTP has interestingly enough built in support for this, called VRFs
  in Cisco parlance. it will be interesting to see how that handles it
  when it suddenly actually does something.

  Where to next:
  --------------------

  After committing the ABI compatible version and MFCing it, I'd
  like to proceed in a forward direction in -current. this will
  result in some roto-tilling in the routing code.

  Firstly: the current code's idea of having a separate tree per
  protocol family, all of the same format, and pointed to by the
  1 dimensional array is a bit silly. Especially when one considers that
  there is code that makes assumptions about every protocol having the
  same internal structures there. Some protocols don't WANT that
  sort of structure. (for example the whole idea of a netmask is foreign
  to appletalk). This needs to be made opaque to the external code.

  My suggested first change is to add routing method pointers to the
  'domain' structure, along with information pointing the data.
  instead of having an array of pointers to uniform structures,
  there would be an array pointing to the 'domain' structures
  for each protocol address domain (protocol family),
  and the methods this reached would be called. The methods would have
  an argument that gives FIB number, but the protocol would be free
  to ignore it.

  When the ABI can be changed it raises the possibility of the
  addition of a fib entry into the "struct route". Currently,
  the structure contains the sockaddr of the destination, and the resulting
  fib entry. To make this work fully, one could add a fib number
  so that given an address and a fib, one can find the third element, the
  fib entry.

  Interaction with the ARP layer/ LL layer would need to be
  revisited as well. Qing Li has been working on this already.

  This work was sponsored by Ironport Systems/Cisco

Reviewed by:    several including rwatson, bz and mlair (parts each)
Obtained from:  Ironport systems/Cisco
2008-05-09 23:03:00 +00:00
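A small userland sketch of case 1 above, selecting a FIB per socket rather than per process. The commit does not name the socket option; SO_SETFIB, as used in later FreeBSD releases, is assumed here.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdio.h>

    int
    main(void)
    {
            int s, fib = 3;

            s = socket(AF_INET, SOCK_STREAM, 0);
            if (s == -1) {
                    perror("socket");
                    return (1);
            }
            /*
             * Route this socket's traffic using FIB 3, the programmatic
             * counterpart of "setfib -3 <command>".  SO_SETFIB is assumed.
             */
            if (setsockopt(s, SOL_SOCKET, SO_SETFIB, &fib, sizeof(fib)) == -1)
                    perror("setsockopt(SO_SETFIB)");
            return (0);
    }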
dfr
b95b50cdbb When blocking on an F_FLOCK style lock request which is upgrading a
shared lock to exclusive, drop the shared lock before deadlock
detection.

MFC after: 2 days
2008-05-09 10:34:23 +00:00
pjd
a902aa50c3 - Export HZ value via kern.hz sysctl (this is the same name as for the
loader tunable).
- Document other sysctls in this file and also mark them as loader tunable
  via CTLFLAG_RDTUN flag.

Reviewed by:	roberto
2008-05-09 07:42:02 +00:00
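A tiny userland sketch reading the newly exported sysctl; sysctlbyname(3) is the standard interface, and nothing beyond the kern.hz name comes from the commit.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int hz;
            size_t len = sizeof(hz);

            /* Same value as the loader tunable of the same name. */
            if (sysctlbyname("kern.hz", &hz, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("kern.hz = %d\n", hz);
            return (0);
    }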
attilio
0ce490cd03 Add a new witness sysctl which returns the relations between any lock
and its children in the form:
"parent","child"
so that the head and bottom of an oriented graph can be easily detected and
various forms of diagrams can be built.
The sysctl is called debug.witness.graphs and it is read-only; in order
to get the list of relations, a simple:
#sysctl debug.witness.graphs
will do the trick.

This approach has been chosen in order to easily support things like
the DOT format and such.  Soon, a self-explanatory awk script, which
filters the simple information returned by the sysctl and converts it into
a real DOT script, will be committed to the repository among the examples.

Discussed with:	rwatson
2008-05-07 21:41:36 +00:00
kmacy
afbf6fcd73 add malloc flag to blist so that it can be used in ithread context
Reviewed by: alc, bsdimp
2008-05-05 19:48:54 +00:00
jhb
4fb93663e6 Fix a few edge cases with error handling in cpufreq(4)'s CPUFREQ_GET()
method:
- If the last of the child cpufreq drivers returns an error while trying to
  fetch its list of supported frequencies but an earlier driver found the
  requested frequency, don't return an error to the caller.
- If all of the child cpufreq drivers fail and the attempt to match the
  frequency based on 'cpu_est_clockrate()' fails, return ENXIO rather than
  returning success and returning a frequency of CPUFREQ_VAL_UNKNOWN.

MFC after:	3 days
PR:		kern/121433
Reported by:	Eugene Grosbein  eugen ! kuzbass dot ru
2008-05-05 19:13:52 +00:00
peter
5a3c5f632b Expand kdb_alt_break a little, most commonly used with the option
ALT_BREAK_TO_DEBUGGER.  In addition to "Enter ~ ctrl-B" (to enter the
debugger), there is now "Enter ~ ctrl-P" (force panic) and
"Enter ~ ctrl-R" (request clean reboot, ala ctrl-alt-del on syscons).

We've used variations of this at work.  The force panic sequence is
best used with KDB_UNATTENDED for when you just want it to dump and
get on with it.

The reboot request is a safer way of getting into single user than
a power cycle.  eg: you've hosed the ability to log in (pam, rtld, etc).
It gives init the reboot signal, which causes an orderly reboot.

I've taken my best guess at what the !x86 and non-sio code changes
should be.

This also makes sio release its spinlock before calling KDB/DDB.
2008-05-04 23:29:38 +00:00
attilio
bb68298f62 sync_vnode() has some messy locking code in order to deal with
mounted filesystems needing Giant to be held when processing bufobjs.
Use a different subqueue for pending workitems on filesystems requiring
Giant. This simplifies the code notably and also reduces the number of
Giant acquisitions (and the whole processing cost).

Suggested by:	jeff
Reviewed by:	kib
Tested by:	pho
2008-05-04 13:54:55 +00:00
julian
7db99d1cdd Attempt to make the print types more friendly to other architectures.
Prodded by: Max Laier
Help from: BMS, jhb
2008-04-30 20:00:30 +00:00
julian
2ddf06099d Document the kproc_kthread_add() call
and fix a small detail of its implementation.
MFC after: 1 week
2008-04-29 22:43:15 +00:00
rdivacky
2db61d77e9 Lock filedesc exclusively when modifying fd_[cr]dir.
This is probably harmless but it's better to lock it
correctly.

Approved by:	kib (mentor)
2008-04-29 21:40:11 +00:00
julian
28430bf762 Add an option (compiled out by default)
to profile outgoing packets for a number of mbuf chain
related parameters, e.g. number of mbufs and wasted space.
This will probably see further work later.

Reviewed by: various
2008-04-29 21:23:21 +00:00
davidxu
4121b0c965 Fix compiling problem. 2008-04-29 05:48:05 +00:00
davidxu
e43b7bfc16 Introduce command UMTX_OP_WAIT_UINT_PRIVATE and UMTX_OP_WAKE_PRIVATE
to allow userland to specify that an address is not shared by multiple
processes.
2008-04-29 03:48:48 +00:00
rwatson
fbda0dfa86 When writing trailers in sendfile(2), don't call kern_writev()
while holding the socket buffer lock.  These leads to an
immediate panic due to recursing the socket buffer lock.  This
bug was introduced in uipc_syscalls.c:1.240, but masked by
another bug until that was fixed in uipc_syscalls.c:1.269.

Note that the current fix isn't perfect, but better than
panicking: normally we guarantee that simultaneous invocations
of a system call to write on a stream socket won't be
interlaced, which is ensured by use of the socket buffer sleep
lock.  This is guaranteed for the sendfile headers, but not
trailers.  In practice, this is likely not a problem, but
should be fixed.

MFC after:	3 days
Pointy hat to:	andre (1.240), cperciva (1.269)
2008-04-27 15:50:00 +00:00
kris
150f1de0cf * Correct a mis-merge that leaked the PROC_LOCK [1]
* Return ENOENT on error instead of 0 [2]

Submitted by: rdivacky [1], kib [2]
2008-04-26 13:16:55 +00:00
pjd
cb7610bd52 Implement 'show mount' command in DDB. Without argument, it prints short
info about all currently mounted file systems. When an address is given
as an argument, prints detailed info about the given mount point.

MFC after:	2 weeks
2008-04-26 13:04:48 +00:00
jeff
14b586bf96 - Add an integer argument to idle to indicate how likely we are to wake
from idle over the next tick.
 - Add a new MD routine, cpu_wake_idle() to wakeup idle threads who are
   suspended in cpu specific states.  This function can fail and cause the
   scheduler to fall back to another mechanism (ipi).
 - Implement support for mwait in cpu_idle() on i386/amd64 machines that
   support it.  mwait is a higher performance way to synchronize cpus
   as compared to hlt & ipis.
 - Allow selecting the idle routine by name via sysctl machdep.idle.  This
   replaces machdep.cpu_idle_hlt.  Only idle routines supported by the
   current machine are permitted.

Sponsored by:	Nokia
2008-04-25 05:18:50 +00:00
kris
d6c5faf2cc fdhold can return NULL, so add the one remaining missing check for this
condition.

Reviewed by:    attilio
MFC after:      1 week
2008-04-24 22:08:36 +00:00
kib
9f2031da02 Allow the vnode zone to return the unused memory. The vnode reference
count is, and has long been, properly maintained, and VFS
shall be safe against vnode memory reclamation.

Proposed by:	jeff
Tested by:	pho
2008-04-24 09:58:33 +00:00
phk
8d647da1ed Now that all platforms use genclock, shuffle things around slightly
for better structure.

Much of this is related to <sys/clock.h>, which should really have
been called <sys/calendar.h>, but unless and until we need the name,
the repocopy can wait.

In general the kernel does not know about minutes, hours, days,
timezones, daylight savings time, leap-years and such.  All that
is theoretically a matter for userland only.

Parts of the kernel code do, however, care: badly designed filesystems
store timestamps in local time and RTC chips almost universally
track time in a YY-MM-DD HH:MM:SS format, and sometimes in local
timezone instead of UTC.  For this we have <sys/clock.h>

<sys/time.h> on the other hand, deals with time_t, timeval, timespec
and so on.  These know only seconds and fractions thereof.

Move inittodr() and resettodr() prototypes to <sys/time.h>.
Retain the names as it is one of the few surviving PDP/VAX references.

Move startrtclock() to <machine/clock.h> on relevant platforms, it
is a MD call between machdep.c/clock.c.  Remove references to it
elsewhere.

Remove a lot of unnecessary <sys/clock.h> includes.

Move the machdep.disable_rtc_set sysctl to subr_rtc.c where it belongs.
XXX: should be kern.disable_rtc_set really, it's not MD.
2008-04-22 19:38:30 +00:00
pjd
056014462f Back-out previous revision. For now I can use _ddb() variants of stack(9) KPI,
as I use it for debugging only. Once someone needs it for production
features, the change should be reconsidered.

Requested by:	rwatson
2008-04-21 17:22:35 +00:00
rwatson
ca47fccd6b Convert pcbinfo and inpcb mutexes to rwlocks, and modify macros to
explicitly select write locking for all use of the inpcb mutex.
Update some pcbinfo lock assertions to assert locked rather than
write-locked, although in practice almost all uses of the pcbinfo
rwlock remain exclusive, and all instances of inpcb lock acquisition
are exclusive.

This change should introduce (ideally) little functional change.
However, it lays the groundwork for significantly increased
parallelism in the TCP/IP code.

MFC after:	3 months
Tested by:	kris (superset of committed patch)
2008-04-17 21:38:18 +00:00
pjd
3e83d6e7db Allow linker_search_symbol_name() to be called with KLD lock held.
The linker_search_symbol_name() function is used by stack_print()
and stack_print() can be called from a kernel module's unload method.

MFC after:	1 week
2008-04-17 19:19:40 +00:00
jeff
3f4fde5950 - Add a metric to describe how busy a processor has been over the last
two ticks by counting the number of switches and the load when
   sched_clock() is called.
 - If the busy metric exceeds a threshold allow the idle thread to spin
   waiting for new work for a brief period to avoid using IPIs.  This
   reduces the cost on the sender and receiver as well as reducing wakeup
   latency considerably when it works.

Sponsored by:	Nokia
2008-04-17 09:56:01 +00:00
jeff
9d30d1d7a4 - Make SCHED_STATS more generic by adding a wrapper to create the
variables and sysctl nodes.
 - In reset walk the children of kern_sched_stats and reset the counters
   via the oid_arg1 pointer.  This allows us to add arbitrary counters to
   the tree and still reset them properly.
 - Define a set of switch types to be passed with flags to mi_switch().
   These types are named SWT_*.  These types correspond to SCHED_STATS
   counters and are automatically handled in this way.
 - Make the new SWT_ types more specific than the older switch stats.
   There are now stats for idle switches, remote idle wakeups, remote
   preemption, ithreads idling, etc.
 - Add switch statistics for ULE's pickcpu algorithm.  These stats include
   how much migration there is, how often affinity was successful, how
   often threads were migrated to the local cpu on wakeup, etc.

Sponsored by:	Nokia
2008-04-17 04:20:10 +00:00
dfr
f50ee5045a Fix compilation with LOCKF_DEBUG. 2008-04-16 14:08:12 +00:00
kib
52243403eb Move the head of byte-level advisory lock list from the
filesystem-specific vnode data to the struct vnode. Provide the
default implementation for the vop_advlock and vop_advlockasync.
Purge the locks on the vnode reclaim by using the lf_purgelocks().
The default implementation is augmented for the nfs and smbfs.
In the nfs_advlock, push the Giant inside the nfs_dolock.

Before the change, the vop_advlock and vop_advlockasync have taken the
unlocked vnode and dereferenced the fs-private inode data, racing with
the vnode reclamation due to forced unmount. Now, the vop_getattr
under the shared vnode lock is used to obtain the inode size, and
later, in the lf_advlockasync, after locking the vnode interlock, the
VI_DOOMED flag is checked to prevent an operation on the doomed vnode.

The implementation of the lf_purgelocks() is submitted by dfr.

Reported by:	kris
Tested by:	kris, pho
Discussed with:	jeff, dfr
MFC after:	2 weeks
2008-04-16 11:33:32 +00:00
davidxu
a19eeb1bb9 Implement POSIX function tcgetsid() which returns session id.
PR: stand/107561
2008-04-15 08:33:32 +00:00
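
A trivial userland sketch of the new call, assuming only the POSIX pid_t tcgetsid(int) prototype from <termios.h>:

    #include <sys/types.h>
    #include <termios.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* Session id of the session that owns the controlling terminal. */
            pid_t sid = tcgetsid(STDIN_FILENO);

            if (sid == -1)
                    perror("tcgetsid");
            else
                    printf("session id: %ld\n", (long)sid);
            return (0);
    }
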
marcel
da8b8894d6 Support and switch to the ULE scheduler:
o  Implement IPI_PREEMPT,
o  Set td_lock for the thread being switched out,
o  For ULE & SMP, loop while td_lock points to blocked_lock for
   the thread being switched in,
o  Enable ULE by default in GENERIC and SKI,
2008-04-15 05:02:42 +00:00
rrs
5759bc8cd3 Add pru_flush routine so a transport can
flush itself during Shutdown

MFC after:	1 week
2008-04-14 18:06:04 +00:00
alc
23cff96741 Initialize the vm object's flags to include OBJ_NOSPLIT, just like the
vm objects that are used by System V shared memory segments.
2008-04-13 21:08:34 +00:00
attilio
5a49f99cf6 Use a "rel" memory barrier for disowning the lock as it comes from an
exclusive locking operation.
2008-04-13 01:21:56 +00:00
attilio
ab58eeddbc struct lock_instance and struct lock_list_entry don't need to be in the
public namespace for WITNESS as they are only used internally so just
move them in the private namespace for the subsystem (with all related
supporting definitions).
2008-04-13 01:20:47 +00:00
phk
17665aa28a fix printf type confusion on amd64 2008-04-12 21:51:54 +00:00
phk
cee09d51d4 Emit summaries of struct c(alendar)t(ime) <-> struct timespec conversions
under bootverbose.

Struct ct is used for setting/reading real time clocks and I'm about
to Do Things to some of those, so a bit of preemptive debugging is
in order.

Remove a pointless __inline.
2008-04-12 20:35:56 +00:00
attilio
7ba94cc449 - Re-introduce WITNESS support for lockmgr. Compared with the old
  implementation, the only difference is that the lockmgr*() functions now
  accept the LK_NOWITNESS flag, which skips ordering for that particular call.
- Remove a useless stub in witness_checkorder() (the check above means the
  case can never happen) and allow witness_upgrade() to accept non-try
  operations too.
2008-04-12 19:57:30 +00:00
attilio
6c6f1ddb9c - Remove a stale comment.
- Add an extra assertion in order to catch malformed requested operations.
2008-04-12 13:56:17 +00:00
attilio
4364cd23ef Add missing stubs for spinlocks cpuset and intrcnt.
Submitted by:	kris
2008-04-12 13:51:18 +00:00
delphij
f2c4672082 Instead of rolling our own jail number allocation procedure, use
alloc_unr() to do it.

Submitted by:	Ed Schouten <ed 80386 nl>
PR:		kern/122270
MFC after:	1 month
2008-04-11 21:31:15 +00:00
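
A hedged kernel-side sketch of the unit-number allocator the jail code now uses; new_unrhdr(), alloc_unr() and free_unr() exist in subr_unit.c, but the headers, bounds and locking choice shown here are assumptions for illustration:

    #include <sys/param.h>
    #include <sys/systm.h>

    /* Allocator handing out small integer ids; the range is illustrative. */
    static struct unrhdr *id_pool;

    static void
    id_pool_init(void)
    {
            /* NULL mutex: let the allocator use its own locking (assumed). */
            id_pool = new_unrhdr(1, 999999, NULL);
    }

    static int
    id_alloc(void)
    {
            return (alloc_unr(id_pool));    /* -1 when the range is exhausted */
    }

    static void
    id_free(int id)
    {
            free_unr(id_pool, id);
    }
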
jhb
b1a2f38848 Use kthread_exit() to terminate a taskqueue thread rather than kproc_exit()
now that the taskqueue threads are kthreads rather than kprocs.

Reported by:	kris
2008-04-11 17:35:54 +00:00
jeff
8efb03d60e - Add the interrupt vector number to intr_event_create so MI code can
lookup hard interrupt events by number.  Ignore the irq# for soft intrs.
 - Add support to cpuset for binding hardware interrupts.  This has the
   side effect of binding any ithread associated with the hard interrupt.
   As per restrictions imposed by MD code we can only bind interrupts to
   a single cpu presently.  Interrupts can be 'unbound' by binding them
   to all cpus.

Reviewed by:	jhb
Sponsored by:	Nokia
2008-04-11 03:26:41 +00:00
pjd
d876c8127f - Use LK_TYPE_MASK where needed. Actually after sys/sys/lockmgr.h:1.69 it is
no longer needed, but for now we still want to be consistent with other
  similar checks in the tree.
- Call ASSERT_VOP_ELOCKED() only when vget() returns 0.

Reviewed by:	jeff
2008-04-09 20:19:55 +00:00
sam
b6fa36ece7 Do image loading in a context known to have a root directory:
o create a private task queue thread that sets up root and current
  directories (hooking mountroot event as needed); this is necessary
  because task queue threads are parented from proc0 and it does not
  have a reference to rootvnode (lost when / mounting moved to init)
o bounce image load + unload requests through the private task q so
  we can load images even when the request is made from a thread that
  does not have sufficient context (e.g. task q thread)
o add a check in the task q thread to fail requests before root is
  mounted (just in case)

Reviewed by:	jhb, mlaier, luigi (glance)
MFC after:	1 month
2008-04-09 19:07:48 +00:00
sam
7979f5eff2 o add a mountroot event handler that fires when / is mounted; this information
was lost when root started being mounted by init
o remove SI_SUB_MOUNT_ROOT since it's no longer meaningful

MFC after:	2 weeks
2008-04-08 17:53:33 +00:00
sam
2601660586 change taskqueue_start_threads to create threads instead of proc's
Reviewed by:	jhb
2008-04-08 17:48:02 +00:00
kib
eb77b477b4 Implement the linux syscalls
openat, mkdirat, mknodat, fchownat, futimesat, fstatat, unlinkat,
    renameat, linkat, symlinkat, readlinkat, fchmodat, faccessat.

Submitted by:	rdivacky
Sponsored by:	Google Summer of Code 2007
Tested by:	pho
2008-04-08 09:45:49 +00:00
attilio
d5dbd84790 - Use a different encoding for lockmgr options: encode each option as a
  separate bit in order to allow per-bit checks on the options flag, in
  particular in the consumers' code [1]
- Re-enable the check against TDP_DEADLKTREAT as the anti-waiters
  starvation patch allows exclusive waiters to override new shared
  requests.

[1] Requested by:	pjd, jeff
2008-04-07 14:46:38 +00:00
truckman
3ab6955dbc vfs_syscalls.c 1.452 mistakenly swapped the behavior of chown() and lchown(). 2008-04-07 00:29:32 +00:00
attilio
07441f19e1 Optimize lockmgr in order to get rid of the pool mutex interlock, of the
state transitioning flags and of msleep(9) calls.
Use, instead, an algorithm very similar to what sx(9) and rwlock(9)
already do, with direct accesses to the sleepqueue(9) primitive.

In order to avoid writer starvation, a mechanism very similar to what
rwlock(9) now uses is implemented, with the corresponding per-thread
shared lockmgr counter.

This patch also adds 2 new functions to lockmgr KPI: lockmgr_rw() and
lockmgr_args_rw().  These two are like the 2 "normal" versions, but they
both accept a rwlock as interlock.  In order to realize this, the general
lockmgr manager function "__lockmgr_args()" has been implemented through
the generic lock layer. It supports all the blocking primitives, but
currently only these 2 mappers live.

The patch drops WITNESS support for now, but it will probably be
added soon. Also, there is a small race in the draining code which is
also present in the current stock CVS implementation: if some sharers,
once they wake up, are in the runqueue, they can contend the lock with
the exclusive drainer.  This is hard to fix, but the now-committed
code mitigates this issue much better than the (past) CVS version.
In addition, the KA_HELD and KA_UNHELD assertions have been made no-op
assertions because they are dangerous and will no longer be supported
soon.

In order to avoid namespace pollution, stack.h is split into two
parts: one which includes only the "struct stack" definition (_stack.h)
and one defining the KPI.  In this way, newly added _lockmgr.h can
just include _stack.h.

The kernel ABI is heavily changed by this commit (the now-committed
version of "struct lock" is a lot smaller than the previous one) and
the KPI is broken by the introduction of lockmgr_rw() / lockmgr_args_rw(),
so manpages and __FreeBSD_version will be updated accordingly.

Tested by:      kris, pho, jeff, danger
Reviewed by:    jeff
Sponsored by:   Google, Summer of Code program 2007
2008-04-06 20:08:51 +00:00
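
A hedged kernel sketch of the new rwlock-interlocked entry point; it assumes lockmgr_rw() has the same (lock, flags, interlock) shape as lockmgr(9) with a struct rwlock * interlock, which is how the commit describes it, and the surrounding structure is illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/lockmgr.h>
    #include <sys/rwlock.h>

    struct widget {
            struct rwlock   w_ilk;          /* interlock for small fields */
            struct lock     w_lock;         /* long-term lockmgr lock */
    };

    static void
    widget_lock(struct widget *w)
    {
            /* Take the interlock; LK_INTERLOCK is assumed to make lockmgr
               drop it before sleeping, as with the mutex variant. */
            rw_wlock(&w->w_ilk);
            (void)lockmgr_rw(&w->w_lock, LK_EXCLUSIVE | LK_INTERLOCK,
                &w->w_ilk);
    }

    static void
    widget_unlock(struct widget *w)
    {
            (void)lockmgr_rw(&w->w_lock, LK_RELEASE, NULL);
    }
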
jeff
d1c199f415 - Correct a major error introduced in the per-cpu timeout commit. Sleep
and wakeup require the same wait channel to function properly.

Found by:	kris
Pointy hat:	me
2008-04-06 11:08:49 +00:00
jhb
68917b32fc Move INTR_FILTER from opt_global.h to its own header. 2008-04-05 20:13:15 +00:00
jhb
79918c45a6 Add a MI intr_event_handle() routine for the non-INTR_FILTER case. This
allows all the INTR_FILTER #ifdef's to be removed from the MD interrupt
code.
- Rename the intr_event 'eoi', 'disable', and 'enable' hooks to
  'post_filter', 'pre_ithread', and 'post_ithread' to be less x86-centric.
  Also, add a comment describe what the MI code expects them to do.
- On amd64, i386, and powerpc this is effectively a NOP.
- On arm, don't bother masking the interrupt unless the ithread is
  scheduled in the non-INTR_FILTER case to match what INTR_FILTER did.
  Also, don't bother unmasking the interrupt in the post_filter case if
  we never masked it.  The INTR_FILTER case had been doing this by having
  arm_unmask_irq for the post_filter (formerly 'eoi') hook.
- On ia64, stray interrupts are now masked for the non-INTR_FILTER case.
  They were already masked in the INTR_FILTER case.
- On sparc64, use a NULL pre_ithread hook and use intr_enable_eoi() for
  both the 'post_filter' and 'post_ithread' hooks to match what the
  non-INTR_FILTER code did.
- On sun4v, retire the ithread wrapper hack by using an appropriate
  'post_ithread' hook instead (it's what 'post_ithread'/'enable' was
  designed to do even in 5.x).

Glanced at by:	piso
Reviewed by:	marius
Requested by:	marius [1], [5]
Tested on:	amd64, i386, arm, sparc64
2008-04-05 19:58:30 +00:00
alc
067dba5f97 Reintroduce UMA_SLAB_KMAP; however, change its spelling to
UMA_SLAB_KERNEL for consistency with its sibling UMA_SLAB_KMEM.
(UMA_SLAB_KMAP met its original demise in revision 1.30 of
vm/uma_core.c.)  UMA_SLAB_KERNEL is now required by the jumbo frame
allocators.  Without it, UMA cannot correctly return pages from the
jumbo frame zones to the VM system because it resets the pages' object
field to NULL instead of the kernel object.  In more detail, the jumbo
frame zones are created with the option UMA_ZONE_REFCNT.  This causes
UMA to overwrite the pages' object field with the address of the slab.
However, when UMA wants to release these pages, it doesn't know how to
restore the object field, so it sets it to NULL.  This change teaches
UMA how to reset the object field to the kernel object.

Crashes reported by: kris
Fix tested by: kris
Fix discussed with: jeff
MFC after: 6 weeks
2008-04-04 18:41:12 +00:00
jeff
d6dabc3153 - Add sysctls at debug.rwlock to control the behavior of the speculative
spinning when readers hold a lock.  This spinning is speculative because,
   unlike the write case, we can not test whether the owners are running.
 - Add speculative read spinning for readers who are blocked by pending
   writers while a read lock is still held.  This allows the thread to
   spin until the write lock succeeds after which it may spin until the
   writer has released the lock.  This prevents excessive context switches
   when readers and writers both hold the lock for brief periods.

Sponsored by:	Nokia
2008-04-04 10:00:46 +00:00
jeff
73796e923c - Add a Nokia copyright to cpuset to reflect their generous
contribution to this work.
2008-04-04 01:22:04 +00:00
jeff
85d3ffe23c - Allow static_boost to specify no boost with '0', the traditional fixed
   kernel pri boost with '1', or, with any value greater than two, any
   priority less than the current thread's priority.  Default the boost to
   PRI_MIN_TIMESHARE to prevent regular user-space threads from starving
   threads in the kernel.  This prevents these user-threads from also
   being scheduled as if they are high fixed-priority kernel threads.
 - Restore the setting of lowpri in tdq_choose().  It has to be either here
   or in sched_switch().  I accidentally removed it from both places.

Tested by:	kris
2008-04-04 01:16:18 +00:00
jeff
c50de590cc - Don't check for the ITHD pri class in tdq_load_add and rem. 4BSD doesn't
do this either.  Simply check P_NOLOAD.  It'd be nice if this was
   in a thread flag so we didn't have an extra cache miss every time we
   add and remove a thread from the run-queue.
2008-04-04 01:04:43 +00:00
jeff
7d635b683d - Fix a mis-merge that crept in during the softclock changes.
Spotted by:	jhb
2008-04-04 01:03:23 +00:00
davidxu
6e5250730e Let umtxq_busy() spin only on MP machines. Rename the function to
do_rwlock_unlock to be consistent with the others.
2008-04-03 11:49:20 +00:00
jeff
ca28ca664f - Convert two timeout users to the new callout_reset_curcpu() api.
Sponsored by:	Nokia
2008-04-02 11:21:42 +00:00
jeff
b065517935 Implement per-cpu callout threads, wheels, and locks.
- Move callout thread creation from kern_intr.c to kern_timeout.c
 - Call callout_tick() on every processor via hardclock_cpu() rather than
   inspecting callout internal details in kern_clock.c.
 - Remove callout implementation details from callout.h
 - Package up all of the global variables into a per-cpu callout structure.
 - Start one thread per-cpu.  Threads are not strictly bound.  They prefer
   to execute on the native cpu but may migrate temporarily if interrupts
   are starving callout processing.
 - Run all callouts by default in the thread for cpu0 to maintain current
   ordering and concurrency guarantees.  Many consumers may not properly
   handle concurrent execution.
 - The new callout_reset_on() api allows specifying a particular cpu to
   execute the callout on.  This may migrate a callout to a new cpu.
   callout_reset() schedules on the last assigned cpu while
   callout_reset_curcpu() schedules on the current cpu.

Reviewed by:	phk
Sponsored by:	Nokia
2008-04-02 11:20:30 +00:00
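
A hedged kernel sketch of the new per-cpu arming calls; callout_reset_on() and callout_reset_curcpu() are named by the commit, and the sketch assumes they mirror callout_reset(9) plus a cpu argument:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    static struct callout tick_co;

    static void
    tick_fn(void *arg)
    {
            /* Re-arm on whichever cpu is running the callout now. */
            callout_reset_curcpu(&tick_co, hz, tick_fn, arg);
    }

    static void
    tick_start(void *arg)
    {
            callout_init(&tick_co, CALLOUT_MPSAFE);
            /* Ask for the first expiry on cpu 1 (illustrative choice). */
            callout_reset_on(&tick_co, hz, tick_fn, arg, 1);
    }
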
kib
c951adca24 Add two missed chunks from the rev. 1.210, for the giant_read() and
giant_ioctl().

PR:	kern/122287
MFC after:	3 days
2008-04-02 11:11:58 +00:00
jeff
639ca8f21b - Destroy the bo mtx when the vnode is destroyed. 2008-04-02 10:40:03 +00:00
davidxu
aefa44f0cc Fix a compile problem on amd64. 2008-04-02 05:54:41 +00:00
davidxu
f4f495d3ed Er, don't restart a timeout version. 2008-04-02 04:26:59 +00:00
davidxu
ebdf401288 Introduce a kernel-based userland rwlock. Each umtx chain now has two lists,
one for readers and one for writers; other types of synchronization
objects just use the first list.

Asked by: jeff
2008-04-02 04:08:37 +00:00
attilio
672b78c87b Add rw_try_rlock() and rw_try_wlock() to rwlocks.
These functions try the specified operation (rlocking and wlocking) and
return true if the operation completes, false otherwise.

The KPI is enriched by this commit, so __FreeBSD_version bumping and
manpage updating will happen soon.

Requested by:	jeff, kris
2008-04-01 20:31:55 +00:00
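
A small kernel sketch of the new try operations, assuming they return nonzero on success as the commit states; the counter protected here is illustrative:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    static struct rwlock stats_lock;
    static u_long stats_counter;

    static void
    stats_init(void)
    {
            rw_init(&stats_lock, "stats");
    }

    /* Best-effort update: skip the bump rather than block on contention. */
    static void
    stats_try_bump(void)
    {
            if (rw_try_wlock(&stats_lock)) {
                    stats_counter++;
                    rw_wunlock(&stats_lock);
            }
    }
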
dfr
60db59bdb1 Don't try to use an SX lock while holding the vnode interlock.
Sponsored by:	Isilon Systems
2008-04-01 16:07:01 +00:00
kib
5c017b360f Regen 2008-03-31 12:12:27 +00:00
kib
6687cc3940 Add the openat(), fexecve() and other *at() syscalls to the table.
Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:06:55 +00:00
kib
78facfe99b Implement the fexecve(2) syscall.
Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:05:52 +00:00
kib
e0eb2de7e9 Implement the
openat(2), faccessat(2), fchmodat(2), fchownat(2), fstatat(2),
	futimesat(2), linkat(2), mkdirat(2), mkfifoat(2), mknodat(2),
	readlinkat(2), renameat(2), symlinkat(2)
syscalls.

Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:04:20 +00:00
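
A minimal userland sketch of the fd-relative calls added here, using the usual POSIX prototypes; the paths are only illustrative:

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct stat sb;
            int dfd, fd;

            /* Open a directory once, then resolve names relative to it. */
            dfd = open("/var/log", O_RDONLY);
            if (dfd == -1)
                    return (1);

            if (fstatat(dfd, "messages", &sb, 0) == 0)
                    printf("messages: %jd bytes\n", (intmax_t)sb.st_size);

            fd = openat(dfd, "messages", O_RDONLY);
            if (fd != -1)
                    close(fd);
            close(dfd);
            return (0);
    }
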
kib
eff8c6d35e Add the support for the AT_FDCWD and fd-relative name lookups to the
namei(9).

Based on the submission by rdivacky,
	sponsored by Google Summer of Code 2007
Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 12:01:21 +00:00
kib
fb67926ebb Add the support for the O_EXEC open(2) mode, as specified by the
POSIX Extended API Set Part 2 extension specification.

Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 11:57:18 +00:00
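
A short userland sketch combining this commit with the fexecve(2) commit above: open an executable with O_EXEC and run it by descriptor. The path and arguments are illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    extern char **environ;

    int
    main(void)
    {
            char *argv[] = { "ls", "-l", NULL };
            int fd;

            /* O_EXEC: execute-only open; neither read nor write access. */
            fd = open("/bin/ls", O_EXEC);
            if (fd == -1) {
                    perror("open");
                    return (1);
            }

            fexecve(fd, argv, environ);     /* returns only on error */
            perror("fexecve");
            return (1);
    }
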
kib
fe816f911f Add the utility function vn_commname() to retrieve the command name
from the vfs namecache, when available.

Reviewed by:	rwatson, rdivacky
Tested by:	pho
2008-03-31 11:53:03 +00:00
jeff
c2c4476b86 - Consistently return EDEADLK when presented with a new set that is
incompatible with existing bindings.
 - Try to copyout the setid in cpuset() before migrating the proc to the
   setid in case the user has supplied a bad buffer.
 - Rename cpuset_root() and cpuset_base() to cpuset_ref{root,base} to
   be more descriptive and free cpuset_root to be used as a different
   type of symbol.
 - Make cpuset_root the cpuset_t set of all cpus in the system.  This
   should contain the same bitmask as all_cpus presently.
 - Add a CPU_CMP() macro to compare two sets.
2008-03-30 11:31:14 +00:00
jeff
ef1b5135a9 - Don't allow calls to vn_lock() with no lock type requested. Callers
which simply want a reference should use vref().  Callers which want
   to check validity need to hold a lock while performing any action
   based on that validity.  vn_lock() would always release the interlock
   before returning making any action synchronous with the validity check
   impossible.
2008-03-29 23:36:26 +00:00
jeff
26629f5add - Use vget() to lock the vnode rather than refing without a lock and
locking in separate steps.
2008-03-29 23:30:40 +00:00
attilio
7e107a0c8c b_waiters cannot be adequately protected by the interlock because it is
dropped after the call to lockmgr(), so just revert this approach, using
something similar to the previous one:
BUF_LOCKWAITERS() just checks whether there are waiters (not the actual
number of them) and is based on the newly introduced lockmgr_waiters(),
which returns whether the lockmgr has waiters or not. The name was chosen
to differ from the old lockwaiters() in order not to confuse the two.

The KPI is enriched by this commit, so a __FreeBSD_version bump and
manpage update will be happening soon.
'struct buf' also changes, so kernel ABI is disturbed.

Bug found by:	jeff
Approved by:	jeff, kib
2008-03-28 12:30:12 +00:00
jb
735643909e Regen after makesyscalls.sh change. 2008-03-27 01:55:06 +00:00
jb
4a6deb0614 Generate another function for the DTrace syscall provider to specify
the syscall argument types.

This code is only compiled into the systrace kernel module and has no
effect otherwise.
2008-03-27 01:53:44 +00:00
phk
fa71439e44 The "free-lance" timer in the i8254 is only used for the speaker
these days, so de-generalize the acquire_timer/release_timer api
to just deal with speakers.

The new (optional) MD functions are:
	timer_spkr_acquire()
	timer_spkr_release()
and
	timer_spkr_setfreq()

the last of which configures the timer to generate a tone of a given
frequency, in Hz instead of 1/1193182ths of a second.

Drop timer2 on pc98 entirely; it is not used anywhere at all.

Move sysbeep() to kern/tty_cons.c and use the timer_spkr*() if
they exist, and do nothing otherwise.

Remove prototypes and empty acquire-/release-timer() and sysbeep()
functions from the non-beeping archs.

This eliminates the need for the speaker driver to know about
the i8254 frequency at all.  In theory this makes the speaker driver MI,
contingent on the timer_spkr_*() functions existing but the driver
does not know this yet and still attaches to the ISA bus.

Syscons is more tricky: in one function, sc_tone(), it knows the Hz
and things are just fine.

In the other function, sc_bell(), it seems to get the period from
the KDMKTONE ioctl in terms of 1/1193182ths of a second, so we hardcode
the 1193182 and leave it at that.  It's probably not important.

Change a few other sysbeep() uses which obviously knew that the
argument was in terms of i8254 frequency, and leave alone those
that look like people thought sysbeep() took frequency in hertz.

This eliminates the knowledge of i8254_freq from all but the actual
clock.c code and the prof_machdep.c on amd64 and i386, where I think
it would be smart to ask for help from the timecounters anyway [TBD].
2008-03-26 20:09:21 +00:00
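
A hedged sketch of producing a beep through the new MD hooks; the three function names come from the commit, but the return type of timer_spkr_acquire() and the header are assumptions:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <machine/clock.h>

    static void
    beep_800hz(void)
    {
            /* Assumed contract: acquire returns 0 when the timer is ours. */
            if (timer_spkr_acquire() != 0)
                    return;
            timer_spkr_setfreq(800);        /* tone frequency in Hz */
            DELAY(200000);                  /* sound it for ~200 ms */
            timer_spkr_release();
    }
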
dfr
1c5a20ad66 Regen. 2008-03-26 15:24:02 +00:00
dfr
79d2dfdaa6 Add the new kernel-mode NFS Lock Manager. To use it instead of the
user-mode lock manager, build a kernel with the NFSLOCKD option and
add '-k' to 'rpc_lockd_flags' in rc.conf.

Highlights include:

* Thread-safe kernel RPC client - many threads can use the same RPC
  client handle safely with replies being de-multiplexed at the socket
  upcall (typically driven directly by the NIC interrupt) and handed
  off to whichever thread matches the reply. For UDP sockets, many RPC
  clients can share the same socket. This allows the use of a single
  privileged UDP port number to talk to an arbitrary number of remote
  hosts.

* Single-threaded kernel RPC server. Adding support for a multi-threaded
  server would be relatively straightforward and would follow
  approximately the Solaris KPI. A single thread should be sufficient
  for the NLM since it should rarely block in normal operation.

* Kernel mode NLM server supporting cancel requests and granted
  callbacks. I've tested the NLM server reasonably extensively - it
  passes both my own tests and the NFS Connectathon locking tests
  running on Solaris, Mac OS X and Ubuntu Linux.

* Userland NLM client supported. While the NLM server doesn't have
  support for the local NFS client's locking needs, it does have to
  field async replies and granted callbacks from remote NLMs that the
  local client has contacted. We relay these replies to the userland
  rpc.lockd over a local domain RPC socket.

* Robust deadlock detection for the local lock manager. In particular
  it will detect deadlocks caused by a lock request that covers more
  than one blocking request. As required by the NLM protocol, all
  deadlock detection happens synchronously - a user is guaranteed that
  if a lock request isn't rejected immediately, the lock will
  eventually be granted. The old system allowed for a 'deferred
  deadlock' condition where a blocked lock request could wake up and
  find that some other deadlock-causing lock owner had beaten them to
  the lock.

* Since both local and remote locks are managed by the same kernel
  locking code, local and remote processes can safely use file locks
  for mutual exclusion. Local processes have no fairness advantage
  compared to remote processes when contending to lock a region that
  has just been unlocked - the local lock manager enforces a strict
  first-come first-served model for both local and remote lockers.

Sponsored by:	Isilon Systems
PR:		95247 107555 115524 116679
MFC after:	2 weeks
2008-03-26 15:23:12 +00:00
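
The configuration the commit describes, as a kernel config line plus the rc.conf flag; the rpc_lockd_enable line is the usual companion knob and is an assumption here:

    # kernel configuration
    options NFSLOCKD

    # /etc/rc.conf
    rpc_lockd_enable="YES"          # assumed companion knob
    rpc_lockd_flags="-k"            # use the kernel-mode NLM
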
scottl
a3b7a4bce8 Implement taskqueue_block() and taskqueue_unblock(). These functions allow
the owner of a queue to block and unblock execution of the tasks in the
queue while allowing tasks to continue to be added to the queue.  Combining this
with taskqueue_drain() allows a queue to be safely disabled.  The unblock
function may run (or schedule to run) the queue when it is called, just as
calling taskqueue_enqueue() would.

Reviewed by: jhb, sam
2008-03-25 22:38:45 +00:00
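
A hedged kernel sketch of the disable pattern the commit describes: block the queue so nothing new runs, then drain any task that may already be executing. Names other than the taskqueue(9) calls are illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/taskqueue.h>

    static struct taskqueue *wq;
    static struct task       w_task;

    static void
    w_task_fn(void *ctx, int pending)
    {
            /* ... deferred work ... */
    }

    static void
    w_disable(void)
    {
            /* New enqueues still land on the queue but will not run. */
            taskqueue_block(wq);
            /* Wait for any in-flight instance of the task to finish. */
            taskqueue_drain(wq, &w_task);
    }

    static void
    w_enable(void)
    {
            /* May immediately run (or schedule) whatever queued up. */
            taskqueue_unblock(wq);
    }
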
ru
3b1bf8c2e9 Replaced the misleading uses of a historical artefact M_TRYWAIT with M_WAIT.
Removed dead code that assumed that M_TRYWAIT can return NULL; it's not true
since the advent of MBUMA.

Reviewed by:	arch

There are ongoing disputes as to whether we want to switch to directly using
UMA flags M_WAITOK/M_NOWAIT for mbuf(9) allocation.
2008-03-25 09:39:02 +00:00
ru
0655a583e2 Regen after changing prototypes of cpuset_{get,set}affinity(). 2008-03-25 09:14:17 +00:00
ru
4feaeed265 Fixed type of the fourth argument of cpuset_{get,set}affinity(2) to be size_t.
Prodded by:	davidxu
2008-03-25 09:11:53 +00:00
jeff
3ad75daf19 - Greatly simplify vget() by removing the guarantee that any new
references to a vnode with VI_OWEINACT set will force the vinactive()
   call.  The kernel makes no guarantees about which reference was the
   last to close a file or when the actual inactive processing will
   happen.  The previous code was designed to preserve existing semantics
   in the face of shared locks, however, this was unnecessary.

Discussed with:	mckusick
2008-03-24 04:22:58 +00:00
jeff
1bf44343e2 - Don't acquire the vnode interlock in _vn_lock() unless no lock type
is requested.  Handle this case specially before the while loop.
 - Use the held vnode lock to check for VI_DOOMED.  The vnode lock and
   interlock must both be held to set VI_DOOMED so either one held, even
   shared, is sufficient to check it.

No objection by:	kib
2008-03-24 04:17:35 +00:00
kib
5ddf5664cc Yield the cpu in the kernel while iterating the list of the
vnodes belonging to the mountpoint. Also, yield when in the
softdep_process_worklist() even when we are not going to sleep due to
buffer drain.

It is believed that the ULE fixed the problem [1], but the yielding
seems to be needed at least for the 4BSD case.

Discussed:	on stable@, with bde
Reviewed by:	tegge, jeff [1]
MFC after:	2 weeks
2008-03-23 13:45:24 +00:00
davidxu
c32a483ae9 Remove commented out code, thread suspension is done in thread library. 2008-03-23 02:03:06 +00:00
jeff
8103d042fb - Only return 1 from sync_vnode() in cases where the vnode is still
at the head of the sync list.  This prevents sched_sync() from
   re-queueing a vnode which may have been freed already.

Discussed with:	kib
2008-03-23 01:44:28 +00:00
jeff
73b6a5597c - Pass BO_MTX(bo) to lockmgr in vtruncbuf, we don't own the vnode
interlock here anymore.

Reported by:	kris
2008-03-23 01:42:19 +00:00
phk
5a1f4173f5 In abort2(2): Accept a NULL arg pointer if nargs == 0 2008-03-22 16:32:52 +00:00
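
A one-line sketch of the case this commit allows, assuming the abort2(2) prototype from <stdlib.h>:

    #include <stdlib.h>

    static void
    fatal_state(void)
    {
            /* No auxiliary pointers to report: nargs == 0, args may be NULL. */
            abort2("widget state corrupted", 0, NULL);
    }
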
jeff
a9d123c3ab - Complete part of the unfinished bufobj work by consistently using
BO_LOCK/UNLOCK/MTX when manipulating the bufobj.
 - Create a new lock in the bufobj to lock bufobj fields independently.
   This leaves the vnode interlock as an 'identity' lock while the bufobj
   is an io lock.  The bufobj lock is ordered before the vnode interlock
   and also before the mnt ilock.
 - Exploit this new lock order to simplify softdep_check_suspend().
 - A few sync related functions are marked with a new XXX to note that
   we may not properly interlock against a non-zero bv_cnt when
   attempting to sync all vnodes on a mountlist.  I do not believe this
   race is important.  If I'm wrong this will make these locations easier
   to find.

Reviewed by:	kib (earlier diff)
Tested by:	kris, pho (earlier diff)
2008-03-22 09:15:16 +00:00
alfred
b283b3e59a Fix a race where timeout/untimeout could cause crashes for Giant locked
code.

The bug:

There exists a race condition for timeout/untimeout(9) due to the
way that the softclock thread dequeues timeouts.

The softclock thread sets the c_func and c_arg of the callout to
NULL while holding the callout lock but not Giant.  It then drops
the callout lock and acquires Giant.

It is at this point where untimeout(9) on another cpu/thread could
be called.

Since c_arg and c_func are cleared, untimeout(9) does not touch the
callout and returns as if the callout is canceled.

The softclock then tries to acquire Giant and likely blocks due to
the other cpu/thread holding it.

The other cpu/thread then likely deallocates the backing store that
c_arg points to and finishes working and hence drops Giant.

Softclock resumes and acquires giant and calls the function with
the now free'd c_arg and we have corruption/crash.

The fix:

We need to track curr_callout even for timeout(9) (LOCAL_ALLOC)
callouts.  We need to free the callout after the softclock processes
it to deal with the race here.

Obtained from: Juniper Networks, iedowse
Reviewed by: jhb, iedowse
MFC after:	2 weeks
2008-03-22 07:29:45 +00:00
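
For reference, a hedged sketch of the Giant-locked timeout(9)/untimeout(9) pattern the race affects; the surrounding driver state is illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/callout.h>

    struct softc {
            struct callout_handle   sc_timer;
            int                     sc_armed;
    };

    static void
    watchdog_fn(void *arg)
    {
            struct softc *sc = arg; /* timeout(9) handlers run under Giant */

            sc->sc_armed = 0;
            /* ... recover the device ... */
    }

    static void
    watchdog_arm(struct softc *sc)
    {
            sc->sc_timer = timeout(watchdog_fn, sc, 5 * hz);
            sc->sc_armed = 1;
    }

    static void
    watchdog_disarm(struct softc *sc)
    {
            if (sc->sc_armed) {
                    untimeout(watchdog_fn, sc, sc->sc_timer);
                    sc->sc_armed = 0;
            }
    }
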
kib
bc4bc893dd Reduce contention on the vnode interlock by not acquiring the BO_LOCK
around the check for the BV_BKGRDINPROG in the brelse() and bqrelse().
See the comment for the explanation why it is safe.

Tested by:	pho
Submitted by:	jeff
2008-03-21 12:38:44 +00:00
jeff
72142b2fae - Reduce contention on the global bdonelock and bpinlock by using
a pool mutex to protect these sleep/wakeup/counter races.  This
   still is preferable to bloating each bio with a mtx.
2008-03-21 10:00:05 +00:00
jeff
ba540b27d6 - Add a new td flag TDF_NEEDSUSPCHK that is set whenever a thread needs
to enter thread_suspend_check().
 - Set TDF_ASTPENDING along with TDF_NEEDSUSPCHK so we can move the
   thread_suspend_check() to ast() rather than userret().
 - Check TDF_NEEDSUSPCHK in the sleepq_catch_signals() optimization so
   that we don't miss a suspend request.  If this is set use the
   expensive signal path.
 - Set NEEDSUSPCHK when creating a new thread in thr in case the
   creating thread is due to be suspended as well but has not yet.

Reviewed by:	davidxu (Authored original patch)
2008-03-21 08:23:25 +00:00
jhb
6cf6d7b22b Implement a BUS_BIND_INTR() method in the bus interface to bind an IRQ
resource to a CPU.  The default method is to pass the request up to the
parent similar to BUS_CONFIG_INTR() so that all busses don't have to
explicitly implement bus_bind_intr.  A bus_bind_intr(9) wrapper routine
similar to bus_setup/teardown_intr() is added for device drivers to use.
Unbinding an interrupt is done by binding it to NOCPU.  The IRQ resource
must be allocated, but it can happen in any order with respect to
bus_setup_intr().  Currently it is only supported on amd64 and i386 via
nexus(4) methods that simply call the intr_bind() routine.

Tested by:	gallatin
2008-03-20 21:24:32 +00:00
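
A hedged driver-side sketch of the new call, using the bus_bind_intr(9) shape the commit introduces; the handler and the choice of cpu 2 are illustrative:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/bus.h>
    #include <sys/rman.h>
    #include <machine/bus.h>
    #include <machine/resource.h>

    static void
    mydev_intr(void *arg)
    {
            /* ... service the device ... */
    }

    static int
    mydev_setup_irq(device_t dev, struct resource **irqp, void **cookiep)
    {
            int error, rid = 0;

            *irqp = bus_alloc_resource_any(dev, SYS_RES_IRQ, &rid,
                RF_ACTIVE | RF_SHAREABLE);
            if (*irqp == NULL)
                    return (ENXIO);

            error = bus_setup_intr(dev, *irqp, INTR_TYPE_MISC | INTR_MPSAFE,
                NULL, mydev_intr, dev, cookiep);
            if (error != 0)
                    return (error);

            /* Bind the IRQ (and its ithread) to cpu 2; this can happen in
               any order with respect to bus_setup_intr(). */
            return (bus_bind_intr(dev, *irqp, 2));
    }
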
kib
28174d9ffb Fix the leak of the vmspace on the fork when the process limits
are exceeded.

Pointy hat to:	me
MFC after:	3 days
2008-03-20 15:24:49 +00:00
jeff
a3f8e0c20d - Restore runq to manipulating threads directly by putting runq links and
rqindex back in struct thread.
 - Compile kern_switch.c independently again and stop #include'ing it from
   schedulers.
 - Remove the ts_thread backpointers and convert most code to go from
   struct thread to struct td_sched.
 - Cleanup the ts_flags #define garbage that was causing us to sometimes
   do things that expanded to td->td_sched->ts_thread->td_flags in 4BSD.
 - Export the kern.sched sysctl node in sysctl.h
2008-03-20 05:51:16 +00:00
jeff
4274384df8 - Remove the unused and redundant sched_newproc() function.
- Remove the unused and redundant sched_newthread() which peeks into scheduler
   private structures.
2008-03-20 03:09:15 +00:00