Commit Graph

2132 Commits

John Baldwin
76764432e4 - Add support for "paging" in stack trace output. That is, when you do
a stack trace from ddb, the output will pause with a '--More--' prompt
  every 18 lines.  If you hit Enter, it will print another line and prompt
  again.  If you hit space it will output another page and then prompt.
  If you hit 'q' or 'x' it will abort the rest of the stack trace.
- Fix the sparc64 userland stack trace to honor the total count of lines
  to print.  This is useful if your trace happens to walk back onto
  0xdeadc0de and gets stuck in an endless loop.

MFC after:	1 month
Tested on:	i386, alpha, sparc64
2004-09-20 19:05:32 +00:00
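
A minimal sketch of the '--More--' logic described above, with invented helper names (db_readch(), db_printf(), db_pager_checkpoint()); the committed ddb code differs in detail:

    /*
     * Sketch only: pause output every 18 lines.  Enter prints one more
     * line and re-prompts, space prints another page, 'q' or 'x' aborts.
     */
    static int db_lines_left = 18;          /* lines before the next prompt */

    static int
    db_pager_checkpoint(void)
    {
            int c;

            if (--db_lines_left > 0)
                    return (0);             /* keep printing */
            db_printf("--More--");
            c = db_readch();
            switch (c) {
            case '\r':
            case '\n':
                    db_lines_left = 1;      /* one more line, then re-prompt */
                    break;
            case ' ':
                    db_lines_left = 18;     /* another full page */
                    break;
            case 'q':
            case 'x':
                    return (1);             /* abort the rest of the trace */
            }
            return (0);
    }
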
Alan Cox
de6c3db01f Simplify the reference counting of page table pages. Specifically, use
the page table page's wired count rather than its hold count to contain
the reference count.  My rationale for this change is based on several
factors:

1. The machine-independent and pmap layers used the same hold count field
   in subtly different ways.  The machine-independent layer uses the hold
   count to implement a form of ephemeral wiring that is used by pipes,
   physio, etc.  In other words, subsystems where we wish to temporarily
   block a page from being swapped out while it is mapped into the kernel's
   address space.  Such pages are never removed from the page queues.
   Instead, the page daemon recognizes a non-zero hold count to mean "hands
   off this page."  In contrast, page table pages are never in the page
   queues; they are wired from birth to death.  The hold count was being
   used as a kind of reference count, specifically, the number of valid
   page table entries within the page.  Not surprisingly, these two
   different uses imply different synchronization rules: in the machine-
   independent layer access to the hold count requires the page queues
   lock; whereas in the pmap layer the pmap lock is required.  Thus,
   continued use by the pmap layer of vm_page_unhold(), which asserts that
   the page queues lock is held, made no sense.

2. _pmap_unwire_pte_hold() was too forgiving in its handling of the wired
   count.  An unexpected wired count on a page table page was ignored and
   the underlying page leaked.

3. In a word, micro-optimization.  Using the wired count exclusively, rather
   than a combination of the wired and hold counts, makes the code slightly
   smaller and faster.

Reviewed by: tegge@
2004-09-19 21:20:01 +00:00
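
A hedged sketch of the idea: the page table page's wired count holds the number of valid PTEs in the page and is protected by the pmap lock. The helper name and body below are illustrative, not the committed code:

    /*
     * Sketch only: reference-count a page table page via its wired count.
     * The count is protected by the pmap lock, not the page queues lock.
     */
    static int
    pmap_unwire_pte(pmap_t pmap, vm_page_t m)
    {
            PMAP_LOCK_ASSERT(pmap, MA_OWNED);
            if (--m->wire_count > 0)
                    return (0);             /* other valid PTEs remain */
            /* The last valid PTE went away: free the page table page. */
            atomic_subtract_int(&cnt.v_wire_count, 1);
            vm_page_free_zero(m);
            return (1);
    }
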
Alan Cox
6134d96917 MFamd64/i386
Avoid recomputing PHYS_TO_VM_PAGE() unnecessarily in pmap_protect().
2004-09-19 05:34:49 +00:00
Alan Cox
8478ea241b Remove an outdated assertion from _pmap_allocpte(). (When vm_page_alloc()
succeeds, the page's queue field is unconditionally set to PQ_NONE by
vm_pageq_remove_nowakeup().)
2004-09-19 02:39:31 +00:00
Poul-Henning Kamp
216d5bb528 Allocate tty at attach time instead of open time. 2004-09-17 11:04:57 +00:00
Poul-Henning Kamp
c7076ea1b8 Be slightly less bogus in struct tty allocation. 2004-09-17 11:02:53 +00:00
Poul-Henning Kamp
7ce1979be6 Add a new function isa_dma_init() which returns an errno when it fails
and which takes a M_WAITOK/M_NOWAIT flag argument.

Add compatibility isa_dmainit() macro which whines loudly if
isa_dma_init() fails.

Problem uncovered by:	tegge
2004-09-15 12:09:50 +00:00
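
A sketch of what such a compatibility macro might look like; the exact message text is an assumption:

    /*
     * Sketch: isa_dmainit() keeps the old void-returning interface but
     * complains loudly when isa_dma_init() fails.
     */
    #define isa_dmainit(chan, size) do {                                \
            if (isa_dma_init((chan), (size), M_NOWAIT) != 0)            \
                    printf("WARNING: isa_dmainit(%d, %ju) failed\n",    \
                        (int)(chan), (uintmax_t)(size));                \
    } while (0)
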
Alan Cox
fe8d8261ec Add nge. (I've used one for about a week in an XP1000.) 2004-09-11 07:26:50 +00:00
Marcel Moolenaar
7dafab2e78 The previous commit, roughly one and a half years ago, removed the
branch prediction optimization for LINT, because the kernel was too
large. This commit now removes it altogether since it causes build
failures for GENERIC kernels and the various applicable trends are
such that one can expect that these failures will cause more
problems than they're worth in the future. These trends include:
1. Alpha was demoted from tier 1 to tier 2 due to lack of active
   support. The number of people willing to fix build breakages
   is not likely to increase and those developers that do have the
   gumption to test MI changes on alpha are not likely to spend
   time fixing unexpected build failures first.
2. The kernel will only increase in size. Even though stripped-down
   kernels do link without problems now, compiler optimizations (like
   inlining) and new (non-optional) functionality will likely cause
   stripped-down kernels to break in the future as well.

So, with my asbestos suit on, get rid of potential problems before
they happen.

MT5 candidate.
2004-09-10 05:00:27 +00:00
Scott Long
50736a153b Fix a problem with tag->boundary inheritance that has existed since day one
and was propagated to nearly every platform.  The boundary of the child needs
to consider the boundary of the parent and pick the minimum of the two, not
the maximum.  However, if either is 0, then pick the other, non-zero one.
This bug was exposed by a recent change to ATA, which should now be fixed by
this change.  The alignment and maxsegsz tag attributes likely also need
a similar review in the near future.

This is a MT5 candidate.

Reviewed by: marcel
Submitted by: sos (in part)
2004-09-08 04:54:19 +00:00
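
The corrected inheritance rule reads roughly like the following sketch (names invented; a boundary of 0 means "no restriction"):

    /*
     * Sketch of the fixed rule: inherit the other value when either
     * boundary is 0, otherwise the stricter (smaller) boundary wins.
     */
    static bus_size_t
    dma_tag_inherit_boundary(bus_size_t parent, bus_size_t child)
    {
            if (parent == 0)
                    return (child);
            if (child == 0)
                    return (parent);
            return (parent < child ? parent : child);   /* MIN, not MAX */
    }
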
Scott Long
444ba94513 Switch the default scheduler to 4BSD to match what will go into RELENG_5 soon.
It can be switched back once 5.3 is tested and released.  Also turn on
PREEMPTION as many of the stability problems with it have been fixed.

MT5: 3 days.
2004-09-07 22:37:43 +00:00
Poul-Henning Kamp
23385eb7dd Make the alpha timecounter preferable to the i8254. 2004-09-07 07:06:36 +00:00
Julian Elischer
ed062c8d66 Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per thread scheduler
private data" structure. In order to not make the diffs too great
one is #defined as the other at this time.

The KSE (or td_sched) structure is now allocated per thread and has no
allocation code of its own.

Concurrency for a KSEGRP is now kept track of via a simple pair of counters
rather than using KSE structures as tokens.

Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.

The fields in the ksegrp structure that are to do with the scheduler's
queueing mechanisms are now moved to the kg_sched structure.
(the per-ksegrp scheduler private data structure). In other words, how the
scheduler queues and keeps track of threads is no one's business except
the scheduler's. This should allow people to write experimental
schedulers with completely different internal structuring.

A scheduler call sched_set_concurrency(kg, N) has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be scheduled concurrently. This is also used to enforce
'fairness' at this time, so that a ksegrp with 10000 threads cannot
swamp the run queue and force out a process with 1 thread, since the
current code will not set the concurrency above NCPU, and both
schedulers will not allow more than that many onto the system run
queue at a time. Each scheduler should eventually develop its own
methods to do this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as
libkse for scope system threads. This has slightly hurt libthr's performance
but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly.
Exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.
Reviewed by:	scottl, peter
MFC after:	1 week
2004-09-05 02:09:54 +00:00
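
A rough sketch of the pair-of-counters idea mentioned above; the field names are approximations, not quoted from the diff:

    /*
     * Sketch: track ksegrp concurrency with two counters instead of
     * handing out KSE structures as tokens.
     */
    void
    sched_set_concurrency(struct ksegrp *kg, int concurrency)
    {
            /* Adjust the available slots by the change in the limit. */
            kg->kg_avail_opennings += concurrency - kg->kg_concurrency;
            kg->kg_concurrency = concurrency;
    }
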
Scott Long
9923b511ed Turn PREEMPTION into a kernel option. Make sure that it's defined if
FULL_PREEMPTION is defined.  Add a runtime warning to ULE if PREEMPTION is
enabled (code inspired by the PREEMPTION warning in kern_switch.c).  This
is a possible MT5 candidate.
2004-09-02 18:59:15 +00:00
Julian Elischer
2630e4c90c Give setrunqueue() and sched_add() more of a clue as to
where they are coming from and what is expected from them.

MFC after:	2 days
2004-09-01 02:11:28 +00:00
Julian Elischer
5995adc206 Remove an unneeded argument.
The removed argument could trivially be derived from the remaining one.
That in turn should be the same as curthread, but it is possible that
curthread could be expensive to derive on some systems, so leave it as
an argument. Having both proc and thread as arguments just gives an
opportunity for them to get out of sync.

MFC after:	3 days
2004-08-31 07:34:54 +00:00
Julian Elischer
99e9dcb817 Remove sched_free_thread() which was only used
in diagnostics. It has outlived its usefulness and has started
causing panics for people who turn on DIAGNOSTIC, in what is otherwise
good code.

MFC after:	2 days
2004-08-31 06:12:13 +00:00
Wilko Bulte
0d86d31bba Add em(4) to Alpha. I had a couple running recently on Alpha and it
appeared to work fine.

Submitted by:	Konstantin Saurbier saurbier at mathematik uni-bielefeld de
2004-08-30 18:40:00 +00:00
Marcel Moolenaar
d4da990081 In alpha_pci_alloc_resource(), when allocating a memory resource,
do not set the virtual address to the bus address when the bus
doesn't have either of the PCI_RF_DENSE or PCI_RF_BWX flags set.
The TGA driver uses the virtual address to access the registers,
which on some machines can cause a memory management fault.  Map
the bus address as K0SEG virtual memory instead. Note that with
some hardware combinations involving the TGA2 adapter, this change
merely results in the memory management fault being replaced by a
machine check.
2004-08-29 19:07:14 +00:00
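
A sketch of the flag test (variable names invented; ALPHA_PHYS_TO_K0SEG() is the standard alpha direct-map macro):

    /*
     * Sketch: without dense or BWX access methods the raw bus address
     * is not CPU-addressable, so hand out a K0SEG direct-mapped
     * address instead.
     */
    if (flags & (PCI_RF_DENSE | PCI_RF_BWX))
            va = (void *)ba;                        /* old behavior still valid */
    else
            va = (void *)ALPHA_PHYS_TO_K0SEG(ba);   /* direct-mapped segment */
    rman_set_virtual(res, va);
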
Wilko Bulte
11e9b7ef70 Stop pretending: TurboLaser support is really broken
MFC after:	2 days
2004-08-28 21:47:24 +00:00
Wilko Bulte
c8d51c4c0d Stop pretending: TurboLaser support is really broken.
MFC after:	2 days
2004-08-28 21:42:15 +00:00
Marcel Moolenaar
0f2fe153bc Move the kernel-specific logic to adjust frompc from MI to MD, for
these two reasons:
1. On ia64 a function pointer does not hold the address of the first
   instruction of a function's implementation. It holds the address
   of a function descriptor. Hence the user(), btrap(), eintr() and
   bintr() prototypes are wrong for getting the actual code address.
2. The logic forces interrupt, trap and exception entry points to
   be laid out contiguously. This cannot be achieved on ia64 and is
   generally just bad programming.

The MCOUNT_FROMPC_USER macro is used to set the frompc argument to
some kernel address which represents any frompc that falls outside
the kernel text range. The macro can expand to ~0U to bail out in
that case.
The MCOUNT_FROMPC_INTR macro is used to set the frompc argument to
some kernel address to represent a call to a trap or interrupt
handler. This avoids making the trap or interrupt handler appear to
be called from everywhere in the call graph. The macro can expand
to ~0U to prevent adjusting frompc. Note that the argument is selfpc,
not frompc.

This commit defines the macros on all architectures equivalently to
the original code in sys/libkern/mcount.c. People can take it from
here...

Compile-tested on: alpha, amd64, i386, ia64 and sparc64
Boot-tested on: i386
2004-08-27 19:42:35 +00:00
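
A sketch of how profiling code might apply the two macros; kernel_text_begin/end are invented stand-ins for the real text-range bounds:

    /*
     * Sketch only: replace the old MI comparisons against user(),
     * btrap(), eintr() and bintr() with the new MD macros.
     */
    static uintfptr_t
    mcount_adjust_frompc(uintfptr_t frompc, uintfptr_t selfpc, int from_intr)
    {
            if (from_intr)
                    /* Note: the INTR macro takes selfpc, not frompc. */
                    return (MCOUNT_FROMPC_INTR(selfpc));
            if (frompc < kernel_text_begin || frompc >= kernel_text_end)
                    /* May expand to ~0U, telling the caller to bail out. */
                    return (MCOUNT_FROMPC_USER(frompc));
            return (frompc);
    }
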
Marcel Moolenaar
e50cb9b07f Provide extern declarations for btext and etext when GPROF is defined.
These are referenced in subr_prof.c when building a profiling kernel.
2004-08-27 19:20:42 +00:00
Alan Cox
8991a235cb The machine-independent parts of the virtual memory system always pass a
valid pmap to the pmap functions that require one.  Remove the checks for
NULL.  (These checks have their origins in the Mach pmap.c that was
integrated into BSD.  None of the new code written specifically for
FreeBSD included them.)
2004-08-27 19:06:17 +00:00
Andre Oppermann
c21fd23260 Always compile PFIL_HOOKS into the kernel and remove the associated kernel
compile option.  All FreeBSD packet filters now use the PFIL_HOOKS API and
thus it becomes a standard part of the network stack.

If no hooks are connected, the entire packet filter hooks section and related
activities are skipped.  This removes any performance impact if no hooks
are active.

Both OpenBSD and DragonFlyBSD have integrated PFIL_HOOKS permanently as well.
2004-08-27 15:16:24 +00:00
Alan Cox
7fde238f04 Remove unnecessary check for curthread == NULL. 2004-08-26 04:34:39 +00:00
John Baldwin
8a7aa72dec Regenerate after fcntl() wrappers were marked MP safe. 2004-08-24 20:24:34 +00:00
John Baldwin
2ca25ab53e Fix the ABI wrappers to use kern_fcntl() rather than calling fcntl()
directly.  This removes a few more users of the stackgap and also marks
the syscalls using these wrappers MP safe where appropriate.

Tested on:	i386 with linux acroread5
Compiled on:	i386, alpha LINT
2004-08-24 20:21:21 +00:00
Tim J. Robbins
0e73a96209 Add a new type, l_uintptr_t, which is an unsigned integer type with the
same width as a pointer under Linux. Add two new macros, PTRIN and PTROUT,
which convert between l_uintptr_t and native pointers.
2004-08-16 07:05:44 +00:00
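
The definitions amount to roughly the following sketch; the 32-bit width assumes the 32-bit Linux ABI on a 64-bit kernel:

    /* Sketch: a pointer-sized-under-Linux integer and its converters. */
    typedef uint32_t    l_uintptr_t;    /* same width as a Linux pointer */

    #define PTRIN(v)    ((void *)(uintptr_t)(v))        /* Linux -> native */
    #define PTROUT(v)   ((l_uintptr_t)(uintptr_t)(v))   /* native -> Linux */
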
Alan Cox
87b9823ede - Make pmap_emulate_reference() MP and preemption safe. Previously, it
   contained "sanity" checks that could be violated if another CPU modified
   the pmap between the emulation trap and locking the pmap in
   pmap_emulate_reference().  As a result, the pte could be inconsistent
   with the access that caused the emulation trap.  In such cases,
   pmap_emulate_reference() now flushes the current CPU's TLB entry and
   returns.
 - Make pmap_changebit() an inline function, reducing object code size.
2004-08-15 20:54:25 +00:00
Marcel Moolenaar
4da47b2fec Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine specific notes. On ia64 we use this
function to flush the dirty registers onto the backing store before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
Alan Cox
1b3b9cfe1d Post-locking clean up/simplification, particularly, the elimination of
vm_page_sleep_if_busy() and the page table page's busy flag as a
synchronization mechanism on page table pages.

Also, relocate the inline pmap_unwire_pte_hold() so that it can be used
to shorten _pmap_unwire_pte_hold() on alpha and amd64.  This places
pmap_unwire_pte_hold() next to a comment that more accurately describes
it than _pmap_unwire_pte_hold().
2004-08-04 18:04:44 +00:00
Mark Murray
d23a262fc5 Making a loadable null.ko for /dev/(null|zero) proved rather
unpopular, so remove this (mis)feature.

Encouragement provided by:	jhb (and others)
2004-08-03 19:24:54 +00:00
Maxime Henrion
9f1b87f106 Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
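
The macro is something like this sketch: a PAUSE hint on x86, a no-op elsewhere, so compiled code stays identical on other architectures:

    #if defined(__i386__) || defined(__amd64__)
    #define cpu_spinwait()  ia32_pause()
    #else
    #define cpu_spinwait()  /* nothing */
    #endif
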
Mark Murray
a5ed4a0ad5 Remove extraneous ';'. 2004-08-01 18:51:44 +00:00
Scott Long
9352fe30a0 Turn off PREEMPTION by default while it gets debugged. It's been causing
4 weeks of problems including deadlocks and instant panics.  Note that the
real bugs are likely in the scheduler.
2004-08-01 14:31:45 +00:00
Mark Murray
8ab2f5ecc5 Break out the MI part of the /dev/[k]mem and /dev/io drivers into
their own directory and module, leaving the MD parts in the MD
area (the MD parts _are_ part of the modules). /dev/mem and /dev/io
are now loadable modules, thus taking us one step further towards
a kernel created entirely out of modules. Of course, there is nothing
preventing the kernel from having these statically compiled.
2004-08-01 11:40:54 +00:00
Alan Cox
a087914310 Advance the state of pmap locking on alpha, amd64, and i386.
- Enable recursion on the page queues lock.  This allows calls to
   vm_page_alloc(VM_ALLOC_NORMAL) and UMA's obj_alloc() with the page
   queues lock held.  Such calls are made to allocate page table pages
   and pv entries.
 - The previous change enables a partial reversion of vm/vm_page.c
   revision 1.216, i.e., the call to vm_page_alloc() by vm_page_cowfault()
   now specifies VM_ALLOC_NORMAL rather than VM_ALLOC_INTERRUPT.
 - Add partial locking to pmap_copy().  (As a side-effect, pmap_copy()
   should now be faster on i386 SMP because it no longer generates IPIs
   for TLB shootdown on the other processors.)
 - Complete the locking of pmap_enter() and pmap_enter_quick().  (As of now,
   all changes to a user-level pmap on alpha, amd64, and i386 are performed
   with appropriate locking.)
2004-07-29 18:56:31 +00:00
Poul-Henning Kamp
0658bb8ef8 Move a relic to its correct location(s): Put nfs diskless initialization
calls with the code they call.  (Yet another example of mindless copy&paste).
2004-07-28 21:54:57 +00:00
Robert Watson
1a8cfbc450 Pass a thread argument into cpu_critical_{enter,exit}() rather than
dereference curthread.  It is called only from critical_{enter,exit}(),
which already dereferences curthread.  This doesn't seem to affect SMP
performance in my benchmarks, but improves MySQL transaction throughput
by about 1% on UP on my Xeon.

Head nodding:	jhb, bmilekic
2004-07-27 16:41:01 +00:00
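
A simplified sketch of the call site after the change; not the committed diff:

    /*
     * Sketch: critical_enter() already has curthread in hand, so pass
     * it down instead of letting cpu_critical_enter() dereference
     * curthread a second time.
     */
    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 0)
                    cpu_critical_enter(td);     /* td passed, not re-derived */
            td->td_critnest++;
    }
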
Colin Percival
56f21b9d74 Rename suser_cred()'s PRISON_ROOT flag to SUSER_ALLOWJAIL. This is
somewhat clearer, but more importantly allows for a consistent naming
scheme for suser_cred flags.

The old name is still defined, but will be removed in a few days (unless I
hear any complaints...)

Discussed with:	rwatson, scottl
Requested by:	jhb
2004-07-26 07:24:04 +00:00
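
The transitional definition presumably amounts to a one-line alias, sketched here:

    /* Sketch: keep the old spelling as an alias until callers convert. */
    #define PRISON_ROOT     SUSER_ALLOWJAIL
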
Marcel Moolenaar
fd32d93b97 Unify db_stack_trace_cmd(). All it did was look up the thread given
the thread ID and call db_trace_thread().
Since arm has all the logic in db_stack_trace_cmd(), rename the
new DB_COMMAND function to db_stack_trace to avoid conflicts on
arm.
While here, have db_stack_trace parse its own arguments so that
we can use a more natural radix for IDs. If the ID is not a thread
ID, or more precisely when no thread exists with the ID, check
whether there's a process with that ID and return its first thread.
This makes it easier to print stack traces from the ps output.

requested by: rwatson@
tested on: amd64, i386, ia64
2004-07-21 05:07:09 +00:00
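
A sketch of the fallback inside the db_stack_trace command; helper names are approximations:

    /*
     * Sketch: treat the ID as a thread ID first, then fall back to
     * interpreting it as a process ID.
     */
    struct thread *td;
    struct proc *p;

    td = kdb_thr_lookup(id);
    if (td == NULL && (p = pfind(id)) != NULL)
            td = FIRST_THREAD_IN_PROC(p);   /* first thread in the process */
    if (td == NULL) {
            db_printf("%d: no such thread or process\n", (int)id);
            return;
    }
    db_trace_thread(td, count);
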
Alan Cox
09281f58ed Add some additional pmap locking and lock assertions. 2004-07-21 03:38:46 +00:00
Alan Cox
fb7ed0f9cc The previous revision introduced a compilation error, i.e., the use of an
undefined variable.  Correct this error.
2004-07-20 06:32:32 +00:00
Alan Cox
e832aafc51 - Eliminate the pte object from the pmap. Instead, page table pages are
allocated as "no object" pages.  Similar changes were made to the amd64
   and i386 pmap last year.  The primary reason being that maintaining
   a pte object leads to lock order violations.  A secondary reason being
   that the pte object is redundant, i.e., the page table itself can be
   used to lookup page table pages.  (Historical note: The pte object
   predates our ability to allocate "no object" pages.  Thus, the pte
   object was a necessary evil.)
 - Unconditionally check the vm object lock's status in vm_page_remove().
   Previously, this assertion could not be made on Alpha due to its use
   of a pte object.
2004-07-19 18:12:04 +00:00
John Baldwin
788195c186 As a temporary hack, turn off deferred preemptions that are the result of
a fast interrupt handler doing an swi_sched().  This fixed the lockups I
saw on my laptop when using xmms in KDE and on rwatson's MySQL benchmarks
on SMP.  This will eventually be removed and/or modified when I figure out
what the root cause is and fix that.
2004-07-19 16:37:47 +00:00
Maxim Konovalov
aa355a2679 In -CURRENT pseudo devices are not statically assigned at compile time;
remove a stale comment.

PR:		kern/62285
2004-07-18 09:03:12 +00:00
Alan Cox
4a28f15ccd Only extract a physical address from a pte in pmap_extract() if the pte is
valid.

Implement the protection check required by the pmap_extract_and_hold()
specification.  (This enables the elimination of Giant from that function.)
2004-07-18 05:09:28 +00:00
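
A simplified sketch of pmap_extract() after the change; the helper and macro names (pmap_lev3pte(), pmap_pte_v(), pmap_pte_pa()) are assumptions. pmap_extract_and_hold() additionally verifies that the PTE grants the requested access before holding the page:

    /* Sketch only: refuse to synthesize an address from an invalid PTE. */
    vm_paddr_t
    pmap_extract(pmap_t pmap, vm_offset_t va)
    {
            pt_entry_t *pte;
            vm_paddr_t pa = 0;

            PMAP_LOCK(pmap);
            pte = pmap_lev3pte(pmap, va);
            if (pte != NULL && pmap_pte_v(pte))     /* valid PTE only */
                    pa = pmap_pte_pa(pte) | (va & PAGE_MASK);
            PMAP_UNLOCK(pmap);
            return (pa);
    }
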
Alan Cox
cd1f927d84 MFamd64 revision 1.478
Simplify pmap_remove_pages(), eliminating unnecessary indirection.
2004-07-17 04:01:29 +00:00
Alan Cox
2c45ce1b42 Remove dead or unused code, such as spl calls. 2004-07-16 21:38:48 +00:00