Commit Graph

10084 Commits

Alan Cox
20351faf18 When sf_buf_alloc() replaces a virtual-to-physical mapping, it needn't
invalidate the TLB(s) if the old mapping wasn't used by the CPU.  With
network interfaces that implement checksum off-loading, the old mapping is
almost never used by the CPU, only by the device driver for setting up the
DMA operation.

Reviewed by: tegge@
2004-10-16 22:32:50 +00:00
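
A minimal sketch of the optimization described above, with hypothetical names (cpu_used_old, sf->pte, sf->kva stand in for the real sf_buf(9) internals):

    /*
     * Sketch only: replace the page behind an sf_buf mapping and skip
     * the TLB shootdown when the CPU never dereferenced the old mapping
     * (checksum-offloading NICs touch it only to set up DMA).
     */
    static void
    sf_buf_remap(struct sf_buf *sf, vm_page_t m, int cpu_used_old)
    {
            pte_store(sf->pte, VM_PAGE_TO_PHYS(m) | PG_RW | PG_V);
            if (cpu_used_old)
                    pmap_invalidate_page(kernel_pmap, sf->kva);
            sf->m = m;
    }
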
Nate Lawson
ccd582b0fd Let nexus print our flags for us. Also, clean up an obfuscated if stmt. 2004-10-14 22:37:51 +00:00
Nate Lawson
8f528832e5 Print flags in the nexus for child devices. 2004-10-14 22:36:47 +00:00
Nate Lawson
e5979322ba Remove local hacks to set flags now that the device probe does this for us.
Tested on every device except sio_pci and the pc98 fd.c.  Perhaps something
similar should be done for the "disabled" hints also.

MFC after:	2 weeks
2004-10-14 22:21:59 +00:00
Poul-Henning Kamp
09af1b6cdd Add zero flags argument to sysctl calls. 2004-10-12 07:59:02 +00:00
Poul-Henning Kamp
e1e785a3d4 Add missing zero flag arguments to sysctl calls.
Add missing pointy hat to peter@
2004-10-12 07:58:13 +00:00
Warner Losh
fd492ee0e6 Make the lower range of the memory area 0x80000000 again. Also
introduce hw.{pci,acpi}.host_mem_start tunable to change this.

MFC: ASAP
2004-10-11 21:10:23 +00:00
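
The new tunable can be set from the loader, for example in loader.conf (the value shown is illustrative):

    hw.pci.host_mem_start="0xc0000000"    # illustrative, not a recommended value
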
Nate Lawson
7f35f90eae Match surrounding style, not style(msmith). 2004-10-11 05:42:12 +00:00
Nate Lawson
31ad3b8802 Move the code for halting the CPU (acpi_cpu_c1) into machdep files.
This removes the last MD portion of acpi_cpu.c.

MFC after:	2 weeks
2004-10-11 05:39:15 +00:00
Warner Losh
905454c86c Fix conflicts I didn't fix before I committed my busspace changes.
Noticed by: ru@ (and likely tinderbox, I haven't checked)
2004-10-11 00:58:24 +00:00
Warner Losh
ac00fac23a Convert to newbus. (chances are we could now move this to dev/pbio
since I believe it is now MI, but that hasn't been done yet).

Reviewed by: dds
2004-10-10 03:26:20 +00:00
David E. O'Brien
3b33d41dc2 style(9) 2004-10-09 08:31:21 +00:00
Alan Cox
aced26ce6e Make pte_load_store() an atomic operation in all cases, not just i386 PAE.
Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
in a race between processors.  (This restructuring assumes the newly atomic
pte_load_store() for correct operation.)

Reviewed by: tegge@
PR: i386/61852
2004-10-08 08:23:43 +00:00
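
A sketch of why the atomic exchange matters: a plain load-then-store opens a window in which another processor can set PG_M, and the store then silently overwrites it. A compare-and-set loop (i386 non-PAE types, simplified and illustrative) closes the window:

    /* Sketch: atomically replace a PTE and return its old contents. */
    static pt_entry_t
    pte_load_store_sketch(pt_entry_t *ptep, pt_entry_t newpte)
    {
            pt_entry_t oldpte;

            do {
                    oldpte = *ptep;
            } while (!atomic_cmpset_int((volatile u_int *)ptep,
                oldpte, newpte));
            return (oldpte);        /* PG_M, if set, is preserved here */
    }
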
Warner Losh
bacb482d94 Port pbio to HEAD.
OK'd by: dds
2004-10-07 16:21:03 +00:00
Warner Losh
e625cbacaf Add missing 'static' 2004-10-06 15:18:12 +00:00
Warner Losh
0b3a486f21 For legacy PCI bridges, limit memory allocation to the top 32MB of
RAM.  Many older, legacy bridges only allow allocation from this
range.  This only applies to devices that don't have their memory
assigned by the BIOS (since we allocate the ranges so assigned
exactly), so it should have minimal impact.

However, CardBus bridges (cbb) rarely get their resources
allocated by the BIOS, and this patch helps them greatly.  Typically
the 'bad Vcc' messages are caused by this problem.
2004-10-06 07:22:58 +00:00
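
A sketch of the clamp, assuming "top 32MB" means the high end of the 32-bit address space (the constant is illustrative):

    /* 0x100000000 - 32MB = 0xfe000000 */
    #define LEGACY_MEM_START        0xfe000000UL

    /* Restrict a legacy bridge's unassigned memory window. */
    static void
    legacy_mem_clamp(u_long *start, u_long *end)
    {
            if (*start < LEGACY_MEM_START)
                    *start = LEGACY_MEM_START;
            if (*end > 0xffffffffUL)
                    *end = 0xffffffffUL;
    }
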
Alan Cox
caa665aae3 Undo revision 1.251. This change was a performance pessimizing work-around
that is no longer required.  (In fact, it is not clear that it was ever
required in HEAD or RELENG_4, only RELENG_3 required a work-around.)  Now,
as before revision 1.251, if the preexisting PTE is invalid, pmap_enter()
does not call pmap_invalidate_page() to update the TLB(s).

Note: Even with this change, the handling of a copy-on-write fault is
inefficient, in such cases pmap_enter() calls pmap_invalidate_page() twice.

Discussed with: bde@
PR: kern/16568
2004-10-03 20:14:07 +00:00
Alan Cox
8ceb3dcb60 The physical address stored in the vm_page is page aligned. There is no
need to mask off the page offset bits.  (This operation made some sense
prior to i386/i386/pmap.c revision 1.254 when we passed a physical address
rather than a vm_page pointer to pmap_enter().)
2004-10-03 00:16:43 +00:00
Alan Cox
07b3303943 Eliminate unnecessary uses of PHYS_TO_VM_PAGE() from pmap_enter(). These
uses predate the change in the pmap_enter() interface that replaced the
page's physical address by the address of its vm_page structure.  The
PHYS_TO_VM_PAGE() was being used to compute the address of the same vm_page
structure that was being passed in.
2004-10-02 07:34:58 +00:00
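
The redundancy the two commits above remove, in sketch form (PG_FRAME masks the low-order page offset bits, which are already zero here):

    /* m is the vm_page_t now passed to pmap_enter() directly. */
    pa = VM_PAGE_TO_PHYS(m);            /* page aligned by definition */

    /* Formerly redundant steps:
     *   pa &= PG_FRAME;                -- no-op on an aligned address
     *   mpte = PHYS_TO_VM_PAGE(pa);    -- recomputes the m we already had
     */
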
Yoshihiro Takahashi
92f8f73a93 Fix BIOS default geometry on pc98.
PR:		kern/72225
Submitted by:	Hirokazu WATANABE <wnabe@par.odn.ne.jp>
2004-10-01 15:57:23 +00:00
David Schultz
46ec41ecb4 Fix the following race:
  1. Process p1 is currently being swapped in.
  2. Process p2 calls linux_ptrace(PTRACE_GETFPXREGS, p1_pid, ...)
  3. After acquiring a reference to FIRST_THREAD_IN_PROC(p1),
     p2 blocks in faultin() while p1 finishes being swapped in.
     This means p2 won't get back the lock on p1 until after p1's
     threads are runnable.
  4. After p1 is swapped in, the first thread in p1 exits.
  5. p2 now uses its dangling reference to p1's first thread.
2004-10-01 05:01:00 +00:00
Alan Cox
0a752e9843 Prevent the unexpected deallocation of a page table page while performing
pmap_copy().  This entails additional locking in pmap_copy() and the
addition of a "flags" parameter to the page table page allocator for
specifying whether it may sleep when memory is unavailable.  (Already,
pmap_copy() checks the availability of memory, aborting if it is scarce.
In theory, another CPU could, however, allocate memory between
pmap_copy()'s check and the call to the page table page allocator,
causing the current thread to release its locks and sleep.  This change
makes this scenario impossible.)

Reviewed by: tegge@
2004-09-29 19:20:40 +00:00
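
A sketch of the allocator interface change, with a hypothetical flag name (M_PTP_WAITOK) standing in for whatever the real flag is called:

    #define M_PTP_WAITOK    0x01    /* allocator may drop locks and sleep */

    static vm_page_t
    ptp_alloc_sketch(vm_offset_t va, int flags)
    {
            vm_page_t m;

            while ((m = vm_page_alloc(NULL, va >> PDRSHIFT,
                VM_ALLOC_NOOBJ | VM_ALLOC_WIRED)) == NULL) {
                    if ((flags & M_PTP_WAITOK) == 0)
                            return (NULL);  /* pmap_copy() would pass 0 */
                    VM_WAIT;                /* sleep until pages free up */
            }
            return (m);
    }
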
John Baldwin
9eba48462e Improve the panic message for a busted MP table with conflicting entries
for the same PCI interrupt.

Tested by:	Pavel Gubin pg at ie dot tusur dot ru
MFC after:	3 days
2004-09-24 18:42:54 +00:00
Roman Kurakin
9b27ceb6dc Invalidate cache after changing pte entry.
Discussed with:	jhp and njl
MFC after:	5 days
2004-09-23 16:06:27 +00:00
Matt Jacob
1db03259c9 PAE seems to work for isp, at least under minimal testing. 2004-09-23 05:26:19 +00:00
Alan Cox
a971139680 Correct a long-standing error in _pmap_unwire_pte_hold() affecting
multiprocessors.  Specifically, the error is conditioning the call to
pmap_invalidate_page() on whether the pmap is active on the current CPU.
This call must be unconditional.  Regardless of whether the pmap is active
on the CPU performing _pmap_unwire_pte_hold(), it could be active on another
CPU.  For example, a call to pmap_remove_all() by the page daemon could
result in a call to _pmap_unwire_pte_hold() with the pmap inactive on the
current CPU and active on another CPU.  In such circumstances, failing to
call pmap_invalidate_page() results in a stale TLB entry on the other CPU
that still maps the now deallocated page table page.  What happens next is
typically a mysterious panic in pmap_enter() by the other CPU, either
"pmap_enter: attempted pmap_enter on 4MB page" or "pmap_enter: pte vanished,
va: 0x%lx".  Both occur because the former page table page has been recycled
and allocated to a new purpose.  Consequently, it no longer contains zeroes.

See also Peter's i386/i386/pmap.c revision 1.448 and the related e-mail
thread last year.

Many thanks to the engineers at Sandvine for providing clear and concise
information until all of the pieces of the puzzle fell into place and
for testing an earlier patch.

MT5 Candidate
2004-09-22 05:01:48 +00:00
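
The gist of the fix, in sketch form: the shootdown must not be gated on whether the pmap is active on the local CPU, because another CPU may still have the mapping cached:

    /* Before (broken): a remote CPU keeps a stale TLB entry. */
    if (pmap_is_current(pmap))
            pmap_invalidate_page(pmap, pteva);

    /* After: always invalidate; the shootdown reaches every CPU
     * on which the pmap is active. */
    pmap_invalidate_page(pmap, pteva);
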
John Baldwin
76764432e4 - Add support for "paging" in stack trace output. That is, when you do
a stack trace from ddb, the output will pause with a '--More--' prompt
  every 18 lines.  If you hit Enter, it will print another line and prompt
  again.  If you hit space it will output another page and then prompt.
  If you hit 'q' or 'x' it will abort the rest of the stack trace.
- Fix the sparc64 userland stack trace to honor the total count of lines
  to print.  This is useful if your trace happens to walk back onto
  0xdeadc0de and gets stuck in an endless loop.

MFC after:	1 month
Tested on:	i386, alpha, sparc64
2004-09-20 19:05:32 +00:00
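
A self-contained sketch of the paging behaviour described above (db_getc() is a hypothetical stand-in for ddb's real input routine; 18 matches the page size in the commit):

    #define DB_LINES_PER_PAGE 18

    /* Returns 0 to keep printing, 1 to abort the trace. */
    static int
    db_more_sketch(int *lines_left)
    {
            int c;

            if (--*lines_left > 0)
                    return (0);             /* page not yet full */
            db_printf("--More--");
            c = db_getc();
            db_printf("\r");                /* overwrite the prompt */
            if (c == '\n')
                    *lines_left = 1;        /* Enter: one more line */
            else if (c == ' ')
                    *lines_left = DB_LINES_PER_PAGE; /* space: next page */
            else if (c == 'q' || c == 'x')
                    return (1);             /* q/x: abort the rest */
            else
                    *lines_left = DB_LINES_PER_PAGE; /* sketch: treat as space */
            return (0);
    }
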
Alan Cox
de6c3db01f Simplify the reference counting of page table pages. Specifically, use
the page table page's wired count rather than its hold count to contain
the reference count.  My rationale for this change is based on several
factors:

1. The machine-independent and pmap layers used the same hold count field
   in subtly different ways.  The machine-independent layer uses the hold
   count to implement a form of ephemeral wiring that is used by pipes,
   physio, etc.  In other words, subsystems where we wish to temporarily
   block a page from being swapped out while it is mapped into the kernel's
   address space.  Such pages are never removed from the page queues.
   Instead, the page daemon recognizes a non-zero hold count to mean "hands
   off this page."  In contrast, page table pages are never in the page
   queues; they are wired from birth to death.  The hold count was being
   used as a kind of reference count, specifically, the number of valid
   page table entries within the page.  Not surprisingly, these two
   different uses imply different synchronization rules: in the machine-
   independent layer access to the hold count requires the page queues
   lock; whereas in the pmap layer the pmap lock is required.  Thus,
   continued use by the pmap layer of vm_page_unhold(), which asserts that
   the page queues lock is held, made no sense.

2. _pmap_unwire_pte_hold() was too forgiving in its handling of the wired
   count.  An unexpected wired count on a page table page was ignored and
   the underlying page leaked.

3. In a word, microoptimization.  Using the wired count exclusively, rather
   than a combination of the wired and hold counts, makes the code slightly
   smaller and faster.

Reviewed by: tegge@
2004-09-19 21:20:01 +00:00
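
In sketch form, the scheme after this change: the page table page's wire_count is the number of valid PTEs it holds, manipulated under the pmap lock rather than the page queues lock:

    /* On installing a new valid PTE in page table page mpte: */
    mpte->wire_count++;

    /* On removing one; free the page when the last PTE goes away
     * (pmap_free_ptp() is a hypothetical name for the free path): */
    if (--mpte->wire_count == 0)
            pmap_free_ptp(pmap, mpte);
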
Alan Cox
8478ea241b Remove an outdated assertion from _pmap_allocpte(). (When vm_page_alloc()
succeeds, the page's queue field is unconditionally set to PQ_NONE by
vm_pageq_remove_nowakeup().)
2004-09-19 02:39:31 +00:00
Matt Jacob
b3940a8730 Put in a commented-out ispfw device under isp and note that this is usually
a module.
2004-09-19 00:52:22 +00:00
Alan Cox
7580b56bdc Release the page queues lock earlier in pmap_protect() and pmap_remove() in
order to reduce contention.
2004-09-18 22:56:58 +00:00
Julian Elischer
def46d58a6 Fix breakpoint handling for i386.
not sure yet about 5.x... MFC if needed.
Also fixes small problems with examining some registers and
some specific gdb transfer problems.

	As the patch says:
	This is not a pretty patch and only meant as a temporary
	fix until a better solution is committed.

PR:		i386/71715
Submitted by:	Stephan Uphoff <ups@tree.com>
MFC after:	1 week
2004-09-15 23:26:49 +00:00
Poul-Henning Kamp
7ce1979be6 Add a new function, isa_dma_init(), which returns an errno when it fails
and which takes a M_WAITOK/M_NOWAIT flag argument.

Add a compatibility isa_dmainit() macro which whines loudly if
isa_dma_init() fails.

Problem uncovered by:	tegge
2004-09-15 12:09:50 +00:00
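
A sketch of how the compatibility macro might look (the exact message text is illustrative):

    /* New interface: returns an errno, takes M_WAITOK/M_NOWAIT. */
    int isa_dma_init(int chan, u_int bouncebufsize, int flag);

    /* Old interface preserved as a loud wrapper. */
    #define isa_dmainit(chan, size) do {                            \
            if (isa_dma_init((chan), (size), M_NOWAIT) != 0)        \
                    printf("WARNING: isa_dmainit(%d, %ju) failed\n",\
                        (int)(chan), (uintmax_t)(size));            \
    } while (0)
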
Poul-Henning Kamp
5757a0b985 Remove now unused #include files. 2004-09-15 12:02:35 +00:00
Alan Cox
031102cc7b Use an atomic op to update the pte in pmap_protect(). This is to prevent
the loss of a page modified (PG_M) bit in a race between processors.

Quoting Tor:
	One scenario where the old code could cause a lost PG_M bit is a
	multithreaded linux program (or FreeBSD program using the
	linuxthreads port) where one thread was starting a subprocess.
	The thread doing fork() would call vmspace_fork(), which would then
	call vm_map_copy_entry() which would call pmap_protect() on an area
	possibly accessed by other threads.

Additionally, make the clearing of PG_M by pmap_protect() unconditional if
write permission is removed.  Previously, PG_M could persist on a read-only
unmanaged page.  That seems inconsistent and confusing.

In collaboration with: tegge@

MT5 candidate
PR: 61852
2004-09-12 20:20:40 +00:00
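
A sketch of the unconditional, atomic write-protect (i386 non-PAE types; the loop retries if another CPU changes the PTE, so a concurrently-set PG_M is never dropped):

    static void
    pte_write_protect_sketch(pt_entry_t *pte)
    {
            pt_entry_t obits, pbits;

            do {
                    obits = pbits = *pte;
                    pbits &= ~(PG_RW | PG_M);       /* always clear both */
            } while (pbits != obits &&
                !atomic_cmpset_int((volatile u_int *)pte, obits, pbits));
    }
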
Scott Long
1e7fad6b6a Revert the previous round of changes to td_pinned. The scheduler isn't
fully initialized when the pmap layer tries to call sched_pin() early in the
boot, which results in a quick panic.  Use ke_pinned instead, as was originally
done with Tor's patch.

Approved by: julian
2004-09-11 10:07:22 +00:00
Scott Long
9e0c3bdf64 Double the number of kernel page tables for amd64 and for i386/PAE. The old
value was only enough for 8GB of RAM; the new value can do 16GB.  This still
isn't optimal since it doesn't scale.  Fixing this for amd64 looks to be
fairly easy, but for i386 will be quite difficult.

Reviewed by: peter
2004-09-11 01:31:26 +00:00
Julian Elischer
5c854accc1 Make up my mind whether cpu pinning is stored in the thread structure or the
scheduler-specific extension to it. Put it in the extension, as
the implementation details of how the pinning is done needn't be visible
outside the scheduler.

Submitted by:	tegge  (of course!)   (with changes)
MFC after:	3 days
2004-09-10 22:28:33 +00:00
Bill Paul
a07bd003bf Add device driver support for the VIA Networking Technologies
VT6122 gigabit ethernet chip and integrated 10/100/1000 copper PHY.
The vge driver has been added to GENERIC for i386, pc98 and amd64,
but not to sparc or ia64 since I don't have the ability to test
it there. The vge(4) driver supports VLANs, checksum offload and
jumbo frames.

Also added the lge(4) and nge(4) drivers to GENERIC for i386 and
pc98 since I was in the neighborhood. There's no reason to leave them
out anymore.
2004-09-10 20:57:46 +00:00
John Baldwin
64621fc5af Teach the stack trace code how to step across a double fault when stepping
across frames.  Basically, if the current frame is for the
'dblfault_handler' function, then get the next %eip and %ebp values to use
from the original TSS of the thread that has the saved state when the
double fault triggered.

MFC after:	4 days
2004-09-09 20:39:31 +00:00
Alan Cox
e232eb8288 Use atomic ops in pmap_clear_ptes() to prevent SMP races that could
result in the loss of an accessed or modified bit from the pte.

In collaboration with: tegge@

MT5 candidate
2004-09-08 18:58:29 +00:00
Scott Long
50736a153b Fix a problem with tag->boundary inheritance that has existed since day one
and was propagated to nearly every platform.  The boundary of the child needs
to consider the boundary of the parent and pick the minimum of the two, not
the maximum.  However, if either is 0 then pick the appropriate one.
This bug was exposed by a recent change to ATA, which should now be fixed by
this change.  The alignment and maxsegsz tag attributes likely also need
a similar review in the near future.

This is a MT5 candidate.

Reviewed by: marcel
Submitted by: sos (in part)
2004-09-08 04:54:19 +00:00
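
The corrected inheritance rule, in sketch form: take the more restrictive (smaller) boundary, treating 0 as "no restriction":

    /* 0 means "no boundary"; otherwise the smaller value wins. */
    static bus_size_t
    inherit_boundary(bus_size_t parent, bus_size_t child)
    {
            if (parent == 0)
                    return (child);
            if (child == 0)
                    return (parent);
            return (parent < child ? parent : child);
    }
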
Scott Long
4ef90982ca Fix a cut-n-paste glitch with SCHED_4BSD. 2004-09-07 22:44:55 +00:00
Scott Long
444ba94513 Switch the default scheduler to 4BSD to match what will go into RELENG_5 soon.
It can be switched back once 5.3 is tested and released.  Also turn on
PREEMPTION as many of the stability problems with it have been fixed.

MT5: 3 days.
2004-09-07 22:37:43 +00:00
Doug Rabson
bd263739c1 Regen. 2004-09-06 09:33:30 +00:00
Doug Rabson
1bc85c0dea Add a few stub syscalls to get TransGaming's winex a bit closer to
working.
2004-09-06 09:32:59 +00:00
Julian Elischer
ed062c8d66 Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per thread scheduler
private data" structure. In order to not make the diffs too great
one is #defined as the other at this time.

The KSE (or td_sched) structure is now allocated per thread and has no
allocation code of its own.

Concurrency for a KSEGRP is now kept track of via a simple pair of counters
rather than using KSE structures as tokens.

Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.

The fields in the ksegrp structure that are to do with the scheduler's
queueing mechanisms are now moved to the kg_sched structure.
(per-ksegrp scheduler private data structure). In other words, how the
scheduler queues and keeps track of threads is no-one's business except
the scheduler's. This should allow people to write experimental
schedulers with completely different internal structuring.

A scheduler call sched_set_concurrency(kg, N) has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be allowed to be scheduled concurrently. This is also
used to enforce 'fairness' at this time, so that a ksegrp with
10000 threads cannot swamp the run queue and force out a process
with 1 thread, since the current code will not set the concurrency above
NCPU, and both schedulers will not allow more than that many
onto the system run queue at a time. Each scheduler should eventually
develop its own methods to do this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as
libkse for scope system threads. This has slightly hurt libthr's performance,
but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly.
exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.

Reviewed by:	scottl, peter
MFC after:	1 week
2004-09-05 02:09:54 +00:00
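
A sketch of the counter pair that replaces KSE tokens (field names are illustrative, not the real kg_sched layout):

    /* Per-ksegrp scheduler-private data, sketch form. */
    struct kg_sched_sketch {
            int     skg_concurrency;        /* max threads on run queues */
            int     skg_runnable;           /* threads currently queued */
    };

    /* Clamped to the CPU count, as described above. */
    static void
    sched_set_concurrency_sketch(struct kg_sched_sketch *kg_sched, int n)
    {
            kg_sched->skg_concurrency = imin(n, mp_ncpus);
    }
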
Scott Long
9923b511ed Turn PREEMPTION into a kernel option. Make sure that it's defined if
FULL_PREEMPTION is defined.  Add a runtime warning to ULE if PREEMPTION is
enabled (code inspired by the PREEMPTION warning in kern_switch.c).  This
is a possible MT5 candidate.
2004-09-02 18:59:15 +00:00
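
The dependency described above, in sketch form (the real check may differ):

    /* Sketch: FULL_PREEMPTION requires PREEMPTION. */
    #ifdef FULL_PREEMPTION
    #ifndef PREEMPTION
    #error "The FULL_PREEMPTION option requires the PREEMPTION option"
    #endif
    #endif
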
Julian Elischer
df3a834f7e Give up trying to make preemption dependent on SCHED_4BSD;
the list of breakages was getting too long.
2004-09-01 20:41:18 +00:00
Alan Cox
3c3e8d1100 Correction to the previous revision: I forgot to apply the one's complement
to a constant.  This didn't show in testing because the broken expression
produced the same result in my tests as the correct expression.
2004-09-01 19:04:09 +00:00