Commit Graph

10217 Commits

njl
ddc1dfe834 Add power profile support so that the LCD changes brightness levels based
on the AC line state.

Submitted by:	OGAWA Takaya <t-ogawa@triaez.kaisei.org>
MFC after:	1 week
2004-11-07 23:18:23 +00:00
peter
9534fe12fd Begin an invasion of i386-land by amd64.
Expose some of the amd64-specific sysarch functions to allow alternative
implementations of the %fs/%gs code for TLS, threads, etc.  USER_LDT does
not exist on the amd64 kernel, so we have to implement things other ways.
2004-11-06 03:23:36 +00:00
philip
a7b4c9ad92 Fix support for the Asus-compatible gadgets in Samsung P30/P35 laptops.
PR:		73380
Submitted by:	Sebastian Schulze Struchtrup <seb@struchtrup.com>
2004-11-05 07:24:11 +00:00
scottl
f85ea86e71 Don't use atomic ops to increment interrupt stats. This was only done on
amd64 and i386 anyways.  The stats are only kept for informational purposes.
2004-11-03 18:03:06 +00:00
scottl
595a6f02e8 Streamline busdma a bit. Inline _bus_dmamap_load_buffer, optimize some
tests, replace a passed td with a passed pmap to eliminate some dereferences.
2004-11-02 23:52:58 +00:00
andre
8b6661b126 Reduce annoying SCSI probing delay from 15 to 5 seconds in all GENERIC kernels.
Discussed on:	-current
2004-11-02 20:57:20 +00:00
philip
368170ea60 Add support for Asus M6N laptops
Submitted by:	Andreas Dieling <snow@quantentunnel.de>
2004-11-02 13:02:22 +00:00
jhb
a9860ec891 - Change the ddb paging "support" to use a variable (db_lines_per_page) to
control the number of lines per page rather than a constant.  The variable
  can be examined and changed in ddb as '$lines'.  Setting the variable to
  0 will effectively turn off paging.
- Change db_putchar() to force out pending whitespace before outputting
  newlines and carriage returns so that one can rub out content on the
  current line via '\r     \r' type strings.
- Change the simple pager to rub out the --More-- prompt explicitly when
  the routine exits.
- Add some aliases to the simple pager to make it more compatible with
  more(1): 'e' and 'j' do a single line.  'd' does half a page, and
  'f' does a full page.

MFC after:	1 month
Inspired by:	kris
2004-11-01 22:15:15 +00:00
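For illustration, a minimal sketch of driving the new pager variable from the
ddb prompt (syntax per ddb(4); this transcript is an assumption, not output
from the commit):

	db> set $lines 0	(turn paging off entirely)
	db> set $lines 23	(prompt every 23 lines)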
jhb
28f5ed05c5 Allow individual application processors to be disabled from the loader
via hints for 'lapicX'.  For example, to disable the CPU with the local
APIC ID of 7, use 'hint.lapic.7.disabled=1'.

MFC after:	1 month
2004-11-01 22:11:27 +00:00
des
22d75b597a Add TUNABLE_LONG and TUNABLE_ULONG, and use the latter for the
hw.pci.host_mem_start tunable.  Add comments to TUNABLE_INT and
TUNABLE_QUAD recommending against their use.

MFC after:	3 weeks
2004-10-31 15:50:33 +00:00
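A minimal sketch of how the new macro is typically used from kernel code (the
variable name here is illustrative, not taken from the commit):

	static u_long host_mem_start = 0x80000000;
	TUNABLE_ULONG("hw.pci.host_mem_start", &host_mem_start);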
des
871722fc59 Whitespace cleanup 2004-10-31 15:02:53 +00:00
alc
6c2b908e77 Implement per-CPU SYSMAPs, i.e., CADDR* and CMAP*, to reduce lock
contention within pmap_zero_page() and pmap_copy_page().
2004-10-29 19:10:46 +00:00
simokawa
5c26c7535f Preserve dcons(4) buffer passed by loader(8). 2004-10-24 12:37:47 +00:00
scottl
ae662d81f3 Hook the hptmv driver up to the build. 2004-10-24 08:53:40 +00:00
rwatson
b8f5639e79 Add some basic KTR tracing to busdma on i386. This is likely not
the final set of traces -- someone with more busdma background
will probably want to review and expand this, as well as port to
other platforms.  This tracing is sufficient to identify key
busdma events on i386, and in particular to draw attention to
bounce buffering events that may have a substantial performance
impact.
2004-10-23 10:34:27 +00:00
njl
addc11daaf Remove a "needs Giant" flag from the /dev/apm compat device.
MFC after:	2 weeks
2004-10-22 17:17:12 +00:00
phk
dafa1caf81 Add new function ttyinitmode() which sets our systemwide default
modes on a tty structure.

Both the ".init" and the current settings are initialized allowing
the function to be used both at attach and open time.

The function takes an argument to decide if echoing should be enabled.
Echoing should not be enabled for regular physical serial ports
unless they are consoles, in which case they should be configured
by ttyconsolemode() instead.

Use the new function throughout.
2004-10-18 21:51:27 +00:00
alc
b5407c1588 When sf_buf_alloc() replaces a virtual-to-physical mapping, it needn't
invalidate the TLB(s) if the old mapping wasn't used by the CPU.  With
network interfaces that implement checksum off-loading, the old mapping is
almost never used by the CPU, only by the device driver for setting up the
DMA operation.

Reviewed by: tegge@
2004-10-16 22:32:50 +00:00
njl
2049409c03 Let nexus print our flags for us. Also, clean up an obfuscated if stmt. 2004-10-14 22:37:51 +00:00
njl
2982f3e0a0 Print flags in the nexus for child devices. 2004-10-14 22:36:47 +00:00
njl
48e4157e91 Remove local hacks to set flags now that the device probe does this for us.
Tested on every device except sio_pci and the pc98 fd.c.  Perhaps something
similar should be done for the "disabled" hints also.

MFC after:	2 weeks
2004-10-14 22:21:59 +00:00
phk
6f926d4861 Add zero flags argument to sysctl calls. 2004-10-12 07:59:02 +00:00
phk
67217123d4 Add missing zero flag arguments to sysctl calls.
Add missing pointy hat to peter@
2004-10-12 07:58:13 +00:00
imp
a5e0614f34 Make the lower range of the memory area 0x80000000 again. Also
introduce hw.{pci,acpi}.host_mem_start tunable to change this.

MFC: ASAP
2004-10-11 21:10:23 +00:00
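A hedged example of overriding the new tunable from loader.conf(5) (the value
shown is arbitrary):

	hw.pci.host_mem_start="0x40000000"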
njl
21bb12fa0a Match surrounding style, not style(msmith). 2004-10-11 05:42:12 +00:00
njl
abd4abd5bd Move the code for halting the CPU (acpi_cpu_c1) into machdep files.
This removes the last MD portion of acpi_cpu.c.

MFC after:	2 weeks
2004-10-11 05:39:15 +00:00
imp
bc7aea493b Fix conflicts I didn't fix before I committed my busspace changes.
Noticed by: ru@ (and likely tinderbox, I haven't checked)
2004-10-11 00:58:24 +00:00
imp
14b8555370 Convert to newbus. (chances are we could now move this to dev/pbio
since I believe it is now MI, but that hasn't been done yet).

Reviewed by: dds
2004-10-10 03:26:20 +00:00
obrien
80780fc5fc style(9) 2004-10-09 08:31:21 +00:00
alc
417a40f2bf Make pte_load_store() an atomic operation in all cases, not just i386 PAE.
Restructure pmap_enter() to prevent the loss of a page modified (PG_M) bit
in a race between processors.  (This restructuring assumes the newly atomic
pte_load_store() for correct operation.)

Reviewed by: tegge@
PR: i386/61852
2004-10-08 08:23:43 +00:00
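A minimal sketch of the idea, not the committed code (names and types are
assumptions; atomic_cmpset_int(9) is the real primitive):

	static __inline pt_entry_t
	pte_swap_atomic(pt_entry_t *ptep, pt_entry_t newpte)
	{
		pt_entry_t oldpte;

		/*
		 * Retry until the swap happens against an unchanged PTE, so a
		 * concurrent hardware PG_M/PG_A update cannot be lost between
		 * the read and the write.
		 */
		do {
			oldpte = *ptep;
		} while (!atomic_cmpset_int((u_int *)ptep, oldpte, newpte));
		return (oldpte);
	}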
imp
03e8fdc421 Port pbio to HEAD.
OK'd by: dds
2004-10-07 16:21:03 +00:00
imp
f3f59ffab7 Add missing 'static' 2004-10-06 15:18:12 +00:00
imp
dc2b0adf74 For legacy PCI bridges, limit memory allocation to the top 32MB of
RAM.  Many older, legacy bridges only allow allocation from this
range.  This only applies to devices that don't have their memory
assigned by the BIOS (since we allocate the ranges so assigned
exactly), so should have minimal impact.

However, CardBus bridges (cbb) rarely get the resources
allocated by the BIOS, and this patch helps them greatly.  Typically
the 'bad Vcc' messages are caused by this problem.
2004-10-06 07:22:58 +00:00
alc
52911b00b3 Undo revision 1.251. This change was a performance pessimizing work-around
that is no longer required.  (In fact, it is not clear that it was ever
required in HEAD or RELENG_4, only RELENG_3 required a work-around.)  Now,
as before revision 1.251, if the preexisting PTE is invalid, pmap_enter()
does not call pmap_invalidate_page() to update the TLB(s).

Note: Even with this change, the handling of a copy-on-write fault is
inefficient; in such cases, pmap_enter() calls pmap_invalidate_page() twice.

Discussed with: bde@
PR: kern/16568
2004-10-03 20:14:07 +00:00
alc
c4db706631 The physical address stored in the vm_page is page aligned. There is no
need to mask off the page offset bits.  (This operation made some sense
prior to i386/i386/pmap.c revision 1.254 when we passed a physical address
rather than a vm_page pointer to pmap_enter().)
2004-10-03 00:16:43 +00:00
alc
19377ec887 Eliminate unnecessary uses of PHYS_TO_VM_PAGE() from pmap_enter(). These
uses predate the change in the pmap_enter() interface that replaced the
page's physical address by the address of its vm_page structure.  The
PHYS_TO_VM_PAGE() was being used to compute the address of the same vm_page
structure that was being passed in.
2004-10-02 07:34:58 +00:00
nyan
05d726664c Fix BIOS default geometry on pc98.
PR:		kern/72225
Submitted by:	Hirokazu WATANABE <wnabe@par.odn.ne.jp>
2004-10-01 15:57:23 +00:00
das
81fc7cf485 Fix the following race:
1. Process p1 is currently being swapped in.
  2. Process p2 calls linux_ptrace(PTRACE_GETFPXREGS, p1_pid, ...)
  3. After acquiring a reference to FIRST_THREAD_IN_PROC(p1),
     p2 blocks in faultin() while p1 finishes being swapped in.
     This means p2 won't get back the lock on p1 until after p1's
     threads are runnable.
  4. After p1 is swapped in, the first thread in p1 exits.
  5. p2 now uses its dangling reference to p1's first thread.
2004-10-01 05:01:00 +00:00
alc
118a3c283b Prevent the unexpected deallocation of a page table page while performing
pmap_copy().  This entails additional locking in pmap_copy() and the
addition of a "flags" parameter to the page table page allocator for
specifying whether it may sleep when memory is unavailable.  (Already,
pmap_copy() checks the availability of memory, aborting if it is scarce.
In theory, another CPU could, however, allocate memory between
pmap_copy()'s check and the call to the page table page allocator,
causing the current thread to release its locks and sleep.  This change
makes this scenario impossible.)

Reviewed by: tegge@
2004-09-29 19:20:40 +00:00
jhb
11511d9ece Improve the panic message for a busted MP table with conflicting entries
for the same PCI interrupt.

Tested by:	Pavel Gubin pg at ie dot tusur dot ru
MFC after:	3 days
2004-09-24 18:42:54 +00:00
rik
0ad253ffc7 Invalidate cache after changing pte entry.
Discussed with:	jhp and njl
MFC after:	5 days
2004-09-23 16:06:27 +00:00
mjacob
f2f4dbb53f PAE seems to work for isp, at least under minimal testing. 2004-09-23 05:26:19 +00:00
alc
a6bec8ad06 Correct a long-standing error in _pmap_unwire_pte_hold() affecting
multiprocessors.  Specifically, the error is conditioning the call to
pmap_invalidate_page() on whether the pmap is active on the current CPU.
This call must be unconditional.  Regardless of whether the pmap is active
on the CPU performing _pmap_unwire_pte_hold(), it could be active on another
CPU.  For example, a call to pmap_remove_all() by the page daemon could
result in a call to _pmap_unwire_pte_hold() with the pmap inactive on the
current CPU and active on another CPU.  In such circumstances, failing to
call pmap_invalidate_page() results in a stale TLB entry on the other CPU
that still maps the now deallocated page table page.  What happens next is
typically a mysterious panic in pmap_enter() by the other CPU, either
"pmap_enter: attempted pmap_enter on 4MB page" or "pmap_enter: pte vanished,
va: 0x%lx".  Both occur because the former page table page has been recycled
and allocated to a new purpose.  Consequently, it no longer contains zeroes.

See also Peter's i386/i386/pmap.c revision 1.448 and the related e-mail
thread last year.

Many thanks to the engineers at Sandvine for providing clear and concise
information until all of the pieces of the puzzle fell into place and
for testing an earlier patch.

MT5 Candidate
2004-09-22 05:01:48 +00:00
jhb
e487fab495 - Add support for "paging" in stack trace output. That is, when you do
a stack trace from ddb, the output will pause with a '--More--' prompt
  every 18 lines.  If you hit Enter, it will print another line and prompt
  again.  If you hit space it will output another page and then prompt.
  If you hit 'q' or 'x' it will abort the rest of the stack trace.
- Fix the sparc64 userland stack trace to honor the total count of lines
  to print.  This is useful if your trace happens to walk back onto
  0xdeadc0de and gets stuck in an endless loop.

MFC after:	1 month
Tested on:	i386, alpha, sparc64
2004-09-20 19:05:32 +00:00
alc
a58cafd11e Simplify the reference counting of page table pages. Specifically, use
the page table page's wired count rather than its hold count to contain
the reference count.  My rationale for this change is based on several
factors:

1. The machine-independent and pmap layers used the same hold count field
   in subtly different ways.  The machine-independent layer uses the hold
   count to implement a form of ephemeral wiring that is used by pipes,
   physio, etc.  In other words, subsystems where we wish to temporarily
   block a page from being swapped out while it is mapped into the kernel's
   address space.  Such pages are never removed from the page queues.
   Instead, the page daemon recognizes a non-zero hold count to mean "hands
   off this page."  In contrast, page table pages are never in the page
   queues; they are wired from birth to death.  The hold count was being
   used as a kind of reference count, specifically, the number of valid
   page table entries within the page.  Not surprisingly, these two
   different uses imply different synchronization rules: in the machine-
   independent layer access to the hold count requires the page queues
   lock; whereas in the pmap layer the pmap lock is required.  Thus,
   continued use by the pmap layer of vm_page_unhold(), which asserts that
   the page queues lock is held, made no sense.

2. _pmap_unwire_pte_hold() was too forgiving in its handling of the wired
   count.  An unexpected wired count on a page table page was ignored and
   the underlying page leaked.

3. In a word, microoptimization.  Using the wired count exclusively, rather
   than a combination of the wired and hold counts, makes the code slightly
   smaller and faster.

Reviewed by: tegge@
2004-09-19 21:20:01 +00:00
alc
78d7eda383 Remove an outdated assertion from _pmap_allocpte(). (When vm_page_alloc()
succeeds, the page's queue field is unconditionally set to PQ_NONE by
vm_pageq_remove_nowakeup().)
2004-09-19 02:39:31 +00:00
mjacob
5247a697af Put in a commented out ispfw device under isp and note that this is usually
a module.
2004-09-19 00:52:22 +00:00
alc
3b2a3888c0 Release the page queues lock earlier in pmap_protect() and pmap_remove() in
order to reduce contention.
2004-09-18 22:56:58 +00:00
julian
3f6dedb449 Fix breakpoint handling for i386.
not sure yet about 5.x... MFC if needed.
Also fixes small problems with examining some registers and
some specific gdb transfer problems.

	As the patch says:
	This is not a pretty patch and only meant as a temporary
	fix until a better solution is committed.

PR:		i386/71715
Submitted by:	Stephan Uphoff <ups@tree.com>
MFC after:	1 week
2004-09-15 23:26:49 +00:00
phk
1795816cf7 Add new a function isa_dma_init() which returns an errno when it fails
and which takes a M_WAITOK/M_NOWAIT flag argument.

Add compatibility isa_dmainit() macro which whines loudly if
isa_dma_init() fails.

Problem uncovered by:	tegge
2004-09-15 12:09:50 +00:00
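A hedged usage sketch; the parameter list is assumed from the pre-existing
isa_dmainit(chan, bufsize) plus the new flag argument described above, and the
softc fields are made up:

	/* attach path: fail gracefully instead of whining at runtime */
	error = isa_dma_init(sc->dma_chan, sc->buf_size, M_NOWAIT);
	if (error != 0)
		return (error);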
phk
fe0362c5d2 Remove now unused #include files. 2004-09-15 12:02:35 +00:00
alc
acf520c2b0 Use an atomic op to update the pte in pmap_protect(). This is to prevent
the loss of a page modified (PG_M) bit in a race between processors.

Quoting Tor:
	One scenario where the old code could cause a lost PG_M bit is a
	multithreaded linux program (or FreeBSD program using the
	linuxthreads port) where one thread was starting a subprocess.
	The thread doing fork() would call vmspace_fork(), which would then
	call vm_map_copy_entry() which would call pmap_protect() on an area
	possibly accessed by other threads.

Additionally, make the clearing of PG_M by pmap_protect() unconditional if
write permission is removed.  Previously, PG_M could persist on a read-only
unmanaged page.  That seems inconsistent and confusing.

In collaboration with: tegge@

MT5 candidate
PR: 61852
2004-09-12 20:20:40 +00:00
scottl
1e56230631 Revert the previous round of changes to td_pinned. The scheduler isn't
fully initialized when the pmap layer tries to call sched_pin() early in the
boot, which results in a quick panic.  Use ke_pinned instead, as was originally
done with Tor's patch.

Approved by: julian
2004-09-11 10:07:22 +00:00
scottl
136dd28a2e Double the number of kernel page tables for amd64 and for i386/PAE. The old
value was only enough for 8GB of RAM, the new value can do 16GB.  This still
isn't optimal since it doesn't scale.  Fixing this for amd64 looks to be
fairly easy, but for i386 will be quite difficult.

Reviewed by: peter
2004-09-11 01:31:26 +00:00
julian
7cae3c9d5b Make up my mind if cpu pinning is stored in the thread structure or the
scheduler specific extension to it. Put it in the extension as
the implementation details of how the pinning is done needn't be visible
outside the scheduler.

Submitted by:	tegge  (of course!)   (with changes)
MFC after:	3 days
2004-09-10 22:28:33 +00:00
wpaul
a2f7a53a34 Add device driver support for the VIA Networking Technologies
VT6122 gigabit ethernet chip and integrated 10/100/1000 copper PHY.
The vge driver has been added to GENERIC for i386, pc98 and amd64,
but not to sparc or ia64 since I don't have the ability to test
it there. The vge(4) driver supports VLANs, checksum offload and
jumbo frames.

Also added the lge(4) and nge(4) drivers to GENERIC for i386 and
pc98 since I was in the neighborhood. There's no reason to leave them
out anymore.
2004-09-10 20:57:46 +00:00
jhb
860ec87f05 Teach the stack trace code how to step across a double fault when stepping
across frames.  Basically, if the current frame is for the
'dblfault_handler' function, then get the next %eip and %ebp values to use
from the original TSS of the thread that has the saved state when the
double fault triggered.

MFC after:	4 days
2004-09-09 20:39:31 +00:00
alc
ffa5e13a9f Use atomic ops in pmap_clear_ptes() to prevent SMP races that could
result in the loss of an accessed or modified bit from the pte.

In collaboration with: tegge@

MT5 candidate
2004-09-08 18:58:29 +00:00
scottl
e23092c9a9 Fix a problem with tag->boundary inheritance that has existed since day one
and was propagated to nearly every platform.  The boundary of the child needs
to consider the boundary of the parent and pick the minimum of the two, not
the maximum.  However, if either is 0 then pick the appropriate one.
This bug was exposed by a recent change to ATA, which should now be fixed by
this change.  The alignment and maxsegsz tag attributes likely also need
a similar review in the near future.

This is a MT5 candidate.

Reviewed by: marcel
Submitted by: sos (in part)
2004-09-08 04:54:19 +00:00
scottl
f048458585 Fix a cut-n-paste glitch with SCHED_4BSD. 2004-09-07 22:44:55 +00:00
scottl
061851f776 Switch the default scheduler to 4BSD to match what will go into RELENG_5 soon.
It can be switched back once 5.3 is tested and released.  Also turn on
PREEMPTION as many of the stability problems with it have been fixed.

MT5: 3 days.
2004-09-07 22:37:43 +00:00
dfr
abfb7537fa Regen. 2004-09-06 09:33:30 +00:00
dfr
865b03d472 Add a few stub syscalls to get TransGaming's winex a bit closer to
working.
2004-09-06 09:32:59 +00:00
julian
5813d27029 Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per thread scheduler
private data" structure. In order to not make the diffs too great
one is #defined as the other at this time.

The KSE (or td_sched) structure is  now allocated per thread and has no
allocation code of its own.

Concurrency for a KSEGRP is now kept track of via a simple pair of counters
rather than using KSE structures as tokens.

Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.

The fields in the ksegrp structure that are to do with the scheduler's
queueing mechanisms are now moved to the kg_sched structure.
(per ksegrp scheduler private data structure). In other words how the
scheduler queues and keeps track of threads is no-one's business except
the scheduler's. This should allow people to write experimental
schedulers with completely different internal structuring.

A scheduler call sched_set_concurrency(kg, N) has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be allowed to be concurrently scheduled. This is also
used to enforce 'fairness' at this time so that a ksegrp with
10000 threads can not swamp the run queue and force out a process
with 1 thread, since the current code will not set the concurrency above
NCPU, and both schedulers will not allow more than that many
onto the system run queue at a time. Each scheduler should eventually develop
its own methods to do this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as
libkse for system scope threads. This has slightly hurt libthr's performance
but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly.
exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.
Reviewed by:	scottl, peter
MFC after:	1 week
2004-09-05 02:09:54 +00:00
scottl
d9af98161a Turn PREEMPTION into a kernel option. Make sure that it's defined if
FULL_PREEMPTION is defined.  Add a runtime warning to ULE if PREEMPTION is
enabled (code inspired by the PREEMPTION warning in kern_switch.c).  This
is a possible MT5 candidate.
2004-09-02 18:59:15 +00:00
julian
2c3ff98534 Give up trying to make preemption dependent on SCHED_4BSD;
the list of breakages was getting too long.
2004-09-01 20:41:18 +00:00
alc
cddfbc44fa Correction to the previous revision: I forgot to apply the ones complement
to a constant.  This didn't show in testing because the broken expression
produced the same result in my tests as the correct expression.
2004-09-01 19:04:09 +00:00
julian
ecf87479de Don't ask for this for modules. No modules need to know about preemption at the moment. 2004-09-01 18:29:57 +00:00
alc
23264b36bb Modify pmap_pte() to support its use on non-current, non-kernel pmaps
without holding Giant.
2004-09-01 18:04:22 +00:00
scottl
c0ab9d7bd6 Protect the PREEMPTION logic with #ifdef _KERNEL to fix the build. 2004-09-01 10:12:08 +00:00
julian
5a545bc8bc Only turn on preemption for 4BSD.
It's still poison for ULE.
2004-09-01 09:01:32 +00:00
julian
8354ba9e3a Give the 4bsd scheduler the ability to wake up idle processors
when there is new work to be done.

MFC after:	5 days
2004-09-01 06:42:02 +00:00
julian
e9d9514975 Give setrunqueue() and sched_add() more of a clue as to
where they are coming from and what is expected from them.

MFC after:	2 days
2004-09-01 02:11:28 +00:00
mdodd
cd74e3ee9d Clarify SDT feature word bits.
Obtained from:	 NetBSD
2004-08-31 21:51:51 +00:00
mdodd
d8deb0d121 Fix checksum calculation.
Submitted by:	 Jean Delvare <khali@linux-fr.org>
2004-08-31 21:45:30 +00:00
julian
2782d4b3fc Remove an unneeded argument.
The removed argument could trivially be derived from the remaining one.
That in turn should be the same as curthread, but it is possible that
curthread could be expensive to derive on some systems, so leave it as an argument.
Having both proc and thread as an argument just gives an opportunity for
them to get out of sync.

MFC after:	3 days
2004-08-31 07:34:54 +00:00
julian
ee753ed190 Remove sched_free_thread() which was only used
in diagnostics. It has outlived its usefulness and has started
causing panics for people who turn on DIAGNOSTIC, in what is otherwise
good code.

MFC after:	2 days
2004-08-31 06:12:13 +00:00
peter
1d9abdbe78 Kill count device support from config. I've changed the last few
remaining consumers to have the count passed as an option.  This is
i4b, pc98/wdc, and coda.

Bump configvers.h from 500013 to 600000.

Remove heuristics that tried to parse "device ed5" as 5 units of the ed
device.  This broke things like the snd_emu10k1 device, which required
quotes to make it parse right.  The no-longer-needed quotes have been
removed from NOTES, GENERIC etc.  eg, I've removed the quotes from:
   device  snd_maestro
   device  "snd_maestro3"
   device  snd_mss

I believe everything will still compile and work after this.
2004-08-30 23:03:58 +00:00
alc
6b508bc507 Remove unnecessary check for curthread == NULL. 2004-08-30 03:52:05 +00:00
des
63e4700dc5 Add a section for hardware watchdog timers, initially populated by ichwd.
MFC after:	3 days
2004-08-29 11:11:31 +00:00
obrien
0fe47008f6 s/smp_rv_mtx/smp_ipi_mtx/g
Requested by:	jhb
2004-08-28 00:49:55 +00:00
marcel
01fd13440d Move the kernel-specific logic to adjust frompc from MI to MD. For
these two reasons:
1. On ia64 a function pointer does not hold the address of the first
   instruction of a function's implementation.  It holds the address
   of a function descriptor. Hence the user(), btrap(), eintr() and
   bintr() prototypes are wrong for getting the actual code address.
2. The logic forces interrupt, trap and exception entry points to
   be laid out contiguously.  This cannot be achieved on ia64 and is
   generally just bad programming.

The MCOUNT_FROMPC_USER macro is used to set the frompc argument to
some kernel address which represents any frompc that falls outside
the kernel text range. The macro can expand to ~0U to bail out in
that case.
The MCOUNT_FROMPC_INTR macro is used to set the frompc argument to
some kernel address to represent a call to a trap or interrupt
handler.  This avoids making the trap or interrupt handler appear to
be called from everywhere in the call graph.  The macro can expand
to ~0U to prevent adjusting frompc. Note that the argument is selfpc,
not frompc.

This commit defines the macros on all architectures equivalently to
the original code in sys/libkern/mcount.c. People can take it from
here...

Compile-tested on: alpha, amd64, i386, ia64 and sparc64
Boot-tested on: i386
2004-08-27 19:42:35 +00:00
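A rough sketch of the shape such a macro can take (illustrative only; the real
per-architecture definitions differ):

	/*
	 * Fold any user-space caller into the dummy user() symbol; a port
	 * can instead expand to ~0U to bail out, as described above.
	 */
	#define	MCOUNT_FROMPC_USER(pc) \
		(((pc) < (uintfptr_t)VM_MAXUSER_ADDRESS) ? (uintfptr_t)user : (pc))
	#define	MCOUNT_FROMPC_INTR(pc)	(~0U)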
alc
d0b59a4d47 The machine-independent parts of the virtual memory system always pass a
valid pmap to the pmap functions that require one.  Remove the checks for
NULL.  (These checks have their origins in the Mach pmap.c that was
integrated into BSD.  None of the new code written specifically for
FreeBSD included them.)
2004-08-27 19:06:17 +00:00
andre
d243747d92 Always compile PFIL_HOOKS into the kernel and remove the associated kernel
compile option.  All FreeBSD packet filters now use the PFIL_HOOKS API and
thus it becomes a standard part of the network stack.

If no hooks are connected the entire packet filter hooks section and related
activities are jumped over.  This removes any performance impact if no hooks
are active.

Both OpenBSD and DragonFlyBSD have integrated PFIL_HOOKS permanently as well.
2004-08-27 15:16:24 +00:00
obrien
cb4ed35612 Fix a bug in in_cksum_hdr w/o -O.
The C code assumes that the carry bit is always kept from the previous
operation. However, the pointer indexing requires another add operation.
Thus, the carry bit from the first operation is tromped over by the
"addl" operation that ends up following it, so the "adcl" that follows
that has no effect because the carry bit is cleared before it.
The result is checksum failure on received packets.

The larger issue is that there isn't any other way of preventing the compiler
from inserting arbitrary instructions between different __asm statements (and
that the commit message in revision 1.13 of in_cksum.h is wrong on
this point).  From
http://developer.apple.com/documentation/DeveloperTools/gcc-3.3/gcc/Extended-Asm.html
	---8<---8<---8<---
	You can't expect a sequence of volatile asm instructions to remain
	perfectly consecutive. If you want consecutive output, use a single
	asm.  Also, GCC will perform some optimizations across a volatile
	asm instruction; GCC does not "forget everything" when it encounters
	a volatile asm instruction the way some other compilers do.
	---8<---8<---8<---

This change also makes the ASM code much easier to read.

PR:		69257
Submitted by:	Mike Bristow <mike@urgle.com>, Qing Li <qing.li@bluecoat.com>
2004-08-25 18:28:15 +00:00
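For illustration, a minimal sketch of the fix's principle: dependent
add-with-carry instructions must live in a single __asm statement so the
compiler cannot schedule anything that clobbers the carry flag between them
(the function and operands here are made up, not the committed code):

	static __inline u_int
	add3_with_carry(u_int a, u_int b, u_int c)
	{
		/*
		 * addl/adcl back to back inside one asm: the carry produced
		 * by the first add feeds the second add and the final adcl $0.
		 */
		__asm("addl %1, %0\n\tadcl %2, %0\n\tadcl $0, %0"
		    : "+r" (a) : "g" (b), "g" (c));
		return (a);
	}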
jhb
325fe79e0c Correct the arguments to kern_sigaltstack() as they were reversed.
PR:		kern/68079
Submitted by:	Georg-W. Koltermann gwk at rahn-koltermann dot de
2004-08-24 20:52:52 +00:00
jhb
ac08ecfc54 Regenerate after fcntl() wrappers were marked MP safe. 2004-08-24 20:24:34 +00:00
jhb
cc23ea84d0 Fix the ABI wrappers to use kern_fcntl() rather than calling fcntl()
directly.  This removes a few more users of the stackgap and also marks
the syscalls using these wrappers MP safe where appropriate.

Tested on:	i386 with linux acroread5
Compiled on:	i386, alpha LINT
2004-08-24 20:21:21 +00:00
njl
8881707b65 Be sure to always unlock the sx lock when exiting the sysctl function.
MFC after:	3 days
2004-08-24 17:53:25 +00:00
peter
326b7f663e Commit Doug White and Alan Cox's fix for the cross-ipi smp deadlock.
We were obtaining different spin mutexes (which disable interrupts after
acquisition) and spin waiting for delivery.  For example, KSE processes
do LDT operations which use smp_rendezvous, while other parts of the
system are doing things like tlb shootdowns with a different mutex.

This patch uses the common smp_rendezvous mutex for all MD home-grown
IPIs that spinwait for delivery.  Having the single mutex means that
the spinloop to acquire it will enable interrupts periodically, thus
avoiding the cross-ipi deadlock.

Obtained from: dwhite, alc
Reviewed by:   jhb
2004-08-23 21:39:29 +00:00
njl
0ffa604436 Add a BUS_GET_RESOURCE_LIST method for nexus.
MFC after:	3 days
2004-08-23 16:26:16 +00:00
sobomax
55526dc004 My recent measurement shows that CPU_DISABLE_CMPXCHG is no longer necessary
with VmWare 4.x. At least with VmWare version 4.5.2, the i386 version of
atomic_cmpset_int() is about 30 times slower than the non-i386 version.  This
makes the delta a good 5.3 MFC candidate, since otherwise it will
mislead users who run FreeBSD under modern VmWare.
2004-08-23 15:55:03 +00:00
sobomax
42e3bb3749 o Fix whitespace bug introduced in the previous commit.
Submitted by:	ru

o Simplify p4tcc_power_profile().

Submitted by:	maxim
2004-08-23 10:09:29 +00:00
sobomax
fe731a9733 o Extend boot output: print out minimum/maximum performance values and the number
of performance steps available;

o similarly to Enhanced SpeedStep driver, export list of all available steps
  via hw.p4tcc.cpuperf_levels sysctl.
2004-08-23 09:47:56 +00:00
alc
d421a19d6e Properly free the temporary sf_buf in uiomove_fromphys() if a copyin or
copyout fails.

Obtained from: DragonFlyBSD
2004-08-21 18:50:34 +00:00
obrien
311d4dd9cc Unconditionally support the AMD64 GART HW. 2004-08-19 20:58:24 +00:00
njl
50a2c589ee Disable interrupts after using pmap_enter() to add the identity mapping.
Since pmap_enter() calls pmap_invalidate_page(), which needs interrupts
enabled in the SMP case, we defer the disable to right before saving the
register context.  This has been incorrect for about a year but caused no
real problems because the identity page never actually replaces a previously
mapped page and suspend/resume on SMP systems has been uncommon.

Tested by:	sos
MFC after:	3 days
2004-08-19 18:48:17 +00:00
gibbs
3faf671104 Modify the "legacy bus" to pass all resource allocations through to its
parent rather than track resources locally.  The original code
was incomplete in that it would only honor requests for resources
that already exist in its resource list.  This prevented many ISA
identify routines from allocating temporary resources.  Passing
the requests up to legacy's parent loses no functionality and
allows these requests to succeed.

Reviewed by: imp, jhb
Approved by: RE
2004-08-16 21:55:29 +00:00
obrien
963044797e AMD64 on-CPU GART support.
This also applies to AMD64 HW running 'i386' OS.

Submitted by:	Jung-uk Kim <jkim@niksun.com>
Integration by:	obrien
2004-08-16 12:25:48 +00:00
obrien
01b52fbc3e Increase the scaling of VM_KMEM_SIZE_MAX.
Submitted by:	alc
2004-08-16 08:35:22 +00:00
tjr
e6930a385c Add a new type, l_uintptr_t, which is an unsigned integer type with the
same width as a pointer under Linux. Add two new macros, PTRIN and PTROUT,
which convert between l_uintptr_t and native pointers.
2004-08-16 07:05:44 +00:00
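A minimal sketch of the idea (definitions are assumptions, shown for the case
where Linux pointers are 32 bits wide):

	typedef uint32_t	l_uintptr_t;	/* pointer-width Linux integer */
	#define	PTRIN(v)	((void *)(uintptr_t)(v))	/* Linux value -> native pointer */
	#define	PTROUT(v)	((l_uintptr_t)(uintptr_t)(v))	/* native pointer -> Linux value */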
rwatson
6f9a1cc98b Preemptive anti-footshooting: cause a #error if MP_WATCHDOG is compiled
with SCHED_ULE.
2004-08-15 20:32:40 +00:00
rwatson
5faa346ee2 Spell MP_WATCHDIG right: I fixed the build without MP_WATCHDOG after
testing MP_WATCHDOG, and used an incorrect ifdef.
2004-08-15 19:57:14 +00:00
rwatson
abf6ea2973 Add an "options MP_WATCHDOG" to i386. This option allows one of the
logical CPUs on a system to be used as a dedicated watchdog to cause a
drop to the debugger and/or generate an NMI to the boot processor if
the kernel ceases to respond.  A sysctl enables the watchdog running
out of the processor's idle thread; a callout is launched to reset a
timer in the watchdog.  If the callout fails to reset the timer for ten
seconds, the watchdog will fire.  The sysctl allows you to select which
CPU will run the watchdog.

A sample "debug.leak_schedlock" is included, which causes a sysctl to
spin holding sched_lock in order to trigger the watchdog.  On my Xeons,
the watchdog is able to detect this failure mode and break into the
debugger, which cannot otherwise be done without an NMI button.

This option does not currently work with sched_ule due to ule's push
notion of scheduling, similar to machdep.hlt_logical_cpus failing to
work with that scheduler.

On face value, this might seem somewhat inefficient, but there are a
lot of dual-processor Xeons with HTT around, so using one as a watchdog
for testing is not as inefficient as one might fear.
2004-08-15 18:02:09 +00:00
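A hedged usage sketch (the sysctl name is assumed from the description above,
not quoted from the commit):

	options	MP_WATCHDOG		# in the kernel configuration file

	# at runtime, dedicate CPU 1 to the watchdog:
	sysctl debug.watchdog=1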
njl
5e14731c01 MPSAFE locking
* Serialize access to the sysctl routines and the notify handler
* Assert that the sx lock is held in any functions they call.
* Note that recursively calling to re-enable the hotkeys is sub-optimal.
2004-08-13 06:22:35 +00:00
njl
86267b0197 MPSAFE locking
* Serialize access to the sysctl routines and the notify handler
* Assert that the sx lock is held in any functions they call.
2004-08-13 06:22:31 +00:00
njl
8fcef8f766 MPSAFE locking
* Serialize access to the sysctl routines and the notify handler.
2004-08-13 06:22:29 +00:00
marcel
fbbaea5f90 Add __elfN(dump_thread). This function is called from __elfN(coredump)
to allow dumping per-thread machine specific notes. On ia64 we use this
function to flush the dirty registers onto the backingstore before we
write out the PRSTATUS notes.

Tested on: alpha, amd64, i386, ia64 & sparc64
Not tested on: arm, powerpc
2004-08-11 02:35:06 +00:00
rwatson
36e89f928c Add ADAPTIVE_GIANT to GENERIC on i386, with the intent of making it
a standard configuration similar to [NO_]ADAPTIVE_MUTEXES.  This
feature causes Giant to be included in the set of mutexes adaptively
spun on.  It appears to have a positive effect on performance on SMP
across several workloads, including measurements of a 16% improvement
on buildworld, and 30%+ improvement for MySQL using the supersmack
benchmark with Giant over the network stack; a 6% improvement without
Giant on the network stack (as a result of less giant contention).
2004-08-11 01:34:18 +00:00
imp
83dcb02951 Remove commented out pcic driver. It is too broken to work (even if you
fix the obvious bugs, nastier ones reside below the surface), and having
it commented out here just encourages people to try it.

# I'm not removing it from the base system, yet.
2004-08-09 17:36:19 +00:00
alc
1ac3389eba With the advent of pmap locking it makes sense for pmap_copy() to be less
forgiving about inconsistencies in the source pmap.  Also, remove a new-
line character terminating a nearby panic string.
2004-08-08 00:31:58 +00:00
rwatson
8096ecab6e Generate KTR trace records for syscall enter and exit in i386 system
calls.  Note that the information included is a bit different from the
existing KTR traces generated on powerpc, as I'm primarily interested
in kernel context (thread, syscall #, proc, etc), not the user
arguments to the system call.  Some convergence would be useful here.
2004-08-06 21:56:26 +00:00
njl
708c9dab9f Remove the attempt to cache the previous page mapped at our identity
location (for the wake code).  It should not be needed since we don't
map other pages at the same location and if there was an old mapping, it
would be restored by a fault.  The old code had serious problems, namely
that it was restoring the new page it had just removed (not opage) and
it could only guess at the right protection (since there's no
pmap_extract_protect function).  Thanks to Alan Cox for explaining much
of this to me.

Also, remove a commented-out initializecpu() call since it is not needed.
Restoring the cpu context is better than attempting to init from scratch.

Reviewed by:	alc (earlier version)
2004-08-05 06:29:12 +00:00
rwatson
0a811906a4 Move definition of mem_range_softc from mp_machdep.c to machdep.c so
that it is defined for non-SMP builds, not just SMP ones.
2004-08-05 00:32:08 +00:00
jhb
fb7bd65f3f Remove a potential deadlock on i386 SMP by changing the lazypmap ipi and
spin-wait code to use the same spin mutex (smp_tlb_mtx) as the TLB ipi
and spin-wait code snippets so that you can't get into the situation of
one CPU doing a TLB shootdown to another CPU that is doing a lazy pmap
shootdown each of which are waiting on each other.  With this change, only
one of the CPUs would do an IPI and spin-wait at a time.
2004-08-04 20:31:19 +00:00
markm
124b176fef Fix module builds for i386 and amd64. 2004-08-04 18:30:31 +00:00
alc
97dc3be9d1 Post-locking clean up/simplification, particularly, the elimination of
vm_page_sleep_if_busy() and the page table page's busy flag as a
synchronization mechanism on page table pages.

Also, relocate the inline pmap_unwire_pte_hold() so that it can be used
to shorten _pmap_unwire_pte_hold() on alpha and amd64.  This places
pmap_unwire_pte_hold() next to a comment that more accurately describes
it than _pmap_unwire_pte_hold().
2004-08-04 18:04:44 +00:00
philip
b2d488a77d Unbreak LINT by making sure that method is always defined.
Submitted by:	roam
Pointy hat to:	philip
2004-08-04 14:29:22 +00:00
philip
11953c7462 Further cleanup: merge the three led toggling functions
into a single general function to handle all leds.

Approved by:	njl
2004-08-03 22:37:09 +00:00
njl
f9477f2ed8 Use the acpi_{Get,Set}Integer functions instead of rolling custom ones.
Clean up return path of each function to have a single exit point.  This
reduces diffs against the MPSAFE tree.
2004-08-03 21:17:36 +00:00
markm
f516045149 Making a loadable null.ko for /dev/(null|zero) proved rather
unpopular, so remove this (mis)feature.

Encouragement provided by:	jhb (and others)
2004-08-03 19:24:54 +00:00
mux
35780dc21a Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait().  It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by:	jhb
Reviewed by:	jhb
2004-08-03 18:44:27 +00:00
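A minimal sketch of the pattern described (the per-arch definitions and the
spin loop with its 'ready' flag are illustrative assumptions):

	/* MD header: a PAUSE hint on x86, a no-op everywhere else. */
	#if defined(__i386__) || defined(__amd64__)
	#define	cpu_spinwait()	ia32_pause()
	#else
	#define	cpu_spinwait()	/* nothing */
	#endif

	/* caller: spin politely until another CPU sets the flag */
	while (atomic_load_acq_int(&ready) == 0)
		cpu_spinwait();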
markm
3ac428cadc Sort includes; minor whitespace. 2004-08-02 20:32:56 +00:00
dfr
090a3e72f1 Add definitions for TLS relocations. 2004-08-02 19:12:17 +00:00
scottl
fb7f90d7ec Optimize intr_execute_handlers() by combining the pic_disable_source() and
pic_eoi_source() into one call.  This halves the number of spinlock operations
and indirect function calls in the normal case of handling a normal (ithread)
interrupt.  Optimize the atpic and ioapic drivers to use inlines where
appropriate in supporting the intr_execute_handlers() change.

This knocks 900ns, or roughly 1350 cycles, off of the time spent servicing an
interrupt in the common case on my 1.5GHz P4 uniprocessor system.  SMP systems
likely won't see as much of a gain due to the ioapic being more efficient than
the atpic.  I'll investigate porting this to amd64 soon.

Reviewed by:	jhb
2004-08-02 15:31:10 +00:00
markm
b8bae5430c Add the I/O device for those architectures that have it. 2004-08-01 19:37:34 +00:00
markm
0d49cf05ec Remove local hack that was not supposed to be committed.
Spotted by:	Antoine Brodin - antoine dot brodin at laposte dot net
2004-08-01 18:12:25 +00:00
scottl
30cf65f35d Turn off PREEMPTION by default while it gets debugged. It's been causing
4 weeks of problems including deadlocks and instant panics.  Note that the
real bugs are likely in the scheduler.
2004-08-01 14:31:45 +00:00
markm
a6c822020d Break out the MI part of the /dev/[k]mem and /dev/io drivers into
their own directory and module, leaving the MD parts in the MD
area (the MD parts _are_ part of the modules). /dev/mem and /dev/io
are now loadable modules, thus taking us one step further towards
a kernel created entirely out of modules. Of course, there is nothing
preventing the kernel from having these statically compiled.
2004-08-01 11:40:54 +00:00
alc
add997e1ff Add pmap locking to pmap_object_init_pt(). 2004-07-31 06:42:05 +00:00
alc
ff80a7c7f2 Advance the state of pmap locking on alpha, amd64, and i386.
- Enable recursion on the page queues lock.  This allows calls to
   vm_page_alloc(VM_ALLOC_NORMAL) and UMA's obj_alloc() with the page
   queues lock held.  Such calls are made to allocate page table pages
   and pv entries.
 - The previous change enables a partial reversion of vm/vm_page.c
   revision 1.216, i.e., the call to vm_page_alloc() by vm_page_cowfault()
   now specifies VM_ALLOC_NORMAL rather than VM_ALLOC_INTERRUPT.
 - Add partial locking to pmap_copy().  (As a side-effect, pmap_copy()
   should now be faster on i386 SMP because it no longer generates IPIs
   for TLB shootdown on the other processors.)
 - Complete the locking of pmap_enter() and pmap_enter_quick().  (As of now,
   all changes to a user-level pmap on alpha, amd64, and i386 are performed
   with appropriate locking.)
2004-07-29 18:56:31 +00:00
phk
98d8f3741c Move a relic to its correct location(s): Put nfs diskless initialization
calls with the code they call.  (Yet another example of mindless copy&paste).
2004-07-28 21:54:57 +00:00
kan
7a2c503e1b Avoid casts as lvalues. While here, avoid storing 32bit quantities in
16bit locations.
2004-07-28 06:32:28 +00:00
rwatson
4ab080249a Pass a thread argument into cpu_critical_{enter,exit}() rather than
dereference curthread.  It is called only from critical_{enter,exit}(),
which already dereferences curthread.  This doesn't seem to affect SMP
performance in my benchmarks, but improves MySQL transaction throughput
by about 1% on UP on my Xeon.

Head nodding:	jhb, bmilekic
2004-07-27 16:41:01 +00:00
tjr
b9200ab932 Use file2c instead of a combination of hexdump, sed and shell script to
generate the wakecode[] array from acpi_wakecode.bin. The old method was
not safe in multibyte locales.
2004-07-27 01:33:27 +00:00
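A hedged sketch of the kind of build rule this enables (file names follow the
commit; the exact arguments and output file are assumptions):

	file2c 'static char wakecode[] = {' '};' \
	    < acpi_wakecode.bin > acpi_wakedata.h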
njl
91e938aef3 Get the acpi softc via the devclass, not by caching the device. Replace
apm_softc with a single integer since the whole softc is not used.
2004-07-24 22:41:30 +00:00
njl
a24969480f Whitespace cleanup and move static variables together. 2004-07-24 20:40:02 +00:00
njl
a247fc5f3a Remove unneeded parens and fix whitespace. 2004-07-24 20:39:25 +00:00
scottl
aafe09f586 Arg! Revert local changes that were accidentally included in the previous
version.
2004-07-22 15:55:03 +00:00
scottl
8301474d15 Don't count needed bounce pages if loading a buffer that was created with
bus_dmamem_alloc()

Submitted by: harti
2004-07-22 15:46:51 +00:00
cognet
dc7f443ca4 Using NULL as a malloc type when calling contigmalloc() is wrong, so introduce
a new malloc type, and use it.
2004-07-21 15:52:34 +00:00
nyan
3c92d13fb3 Add the ACPI Panasonic extras driver.
Submitted by:	OGAWA Takaya <t-ogawa@triaez.kaisei.org> and nyan
2004-07-21 14:47:54 +00:00
marcel
f77d7b9449 Unify db_stack_trace_cmd(). All it did was look up the thread given
the thread ID and call db_trace_thread().
Since arm has all the logic in db_stack_trace_cmd(), rename the
new DB_COMMAND function to db_stack_trace to avoid conflicts on
arm.
While here, have db_stack_trace parse its own arguments so that
we can use a more natural radix for IDs. If the ID is not a thread
ID, or more precisely when no thread exists with the ID, try if
there's a process with that ID and return the first thread in it.
This makes it easier to print stack traces from the ps output.

requested by: rwatson@
tested on: amd64, i386, ia64
2004-07-21 05:07:09 +00:00
davidxu
069d2d3959 Mark the end of frames for KSE system scope threads; without this
change, the debugger will dump a weird stack backtrace.
2004-07-20 01:38:59 +00:00
jhb
d4ea20b7e1 As a temporary hack, turn off deferred preemptions that are the result of
a fast interrupt handler doing an swi_sched().  This fixed the lockups I
saw on my laptop when using xmms in KDE and on rwatson's MySQL benchmarks
on SMP.  This will eventually be removed and/or modified when I figure out
what the root cause is and fix that.
2004-07-19 16:37:47 +00:00
das
86c293bf54 Make FLT_ROUNDS correctly reflect the dynamic rounding mode. 2004-07-19 08:17:25 +00:00
silby
497e763c75 Add a #error requiring KDB if DDB is specified. (This can probably be
relocated to a better place, if one exists.)
2004-07-19 02:46:34 +00:00
alc
8fe12ef8db Utilize pmap_pte_quick() rather than pmap_pte() in pmap_protect(). The
reason being that pmap_pte_quick() requires the page queues lock, which is
already held, rather than Giant.
2004-07-18 21:19:10 +00:00
maxim
531ff1b33d In -CURRENT pseudo devices are not statically assigned at compile time;
remove a stale comment.

PR:		kern/62285
2004-07-18 09:03:12 +00:00
alc
8dfcfd7253 Remedy my omission of one change in the previous revision: pmap_remove()
must pin the current thread in order to call pmap_pte_quick().
2004-07-17 23:44:59 +00:00