Commit Graph

4663 Commits

Author SHA1 Message Date
Stephan Uphoff
2053c12705 Remove mpte optimization from pmap_enter_quick().
There is a race with the current locking scheme and removing
it should have no measurable performance impact.
This fixes page faults leading to panics in pmap_enter_quick_locked()
on amd64/i386.

Reviewed by: alc,jhb,peter,ps
2006-06-15 01:01:06 +00:00
Alexander Leidinger
4946fe7c4d regen after MFP4 (soc2006/rdivacky_linuxolator) of syscalls.master
P4-Changes:	similar to 98673 and 98675 but regenerated locally
Sponsored by:	Google SoC 2006
Submitted by:	rdivacky
2006-06-13 18:48:30 +00:00
Alexander Leidinger
c8b579c182 MFP4 (soc2006/rdivacky_linuxolator)
Update of syscalls.master:
	o	Add several new dummy syscalls (268-310)
	o	Synchronize the amd64 syscalls.master with the i386 one
	o	Add auditing to the amd64 syscalls.master
	o	Change the auditing type for the lstat syscall (bugfix). [1]

P4-Changes:	98672, 98674
Noticed by:	rwatson [1]
Sponsored by:	Google SoC 2006
Submitted by:	rdivacky
2006-06-13 18:43:55 +00:00
David Xu
b41f1452d9 Add scheduler CORE, work I did half a year ago and recently
picked up again.  The scheduler is forked from ULE, but the algorithm
used to detect an interactive process is almost completely different
from ULE's; it comes from the Linux paper "Understanding the
Linux 2.6.8.1 CPU Scheduler", although I still use the same word
"score" as a priority boost, as in the ULE scheduler.

Briefly, the scheduler has the following characteristics:
1. A timesharing process's nice value is strictly respected; the
   timeslice and the interactivity-detection algorithm are based
   on the nice value.
2. Per-CPU scheduling queues and load balancing.
3. O(1) scheduling.
4. Some CPU affinity code in the wakeup path.
5. Support for POSIX SCHED_FIFO and SCHED_RR.
Unlike the 4BSD and ULE schedulers, which use the fuzzy RQ_PPQ grouping,
this scheduler uses 256 priority queues.  Unlike ULE, which uses both
pull and push, this scheduler uses only the pull method; the main reason
is to let a relatively idle CPU do the work.  Currently, however, the
whole scheduler is protected by the big sched_lock, so the benefit is
not visible; it can actually be worse than nothing, because all other
CPUs are locked out while we do the balancing work, a problem the 4BSD
scheduler does not have.
The scheduler does not support hyperthreading very well; in fact, it
does not distinguish between physical and logical CPUs, which should be
improved in the future.  The scheduler has a priority-inversion problem
on MP machines; it is not good for realtime scheduling and can cause a
realtime process to starve.  Even so, MySQL super-smack seems to run
better on my Pentium-D machine when using libthr, on both UP and SMP
kernels.
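
As an illustration only (not the committed code), the O(1) pick over 256
priority queues can be done with a bitmap of non-empty queues; all names
below are invented for this sketch:

    /* Sketch: O(1) selection of the highest-priority non-empty queue
     * out of 256, using a bitmap with one bit per queue. */
    #include <stdio.h>
    #include <stdint.h>

    #define NQUEUES 256
    #define BPW     64                        /* bits per bitmap word */

    static uint64_t qbits[NQUEUES / BPW];     /* bit q set => queue q non-empty */

    static void mark_nonempty(int q) { qbits[q / BPW] |= (uint64_t)1 << (q % BPW); }
    static void mark_empty(int q)    { qbits[q / BPW] &= ~((uint64_t)1 << (q % BPW)); }

    static int pick_queue(void)               /* lowest index = highest priority */
    {
            for (int w = 0; w < NQUEUES / BPW; w++)
                    if (qbits[w] != 0)
                            return (w * BPW + __builtin_ctzll(qbits[w]));
            return (-1);                      /* nothing runnable */
    }

    int main(void)
    {
            mark_nonempty(200);
            mark_nonempty(17);
            printf("%d\n", pick_queue());     /* 17 */
            mark_empty(17);
            printf("%d\n", pick_queue());     /* 200 */
            return (0);
    }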
2006-06-13 13:12:56 +00:00
John Baldwin
e3d7caf487 Enable a few more things in x86 NOTES to get broader LINT coverage:
- Turn on iwi(4), ipw(4), and ndis(4) on amd64 and i386.
- Turn on ral(4) and ural(4) on i386, pc98, and amd64.
2006-06-12 20:38:17 +00:00
Alan Cox
b74a62d602 Don't invalidate the TLB in pmap_qenter() unless the old mapping was valid.
Most often, it isn't.

Reviewed by: tegge@
2006-06-12 20:05:27 +00:00
Warner Losh
78878cef94 Add the ability to subset the devices that UART pulls in. This allows
ARM kernels to compile without all the extras that don't appear, at least
not in the flavors of ARM I deal with.  This helps us save about 100k.

If I've botched the available devices on a platform, please let me
know and I'll correct ASAP.
2006-06-12 04:21:50 +00:00
Alan Cox
ce142d9ec0 Introduce the function pmap_enter_object(). It maps a sequence of resident
pages from the same object.  Use it in vm_map_pmap_enter() to reduce the
locking overhead of premapping objects.

Reviewed by: tegge@
2006-06-05 20:35:27 +00:00
Mike Silbersack
f25d341cfb After much discussion with mjacob and scottl, change bus_dmamem_alloc so
that it just warns the user with a printf when it misaligns a piece
of memory that was requested through a busdma tag.

Some drivers (such as mpt, and probably others) were asking for alignments
that could not be satisfied, but as far as driver operation was concerned,
that did not matter.  On the theory that other drivers will fall into
this same category, we agreed that panicking or making the allocation
fail would cause more hardship than is necessary.  The printf should
be sufficient motivation to get the driver glitch fixed.
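
A minimal sketch of the behavior described, assuming a tag field named
'alignment' and an illustrative message (neither is the committed diff):

    /* Sketch: warn, rather than panic or fail, when the memory returned
     * for a busdma allocation does not meet the tag's alignment. */
    if (dmat->alignment > 1 &&
        ((uintptr_t)*vaddr & (dmat->alignment - 1)) != 0)
            printf("bus_dmamem_alloc failed to align memory properly.\n");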
2006-06-01 04:49:29 +00:00
Matt Jacob
aa57a87a56 Turn the panic on not being able to meet alignment constraints
in bus_dmamem_alloc into the more reasonable EINVAL return.

Also, reclaim memory allocated but then not used if we had
an error return.
2006-05-31 00:37:56 +00:00
Mike Silbersack
3d31890277 MFi386 rev 1.78:
Add a quick hack to ensure that bus_dmamem_alloc properly aligns
small allocations with large alignment requirements.

Add a panic to detect cases where we've still failed to properly align.
2006-05-28 18:31:32 +00:00
Maxim Sobolev
aa1807d5d6 Move clock_lock prototype into <machine/clock.h>, where it is more
appropriate.

Discussed with:	jhb
2006-05-19 18:53:50 +00:00
Marius Strobl
8df071afd9 Add le(4). I could actually only test it on alpha, i386 and sparc64 but
given that this includes the more problematic platforms I see no reason
why it shouldn't also work on amd64 and ia64.
2006-05-17 20:45:45 +00:00
Poul-Henning Kamp
c40da00ca3 Since DELAY() was moved, most <machine/clock.h> #includes have been
unnecessary.
2006-05-16 14:37:58 +00:00
Ruslan Ermilov
155d9f6a98 Kill more references to lnc(4).
Submitted by:	grep(1)
2006-05-16 12:15:39 +00:00
Marius Strobl
055abe9af2 Remove some remnants of lnc(4). 2006-05-14 18:49:25 +00:00
Poul-Henning Kamp
5405ab4889 Clean out sysctl machdep.* related defines.
The cmos clock related stuff should really be in MI code.
2006-05-11 17:29:25 +00:00
Alexander Leidinger
ba5bd0001c regen (linux rt_sigpending) 2006-05-10 18:19:51 +00:00
Alexander Leidinger
17138b619c Implement rt_sigpending in the linuxolator.
PR:		92671
Submitted by:	Markus Niemistö <markus.niemisto@gmx.net>
2006-05-10 18:17:29 +00:00
Doug Ambrisko
32397ce071 Add in linsysfs, a Linux 2.6-like sys filesystem to pacify the Linux
LSI MegaRAID SAS utility.

Sponsored by:		IronPort Systems
Man page help from:	brueffer
2006-05-09 22:27:01 +00:00
Doug Ambrisko
387196bf56 Forgot the amd/linux32 part since sys/*/linux didn't match :-(
Pointed out by:	Alexander (thanks)
2006-05-06 17:26:45 +00:00
Sam Leffler
57d6ae0689 add ath and wlan crypto support
MFC after:	1 month
2006-05-03 18:15:36 +00:00
Scott Long
8d59dfff98 Allow bus_dmamap_load() to pass ENOMEM back to the caller. This puts it into
conformance with the mbuf and uio load routines.  ENOMEM can only happen
when BUS_DMA_NOWAIT is passed in, and thus deferrals are disabled.  I don't
like doing this, but fixing it fixes assumptions in other important drivers,
which is a net benefit for now.
2006-05-03 04:14:17 +00:00
John Baldwin
2b8a339c7e Add various constants for the PAT MSR and the PAT PTE and PDE flags.
Initialize the PAT MSR during boot to map PAT type 2 to Write-Combining
(WC) instead of Uncached (UC-).

MFC after:	1 month
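
A hedged sketch of that initialization using the usual amd64 MSR helpers;
the constant names follow <machine/specialreg.h> conventions, but the
committed code may differ:

    /* Sketch: reprogram PAT entry 2 from Uncached (UC-) to Write-Combining. */
    uint64_t pat;

    pat = rdmsr(MSR_PAT);
    pat &= ~(0xffULL << 16);                       /* clear PAT entry 2 */
    pat |= (uint64_t)PAT_WRITE_COMBINING << 16;    /* set it to WC */
    wrmsr(MSR_PAT, pat);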
2006-05-01 22:07:00 +00:00
John Baldwin
4ac60df584 Add a new 'pmap_invalidate_cache()' to flush the CPU caches via the
wbinvd() instruction.  This includes a new IPI so that all CPU caches on
all CPUs are flushed for the SMP case.

MFC after:	1 month
2006-05-01 21:36:47 +00:00
Alan Cox
e9ba21a5bb Eliminate unnecessary, recursive acquisitions and releases of the page
queues lock by free_pv_entry() and pmap_remove_pages().

Reduce the scope of the page queues lock in pmap_remove_pages().
2006-04-29 00:59:15 +00:00
Marcel Moolenaar
64220a7e28 Rewrite of puc(4). Significant changes are:
o  Properly use rman(9) to manage resources. This eliminates the
   need for puc-specific hacks in rman. It also allows devinfo(8)
   to be used to find out the specific assignment of resources to
   serial/parallel ports.
o  Compress the PCI device "database" by optimizing for the common
   case and to use a procedural interface to handle the exceptions.
   The procedural interface also generalizes the need to setup the
   hardware (program chipsets, program clock frequencies).
o  Eliminate the need for PUC_FASTINTR. Serdev devices are fast by
   default and non-serdev devices are handled by the bus.
o  Use the serdev I/F to collect interrupt status and to handle
   interrupts across ports in priority order.
o  Sync the PCI device configuration to include devices found in
   NetBSD and not yet merged to FreeBSD.
o  Add support for Quatech 2, 4 and 8 port UARTs.
o  Add support for a couple dozen Timedia serial cards as found
   in Linux.
2006-04-28 21:21:53 +00:00
Scott Long
27aafcda76 Enable the rr232x driver for amd64. 2006-04-28 05:23:10 +00:00
Alan Cox
7dece6c7d9 In general, bits in the page directory entry (PDE) and the page table
entry (PTE) have the same meaning.  The exception to this rule is the
eighth bit (0x080).  It is the PS bit in a PDE and the PAT bit in a
PTE.  This change avoids the possibility that pmap_enter() confuses a
PAT bit with a PS bit, avoiding a panic().

Eliminate a diagnostic printf() from the i386 pmap_enter() that serves
no current purpose, i.e., I've seen no bug reports in the last two
years that are helped by this printf().

Reviewed by: jhb
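
For reference, a sketch of the overlapping definitions involved (the values
are the architectural bit positions; the macro names follow the pmap headers
but are shown here only for illustration):

    #define PG_PS        0x080       /* in a PDE: 2MB page-size bit */
    #define PG_PTE_PAT   0x080       /* in a PTE: the PAT bit (same position) */
    #define PG_PDE_PAT   0x1000      /* in a PDE: the PAT bit lives at bit 12 */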
2006-04-27 21:26:25 +00:00
Peter Wemm
0be8b8cee8 Move vm.pmap.pv_entry_count out from the PV_STATS ifdefs. It is always
available and is a real counter, not a statistic.
2006-04-26 21:34:07 +00:00
Jung-uk Kim
daea0aad84 Check if reported HTT cores are physical cores. This commit does not
affect AMD CPUs at all because HTT bit is disabled earlier.  Intel
multicore CPUs and ULE scheduler may be affected.
2006-04-25 00:06:37 +00:00
Jung-uk Kim
091c9b4961 Add another Intel CPU feature flag, xTPR (Send Task Priority Messages). 2006-04-24 22:56:57 +00:00
Jung-uk Kim
cf24d86bcc Check if deterministic cache parameters leaf is valid before use. 2006-04-24 22:23:52 +00:00
Colin Percival
8b4553119e Adjust dangerous-shared-cache-detection logic from "all shared data
caches are dangerous" to "a shared L1 data cache is dangerous".  This
is a compromise between paranoia and performance: Unlike the L1 cache,
nobody has publicly demonstrated a cryptographic side channel which
exploits the L2 cache -- this is harder due to the larger size, lower
bandwidth, and greater associativity -- and prohibiting shared L2
caches turns Intel Core Duo processors into Intel Core Solo processors.

As before, the 'machdep.hyperthreading_allowed' sysctl will allow even
the L1 data cache to be shared.

Discussed with:	jhb, scottl
Security:	See FreeBSD-SA-05:09.htt for background material.
2006-04-24 21:17:01 +00:00
Xin LI
3b28c0c6f9 Move AHC_REG_PRETTY_PRINT and AHD_REG_PRETTY_PRINT below
their corresponding devices.
2006-04-24 08:44:34 +00:00
Peter Wemm
9bbf94367c Oops. Minidumps were developed on 6.x, without the small pv entry code.
Add some strategic dump_add_page()/dump_drop_page() lines to include pv
chunks in the minidumps - these operate in the direct map region like UMA.
2006-04-21 04:50:18 +00:00
Peter Wemm
c0345a84aa Introduce minidumps. Full physical memory crash dumps are still available
via the debug.minidump sysctl and tunable.

Traditional dumps store all physical memory.  This was once a good thing
when machines had a maximum of 64M of ram and 1GB of kvm.  These days,
machines often have many gigabytes of ram and a smaller amount of kvm.
libkvm+kgdb don't have a way to access physical ram that is not mapped
into kvm at the time of the crash dump, so the extra ram being dumped
is mostly wasted.

Minidumps invert the process.  Instead of dumping physical memory
in order to guarantee that all of kvm's backing is dumped, minidumps
instead dump only memory that is actively mapped into kvm.

amd64 has a direct map region that things like UMA use.  Obviously we
cannot dump all of the direct map region because that is effectively
an old style all-physical-memory dump.  Instead, introduce a bitmap
and two helper routines (dump_add_page(pa) and dump_drop_page(pa)) that
allow certain critical direct map pages to be included in the dump.
uma_machdep.c's allocator is the intended consumer.

Dumps are in a custom format.  At the very beginning of the file is a header,
then a copy of the message buffer, then the bitmap of pages present in
the dump, then the final level of the kvm page table trees (2MB mappings
are expanded into 4K page mappings), then the sparse physical pages
according to the bitmap.  libkvm can now conveniently access the kvm
page table entries.

Booting my test 8GB machine, forcing it into ddb and forcing a dump
leads to a 48MB minidump.  While this is a best case, I expect minidumps
to be in the 100MB-500MB range - and of course never larger than physical
memory.

Minidumps are on by default.  It would only be necessary to turn them off
if you needed to debug corrupt kernel page table management, as that
would mess up minidumps as well.

Both minidumps and regular dumps are supported on the same machine.
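
A minimal sketch of what the two helper routines amount to, with one bit per
physical page in a dump bitmap (everything except the helper names is
illustrative):

    /* Sketch: mark or unmark a physical page for inclusion in the minidump. */
    #define PAGE_SHIFT 12                        /* 4KB pages */
    static uint64_t *vm_page_dump;               /* one bit per physical page */

    void
    dump_add_page(uint64_t pa)
    {
            uint64_t idx = pa >> PAGE_SHIFT;     /* page frame number */
            vm_page_dump[idx / 64] |= (uint64_t)1 << (idx % 64);
    }

    void
    dump_drop_page(uint64_t pa)
    {
            uint64_t idx = pa >> PAGE_SHIFT;
            vm_page_dump[idx / 64] &= ~((uint64_t)1 << (idx % 64));
    }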
2006-04-21 04:24:50 +00:00
Warner Losh
59b8f529ca Set the rid for a resource allocated with rman_reserve_resource. 2006-04-20 04:16:34 +00:00
Colin Percival
2652af563e Correct a local information leakage bug affecting AMD FPUs.
Security:	FreeBSD-SA-06:14.fpu
2006-04-19 07:00:19 +00:00
Peter Wemm
714d4fe9b6 If we're doing a try-alloc of a pv entry and give up early, do not forget
to reduce the pv_entry_count counter.  This was found by Tor Egge.  In the
same email, Tor also pointed out the pv_stats problem in the previous
commit, but I'd forgotten about it until I went looking for this email
about this allocation problem.
2006-04-18 20:17:32 +00:00
Peter Wemm
bac58593f1 pv_entry_count is more than a statistic. It is used for resource limiting.
Do not compile out its counter updates if pv entry stats are turned off.
2006-04-18 20:11:00 +00:00
Alan Cox
ad740f9081 Include opt_pmap.h for PMAP_SHPGPERPROC.
PR: 94509
2006-04-13 03:31:48 +00:00
Alan Cox
826c207263 Retire pmap_track_modified(). We no longer need it because we do not
create managed mappings within the clean submap.  To prevent regressions,
add assertions blocking the creation of managed mappings within the clean
submap.

Reviewed by: tegge
2006-04-12 04:22:52 +00:00
Paul Saab
d8636a9ab7 Hook bce up to the build 2006-04-10 20:04:22 +00:00
John Baldwin
907d4d7f45 Cache the value of the lower half of each I/O APIC redirection table entry
so that we only have to do an ioapic_write() instead of an ioapic_read()
followed by an ioapic_write() every time we mask and unmask level triggered
interrupts.  This cuts the execution time for these operations roughly in
half.

Profiled by:	Paolo Pisati <p.pisati@oltrelinux.com>
MFC after:	1 week
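
A hedged sketch of the caching pattern described; the per-pin structure and
helper shown here are illustrative, not the driver's actual layout:

    /* Sketch: mask a level-triggered pin using a cached copy of the low
     * 32 bits of its redirection table entry - one write, no read-back. */
    struct ioapic_pin {
            uint32_t io_lowreg;                  /* cached low half of the RTE */
    };

    static void
    ioapic_mask_pin(struct ioapic_pin *pin, int reg)
    {
            pin->io_lowreg |= IOART_INTMASK;     /* RTE mask bit (bit 16) */
            ioapic_write(reg, pin->io_lowreg);   /* no ioapic_read() needed */
    }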
2006-04-05 20:43:19 +00:00
Peter Wemm
2e4548288a Convert the pv_entry_frees and pv_entry_allocs stats counters from int to long;
they wrap way too quickly.
2006-04-04 20:17:35 +00:00
Marcel Moolenaar
b1fb1bb19a Sync with i386: Map exceptions to signals in gdb_cpu_signal() so
that kgdb(1) gets a SIGTRAP when it needs to.

Pointed out by: grehan@
2006-04-04 03:00:20 +00:00
Marcel Moolenaar
470d831703 The PC is register 16, not 18.
Pointed out by: grehan@
2006-04-04 02:44:51 +00:00
Marcel Moolenaar
bfcdefd8aa Eliminate HAVE_STOPPEDPCBS. On ia64 the PCPU holds a pointer to the
PCB in which the context of stopped CPUs is stored. To access this
PCB from KDB, we introduce a new define, called KDB_STOPPEDPCB. The
definition, when present, lives in <machine/kdb.h> and abstracts
where MD code saves the context. Define KDB_STOPPEDPCB on i386,
amd64, alpha and sparc64 in accordance with the previous code.
2006-04-03 22:51:47 +00:00
Peter Wemm
68ac481184 Shrink the amd64 pv entry from 48 bytes to about 24 bytes. On a machine
with large mmap files mapped into many processes, this saves hundreds of
megabytes of ram.
pv entries were individually allocated and had two tailq entries and two
pointers (or addresses).  Each pv entry was linked to a vm_page_t and
a process's address space (pmap).  It had the virtual address and a
pointer to the pmap.
This change replaces the individual allocation with a per-process
allocation system.  A page ("pv chunk") is allocated and this provides
168 pv entries for that process.  We can now eliminate one of the 16 byte
tailq entries because we can simply iterate through the pv chunks to find
all the pv entries for a process.  We can eliminate one of the 8 byte
pointers because the location of the pv entry implies the containing
pv chunk, which has the pointer.  After overhead from the pv chunk
bitmap and tailq linkage, this works out to an effective size of
24.38 bytes per pv entry.

Future work still required, and other problems:
* when running low on pv entries or system ram, we may need to defrag
  the chunk pages and free any spares.  The stats (vm.pmap.*) show that
  this doesn't seem to be that much of a problem, but it can be done if
  needed.
* running low on pv entries is now a much bigger problem.  The old
  get_pv_entry() routine just needed to reclaim one other pv entry.
  Now, since they are per-process, we can only use pv entries that are
  assigned to our current process, or steal an entire page worth
  from another process.  Under normal circumstances, the pmap_collect()
  code should be able to dislodge some pv entries from the current
  process.  But if needed, it can still reclaim entire pv chunk pages
  from other processes.
* This should port to i386 really easily, except there it would reduce
  pv entries from 24 bytes to about 12 bytes.

(I have integrated Alan's recent changes.)
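
A hedged sketch of the layout the message describes; the counts come from the
message itself, while the field names and types are illustrative:

    #include <sys/queue.h>
    #include <stdint.h>

    struct pmap;                                   /* owning address space */

    struct pv_entry {
            uintptr_t               pv_va;         /* mapped virtual address */
            TAILQ_ENTRY(pv_entry)   pv_list;       /* per-vm_page list */
    };                                             /* about 24 bytes on amd64 */

    struct pv_chunk {                              /* backed by one 4KB page */
            struct pmap             *pc_pmap;      /* owning pmap */
            TAILQ_ENTRY(pv_chunk)   pc_list;       /* per-pmap list of chunks */
            uint64_t                pc_map[3];     /* free-entry bitmap (168 bits) */
            struct pv_entry         pc_pventry[168];
    };  /* 4096 bytes / 168 entries ~ 24.38 effective bytes per pv entry */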
2006-04-03 21:36:01 +00:00
Peter Wemm
b9eee07e36 Remove the unused sva and eva arguments from pmap_remove_pages(). 2006-04-03 21:16:10 +00:00
Alan Cox
9c6a71e4ca Introduce pmap_try_insert_pv_entry(), a function that conditionally creates
a pv entry if the number of entries is below the high water mark for pv
entries.

Use pmap_try_insert_pv_entry() in pmap_copy() instead of
pmap_insert_entry().  This avoids possible recursion on a pmap lock in
get_pv_entry().

Eliminate the explicit low-memory checks in pmap_copy().  The check that
the number of pv entries was below the high water mark was largely
ineffective because it was located in the outer loop rather than the
inner loop where pv entries were allocated.  Instead of checking, we
attempt the allocation and handle the failure.

Reviewed by: tegge
Reported by: kris
MFC after: 5 days
2006-04-02 05:45:05 +00:00
Maksim Yevmenkin
b643101293 Add kbdmux(4) to GENERIC on amd64
Requested by:	scottl
Tested by:	scottl
2006-03-31 23:04:48 +00:00
Scott Long
7f631a410c Hook the MFI driver up to the build. 2006-03-29 09:57:22 +00:00
John Baldwin
8283c726e7 If the XSDT address in the RSDP for an ACPI 2.0 machine is NULL, then fall
back to using the RSDT instead.  ACPI-CA already follows this same strategy
as a workaround for yet another instance of brain-damaged BIOS writers.

PR:		i386/93963
Submitted by:	Masayuki FUKUI <fukui.FreeBSD@fanet.net>
2006-03-27 15:59:48 +00:00
Alan Cox
fa8053e9a9 Eliminate unnecessary invalidations of the entire TLB by pmap_remove().
Specifically, for mappings with PG_G set, pmap_remove() not only performed
the necessary per-page invlpg invalidations but also performed an
unnecessary invalidation of the entire set of non-PG_G entries.

Reviewed by: tegge
2006-03-21 18:07:42 +00:00
David Xu
39d3e6198d Remove stale KSE code.
Reviewed by: alc
2006-03-21 06:46:27 +00:00
John Baldwin
aef8cd01ed Drop some unneeded casts since we program the kernel in C rather than C++. 2006-03-20 19:39:08 +00:00
Alexander Leidinger
79d8404261 regen: fix of linuxolator with testing in a cross-build 2006-03-20 18:54:29 +00:00
Alexander Leidinger
3a192a2050 Fix the linuxolator on amd64 (cross-build). 2006-03-20 18:53:26 +00:00
Ruslan Ermilov
e4e272bfbf Regen. 2006-03-19 11:12:41 +00:00
Ruslan Ermilov
aefce619cf Unbreak COMPAT_LINUX32 option support on amd64.
Broken by:	netchild
2006-03-19 11:10:33 +00:00
Alexander Leidinger
c85625bfe7 regen 2006-03-18 20:49:01 +00:00
Stephan Uphoff
4c0e9e8c79 Enable global pages TLB extension on Application Processors.
MFC after:	3 days
2006-03-18 19:32:46 +00:00
Alexander Leidinger
1f7642e058 regen after COMPAT_43 removal 2006-03-18 18:24:38 +00:00
Alexander Leidinger
5c8919adf4 Get rid of the need of COMPAT_43 in the linuxolator.
Submitted by:	Divacky Roman <xdivac02@stud.fit.vutbr.cz>
Obtained from:	DragonFly (some parts)
2006-03-18 18:20:17 +00:00
John Baldwin
39092e79ed Don't allow userland to set hardware watch points on kernel memory at all.
Previously, we tried to allow this only for root.  However, we were calling
suser() on the *target* process rather than the current process.  This
means that if you can ptrace() a process running as root you can set a
hardware watch point in the kernel.  In practice I think you probably have
to be root in order to pass the p_candebug() checks in ptrace() to attach
to a process running as root anyway.  Rather than fix the suser(), I just
axed the entire idea, as I can't think of any good reason _at all_ for
userland to set hardware watch points for KVM.

MFC after:	3 days
Also thinks hardware watch points on KVM from userland are bad:	bde, rwatson
2006-03-14 16:13:55 +00:00
Peter Wemm
8d0593f54e Merge/sync with i386: various cosmetic tweaks 2006-03-14 00:01:56 +00:00
Peter Wemm
cfa7ffb1d7 MFi386: The SIGFPE macros were moved to signal.h (FPE_INTOVF etc) 2006-03-14 00:01:22 +00:00
Peter Wemm
31b2d08a2d MFi386: rename pcib_devclass to hostb_devclass (cosmetic here) 2006-03-13 23:58:40 +00:00
Peter Wemm
c8df689359 MFi386: add a TRAP_INTERRUPT case 2006-03-13 23:56:44 +00:00
Peter Wemm
29e9282e2e Cosmetic sync with i386 2006-03-13 23:55:31 +00:00
Paul Saab
12aff6461c Fix the format/display descriptor of vm.kmem_size and vm.kmem_free
to be 'long' instead of 'int' so that sysctl(8) correctly displays
the 8 returned bytes as a single 'long' instead of two 'int' values.

Submitted by:	peter
2006-03-13 08:13:37 +00:00
John Baldwin
8e8f0765ab Flip the switch and don't route interrupts to hyperthreads in a HT system.
In at least one benchmark this showed around a 20% performance increase.
If other workloads do benefit from having hyperthreads service interrupts,
we can always make this a loader tunable.

MFC after:	3 days
Tested by:	ps
2006-03-09 16:38:52 +00:00
Stephan Uphoff
68ff3c2445 Fix exec_map resource leaks.
Tested by: kris@
2006-03-08 20:21:54 +00:00
Yaroslav Tykhiy
4ffbe6ba9f MFi386 revision 1.1220: options TDFX_LINUX --> device tdfx_linux 2006-03-06 15:29:28 +00:00
Sam Leffler
5225f08dc9 guard function decls with _KERNEL so user code can include this file 2006-03-01 05:59:56 +00:00
John Baldwin
215e7c161a Rework how we wire up interrupt sources to CPUs:
- Throw out all of the logical APIC ID stuff.  The Intel docs are somewhat
  ambiguous, but it seems that the "flat" cluster model we are currently
  using is only supported on Pentium and P6 family CPUs.  The other
  "hierarchy" cluster model that is supported on all Intel CPUs with
  local APICs is severely underdocumented.  For example, it's not clear
  if the OS needs to glean the topology of the APIC hierarchy from
  somewhere (neither ACPI nor MP Table include it) and setup the logical
  clusters based on the physical hierarchy or not.  Not only that, but on
  certain Intel chipsets, even though there were 4 CPUs in a logical
  cluster, all the interrupts were only sent to one CPU anyway.
- We now bind interrupts to individual CPUs using physical addressing via
  the local APIC IDs.  This code has also moved out of the ioapic PIC
  driver and into the common interrupt source code so that it can be
  shared with MSI interrupt sources since MSI is addressed to APICs the
  same way that I/O APIC pins are.
- Interrupt source classes grow a new method pic_assign_cpu() to bind an
  interrupt source to a specific local APIC ID.
- The SMP code now tells the interrupt code which CPUs are available to
  handle interrupts in a simpler and more intuitive manner.  For one thing,
  it means we could now choose to not route interrupts to HT cores if we
  wanted to (this code is currently in place in fact, but under an #if 0
  for now).
- For now we simply do static round-robin of IRQs to CPUs when the first
  interrupt handler is added, just as before, with the change that IRQs are
  now bound to individual CPUs rather than groups of up to 4 CPUs (a small
  sketch of this binding follows below).
- Because the IRQ to CPU mapping has now been moved up a layer, it would
  be easier to manage this mapping from higher levels.  For example, we
  could allow drivers to specify a CPU affinity map for their interrupts,
  or we could allow a userland tool to bind IRQs to specific CPUs.

The MFC is tentative, but I want to see if this fixes problems some folks
had with UP APIC kernels on 6.0 on SMP machines (an SMP kernel would work
fine, but a UP APIC kernel (such as GENERIC in RELENG_6) would lose
interrupts).

MFC after:	1 week
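
A minimal sketch of the static round-robin binding; pic_assign_cpu is the new
method named above, everything else here is illustrative:

    /* Sketch: bind an interrupt source, when its first handler is added,
     * to the next usable CPU's local APIC ID in round-robin order. */
    static u_int current_cpu;                    /* rotates over usable CPUs */

    static void
    intr_assign_next_cpu(struct intsrc *isrc)
    {
            u_int apic_id;

            apic_id = cpu_apic_ids[current_cpu];
            isrc->is_pic->pic_assign_cpu(isrc, apic_id);
            current_cpu = (current_cpu + 1) % num_usable_cpus;
    }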
2006-02-28 22:24:55 +00:00
David Malone
0cbae93607 It seems bit 5 of cpu_feature2 is the VMX (Virtual Machine Extensions)
bit.  While I'm here, delete a comment that was cut and pasted from the
cpu_features code and doesn't belong here.
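
For reference, the matching feature-flag definition (bit 5 of CPUID leaf 1
ECX; the macro name follows the usual convention):

    #define CPUID2_VMX   0x00000020   /* Virtual Machine Extensions (VT-x) */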
2006-02-15 14:48:59 +00:00
Poul-Henning Kamp
e8444a7e6f CPU time accounting speedup (step 2)
Keep accounting time in (per-cpu) cputicks and the statistics counts
in the thread, and summarize into struct proc at context switch.

Don't reach across CPUs in calcru().

Add code to calibrate the top speed of cpu_tickrate() for variable
cpu_tick hardware (like TSC on power managed machines).

Don't enforce monotonicity (at least for now) in calcru().  While the
calibrated cpu_tickrate ramps up, it may not hold.

Use 27MHz counter on i386/Geode.

Use TSC on amd64 & i386 if present.

Use tick counter on sparc64
2006-02-11 09:33:07 +00:00
Poul-Henning Kamp
eb2da9a51f Simplify system time accounting for profiling.
Rename struct thread's td_sticks to td_pticks; we will need the
other name shortly for a more appropriately named use.  Reduce it
from uint64_t to u_int.

Clear td_pticks whenever we enter the kernel instead of recording
its value as a reference for userret().  Use the absolute value of
td->td_pticks in userret() and eliminate the third argument.
2006-02-08 08:09:17 +00:00
Poul-Henning Kamp
5b1a8eb397 Modify the way we account for CPU time spent (step 1)
Keep track of time spent by the cpu in various contexts in units of
"cputicks" and scale to real-world microsec^H^H^H^H^H^H^H^Hclock_t
only when somebody wants to inspect the numbers.

For now "cputicks" are still derived from the current timecounter
and therefore things should by definition remain sensible also on
SMP machines.  (The main reason for this first milestone commit is
to verify that hypothesis.)

On slower machines, avoiding the multiplications needed to normalize
timestamps at every context switch comes out as a 5-7% better score on
the unixbench/context1 microbenchmark.  On more modern hardware no change
in performance is seen.
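
A hedged sketch of the accumulate-now, convert-later idea; cpu_ticks() and
cpu_tickrate() are implied by the description, the thread fields shown are
illustrative:

    /* Sketch: at context switch, charge the outgoing thread in raw
     * cputicks - no normalization to clock_t here. */
    now = cpu_ticks();
    old_td->td_runtime += now - old_td->td_switchin;
    new_td->td_switchin = now;

    /* Only when somebody inspects the numbers (e.g. calcru()) convert: */
    usecs = old_td->td_runtime * (uint64_t)1000000 / cpu_tickrate();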
2006-02-07 21:22:02 +00:00
John Baldwin
8917b8d28c - Always call exec_free_args() in kern_execve() instead of doing it in all
the callers if the exec either succeeds or fails early.
- Move the code to call exit1() if the exec fails after the vmspace is
  gone to the bottom of kern_execve() to cut down on some code duplication.
2006-02-06 22:06:54 +00:00
Wayne Salamon
4f9ac41fba Call the audit syscall enter/exit functions for the amd64 architecture,
both 32-bit and 64-bit paths. System calls will now be audited.

Obtained from: TrustedBSD Project
Approved by: rwatson (mentor)
2006-02-04 20:37:20 +00:00
David Xu
6d7c1bdccd MFi386:
Clear the carry flag in get_mcontext() so that setcontext() does not
	return a bogus error.
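
A heavily hedged sketch of the shape of the fix; PSL_C is the x86 carry flag,
but the exact statement in get_mcontext() may differ:

    /* Sketch: don't let a stale carry flag make a later setcontext(2)
     * look like a failed system call. */
    mcp->mc_rflags = tp->tf_rflags & ~PSL_C;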
2006-02-03 02:49:14 +00:00
Peter Wemm
e2a5e4efdb Make PV entries dynamic on amd64. i386 has a pre-reserved block of kva
dedicated to storing pv entries, originally so that kva didn't have to be
allocated at inconvenient times.  For amd64, we can get the same effect by
using the direct map area.  Allocating pages is the same as with the object
backed method, but now we can just lookup the page in the direct map area.
Thus, no more pageable kva is reserved.  This is the single largest
consumer of kva on our work machines and this change should help conserve
the fixed size 2GB pageable kva on the amd64 kernel.

A pair of sysctl nodes are introduced, named the same as their
tunable counterparts: vm.pmap.shpgperproc and vm.pmap.pv_entry_max.
They work just like the tunables of the same path, except the values are
linked.  The pv entry cap is now dynamically changeable.

I didn't make them totally unlimited because we need some sort of safety
limit still.  One could consume all physical memory without a cap.
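
A minimal sketch of the direct-map trick, assuming the amd64 PHYS_TO_DMAP and
VM_PAGE_TO_PHYS macros; the allocation flags are illustrative:

    /* Sketch: back a pv chunk with a plain physical page and reach it
     * through the direct map, so no pageable KVA is consumed. */
    vm_page_t m;
    struct pv_chunk *pc;

    m = vm_page_alloc(NULL, 0,
        VM_ALLOC_NOOBJ | VM_ALLOC_NORMAL | VM_ALLOC_WIRED);
    if (m != NULL)
            pc = (struct pv_chunk *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m));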
2006-02-03 00:16:36 +00:00
John Baldwin
6966c33482 Call WITNESS_CHECK() in the page fault handler and immediately assume it
is a fatal fault if we are holding any non-sleepable locks.  This should
cut down on the number of bogus LORs we currently get when the kernel
panics due to a NULL (or bogus) pointer dereference that goes wandering
off into the VM system which tries to acquire locks and then kicks off
the spurious LORs.  This should probably be ported to all the archs at
some point.

Tested on:	i386
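
A hedged sketch of the check's shape in the page-fault path; the flags and
message are illustrative, trap_fatal() is the existing fatal-fault path:

    /* Sketch: a page fault taken while holding a non-sleepable lock cannot
     * be serviced safely, so treat it as fatal up front instead of letting
     * the VM code trip spurious lock-order warnings. */
    if (WITNESS_CHECK(WARN_SLEEPOK | WARN_GIANTOK, NULL,
        "page fault with non-sleepable locks held") != 0) {
            trap_fatal(frame, eva);
            return;
    }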
2006-01-27 22:22:10 +00:00
Scott Long
0af57729a6 Free the newtag if we exit with a failure from alloc_bounce_zone().
Found by: Coverity Prevent(tm)
2006-01-14 17:22:47 +00:00
David E. O'Brien
f8ed1e340d Move linux support to the linux section. 2006-01-12 01:20:59 +00:00
Poul-Henning Kamp
d3e64681d6 Move the old BSD4.3 tty compatibility from (!BURN_BRIDGES && COMPAT_43)
to COMPAT_43TTY.

Add COMPAT_43TTY to NOTES and */conf/GENERIC

Compile tty_compat.c only under the new option.

Spit out
	#warning "Old BSD tty API used, please upgrade."
if ioctl_compat.h gets #included from userland.
2006-01-10 09:19:10 +00:00
Warner Losh
d5e61c97a6 By popular demand, move __HAVE_ACPI and __PCI_REROUTE_INTERRUPT into
param.h.  Per request, I've placed these just after the
_NO_NAMESPACE_POLLUTION ifndef.  I've not renamed anything yet, but
may since we don't need the __.

Submitted by: bde, jhb, scottl, many others.
2006-01-09 06:05:57 +00:00
John Baldwin
04dda605c5 - Make pcib_devclass private to sys/dev/pci/pci_pci.c and change all the
various pcib drivers to use their own private devclass_t variables for
  their modules.
- Use the DEFINE_CLASS_0() macro to declare drivers for the various pcib
  drivers while I'm here.
2006-01-06 19:22:19 +00:00
John Baldwin
360c3c2d1a Fix various places that were testing td_critnest to see if interrupts
should remain disabled during a trap or not to check
td_md.md_spinlock_count instead.
2006-01-06 18:02:12 +00:00
Jung-uk Kim
dccb7faff6 - Explicitly validate an empty filter to match bpf_filter() comment[1].
- Do not use BPF JIT compiler for an empty filter.

[1] Pointed out by:	darrenr
2006-01-03 20:26:03 +00:00
Warner Losh
501755f4f6 Define __HAVE_ACPI and/or __PCI_REROUTE_INTERRUPT, as appropriate for
each platform.  These will be used in the pci code in preference to
the complicated #ifdefs we have there now.
2006-01-01 20:59:28 +00:00
Alexander Leidinger
e3d101c377 Unbreak kernel build.
A happy new year to all.

Submitted by:	Goran Gajic <ggajic@afrodita.rcub.bg.ac.yu>, bz
Pointy hat to:	netchild
Apologies to:	all
2006-01-01 05:35:57 +00:00
Alexander Leidinger
ef39c05baa MI changes:
 - provide an interface (macros) to the page coloring part of the VM system;
   this allows trying different coloring algorithms without the need to
   touch every file [1]
 - make the page queue tuning values readable: sysctl vm.stats.pagequeue
 - autotuning of the page coloring values based upon the cache size instead
   of options in the kernel config (disabling of the page coloring as a
   kernel option is still possible)

MD changes:
 - detection of the cache size: only IA32 and AMD64 (untested) contain
   cache size detection code, every other arch just comes with a dummy
   function (this results in the use of default values like it was the
   case without the autotuning of the page coloring)
 - print some more info on Intel CPUs (like we do on AMD and Transmeta
   CPUs)

Note to AMD owners (IA32 and AMD64): please run "sysctl vm.stats.pagequeue"
and report if the cache* values are zero (= bug in the cache detection code)
or not.

Based upon work by:	Chad David <davidc@acns.ab.ca> [1]
Reviewed by:		alc, arch (in 2004)
Discussed with:		alc, Chad David, arch (in 2004)
2005-12-31 14:39:20 +00:00
Pawel Jakub Dawidek
70665fda32 Fix watch address truncation. The address was truncated when it was passed to
amd64_set_watch() as 'unsigned int', and 'unsigned int' is 32 bits long on amd64.

Even with that fix, hardware watchpoints don't work for me on amd64, i.e. when
I set a watchpoint and write a byte there, nothing happens.
2005-12-27 23:23:47 +00:00
Maxim Sobolev
900b28f9f6 Remove the kern.elf32.can_exec_dyn sysctl. Instead, extend the Brandinfo
structure with a flags bitfield and set the BI_CAN_EXEC_DYN flag for all brands
that usually allow executing ELF dynamic binaries (aka shared libraries). When
execution of an ET_DYN ELF image is requested, check whether this flag is set
once the ELF brand is known, and allow execution if so.

PR:		kern/87615
Submitted by:	Marcin Koziej <creep@desk.pl>
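
A hedged sketch of the check this describes; only BI_CAN_EXEC_DYN comes from
the message, the surrounding names are illustrative:

    /* Sketch: refuse to exec an ET_DYN image unless the matched ELF brand
     * explicitly allows it. */
    if (hdr->e_type == ET_DYN &&
        (brand_info->flags & BI_CAN_EXEC_DYN) == 0)
            return (ENOEXEC);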
2005-12-26 21:23:57 +00:00
Jeff Roberson
660002d398 - Improve the INKERNEL macro such that it can no longer give false positives.
This fixes the stack(9) functionality.

Submitted by:	Antoine Brodin <antoine.brodin@laposte.net>
2005-12-23 21:33:55 +00:00