4813 Commits

Author SHA1 Message Date
ups
b3a7439a45 Remove mpte optimization from pmap_enter_quick().
There is a race with the current locking scheme and removing
it should have no measurable performance impact.
This fixes page faults leading to panics in pmap_enter_quick_locked()
on amd64/i386.

Reviewed by: alc, jhb, peter, ps
2006-06-15 01:01:06 +00:00
netchild
de5cf4e1bd regen after MFP4 (soc2006/rdivacky_linuxolator) of syscalls.master
P4-Changes:	similar to 98673 and 98675 but regenerated locally
Sponsored by:	Google SoC 2006
Submitted by:	rdivacky
2006-06-13 18:48:30 +00:00
netchild
a561ebc3f4 MFP4 (soc2006/rdivacky_linuxolator)
Update of syscalls.master:
	o	Add several new dummy syscalls (268-310)
	o	Synchronize the amd64 syscalls.master with the i386 one
	o	Add auditing to the amd64 syscalls.master
	o	Change the auditing type for the lstat syscall (bugfix). [1]

P4-Changes:	98672, 98674
Noticed by:	rwatson [1]
Sponsored by:	Google SoC 2006
Submitted by:	rdivacky
2006-06-13 18:43:55 +00:00
davidxu
82b666ed4a Add scheduler CORE, work I did half a year ago and recently picked
up again. The scheduler is forked from ULE, but the algorithm for detecting
an interactive process is almost completely different from ULE's; it comes
from the paper "Understanding the Linux 2.6.8.1 CPU Scheduler", although I
still use the same word "score" for the priority boost, as in ULE.

Briefly, the scheduler has the following characteristics:
1. A timesharing process's nice value is strictly respected; the
   timeslice and the interactivity-detection algorithm are based
   on the nice value.
2. Per-CPU scheduling queues and load balancing.
3. O(1) scheduling (a minimal sketch of the idea follows this entry).
4. Some CPU affinity code in the wakeup path.
5. Support for POSIX SCHED_FIFO and SCHED_RR.
Unlike the 4BSD and ULE schedulers, which use the fuzzy RQ_PPQ grouping, this
scheduler uses 256 priority queues. Unlike ULE, which uses both pull and push
migration, this scheduler uses only the pull method; the main reason is to
let a relatively idle CPU do the work. But since the whole scheduler is
currently protected by the big sched_lock, that benefit is not visible, and
it can actually be worse than nothing, because all other CPUs are locked out
while balancing work is in progress, a problem the 4BSD scheduler does not
have. The scheduler does not support hyperthreading very well; in fact it
does not distinguish between physical and logical CPUs, which should be
improved in the future. It also has a priority-inversion problem on MP
machines, so it is not good for realtime scheduling and can starve realtime
processes. Even so, MySQL super-smack seems to run better on my Pentium-D
machine when using libthr, on both UP and SMP kernels.
2006-06-13 13:12:56 +00:00
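The O(1), 256-queue selection mentioned in the entry above is the classic
bitmap-indexed runqueue: one status bit per priority queue, so picking the
highest-priority runnable thread costs a handful of ffs() calls rather than
a scan of all queues. A minimal sketch follows; it is not the actual
sched_core code, and every name in it (runq256, rq_status, runq256_findbit)
is hypothetical.

#include <stdint.h>
#include <strings.h>            /* ffs() */
#include <sys/queue.h>

#define RQ_NQS      256         /* one queue per priority level */
#define RQ_BPW      32          /* bits per status word */
#define RQ_NWORDS   (RQ_NQS / RQ_BPW)

struct thread;                  /* opaque for this sketch */
TAILQ_HEAD(rq_head, thread);

struct runq256 {
        uint32_t       rq_status[RQ_NWORDS]; /* bit i set => queue i non-empty */
        struct rq_head rq_queues[RQ_NQS];
};

/*
 * Return the lowest-numbered (highest-priority) non-empty queue, or -1 if
 * nothing is runnable.  The cost is bounded by RQ_NWORDS, independent of
 * the number of runnable threads: the O(1) property claimed above.
 */
static int
runq256_findbit(const struct runq256 *rq)
{
        int w;

        for (w = 0; w < RQ_NWORDS; w++)
                if (rq->rq_status[w] != 0)
                        return (w * RQ_BPW + ffs((int)rq->rq_status[w]) - 1);
        return (-1);
}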
jhb
3ec293f314 Enable a few more things in x86 NOTES to get broader LINT coverage:
- Turn on iwi(4), ipw(4), and ndis(4) on amd64 and i386.
- Turn on ral(4) and ural(4) on i386, pc98, and amd64.
2006-06-12 20:38:17 +00:00
alc
cbeb562815 Don't invalidate the TLB in pmap_qenter() unless the old mapping was valid.
Most often, it isn't.

Reviewed by: tegge@
2006-06-12 20:05:27 +00:00
imp
038d1db25e Add the ability to subset the devices that UART pulls in. This allows
the ARM port to compile without all the extras that don't appear, at least
not in the flavors of ARM I deal with.  This helps us save about 100k.

If I've botched the available devices on a platform, please let me
know and I'll correct ASAP.
2006-06-12 04:21:50 +00:00
alc
ff4adb11fe Introduce the function pmap_enter_object(). It maps a sequence of resident
pages from the same object.  Use it in vm_map_pmap_enter() to reduce the
locking overhead of premapping objects.

Reviewed by: tegge@
2006-06-05 20:35:27 +00:00
silby
89bd691dee After much discussion with mjacob and scottl, change bus_dmamem_alloc so
that it just warns the user with a printf when it misaligns a piece
of memory that was requested through a busdma tag.

Some drivers (such as mpt, and probably others) were asking for alignments
that could not be satisfied, but as far as driver operation was concerned,
that did not matter.  On the theory that other drivers will fall into
this same category, we agreed that panicking or making the allocation
fail would cause more hardship than is necessary.  The printf should
be sufficient motivation to get the driver glitch fixed.
2006-06-01 04:49:29 +00:00
mjacob
1b7bd7c5ee Turn the panic on not being able to meet alignment constraints
in bus_dmamem_alloc into the more reasonable EINVAL return.

Also, reclaim memory allocated but then not used if we had
an error return.
2006-05-31 00:37:56 +00:00
silby
0daaa33f18 MFi386 rev 1.78:
Add a quick hack to ensure that bus_dmamem_alloc properly aligns
small allocations with large alignment requirements.

Add a panic to detect cases where we've still failed to properly align.
2006-05-28 18:31:32 +00:00
sobomax
210b6777a4 Move clock_lock prototype into <machine/clock.h>, where it is more
appropriate.

Discussed with:	jhb
2006-05-19 18:53:50 +00:00
marius
1a141a2cee Add le(4). I could actually only test it on alpha, i386 and sparc64, but
given that this includes the more problematic platforms, I see no reason
why it shouldn't also work on amd64 and ia64.
2006-05-17 20:45:45 +00:00
phk
ef310efff8 Since DELAY() was moved, most <machine/clock.h> #includes have been
unnecessary.
2006-05-16 14:37:58 +00:00
ru
c249b5bd38 Kill more references to lnc(4).
Submitted by:	grep(1)
2006-05-16 12:15:39 +00:00
marius
be5f202f36 Remove some remnants of lnc(4). 2006-05-14 18:49:25 +00:00
phk
5d8c57a08b Clean out sysctl machdep.* related defines.
The cmos clock related stuff should really be in MI code.
2006-05-11 17:29:25 +00:00
netchild
021fd75458 regen (linux rt_sigpending) 2006-05-10 18:19:51 +00:00
netchild
24c492f42c Implement rt_sigpending in the linuxolator.
PR:		92671
Submitted by:	Markus Niemistö <markus.niemisto@gmx.net>
2006-05-10 18:17:29 +00:00
ambrisko
f7d4a6b03b Add linsysfs, a Linux 2.6-like sys filesystem to pacify the Linux
LSI MegaRAID SAS utility.

Sponsored by:		IronPort Systems
Man page help from:	brueffer
2006-05-09 22:27:01 +00:00
ambrisko
45fe4fa1ab Forgot the amd64/linux32 part since sys/*/linux didn't match :-(
Pointed out by:	Alexander (thanks)
2006-05-06 17:26:45 +00:00
sam
f61ce82647 add ath and wlan crypto support
MFC after:	1 month
2006-05-03 18:15:36 +00:00
scottl
7ef1f80fdd Allow bus_dmamap_load() to pass ENOMEM back to the caller. This puts it into
conformance with the mbuf and uio load routines.  ENOMEM can only happen
when BUS_DMA_NOWAIT is passed in, and thus deferrals are disabled.  I don't
like doing this, but fixing this fixes assumptions in other important drivers,
which is a net benefit for now.  (A caller-side sketch follows this entry.)
2006-05-03 04:14:17 +00:00
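A hedged caller-side sketch of the new contract, assuming the kernel busdma
environment: with BUS_DMA_NOWAIT the load is never deferred, so ENOMEM can
come back directly and the driver must requeue the request itself.  The
xx_* softc fields and the xx_defer() helper are hypothetical.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <machine/bus.h>

struct xx_softc {
        bus_dma_tag_t   xx_dmat;
        bus_dmamap_t    xx_map;
};

static void     xx_defer(struct xx_softc *sc);  /* hypothetical requeue helper */

static void
xx_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        /* program the hardware descriptors from segs[0 .. nseg-1] ... */
}

static int
xx_start(struct xx_softc *sc, void *buf, bus_size_t len)
{
        int error;

        error = bus_dmamap_load(sc->xx_dmat, sc->xx_map, buf, len,
            xx_load_cb, sc, BUS_DMA_NOWAIT);
        if (error == ENOMEM) {
                /* No resources right now; requeue and retry later. */
                xx_defer(sc);
                return (0);
        }
        return (error);
}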
jhb
00bb13261b Add various constants for the PAT MSR and the PAT PTE and PDE flags.
Initialize the PAT MSR during boot to map PAT type 2 to Write-Combining
(WC) instead of Uncached (UC-).

MFC after:	1 month
2006-05-01 22:07:00 +00:00
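A minimal sketch of the boot-time PAT reprogramming described above: the PAT
MSR (0x277) holds eight one-byte entries, and entry 2, UC- by default, is
changed to Write-Combining.  The constant names are illustrative, and
rdmsr()/wrmsr() stand in for the kernel's MSR accessors.

#include <stdint.h>

#define MSR_PAT                 0x277
#define PAT_WRITE_COMBINING     0x01ULL   /* WC memory type encoding */

extern uint64_t rdmsr(uint32_t msr);      /* placeholders for machine/cpufunc.h */
extern void     wrmsr(uint32_t msr, uint64_t newval);

static void
pat_init(void)
{
        uint64_t pat;

        pat = rdmsr(MSR_PAT);
        pat &= ~(0xffULL << 16);              /* clear PAT entry 2 (UC- by default) */
        pat |= PAT_WRITE_COMBINING << 16;     /* map entry 2 to Write-Combining */
        wrmsr(MSR_PAT, pat);
}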
jhb
ca8d347695 Add a new 'pmap_invalidate_cache()' to flush the CPU caches via the
wbinvd() instruction.  This includes a new IPI so that all CPU caches on
all CPUs are flushed for the SMP case.

MFC after:	1 month
2006-05-01 21:36:47 +00:00
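A minimal sketch of the new primitive, assuming a kernel context: on a UP
kernel it is just WBINVD, and on SMP an IPI, represented here by a
placeholder smp_cache_flush(), makes every CPU do the same.

static __inline void
wbinvd(void)
{
        __asm __volatile("wbinvd");
}

void smp_cache_flush(void);     /* placeholder: IPI all CPUs to run wbinvd() */

void
pmap_invalidate_cache(void)
{
#ifdef SMP
        smp_cache_flush();
#else
        wbinvd();
#endif
}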
alc
0b53c91566 Eliminate unnecessary, recursive acquisitions and releases of the page
queues lock by free_pv_entry() and pmap_remove_pages().

Reduce the scope of the page queues lock in pmap_remove_pages().
2006-04-29 00:59:15 +00:00
marcel
193a6144b9 Rewrite of puc(4). Significant changes are:
o  Properly use rman(9) to manage resources. This eliminates the
   need for puc-specific hacks to rman. It also allows devinfo(8)
   to be used to find out the specific assignment of resources to
   serial/parallel ports.
o  Compress the PCI device "database" by optimizing for the common
   case and to use a procedural interface to handle the exceptions.
   The procedural interface also generalizes the need to set up the
   hardware (program chipsets, program clock frequencies).
o  Eliminate the need for PUC_FASTINTR. Serdev devices are fast by
   default and non-serdev devices are handled by the bus.
o  Use the serdev I/F to collect interrupt status and to handle
   interrupts across ports in priority order.
o  Sync the PCI device configuration to include devices found in
   NetBSD and not yet merged to FreeBSD.
o  Add support for Quatech 2, 4 and 8 port UARTs.
o  Add support for a couple dozen Timedia serial cards as found
   in Linux.
2006-04-28 21:21:53 +00:00
scottl
aec4d1388c Enable the rr232x driver for amd64. 2006-04-28 05:23:10 +00:00
alc
da3edd51a2 In general, bits in the page directory entry (PDE) and the page table
entry (PTE) have the same meaning.  The exception to this rule is the
eighth bit (0x080).  It is the PS bit in a PDE and the PAT bit in a
PTE.  This change avoids the possibility that pmap_enter() confuses a
PAT bit with a PS bit, avoiding a panic().

Eliminate a diagnostic printf() from the i386 pmap_enter() that serves
no current purpose, i.e., I've seen no bug reports in the last two
years that are helped by this printf().

Reviewed by: jhb
2006-04-27 21:26:25 +00:00
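The ambiguity described above in a nutshell: bit 7 (0x080) is PG_PS in a PDE
but the PAT index bit in a PTE, and the PDE's PAT index lives at bit 12
instead.  A hedged sketch of translating PTE-format flags into PDE format so
a PAT bit never turns into a bogus PS bit; the constant names follow common
pmap conventions but should be treated as illustrative.

#include <stdint.h>

typedef uint64_t pt_entry_t;

#define PG_PS        0x080      /* PDE only: 2MB page size bit            */
#define PG_PTE_PAT   0x080      /* PTE only: PAT index bit, same position */
#define PG_PDE_PAT   0x1000     /* PDE: PAT index bit is bit 12 instead   */

static pt_entry_t
pte_flags_to_pde_flags(pt_entry_t flags)
{
        if (flags & PG_PTE_PAT) {
                flags &= ~(pt_entry_t)PG_PTE_PAT;   /* would be misread as PG_PS */
                flags |= PG_PDE_PAT;
        }
        return (flags);
}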
peter
b9ca1b31c7 Move vm.pmap.pv_entry_count out from the PV_STATS ifdefs. It is always
available and is a real counter, not a statistic.
2006-04-26 21:34:07 +00:00
jkim
18e73c2320 Check if reported HTT cores are physical cores. This commit does not
affect AMD CPUs at all because the HTT bit is disabled earlier.  Intel
multicore CPUs and the ULE scheduler may be affected.
2006-04-25 00:06:37 +00:00
jkim
eefd58df92 Add another Intel CPU feature flag, xTPR (Send Task Priority Messages). 2006-04-24 22:56:57 +00:00
jkim
6b218fc19f Check if deterministic cache parameters leaf is valid before use. 2006-04-24 22:23:52 +00:00
cperciva
900c118819 Adjust dangerous-shared-cache-detection logic from "all shared data
caches are dangerous" to "a shared L1 data cache is dangerous".  This
is a compromise between paranoia and performance: Unlike the L1 cache,
nobody has publicly demonstrated a cryptographic side channel which
exploits the L2 cache -- this is harder due to the larger size, lower
bandwidth, and greater associativity -- and prohibiting shared L2
caches turns Intel Core Duo processors into Intel Core Solo processors.

As before, the 'machdep.hyperthreading_allowed' sysctl will allow even
the L1 data cache to be shared.

Discussed with:	jhb, scottl
Security:	See FreeBSD-SA-05:09.htt for background material.
2006-04-24 21:17:01 +00:00
delphij
da32f1fb9a Move AHC_REG_PRETTY_PRINT and AHD_REG_PRETTY_PRINT below
their corresponding devices.
2006-04-24 08:44:34 +00:00
peter
0e7c77416b Oops. Minidumps were developed on 6.x, without the small pv entry code.
Add some strategic dump_add_page()/dump_drop_page() lines to include pv
chunks in the minidumps - these operate in the direct map region like UMA.
2006-04-21 04:50:18 +00:00
peter
dbae6322e8 Introduce minidumps. Full physical memory crash dumps are still available
via the debug.minidump sysctl and tunable.

Traditional dumps store all physical memory.  This was once a good thing
when machines had a maximum of 64M of ram and 1GB of kvm.  These days,
machines often have many gigabytes of ram and a smaller amount of kvm.
libkvm+kgdb don't have a way to access physical ram that is not mapped
into kvm at the time of the crash dump, so the extra ram being dumped
is mostly wasted.

Minidumps invert the process.  Instead of dumping physical memory in
order to guarantee that all of kvm's backing is dumped, minidumps
dump only memory that is actively mapped into kvm.

amd64 has a direct map region that things like UMA use.  Obviously we
cannot dump all of the direct map region because that is effectively
an old style all-physical-memory dump.  Instead, introduce a bitmap
and two helper routines (dump_add_page(pa) and dump_drop_page(pa)) that
allow certain critical direct map pages to be included in the dump.
uma_machdep.c's allocator is the intended consumer.

Dumps are a custom format.  At the very beginning of the file is a header,
then a copy of the message buffer, then the bitmap of pages present in
the dump, then the final level of the kvm page table trees (2MB mappings
are expanded into a 4K page mappings), then the sparse physical pages
according to the bitmap.  libkvm can now conveniently access the kvm
page table entries.

Booting my test 8GB machine, forcing it into ddb and forcing a dump
leads to a 48MB minidump.  While this is a best case, I expect minidumps
to be in the 100MB-500MB range, and never larger than physical memory,
of course.

Minidumps are on by default.  It would only be necessary to turn them off
in order to debug corrupt kernel page table management, as that would
mess up minidumps as well.

Both minidumps and regular dumps are supported on the same machine.
2006-04-21 04:24:50 +00:00
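A hedged sketch of the bitmap and the two helpers named above; the real
vm_page_dump array is sized at boot from the physical address range, and the
updates are atomic, both of which are simplified away here.

#include <stdint.h>

#define PAGE_SHIFT      12
#define DUMP_MAXPAGES   (1UL << 23)     /* placeholder: enough bits for 32GB */

static uint64_t vm_page_dump[DUMP_MAXPAGES / 64];

/* Mark a direct-map physical page (a pv chunk, a UMA page, ...) for
 * inclusion in the minidump. */
void
dump_add_page(uint64_t pa)
{
        uint64_t idx = pa >> PAGE_SHIFT;

        vm_page_dump[idx / 64] |= 1ULL << (idx % 64);
}

/* Drop the page again when it is freed. */
void
dump_drop_page(uint64_t pa)
{
        uint64_t idx = pa >> PAGE_SHIFT;

        vm_page_dump[idx / 64] &= ~(1ULL << (idx % 64));
}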
imp
0e9911a7c4 Set the rid for a resource allocated with rman_reserve_resource. 2006-04-20 04:16:34 +00:00
cperciva
51d1ca0f6e Correct a local information leakage bug affecting AMD FPUs.
Security:	FreeBSD-SA-06:14.fpu
2006-04-19 07:00:19 +00:00
peter
6030c4e1a5 If we're doing a try-alloc of a pv entry and give up early, do not forget
to reduce the pv_entry_count counter.  This was found by Tor Egge.  In the
same email, Tor also pointed out the pv_stats problem in the previous
commit, but I'd forgotten about it until I went looking for this email
about this allocation problem.
2006-04-18 20:17:32 +00:00
peter
be61087902 pv_entry_count is more than a statistic. It is used for resource limiting.
Do not compile out its counter updates if pv entry stats are turned off.
2006-04-18 20:11:00 +00:00
alc
aac2697d98 Include opt_pmap.h for PMAP_SHPGPERPROC.
PR: 94509
2006-04-13 03:31:48 +00:00
alc
a7e3d6f83b Retire pmap_track_modified(). We no longer need it because we do not
create managed mappings within the clean submap.  To prevent regressions,
add assertions blocking the creation of managed mappings within the clean
submap.

Reviewed by: tegge
2006-04-12 04:22:52 +00:00
ps
cc2c59e66f Hook bce up to the build 2006-04-10 20:04:22 +00:00
jhb
1dfdfa5677 Cache the value of the lower half of each I/O APIC redirection table entry
so that we only have to do an ioapic_write() instead of an ioapic_read()
followed by an ioapic_write() every time we mask and unmask level triggered
interrupts.  This cuts the execution time for these operations roughly in
half.

Profiled by:	Paolo Pisati <p.pisati@oltrelinux.com>
MFC after:	1 week
2006-04-05 20:43:19 +00:00
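A minimal sketch of the caching idea in the entry above: keep the low 32
bits of each redirection table entry alongside the pin, so masking a
level-triggered interrupt is a single register write instead of a read
followed by a write.  The structure and the register-write helper are
illustrative; IOART_INTMSET is the usual name for the mask bit in the low
word.

#include <stdint.h>

#define IOART_INTMSET   0x00010000u     /* interrupt mask bit, low word */

struct ioapic_pin {
        uint32_t io_lowreg;             /* cached low half of the RTE */
        int      io_intpin;             /* pin number */
};

/* Stand-in for the driver's indexed register write. */
static void
ioapic_reg_write(int pin, uint32_t val)
{
        (void)pin; (void)val;
}

/* One write, no read: the cached value already reflects the entry. */
static void
ioapic_mask_pin(struct ioapic_pin *p)
{
        p->io_lowreg |= IOART_INTMSET;
        ioapic_reg_write(p->io_intpin, p->io_lowreg);
}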
peter
bfd11ed701 Convert the pv_entry_frees and pv_entry_allocs stats counters from int
to long; they wrap way too quickly.
2006-04-04 20:17:35 +00:00
marcel
78f0584b0b Sync with i386: Map exceptions to signals in gdb_cpu_signal() so
that kgdb(1) gets a SIGTRAP when it needs to.

Pointed out by: grehan@
2006-04-04 03:00:20 +00:00
marcel
dc8b7dcaa1 The PC is register 16, not 18.
Pointed out by: grehan@
2006-04-04 02:44:51 +00:00
marcel
8278e2d5fb Eliminate HAVE_STOPPEDPCBS. On ia64 the PCPU holds a pointer to the
PCB in which the context of stopped CPUs is stored. To access this
PCB from KDB, we introduce a new define, called KDB_STOPPEDPCB. The
definition, when present, lives in <machine/kdb.h> and abstracts
where MD code saves the context. Define KDB_STOPPEDPCB on i386,
amd64, alpha and sparc64 in accordance with the previous code.
2006-04-03 22:51:47 +00:00
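A hedged sketch of what an MD <machine/kdb.h> can now provide; the
stoppcbs[] array is how the x86 ports store the stopped-CPU context, but the
exact form of the define should be treated as illustrative.

struct pcb;
extern struct pcb stoppcbs[];           /* per-CPU PCBs saved when stopped */

#define KDB_STOPPEDPCB(pc)      (&stoppcbs[(pc)->pc_cpuid])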
peter
3a90816456 Shrink the amd64 pv entry from 48 bytes to about 24 bytes. On a machine
with large mmap files mapped into many processes, this saves hundreds of
megabytes of ram.
pv entries were individually allocated and had two tailq entries and two
pointers (or addresses).  Each pv entry was linked to a vm_page_t and
a process's address space (pmap).  It had the virtual address and a
pointer to the pmap.
This change replaces the individual allocation with a per-process
allocation system.  A page ("pv chunk") is allocated and this provides
168 pv entries for that process.  We can now eliminate one of the 16 byte
tailq entries because we can simply iterate through the pv chunks to find
all the pv entries for a process.  We can eliminate one of the 8 byte
pointers because the location of the pv entry implies the containing
pv chunk, which has the pointer.  After overheads from the pv chunk
bitmap and tailq linkage, this works out that each pv entry has an
effective size of 24.38 bytes.

Future work still required, and other problems:
* when running low on pv entries or system ram, we may need to defrag
  the chunk pages and free any spares.  The stats (vm.pmap.*) show that
  this doesn't seem to be that much of a problem, but it can be done if
  needed.
* running low on pv entries is now a much bigger problem.  The old
  get_pv_entry() routine just needed to reclaim one other pv entry.
  Now, since they are per-process, we can only use pv entries that are
  assigned to our current process, or by stealing an entire page worth
  from another process.  Under normal circumstances, the pmap_collect()
  code should be able to dislodge some pv entries from the current
  process.  But if needed, it can still reclaim entire pv chunk pages
  from other processes.
* This should port to i386 really easily, except there it would reduce
  pv entries from 24 bytes to about 12 bytes.

(I have integrated Alan's recent changes.)
2006-04-03 21:36:01 +00:00
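The arithmetic in the entry above falls out of the chunk layout: one 4KB
page holds a small header, a free bitmap and 168 embedded pv entries, and
4096/168 gives the quoted 24.38 bytes.  A hedged sketch with illustrative
field names:

#include <stdint.h>
#include <sys/queue.h>

typedef uint64_t vm_offset_t;
struct pmap;

struct pv_entry {
        vm_offset_t             pv_va;      /* mapped virtual address */
        TAILQ_ENTRY(pv_entry)   pv_list;    /* link on the vm_page's pv list */
};                                          /* 8 + 16 = 24 bytes */

#define _NPCM   3                           /* free-bitmap words */
#define _NPCPV  168                         /* pv entries per chunk */

struct pv_chunk {
        struct pmap            *pc_pmap;        /* owning pmap, shared by all entries */
        TAILQ_ENTRY(pv_chunk)   pc_list;        /* per-pmap list of chunks */
        uint64_t                pc_map[_NPCM];  /* bit set => entry is free */
        struct pv_entry         pc_pventry[_NPCPV];
};
/* 48-byte header + 168 * 24 = 4080 bytes, which fits in one 4KB page. */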