The uart(4) driver has the advantage of supporting a wider variety of
hardware on a greater number of platforms. This driver has already been
the standard on platforms such as ia64, powerpc, and sparc64.
I've decided not to change anything on pc98. I'd rather let people from
the pc98 team look at this.
Approved by: philip (mentor), marcel
Initialize %ds, %es, and %fs during CPU startup. Otherwise a garbage
value could leak to a 32-bit process if a process migrated to a different
CPU after exec and the new CPU had never exec'd a 32-bit process.
A more complete fix is needed, but this mitigates the most frequent
manifestations.
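For illustration, the fix amounts to something like the following during
per-CPU startup (a sketch; the helper name is made up, and _udatasel is
the usual amd64 user data selector):

    /*
     * Sketch only: load a known selector into %ds, %es, and %fs at CPU
     * startup so no stale value can leak to a 32-bit process.
     */
    static void
    cpu_load_data_segments(u_short sel)
    {

            __asm __volatile("movw %0, %%ds" : : "r" (sel));
            __asm __volatile("movw %0, %%es" : : "r" (sel));
            __asm __volatile("movw %0, %%fs" : : "r" (sel));
    }

    /* e.g., called from the startup path as cpu_load_data_segments(_udatasel); */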
Obtained from: ups
page directory pages from VM_MIN_KERNEL_ADDRESS through the end of the
kernel's bss. Specifically, the dependence was in pmap_growkernel()'s one-
time initialization of kernel_vm_end, not in its main body. (I could not,
however, resist the urge to optimize the main body.)
Reduce the number of preallocated page directory pages to just those needed
to support NKPT page table pages. (In fact, this allows me to revert a
couple of my earlier changes to create_pagetables().)
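Roughly, the one-time initialization now has the following shape (a
sketch, not the committed code; it simply scans for the first missing
kernel page table page, and the helper name is made up):

    /*
     * Sketch: initialize kernel_vm_end by scanning the kernel pmap for
     * the first page directory entry without a page table page, instead
     * of assuming a fixed preallocated range.
     */
    static void
    pmap_init_kernel_vm_end(void)
    {

            kernel_vm_end = VM_MIN_KERNEL_ADDRESS;
            while ((*pmap_pde(kernel_pmap, kernel_vm_end) & PG_V) != 0) {
                    kernel_vm_end = (kernel_vm_end + NBPDR) & ~PDRMASK;
                    if (kernel_vm_end - 1 >= VM_MAX_KERNEL_ADDRESS) {
                            kernel_vm_end = VM_MAX_KERNEL_ADDRESS;
                            break;
                    }
            }
    }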
page table pages have to be preallocated ...'', violates an assumption made
by minidumpsys(): kernel_vm_end is the highest virtual address that has ever
been used by the kernel. Now, however, the kernel code, data, and bss may
reside at addresses beyond kernel_vm_end. This revision modifies the upper
bound on minidumpsys()'s two page table traversals to account for this
possibility.
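In other words, the bound becomes something like this (a sketch; 'end'
is the linker-provided end-of-bss symbol, and the helper name is made
up):

    /*
     * Sketch: the page table walks now stop at whichever is higher,
     * kernel_vm_end or the end of the kernel image.
     */
    extern char end[];                      /* end of the kernel's bss */

    static vm_offset_t
    minidump_scan_end(void)
    {

            if (kernel_vm_end < (vm_offset_t)end)
                    return ((vm_offset_t)end);
            return (kernel_vm_end);
    }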
to vm_page_alloc() instead of VM_ALLOC_SYSTEM. VM_ALLOC_SYSTEM was the
logical choice before FreeBSD 7.0 because VM_ALLOC_INTERRUPT could not
reclaim a cached page. Simply put, there was no ordering between
VM_ALLOC_INTERRUPT and VM_ALLOC_SYSTEM as to which "dug deeper" into the
cache and free queues. Now, there is; VM_ALLOC_INTERRUPT dominates
VM_ALLOC_SYSTEM.
While I'm here, teach pmap_growkernel() to request a prezeroed page.
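For illustration, the allocation now looks roughly like this (a sketch
mirroring the description above, not the exact committed hunk):

    /*
     * Sketch: allocate a new kernel page table page.  VM_ALLOC_INTERRUPT
     * digs deepest into the cache/free queues, and VM_ALLOC_ZERO asks for
     * a prezeroed page when one is available.
     */
    vm_page_t nkpg;

    nkpg = vm_page_alloc(NULL, kernel_vm_end >> PDRSHIFT,
        VM_ALLOC_INTERRUPT | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED |
        VM_ALLOC_ZERO);
    if (nkpg == NULL)
            panic("pmap_growkernel: no memory to grow kernel");
    if ((nkpg->flags & PG_ZERO) == 0)
            pmap_zero_page(nkpg);           /* did not come prezeroed */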
MFC after: 1 week
ceiling as a fraction of the kernel map's size rather than an absolute
quantity. Thus, scaling of the kmem map's size will be automatic with
changes to the kernel map's size.
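In vmparam.h terms, the ceiling becomes something like the following
(the 3/5 fraction here is only illustrative):

    #define VM_KMEM_SIZE_MAX        ((VM_MAX_KERNEL_ADDRESS - \
                                        VM_MIN_KERNEL_ADDRESS) * 3 / 5)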
in practice, the error (currently) makes no difference because the computation
performed by KVADDR() hides the error. This revision fixes the error.
Also, eliminate a (now) unused definition.
maximum size of the kmem map can be greater than 4GB, there is little point
in making the kernel virtual address space larger than 6GB.
Tested by: kris@
Now that st_rdev is being automatically generated by the kernel, there
is no need to define static major/minor numbers for the iodev and
memdev. We still need the minor numbers for the memdev, however, to
distinguish between /dev/mem and /dev/kmem.
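For illustration, what remains looks roughly like this (a sketch; the
minor-number macro names and the helper are made up):

    /*
     * Sketch: only the minor numbers survive, to tell /dev/mem and
     * /dev/kmem apart.
     */
    #define MEM_MINOR       0
    #define KMEM_MINOR      1

    static struct cdev *memdev, *kmemdev;

    static void
    mem_devs_create(void)
    {

            memdev = make_dev(&mem_cdevsw, MEM_MINOR, UID_ROOT, GID_KMEM,
                0640, "mem");
            kmemdev = make_dev(&mem_cdevsw, KMEM_MINOR, UID_ROOT, GID_KMEM,
                0640, "kmem");
    }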
Approved by: philip (mentor)
KERNBASE and VM_MIN_KERNEL_ADDRESS are no longer the same, the physical
memory allocated during bootstrap will be offset from the low end of the
kernel's page table.
address space on the amd64 architecture. The amd64 architecture
requires kernel code and global variables to reside in the highest 2GB
of the 64-bit virtual address space. Thus, KERNBASE cannot change.
However, KERNBASE is sometimes used as the start of the kernel virtual
address space. Henceforth, VM_MIN_KERNEL_ADDRESS should be used
instead. Since KERNBASE and VM_MIN_KERNEL_ADDRESS are still the same
address, there should be no visible effect from this change (yet).
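For example, code that means "the start of the kernel virtual address
space" should now read (an illustrative before/after, not a specific
committed hunk):

    vm_offset_t va;

    /* Before: KERNBASE doing double duty as the start of the KVA space. */
    for (va = KERNBASE; va < kernel_vm_end; va += PAGE_SIZE)
            (void)pmap_kextract(va);

    /* After: say what is meant. */
    for (va = VM_MIN_KERNEL_ADDRESS; va < kernel_vm_end; va += PAGE_SIZE)
            (void)pmap_kextract(va);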
That said, kris@ has tested crash dumps under the full patch that
increases the kernel virtual address space on amd64 to 6GB.
Tested by: kris@
before PG_M. This sometimes prevents unnecessary removal of write access
from a PTE. Overall, the net result is fewer demotions and promotion
failures.
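For illustration, the reordered per-PTE test looks roughly like this (a
sketch; the helper name is made up):

    /*
     * Sketch: test the referenced bit first; only a referenced, clean,
     * writable PTE needs to be write protected for the promotion.
     */
    static boolean_t
    promote_check_pte(pt_entry_t *pte)
    {

            if ((*pte & PG_A) == 0)
                    return (FALSE); /* unreferenced: promotion fails anyway */
            if ((*pte & (PG_M | PG_RW)) == PG_RW)
                    atomic_clear_long(pte, PG_RW);  /* write protect */
            return (TRUE);
    }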
page table page. The direction of the traversal can matter if
pmap_promote_pde() has to remove write access (PG_RW) from a PTE that hasn't
been modified (PG_M). In general, if there are two or more such PTEs to
choose among, it is better to write protect the one nearer the high end of
the page table page rather than the low end. This is because most programs
access memory in an ascending direction. The net result of this change is a
sometimes significant reduction in the number of failed promotion attempts
and the number of pages that are write protected by pmap_promote_pde().
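A sketch of the reversed traversal (illustrative only; the helper name
is made up, and the PTP's first PTE would be examined separately):

    /*
     * Sketch: walk the page table page from its high end toward its low
     * end, so that a clean, writable PTE that must be write protected is
     * the one nearest the top, which ascending access patterns touch last.
     */
    static boolean_t
    promote_scan(pt_entry_t *firstpte)
    {
            pt_entry_t *pte, oldpte;

            for (pte = firstpte + NPTEPG - 1; pte > firstpte; pte--) {
                    oldpte = *pte;
                    if ((oldpte & PG_A) == 0)
                            return (FALSE);         /* abort the promotion */
                    if ((oldpte & (PG_M | PG_RW)) == PG_RW)
                            atomic_clear_long(pte, PG_RW);
            }
            return (TRUE);
    }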
promotion within the kernel's address space. Specifically,
pmap_promote_pde() is only called when the page table page (PTP) that
is referenced by the given PDE has a full "use count", i.e., its
wire_count is 512. Although this guarantees for a user address space
that all 512 PTEs in the PTP hold valid mappings, the same is not true
of the kernel's address space. A kernel PTP always has a use count of
512 regardless of the state of the PTEs. Therefore,
pmap_promote_pde() should not assume (or assert) that the first PTE in
the PTP is valid.
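Concretely, the first PTE now has to be tested rather than asserted,
roughly as follows (a sketch; the helper name is made up, and pde is
the PDE being considered for promotion):

    /*
     * Sketch: a kernel PTP always has wire_count 512, so its first PTE
     * may be invalid; test PG_V (along with alignment and PG_A) instead
     * of asserting it.
     */
    static boolean_t
    promote_first_pte_ok(pd_entry_t *pde)
    {
            pt_entry_t *firstpte, newpde;

            firstpte = (pt_entry_t *)PHYS_TO_DMAP(*pde & PG_FRAME);
            newpde = *firstpte;
            if ((newpde & ((PG_FRAME & PDRMASK) | PG_A | PG_V)) !=
                (PG_A | PG_V))
                    return (FALSE);
            return (TRUE);
    }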
parts relied on the now removed NET_NEEDS_GIANT.
Most of I4B has been disconnected from the build
since July 2007 in HEAD/RELENG_7.
This is what was removed:
- configuration in /etc/isdn
- examples
- man pages
- kernel configuration
- sys/i4b (drivers, layers, include files)
- user space tools
- i4b support from ppp
- further documentation
Discussed with: rwatson, re
what Linux does. This is because robust futexes are mostly a
userspace thing which we cannot alter. Two syscalls maintain a
pointer to a userspace list, and when a process exits, a routine
walks this list, waking up processes sleeping on the futexes it
contains.
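For illustration, the exit-time walk has roughly this shape (a sketch
following the Linux robust-list layout; the function and wake-helper
names are made up, and the real code would also bound the walk):

    /* Node and head layouts follow the Linux robust-futex ABI. */
    struct robust_list {
            struct robust_list      *next;
    };

    struct robust_list_head {
            struct robust_list      list;
            long                    futex_offset;
            struct robust_list      *list_op_pending;
    };

    static void
    release_robust_list(struct robust_list_head *uhead)
    {
            struct robust_list_head head;
            struct robust_list node, *unode;

            /* The list lives in userspace, so copy it in node by node. */
            if (copyin(uhead, &head, sizeof(head)) != 0)
                    return;
            for (unode = head.list.next; unode != &uhead->list;) {
                    /* The lock word sits futex_offset bytes past the node. */
                    futex_wake_one((char *)unode + head.futex_offset);
                                                    /* made-up helper */
                    if (copyin(unode, &node, sizeof(node)) != 0)
                            return;
                    unode = node.next;
            }
    }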
Reviewed by: kib (mentor)
MFC after: 1 month