The other option would be to remove it, but I can imagine it may be useful
for the foreseeable future as we fiddle with segments in KSE and thr libraries,
that while many maps can exist and be loaded per tag, bus_dmamap_load() and
friends can only be called on one map at a time from the tag. This is
enforced via the mutex arguments in the tag.
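For illustration, a hedged sketch of that usage pattern, assuming the
bus_dma_tag_create() variant that takes the lockfunc/lockfuncarg pair
(the softc, mutex, buffer and callback names are hypothetical):

/* One tag, serialized by sc_mtx; several maps created from it. */
bus_dma_tag_create(NULL, 1, 0,			/* parent, align, boundary */
    BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,	/* lowaddr, highaddr */
    NULL, NULL,					/* filter, filterarg */
    MAXBSIZE, 1, MAXBSIZE,			/* maxsize, nsegs, maxsegsz */
    0, busdma_lock_mutex, &sc->sc_mtx,		/* flags, lockfunc, arg */
    &sc->sc_tag);
bus_dmamap_create(sc->sc_tag, 0, &sc->sc_map1);
bus_dmamap_create(sc->sc_tag, 0, &sc->sc_map2);
/* Both maps may be loaded, but only one load may be in flight per
 * tag at a time; the tag's mutex enforces this. */
bus_dmamap_load(sc->sc_tag, sc->sc_map1, buf, len, load_cb, sc, 0);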
Fixing this bug means that s/g lists can be arbitrarily long, and
also removes an ugly GNU-ism from the code. No API or ABI change is
incurred. Similar changes for other platforms are forthcoming.
pointer in the PCB of the corresponding thread if it's not the
current thread. This is needed for thr_create() to set up the
newly created thread from the context provided by the application.
This commit completes support for libthr.
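For reference, a hedged userland sketch of what this enables, assuming
the thr_create(2) prototype from <sys/thr.h>; the stack and entry point
are hypothetical:

#include <sys/thr.h>
#include <ucontext.h>

ucontext_t ctx;
long tid;

getcontext(&ctx);
ctx.uc_stack.ss_sp = stack;		/* hypothetical stack buffer */
ctx.uc_stack.ss_size = STACK_SIZE;	/* hypothetical size */
makecontext(&ctx, thread_entry, 0);	/* hypothetical entry point */
thr_create(&ctx, &tid, 0);	/* kernel builds the thread from ctx */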
created not only with UMA_ZONE_VM but also with UMA_ZONE_NOFREE. In
the i386 case in particular, the pmap code would hook a special
page allocation routine that allocated from kernel_map and not kmem_map,
and so when/if the pageout daemon drained the zones, it could actually
push out slabs from the PV ENTRY zone but call UMA's default page_free,
which resulted in pages allocated from kernel_map being freed to
kmem_map; bad. kmem_free() ignores the return value of
vm_map_delete() and just returns. I'm not sure what the exact
repercussions could be, but it doesn't look good.
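For illustration, a hedged sketch of the zone setup this implies (the
variable name and exact arguments in the i386 pmap are assumed, not
quoted from the diff):

pvzone = uma_zcreate("PV ENTRY", sizeof(struct pv_entry),
    NULL, NULL, NULL, NULL, UMA_ALIGN_PTR,
    UMA_ZONE_VM | UMA_ZONE_NOFREE);
/* UMA_ZONE_NOFREE means UMA never frees slabs back, so the default
 * page_free can no longer be handed pages that the hooked allocator
 * took from kernel_map. */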
In the PAE case on i386, we also set up a zone in pmap, so be
conservative for now and make that zone also ZONE_NOFREE and
ZONE_VM. Do this for the pmap zones for the other archs too,
although in some cases it may not be entirely necessary. We'd
rather be safe than sorry at this point.
Perhaps all UMA_ZONE_VM zones should by default be also
UMA_ZONE_NOFREE?
May fix some of silby's crashes on the PV ENTRY zone.
or free an LDT entry. The function has the following prototype:
int i386_set_ldt(int start_sel, union descriptor *descs, int num_sels);
Added Features (a usage sketch follows the list):
o If start_sel is 0, num_sels is 1 and the descriptor pointed to by descs
is legal, then i386_set_ldt() will allocate a descriptor and return its
selector number.
o If num_sels is 1, start_sel is valid, and descs is NULL, then
i386_set_ldt() will free that descriptor (making it available to be
reallocated later).
o If num_sels is 0, start_sel is 0 and descs is NULL then, as a special
case, i386_set_ldt() will free all descriptors.
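A hedged userland sketch of the three conventions above (the descriptor
field values are illustrative, not from this commit):

#include <string.h>
#include <machine/segments.h>
#include <machine/sysarch.h>

union descriptor d;
int sel;

/* Build a flat read/write data segment (values illustrative). */
memset(&d, 0, sizeof(d));
d.sd.sd_lolimit = 0xffff;
d.sd.sd_hilimit = 0xf;
d.sd.sd_type = SDT_MEMRWA;	/* read/write data, accessed */
d.sd.sd_dpl = SEL_UPL;		/* user privilege */
d.sd.sd_p = 1;			/* present */
d.sd.sd_def32 = 1;		/* 32-bit */
d.sd.sd_gran = 1;		/* 4KB granularity */

sel = i386_set_ldt(0, &d, 1);	/* start_sel 0: allocate a slot,
				   returns the selector number */
i386_set_ldt(sel, NULL, 1);	/* descs NULL: free that descriptor */
i386_set_ldt(0, NULL, 0);	/* all zero/NULL: free all descriptors */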
Reviewed by: julian
in sync with the backend machdep code. When cpu_thread_init() does not
have the same idea of KSTACK_PAGES as the thing that created the kstack,
all hell breaks loose.
Bad alc! no cookie! :-)
on a non-blocking pipe in cases where select(2) returns the file
descriptor as ready for write. This in turn causes libc_r, for
one, to busy wait in such cases.
Note: this is a quick performance fix; a more complex fix might be
required if this turns out to have unexpected side effects.
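To illustrate the busy wait, a sketch of the affected userland pattern
(fd, buf and len are hypothetical):

fd_set wfds;
ssize_t n;

FD_ZERO(&wfds);
FD_SET(fd, &wfds);
/* select(2) reports the non-blocking pipe as ready for write... */
select(fd + 1, NULL, &wfds, NULL, NULL);
/* ...but the write made no progress, so callers such as libc_r would
 * loop straight back into select(2) and spin. */
n = write(fd, buf, len);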
Reviewed by: silby
MFC after: 3 days
thread's pid to make debugging easier for people who don't want to have to
use the intended tool for these panics (witness).
Indirectly prodded by: kris
use "\n\" instead of "\" at the end of each source line, and don't use
semicolons). Fixed some older style bugs on the same lines (mainly
English errors in comments).
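For illustration only (not lines from this diff), the rule amounts to:

/* Style bug: bare "\" continuations and semicolons in the string. */
__asm("movl %cr0,%eax;	\
	movl %eax,%cr0");

/* Preferred: terminate each source line with "\n\", no semicolons. */
__asm("\
	movl	%cr0,%eax\n\
	movl	%eax,%cr0\n\
");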
1) The race has to do with zone destruction. From the zone destructor we
would lock the zone, set the working set size to 0, then unlock the zone,
drain it, and then free the structure. In the window between unlocking
the zone (after zeroing the working set size) and re-acquiring the zone
lock in zone_drain, the uma timer routine could have fired off and
changed the working set size to something non-zero, thereby potentially
preventing us from completely freeing slabs before destroying the zone
(and thus leaking them). A sketch of this window follows below.
2) The leak has to do with zone destruction as well. When destroying a
zone we would take care to free all the buckets cached in the zone, but
although we would drain the pcpu cache buckets, we would not free them.
This resulted in leaking a couple of bucket structures (512 bytes each)
per cpu on SMP during zone destruction.
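A hedged sketch of the window in (1), with the field and function names
assumed rather than quoted from uma_core.c:

/* zone destructor (sketch) */
ZONE_LOCK(zone);
zone->uz_wssize = 0;		/* working set size 0: free everything */
ZONE_UNLOCK(zone);
/*
 * RACE WINDOW: the uma timer routine can run here and recompute the
 * working set size to something non-zero, so the drain below stops
 * short of freeing all slabs just before the zone is destroyed.
 */
zone_drain(zone);		/* re-acquires the zone lock internally */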
While I'm here, also silence GCC warnings by turning uma_slab_alloc()
from an inline into a real function. It's too big to be inlined.
Reviewed by: JeffR
a long-standing mistake in the way a portion of a pipe's KVA is
allocated. Specifically, kmem_alloc_pageable() is inappropriate
for use in the "direct" case because it allows a preceding vm map entry
and vm object to be extended to support the new KVA allocation.
However, the direct case KVA allocation should not have a backing
vm object. This is corrected by using kmem_alloc_nofault().
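In sketch form (variable names assumed):

/* Before: a preceding vm map entry and object could be extended to
 * back the new range. */
kva = kmem_alloc_pageable(kernel_map, size);

/* After: KVA with no backing vm object, as the direct case requires. */
kva = kmem_alloc_nofault(kernel_map, size);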
Submitted by: tegge (with the above explanation by me)
tsb_foreach(), 0 signals to terminate the tsb traversal, so when
tsb_foreach() was used in pmap_protect() (which only happens when
the area to be protected is larger than PMAP_TSB_THRESH = 16MB), only
the first tsb entry in the specified range would be protected.
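A hedged sketch of the callback convention (the pmap_protect_tte name
and signature are assumed from context):

/* Callback passed to tsb_foreach() by pmap_protect(). */
static int
pmap_protect_tte(struct pmap *pm, struct pmap *pm2, struct tte *tp,
    vm_offset_t va)
{
	/* ... demote the protections on the tte ... */
	return (1);	/* non-zero continues the traversal;
			   returning 0 here stopped it after one entry */
}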
Reported by: Andrew Belashov <bel@orel.ru>
("UMA Zone") carefully, because it does not have pcpu caches allocated
at all. In the UP case, we did not catch this because one pcpu cache
is always allocated with the zone, but for the MP case, we were getting
bogus stats for this zone.
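A hedged sketch of the check the stats code needs (the flag name and
per-cpu bounds are assumed):

if ((z->uz_flags & UMA_ZFLAG_INTERNAL) == 0) {
	/* Only zones with pcpu caches may have them walked. */
	for (cpu = 0; cpu <= mp_maxid; cpu++) {
		cache = &z->uz_cpu[cpu];
		/* ... accumulate the cache's bucket counts ... */
	}
}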
Tested by: Lukas Ertl <le@univie.ac.at>
the != operator) only when needed.
This change allows me to check out the current version of release/
makefiles only (co -l) to /tmp/release, and use that directory to
build a release (supplying the correct WORLDDIR).
Without this, an attempt to "make release" caused an endless fork bomb
while trying to evaluate FIXCRYPTO, and the only way I could get
away from this on a remote box was to "kill -INT 1", thanks to
tcsh(1) and its internal "kill" command.