without a few patches for the rest of the kernel to allow the image
activator to override exec_copyout_strings and setregs.
None of the syscall argument translation has been done. Possibly, this
translation layer can be shared with any platform that wants to support
running ILP32 binaries on an LP64 host (e.g. sparc32 binaries?).
to be sure that it is always correct, and this was not true for the first
call to cpu_switch. When thread0 resumed later, it ended up calling
pmap_install with a null pmap, which is bad.
_BYTE_ORDER. These are far more useful than their non-underscored
equivalents, because they can be used in restricted namespace environments.
Mark the non-underscored variants as deprecated.
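For example, a header that must stay within the restricted namespace can
test the underscored macros directly (an illustrative snippet, not part of
the change):

    #include <machine/endian.h>

    #if _BYTE_ORDER == _BIG_ENDIAN
    /* Most significant word first; usable even with _POSIX_SOURCE defined. */
    struct word64 { unsigned int hi, lo; };
    #else
    struct word64 { unsigned int lo, hi; };
    #endif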
the program headers. As a result, dumplo was advanced too much,
causing the end of the dump, and most notably the trailing dump
header, to be written beyond the end of the dump medium.
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
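For illustration, the two cases look roughly like this (the driver softc
and lock names here are hypothetical):

    /* Network driver lock: named after the device, typed MTX_NETWORK_LOCK. */
    mtx_init(&sc->sc_mtx, device_get_nameunit(dev), MTX_NETWORK_LOCK, MTX_DEF);

    /* Common case: no separate type, so NULL is passed and the name is used. */
    mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);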
Tested on: i386, alpha, sparc64
In the i386 case, options BOOTP requires options NFS_ROOT as well as
options NFSCLIENT. With *both* the NFS options, a bootpc_init()
prototype is brought in by nfsclient/nfsdiskless.h.
In the ia64 case, it just doesn't work and my change just pushes it
further away from working.
Suggested to be wrong by: bde
they aren't in the usual path of execution for syscalls and traps.
The main complication for this is that we have to set flags to control
ast() everywhere that changes the signal mask.
Avoid locking in userret() in most of the remaining cases.
Submitted by: luoqi (first part only, long ago, reorganized by me)
Reminded by: dillon
in dump byte order (=network byte order). Swap blocksize and dumptime
to avoid extraneous padding on 64-bit architectures. Use CTASSERT
instead of runtime checks to ensure that the header is exactly 512 bytes.
Various style(9) fixes.
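For instance, the size check reduces to a single compile-time assertion
(shown as a sketch; the struct name follows sys/kerneldump.h):

    /* Fails the build, instead of panicking at runtime, if the header grows. */
    CTASSERT(sizeof(struct kerneldumpheader) == 512);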
Reviewed by: phk, bde, mike
emitted the total number of pages it still had to dump prior to
dumping a block of up to 16 pages. For a 128MB region this would
result in 8M printf()s. Barf!
The problem in general is that memory typically has one really
big region and a number of "scattered" smaller regions. Some may
even be just a few pages. The twiddle works best for now, but
it doesn't really give a good progress indication for the large
regions. Those are the cases where you definitely want good PI
to avoid having the user turn into a twiddle :-)
various machdep.c's to being declared in kern_mutex.c.
- Add a new function mutex_init() used to perform early initialization
needed for mutexes such as setting up thread0's contested lock list
and initializing MI mutexes. Change the various MD startup routines
to call this function instead of duplicating all the code themselves.
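A hedged sketch of the shape this takes, using the mtx_init() signature of
the time (the body below is illustrative, not the committed code):

    /* kern/kern_mutex.c: MI early mutex initialization. */
    void
    mutex_init(void)
    {
        /* Set up thread0's contested-lock list so mutexes work at all. */
        LIST_INIT(&thread0.td_contested);

        /* Initialize the MI mutexes. */
        mtx_init(&Giant, "Giant", MTX_DEF | MTX_RECURSE);
        mtx_init(&sched_lock, "sched lock", MTX_SPIN | MTX_RECURSE);
        mtx_lock(&Giant);
    }

Each MD startup routine then simply calls mutex_init() at the appropriate
point instead of open-coding the above.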
Tested on: alpha, i386
constructs an ELF image consisting of the ELF header, a program header
for each memory region, and the memory contents of each region. It
does blocked I/O for the headers, as they are typically smaller than
DEV_BSIZE.
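Roughly, the layout and header sizing look like this (an illustrative
sketch, not the committed code):

    /*
     *  Elf_Ehdr            ELF header
     *  Elf_Phdr[0..n-1]    one program header per memory region
     *  region 0 contents   at phdr[0].p_offset
     *  ...
     *  region n-1 contents at phdr[n-1].p_offset
     */
    size_t hdrsz = sizeof(Elf_Ehdr) + nregions * sizeof(Elf_Phdr);

    /*
     * hdrsz is normally well under DEV_BSIZE, so the headers are staged in
     * a buffer and written out in full blocks rather than one device write
     * per header.
     */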
and cpu_critical_exit() and moves associated critical prototypes into their
own header file, <arch>/<arch>/critical.h, which is only included by the
three MI source files that need it.
Backout and re-apply improperly committed syntactical cleanups made to files
that were still under active development. Backout improperly committed program
structure changes that moved localized declarations to the top of two
procedures. Partially re-apply one of the program structure changes to
move 'mask' into an intermediate block rather than into three separate
sub-blocks to make the code more readable. Re-integrate bug fixes that Jake
made to the sparc64 code.
Note: In general, developers should not gratuitously move declarations out
of sub-blocks. They are where they are for reasons of structure, grouping,
readability, compiler-localizability, and to avoid developer-introduced bugs
similar to several found in recent years in the VFS and VM code.
Reviewed by: jake
general cleanup of the API. The entire API now consists of two functions
similar to the pre-KSE API. The suser() function takes a thread pointer
as its only argument. The td_ucred member of this thread must be valid,
so the only valid thread pointers are curthread and a few kernel threads
such as thread0. The suser_cred() function takes a pointer to a struct
ucred as its first argument and an integer flag as its second argument.
The flag is currently only used for the PRISON_ROOT flag.
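In prototype form, restating the description above (header placement
aside):

    int suser(struct thread *td);               /* td->td_ucred must be valid */
    int suser_cred(struct ucred *cred, int flag);

    /* Typical calls: */
    error = suser(curthread);
    error = suser_cred(td->td_ucred, PRISON_ROOT);  /* jailed root acceptable */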
Discussed on: smp@
bootinfo block in register r8. In locore.s we save the address
in the global variable 'pa_bootinfo'. In machdep.c we compare
this value against the hardwired address, but don't depend on its
validity yet (ie: we still expect the bootinfo block to be at the
hardwired address). After a small amount of time, we'll flip the
switch and depend on the loader to pass us the address. From that
moment on the loader is free to put it anywhere it likes, provided
the machine itself likes it as well.
Add some verbosity to aid in the transition. We emit a message if
the loader didn't pass the address and we also emit a message if
there's no bootinfo block at the hardwired address.
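A hedged sketch of that check in machdep.c (only pa_bootinfo comes from
this change; the other names are placeholders):

    extern u_int64_t pa_bootinfo;   /* set from r8 in locore.s */

    if (pa_bootinfo == 0)
        printf("WARNING: loader did not pass a bootinfo address\n");
    else if (pa_bootinfo != hardwired_pa)   /* not yet depended upon */
        printf("bootinfo passed at %#lx; hardwired address still used\n",
            pa_bootinfo);
    if (!bootinfo_valid(hardwired_pa))      /* hypothetical validity check */
        printf("WARNING: no bootinfo block at the hardwired address\n");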
While in locore.s, reduce the number of redundant serialization
instructions. A srlz.i is a proper superset of a srlz.d and thus
is a valid replacement. Also slightly reorder the movl instructions
to improve bundle density.
back into the calling MD code. The MD code must ensure no races between
checking the astpending flag and returning to usermode.
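The race-free shape of that MD return path is roughly (the flag test and
helper names are assumptions, not the committed code):

    disable_intr();                 /* close the window for the race */
    while (astpending(td)) {        /* hypothetical flag test */
        enable_intr();
        ast(framep);                /* may leave the flag set again */
        disable_intr();
    }
    /* return to user mode; the hardware return path re-enables interrupts */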
Submitted by: peter (ia64 bits)
Tested on: alpha (peter, jeff), i386, ia64 (peter), sparc64
with this flag. Remove the dup_list and dup_ok code from subr_witness. Now
we just check for the flag instead of doing string compares.
Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface. The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.
Approved by: jhb
disablement assumptions in kern_fork.c by adding another API call,
cpu_critical_fork_exit(). Clean up the td_savecrit field by moving it
from MI to MD. Temporarily move cpu_critical*() from <arch>/include/cpufunc.h
to <arch>/<arch>/critical.c (stage-2 will clean this up).
Implement interrupt deferral for i386 that allows interrupts to remain
enabled inside critical sections. This also fixes an IPI interlock bug,
and requires uses of icu_lock to be enclosed in a true interrupt disablement.
This is the stage-1 commit. Stage-2 will occur after stage-1 has stabilized,
and will move cpu_critical*() into its own header file(s) + other things.
This commit may break non-i386 architectures in trivial ways. This should
be temporary.
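Conceptually, the deferral works like this (names are illustrative, not the
committed code):

    void
    critical_enter(void)
    {
        curthread->td_critnest++;   /* note: no cli, interrupts stay enabled */
    }

    void
    critical_exit(void)
    {
        struct thread *td = curthread;

        if (--td->td_critnest == 0 && PCPU_GET(int_pending))
            unpend();               /* replay interrupts deferred below */
    }

    /* In the low-level interrupt stub: */
    if (curthread->td_critnest > 0) {
        /* Inside a critical section: record as pending, handle at exit. */
        PCPU_SET(int_pending, PCPU_GET(int_pending) | (1 << irq));
        return;
    }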
Reviewed by: core
Approved by: core
Instead of caching the ucred reference, just go ahead and eat the
decrement and increment of the refcount. Now that Giant is pushed down
into crfree(), we no longer have to get Giant in the common case. In the
case when we are actually free'ing the ucred, we would normally free it on
the next kernel entry, so the cost there is not new, just in a different
place. This also removes td_cache_ucred from struct thread. This is
still only done #ifdef DIAGNOSTIC.
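Roughly, on each kernel entry and exit (a sketch of the behaviour described;
exact placement and locking are illustrative):

    #ifdef DIAGNOSTIC
        /* kernel entry: take a real reference instead of using a cache */
        PROC_LOCK(p);
        td->td_ucred = p->p_ucred;
        crhold(td->td_ucred);
        PROC_UNLOCK(p);

        /* return to user mode: drop it; crfree() no longer needs Giant */
        crfree(td->td_ucred);
        td->td_ucred = NULL;
    #endif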
Tested on: i386, alpha