PT_ATTACH/PT_DETACH implemented now and fully operational.
PT_{GET|SET}{REGS|FPREGS} implemented now, using code shared with procfs.
PT_{READ|WRITE}_{I|D} now uses code shared with procfs.
ptrace opcodes now fully permission checked, including ownership checks.
doing an operation on the u-area of a swapped-out process should no longer
panic.
running gdb as root works for me now, where it didn't before.
general cleanup..
Note that this has some tightening of permissions/access checks, etc.
Some of these may be going too far.. In particular, the "owner" of the
traced process is enforced. The process that created or attached to
the traced process is now the only one that can "do" things to it.
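For reference, a minimal userland sketch of the attach/peek/detach cycle this
enables, assuming the caller is permitted to trace the target pid (error
handling omitted):

#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

/* Attach to a process we may trace, read one word of its data space,
 * then detach and let it run again. */
void
peek_once(pid_t pid, caddr_t addr)
{
	int status, word;

	ptrace(PT_ATTACH, pid, (caddr_t)0, 0);	/* target stops with SIGSTOP */
	waitpid(pid, &status, 0);		/* wait for the stop to be reported */
	word = ptrace(PT_READ_D, pid, addr, 0);	/* read from the data space */
	(void)word;
	ptrace(PT_DETACH, pid, (caddr_t)1, 0);	/* resume where it stopped */
}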
Speed up for vfs_bio -- addition of a routine, bqrelse(), to greatly diminish
overhead for the merged cache.
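A hedged sketch of the intended usage pattern (illustrative only, not the
committed code): a caller that merely reads a buffer hands it back with
bqrelse() instead of the heavier brelse() path, so a clean, still-valid
buffer is simply requeued.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/vnode.h>
#include <sys/ucred.h>

/* Read a block, look at it, give it back cheaply.  Locking and the error
 * handling of real callers are omitted. */
static int
peek_block(struct vnode *vp, daddr_t lbn, int size)
{
	struct buf *bp;
	int error;

	error = bread(vp, lbn, size, NOCRED, &bp);
	if (error) {
		brelse(bp);	/* failed reads still take the full release path */
		return (error);
	}
	/* ... examine bp->b_data ... */
	bqrelse(bp);		/* clean buffer: just put it back on its queue */
	return (0);
}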
Efficiency improvement for vfs_cluster. It used to do a lot of redundant
calls to cluster_rbuild.
Correct the ordering for vrele of .text and release of credentials.
Use the selective tlb update for 486/586/P6.
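For illustration, what the selective update amounts to (a sketch; FreeBSD's
real inlines live in the i386 headers): the 486 and later can flush a single
page translation with invlpg, where the 386 has to reload %cr3 and throw
away the whole TLB.

/* Full flush: reload %cr3, invalidating the whole TLB. */
static __inline void
invltlb(void)
{
	u_long temp;

	__asm __volatile("movl %%cr3,%0; movl %0,%%cr3" : "=r" (temp) : : "memory");
}

/* Selective flush: invalidate only the entry for one virtual address. */
static __inline void
invlpg(u_long addr)
{
	__asm __volatile("invlpg %0" : : "m" (*(char *)addr) : "memory");
}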
Numerous fixes to the size of objects allocated for files. Additionally,
fixes in the various pagers.
Fixes for proper positioning of vnode_pager_setsize in msdosfs and ext2fs.
Fixes in the swap pager for exhausted resources. The pageout code
will not thrash as readily.
Change the page queue flags (PG_ACTIVE, PG_INACTIVE, PG_FREE, PG_CACHE) into
page queue indices (PQ_ACTIVE, PQ_INACTIVE, PQ_FREE, PQ_CACHE),
thereby improving efficiency of several routines.
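A hedged before/after sketch of why the index form is cheaper (the field and
array names are stand-ins, not necessarily the committed ones): a flag can
only answer "is the page on this queue?", while an index both names the
queue and can subscript the queue table directly.

/* Illustration only. */
struct vpgqueues *
page_queue_of(vm_page_t m)
{
	/*
	 * Flag style needed one test per possible queue:
	 *	if (m->flags & PG_ACTIVE)   return (&vm_page_queue_active);
	 *	if (m->flags & PG_INACTIVE) return (&vm_page_queue_inactive);
	 *	...
	 * With an index the page names its queue directly:
	 */
	return (&vm_page_queues[m->queue]);	/* PQ_ACTIVE, PQ_INACTIVE, ... */
}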
Eliminate even more unnecessary vm_page_protect operations.
Significantly speed up process forks.
Make vm_object_page_clean more efficient, thereby eliminating the pause
that happens every 30 seconds.
Make sequential clustered writes B_ASYNC instead of B_DELWRI even in the
case of filesystems mounted async.
Fix a panic with busy pages when write clustering is done for non-VMIO
buffers.
Add more features to the one remaining printf routine so that it can handle
the job (a usage sketch follows this entry):
+ signed quantity
# alternate format
- left padding
* read width as next arg
n numeric in (argument-specified) default radix
Fix the DDB debugger to use these.
Use vprintf in debug routine in pcvt.
The warnings from gcc may become more wrong and intolerable because
of this.
Warning: I have not checked the entire source for unsupported or
changed constructs, but generally believe that there are only a few.
Suggested by: bde
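A hedged usage sketch of the new conversions with made-up values; the first
four behave like their userland printf counterparts, and the argument-radix
conversion is left out because its calling convention isn't spelled out
above.

/* Kernel printf, illustrative arguments. */
void
show(int delta, u_int flags, const char *name, int width, int value)
{
	printf("delta %+d\n", delta);		/* '+': always emit the sign */
	printf("flags %#x\n", flags);		/* '#': alternate form, 0x prefix */
	printf("name  %-16s|\n", name);		/* '-': pad on the right instead of the left */
	printf("value %*d\n", width, value);	/* '*': field width taken from the next argument */
}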
sysv_ipc.c: add stub functions that either simply return (for the hooks
in kern_fork/kern_exit) or log() a message and call enosys() (for the
syscalls). sysv_ipc.c will become "standard" in conf/files and has
#ifs for all the permutations.
are about to go in. This is to fix the problem with the ibcs2 and linux
lkm's not being able to call the sysv ipc functions unless the build is
modified.
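A hedged sketch of what one of the syscall stubs looks like; the
argument-struct layout and the choice of return path are assumptions here,
not a copy of the committed code.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/syslog.h>
#include <sys/errno.h>

struct shmget_args;	/* the real layout comes from the syscall argument headers */

/* Installed when SYSVSHM is not configured: complain and fail the way an
 * unimplemented syscall would. */
int
shmget(struct proc *p, struct shmget_args *uap, int *retval)
{
	log(LOG_ERR, "shmget called, but SYSV shared memory is not configured\n");
	return (ENOSYS);	/* what an enosys()-style helper ends up returning */
}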
the range [210:260] by sweeping the problem under the rug. This change
has the following effects:
1) A new MIB variable in the kern branch is defined to allow modification
of the socket buffer layer's ``wastage factor'' (which determines how
much unused-but-allocated space in mbufs and mbuf clusters is allowed
in a socket buffer).
2) The default value of the wastage factor is changed from 2 to 8.
The original value was chosen when MINCLSIZE was 7*MLEN (!), and is not
appropriate for an environment where MINCLSIZE is much less.
The real solution to this problem is to scrap both mbufs and sockbufs
and completely redesign the buffering mechanism used at both levels.
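A hedged sketch of how such a knob is typically exported; the variable and
MIB names below are placeholders, since the log doesn't give the actual
ones.

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

/* Writable integer under the kern branch: how much allocated-but-unused
 * mbuf/cluster space a socket buffer may carry. */
static int sb_wastage_factor = 8;		/* default raised from 2 to 8 */

SYSCTL_INT(_kern, OID_AUTO, sockbuf_waste_factor, CTLFLAG_RW,
    &sb_wastage_factor, 0, "socket buffer wastage factor");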
sysctl handler (ouch!)
Add a "const" qualifier to the source of the copyin() and copyout()
functions - the other const warning in kern_sysctl.c was silenced when
copyout was declared as having a const source.. (which it is)
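The resulting shape of the prototypes, sketched (the exact length type used
at the time may differ):

/* copyin() reads from user space, copyout() reads from kernel space, so
 * each gets const on its source argument. */
int	copyin(const void *uaddr, void *kaddr, size_t len);
int	copyout(const void *kaddr, void *uaddr, size_t len);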
just like on SVR4.
This has no effect on any current programs in our source, but makes
the use of SVR4 code a little easier. There is no code or implementation
cost in the kernel.. This two-line change merely sets the modes on the ends
of the pipes to be bidirectional. There are no other changes.
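A hedged illustration of the visible effect: both ends can now be written
and read, so the second write/read pair below works where it previously
would have failed.

#include <unistd.h>

int
main(void)
{
	int fds[2];
	char c;

	pipe(fds);
	write(fds[1], "a", 1);	/* traditional direction: fds[1] -> fds[0] */
	read(fds[0], &c, 1);
	write(fds[0], "b", 1);	/* the reverse direction now works too */
	read(fds[1], &c, 1);
	return (0);
}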
looking at a high resolution clock for each of the following events:
function call, function return, interrupt entry, interrupt exit,
and interesting branches. The differences between the times of
these events are added at appropriate places in an ordinary histogram
(as if very fast statistical profiling sampled the pc at those
places) so that ordinary gprof can be used to analyze the times.
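Sketched with invented helper names, the bookkeeping that paragraph
describes: each interesting event charges the time elapsed since the
previous event to the histogram bucket for the current pc.

#include <sys/types.h>

extern u_long	read_highres_clock(void);	/* made-up name for the timer read */
extern u_long	histogram[];			/* gprof-style count array */
extern int	pc_to_bucket(u_long pc);	/* pc -> histogram index */

static u_long	prev_time;

static void
charge_time(u_long pc)
{
	u_long now;

	now = read_highres_clock();
	histogram[pc_to_bucket(pc)] += now - prev_time;	/* gprof reads this as "time at pc" */
	prev_time = now;
}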
gmon.h:
Histogram counters need to be 4 bytes for microsecond resolutions.
They will need to be larger for the 586 clock.
The comments were vax-centric and wrong even on vaxes. Does anyone
disagree?
gprof4.c:
The standard gprof should support counters of all integral sizes
and the size of the counter should be in the gmon header. This
hack will do until then. (Use gprof4 -u to examine the results
of non-statistical profiling.)
config/*:
Non-statistical profiling is configured with `config -pp'.
`config -p' still gives ordinary profiling.
kgmon/*:
Non-statistical profiling is enabled with `kgmon -B'. `kgmon -b'
still enables ordinary profiling (and disables non-statistical
profiling) if non-statistical profiling is configured.
int shmget(key_t key, int size, int shmflg);
If the 'key' already exists in the system and 'shmflg' is set to
'(IPC_CREAT|IPC_EXCL)', then shmget() must return the error 'EEXIST'.
Submitted by: m_tanaka@pa.yokogawa.co.jp (Mihoko Tanaka)
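A hedged illustration of the required behaviour: the second call names an
existing key with IPC_CREAT|IPC_EXCL, so it must fail with EEXIST rather
than return the existing segment.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
	key_t key = 0x1234;	/* arbitrary example key */
	int id, id2;

	id = shmget(key, 4096, IPC_CREAT | 0600);		/* creates the segment */
	id2 = shmget(key, 4096, IPC_CREAT | IPC_EXCL | 0600);	/* key already exists */
	if (id2 == -1 && errno == EEXIST)
		printf("EEXIST, as required\n");
	return (0);
}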
See the comments for addupc_intr() and the NetBSD implementation.
We use dummy versions of fuswintr() and suswintr(), so addupc_intr()
always pushes the work to trap() (this is inefficient), and trap()
calls the special i386 function addupc() instead of addupc_task().
addupc() is more efficient than addupc_intr(), so some of the lost
efficiency is recovered. However, addupc() may be broken on plain
i386's since it doesn't check for write permission like copyout().
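A hedged sketch of what the dummy routines amount to (argument types are
assumptions): by always reporting a fault they force addupc_intr() to defer
the profile-buffer update to trap() context.

#include <sys/types.h>

/* Never touch user memory at interrupt level; always "fail" so the update
 * is retried from trap(). */
int
fuswintr(caddr_t addr)
{
	return (-1);		/* pretend the fetch faulted */
}

int
suswintr(caddr_t addr, int word)
{
	return (-1);		/* pretend the store faulted */
}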
redistribute a few last routines to better places and shoot the file.
I haven't actually 'deleted' the file yet, to give people time to
have done a config.. I.e. they are likely to have done one in a week or so,
so I'll remove it then..
it's now empty.
makes the question of a USL copyright rather moot.