to the same version numbers as 2.2.x.
The problems with the way things were:
- if you took a 2.2.x binary, it either wouldn't run on -current, or,
if you still had the old -current version of libtermcap.so.2.1, it
could potentially be a security problem.
- the alternative is to start a compat22 dist tree for -current with a
uuencoded binary. This makefile hack costs less.
libtermcap.so.3.0 is provided via /usr/lib/compat to avoid transition
problems.
aio_read or aio_write could return the pid of the new thread. This is
due to the way that return values from system calls are now passed by
side-effect in the proc structure. This commit fixes that problem for
aio_read and aio_write.
Remove a lot of overly verbose debugging statements.
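For context, a minimal sketch of that side-effect convention, assuming
an illustrative p_retval slot in struct proc and an abbreviated entry
point; the real code differs in detail:
/*
 * Hypothetical sketch: system calls report results by storing into
 * the proc structure rather than by a direct return value, so an AIO
 * entry point that internally spawns a daemon thread must not leave
 * the new thread's pid sitting in the caller's return slot.
 */
static int
aio_read_sketch(struct proc *p)
{
	int error;

	error = aio_newproc();	/* may write the child pid into
				 * p->p_retval[0], as fork() does */
	if (error)
		return (error);

	p->p_retval[0] = 0;	/* overwrite the leaked pid */
	return (0);
}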
struct aioproclist {
	int	aioprocflags;			/* AIO proc flags */
	TAILQ_ENTRY(aioproclist) list;		/* List of processes */
	struct	proc *aioproc;			/* The AIO thread */
	TAILQ_HEAD(,aiocblist) jobtorun;	/* suggested job to run */
};
/*
 * Data structure for lio signal management.
 */
struct aio_liojob {
	int	lioj_flags;
	int	lioj_buffer_count;
	int	lioj_buffer_finished_count;
	int	lioj_queue_count;
	int	lioj_queue_finished_count;
	struct	sigevent lioj_signal;	/* signal on all I/O done */
	TAILQ_ENTRY(aio_liojob) lioj_list;
	struct	kaioinfo *lioj_ki;
};
#define LIOJ_SIGNAL 0x1 /* signal on all done (lio) */
#define LIOJ_SIGNAL_POSTED 0x2 /* signal has been posted */
/*
 * Per-process AIO data structure.
 */
struct kaioinfo {
	int	kaio_flags;		/* per process kaio flags */
	int	kaio_maxactive_count;	/* maximum number of AIOs */
	int	kaio_active_count;	/* number of currently used AIOs */
	int	kaio_qallowed_count;	/* maximum size of AIO queue */
	int	kaio_queue_count;	/* size of AIO queue */
	int	kaio_ballowed_count;	/* maximum number of buffers */
	int	kaio_queue_finished_count; /* number of daemon jobs finished */
	int	kaio_buffer_count;	/* number of physio buffers */
	int	kaio_buffer_finished_count; /* count of I/O done */
	struct	proc *kaio_p;		/* process that uses this kaio block */
	TAILQ_HEAD(,aio_liojob) kaio_liojoblist; /* list of lio jobs */
	TAILQ_HEAD(,aiocblist) kaio_jobqueue;	/* job queue for process */
	TAILQ_HEAD(,aiocblist) kaio_jobdone;	/* done queue for process */
	TAILQ_HEAD(,aiocblist) kaio_bufqueue;	/* buffer job queue for process */
	TAILQ_HEAD(,aiocblist) kaio_bufdone;	/* buffer done queue for process */
};
#define KAIO_RUNDOWN	0x1	/* process is being run down */
#define KAIO_WAKEUP	0x2	/* wakeup process when there is a significant event */

TAILQ_HEAD(,aioproclist) aio_freeproc, aio_activeproc;
TAILQ_HEAD(,aiocblist) aio_jobs;	/* Async job list */
TAILQ_HEAD(,aiocblist) aio_bufjobs;	/* Phys I/O job list */
TAILQ_HEAD(,aiocblist) aio_freejobs;	/* Pool of free jobs */
static void	aio_init_aioinfo(struct proc *p);
static void	aio_onceonly(void *);
static int	aio_free_entry(struct aiocblist *aiocbe);
static void	aio_process(struct aiocblist *aiocbe);
static int	aio_newproc(void);
static int	aio_aqueue(struct proc *p, struct aiocb *job, int type);
static void	aio_physwakeup(struct buf *bp);
static int	aio_fphysio(struct proc *p, struct aiocblist *aiocbe, int type);
static int	aio_qphysio(struct proc *p, struct aiocblist *iocb);
static void	aio_daemon(void *uproc);
SYSINIT(aio, SI_SUB_VFS, SI_ORDER_ANY, aio_onceonly, NULL);
static vm_zone_t kaio_zone = 0, aiop_zone = 0,
	aiocb_zone = 0, aiol_zone = 0, aiolio_zone = 0;
/*
* Single AIOD vmspace shared amongst all of them
*/
static struct vmspace *aiovmspace = NULL;
/*
* Startup initialization
*/
void
aio_onceonly(void *na)
{
	TAILQ_INIT(&aio_freeproc);
	TAILQ_INIT(&aio_activeproc);
	TAILQ_INIT(&aio_jobs);
	TAILQ_INIT(&aio_bufjobs);
	TAILQ_INIT(&aio_freejobs);
	kaio_zone = zinit("AIO", sizeof (struct kaioinfo), 0, 0, 1);
	aiop_zone = zinit("AIOP", sizeof (struct aioproclist), 0, 0, 1);
	aiocb_zone = zinit("AIOCB", sizeof (struct aiocblist), 0, 0, 1);
	aiol_zone = zinit("AIOL", AIO_LISTIO_MAX * sizeof (int), 0, 0, 1);
	aiolio_zone = zinit("AIOLIO",
	    AIO_LISTIO_MAX * sizeof (struct aio_liojob), 0, 0, 1);
	aiod_timeout = AIOD_TIMEOUT_DEFAULT;
	aiod_lifetime = AIOD_LIFETIME_DEFAULT;
	jobrefid = 1;
}
/*
* Init the per-process aioinfo structure.
* The aioinfo limits are set per-process for user limit (resource) management.
*/
void
aio_init_aioinfo(struct proc *p)
{
	struct kaioinfo *ki;

	if (p->p_aioinfo == NULL) {
		ki = zalloc(kaio_zone);
		p->p_aioinfo = ki;
		/* (remaining limit and queue initialization elided) */
	}
}
Break each ruleset into identified sections (called groups).
Note which groups can be reordered; each group accepts and returns
the same strings, as much as possible.
Reactivate Paul Vixie's RBL (in check_mail).
Add rules to limit mail relaying to a list of hosts and domains
in the R class (check_rcpt; not active on hub.freebsd.org).
Submitted by: jmb
Make isa_dmacascade, isa_dmastart, isa_dmadone, and find_isadev MUCH
easier to find by starting them at the beginning of the line...
Remove braces inside of #ifdef RESOURCE_CHECK... found by % in vi...
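For reference, a sketch of the layout convention being applied: the
return type goes on its own line and the function name starts at
column zero, so a search for ^isa_dmastart (or /^isa_dmastart in vi)
lands directly on the definition. The parameter list is illustrative:
void
isa_dmastart(int flags, caddr_t addr, u_int nbytes, int chan)
{
	/* ...body unchanged; only the definition layout matters here... */
}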
support was missing in the previous version of the AIO code. More
tunables have been added, and very efficient support for VCHR files
has been added. Kernel threads are not used for VCHR files; all work
for such files is done in the context of the requesting process. Some
attempt has been made to charge the requesting process for resource
utilization, but more work is needed. aio_fsync is still missing (the
original fsync system call can be used for now). aio_cancel is
essentially a no-op, but that is okay per POSIX. More aio_cancel
functionality can be added later, if it is found to be needed.
The functions implemented include:
aio_read, aio_write, lio_listio, aio_error, aio_return,
aio_cancel, aio_suspend.
The code has been implemented to support the POSIX 1003.1b spec
(formerly known as POSIX 1003.4) features of the above. The async I/O
features are truly async, with the VCHR mode of operation being
essentially the same as physio (for appropriate files) for maximum
efficiency. This code also supports the signal capability, is highly
tunable, allowing management of resource usage, and has been written
to allow a per-process usage quota.
Both the O'Reilly POSIX.4 book and the actual POSIX 1003.1b document
were the reference specs used. Any file descriptor can be used with
these new system calls. I know of no exceptions where these system
calls will not work. (TTYs will also probably work.)
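A minimal userland sketch of the interfaces listed above, using only
the standard <aio.h> API; error handling is abbreviated and the file
name is just an example:
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	static char buf[4096];
	struct aiocb cb;
	const struct aiocb *list[1];
	ssize_t n;
	int err, fd;

	if ((fd = open("/etc/motd", O_RDONLY)) < 0) {
		perror("open");
		return (1);
	}

	memset(&cb, 0, sizeof(cb));
	cb.aio_fildes = fd;
	cb.aio_buf = buf;
	cb.aio_nbytes = sizeof(buf);
	cb.aio_offset = 0;

	if (aio_read(&cb) != 0) {		/* queue the request */
		perror("aio_read");
		return (1);
	}

	list[0] = &cb;
	aio_suspend(list, 1, NULL);	/* block until it completes */

	err = aio_error(&cb);		/* final status: 0 on success */
	n = aio_return(&cb);		/* reap the result exactly once */
	if (err != 0)
		fprintf(stderr, "aio_read: %s\n", strerror(err));
	else
		printf("read %ld bytes asynchronously\n", (long)n);

	close(fd);
	return (0);
}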
a similar way to libc. Sigh. This is not pretty, but it seems to work.
Something like this was needed in preference to bogusly bumping the
major library number here.
The syscall(SYS_issetugid) idea is originally Bruce's.
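A sketch of that idea: instead of depending on the libc issetugid()
wrapper (and a bogus major bump), the linker can issue the system
call directly. SYS_issetugid is the real syscall number from
<sys/syscall.h>; the helper names here are hypothetical:
#include <sys/syscall.h>
#include <stdlib.h>
#include <unistd.h>

/* Ask the kernel directly whether this process is tainted by
 * set-user-ID/set-group-ID execution, bypassing the libc wrapper. */
static int
rtld_issetugid(void)
{
	return ((int)syscall(SYS_issetugid));
}

/* Typical use: ignore environment overrides in tainted processes. */
static const char *
rtld_getenv(const char *name)
{
	return (rtld_issetugid() ? NULL : getenv(name));
}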
things so that it uses the same malloc as is used by the program
being executed. This has several advantages, the big one being
that you can now debug core dumps from dynamically linked programs
and get useful information out of them. Until now, that didn't
work. The internal malloc package placed the tables describing
the loaded shared libraries in a mapped region of high memory that
was not written to core files. Thus the debugger had no way of
determining what was loaded where in memory. Now that the dynamic
linker uses the application's malloc package (normally, but not
necessarily, the system malloc), its tables end up in the regular
heap area where they will be included in core dumps. The debugger
now works very well indeed, thank you very much.
Also ...
Bring the program a little closer to conformance with style(9).
There is still a long way to go.
Add minimal const correctness changes to get rid of compiler warnings
caused by the recent const changes in <dlfcn.h> and <link.h>.
Improve performance by eliminating redundant calculations of symbols'
hash values.
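The hash optimization amounts to computing the standard SysV ABI ELF
hash of a symbol name once and passing the value down, rather than
recomputing it in every shared object searched. A sketch, where
obj_lookup stands in for the real per-object lookup routine:
/* Standard SysV ABI ELF hash function. */
static unsigned long
elf_hash(const char *name)
{
	const unsigned char *p = (const unsigned char *)name;
	unsigned long h = 0, g;

	while (*p != '\0') {
		h = (h << 4) + *p++;
		if ((g = h & 0xf0000000) != 0)
			h ^= g >> 24;
		h &= ~g;
	}
	return (h);
}

struct obj;	/* opaque shared-object handle (hypothetical) */
extern const void *obj_lookup(struct obj *, const char *, unsigned long);

/* Hash once, then reuse the value for every object on the list. */
static const void *
symlook_sketch(const char *name, struct obj **objs, int nobjs)
{
	unsigned long hash = elf_hash(name);
	const void *sym;
	int i;

	for (i = 0; i < nobjs; i++)
		if ((sym = obj_lookup(objs[i], name, hash)) != NULL)
			return (sym);
	return (NULL);
}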
absence of full debugging symbols for the kernel, but broke it for
application programs. This commit disables that change except when
kernel debugging mode is in effect.
This needs to go into -2.2 as well, after a suitable burn-in period.