- 64-bit long type safe (wire protocols specified in explicitly sized types)
- Support systems that don't do unaligned accesses
- Support for explicit int16 and int32 sizes in xdr
Obtained from: a diff of FreeBSD vs. OpenBSD/NetBSD rpc code.
suffering a bad case of neglect for the last few years.
- Add full prototypes, including for function pointers.
- Make the wire protocols 64-bit type safe, e.g.: 32-bit quantities are
int32_t, not long (see the sketch below). The original RPC code was
implemented when an int could be 16 bits.
Obtained from: a diff of FreeBSD vs. OpenBSD/NetBSD rpc code.
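A minimal sketch of what the type cleanup means in practice. The
structure and routine below (demo_reply, xdr_demo_reply) are made up
for illustration; the point is the fixed-width field types and the
explicitly sized XDR primitives (xdr_int32_t()/xdr_u_int32_t()) in
place of xdr_long():

    #include <rpc/rpc.h>        /* bool_t, TRUE/FALSE, XDR */

    struct demo_reply {             /* hypothetical wire structure */
            u_int32_t dr_status;    /* always 32 bits on the wire */
            int32_t   dr_count;     /* never "long", which is 64 bits on LP64 */
    };

    static bool_t
    xdr_demo_reply(XDR *xdrs, struct demo_reply *rp)
    {
            /* Explicitly sized primitives instead of xdr_long(), so the
             * encoding is identical on 32-bit and 64-bit hosts. */
            if (!xdr_u_int32_t(xdrs, &rp->dr_status))
                    return (FALSE);
            if (!xdr_int32_t(xdrs, &rp->dr_count))
                    return (FALSE);
            return (TRUE);
    }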
RELENG_2_2!
This is part #2 of the previous commit to src/lib/libc/net to contain the
potential damage.
This provides stubs so that binaries linked in 2.2 will run on 3.0
useradd -m or useradd -D -b are used.
2) Hyphen allowed in username if it is not the first character. Fix a
trivial bug in an error format string.
3) /etc/skeykeys updating changed to do an 'in-place' update, commenting
out a username rather than removing it completely.
don't just hard code them into the Makefile.
(This is the optional stuff to use Perl scripts as a vi scripting language.
E.g., to load a sample script, type: :perl do 'wc.pl';
this loads /usr/share/vi/perl/wc.pl to add the "wc" command. Then one can
do this: :perl wc
Yes, this is a trivial example. There are more useful ones, e.g. 'make'
output parsing along the lines of Emacs's "compile" mode. The Tcl extension
is similar and enabled by default since we ship with Tcl.)
needed, as he discovered when he tried to run vi. :-]
These files used to be stubs which used #ifdef PIC to decide whether to
use the real dlopen() version or the stub version from the src/contrib/tcl
sources. Now, with our stubs gone, the .PATH directive causes them to
be compiled directly from src/contrib/tcl/{unix,generic}. You might need
to rebuild your depend rules, though, as they may have stale paths.
Also, this is a generated file. This should not have been edited here.
also implies VM_PROT_EXEC. We support it that way for now,
since the break system call by default gives VM_PROT_ALL. Now
we have a better chance of coalescing map entries when mixing
mmap/break-type operations (see the sketch below). This was
contributing to excessive numbers of map entries on the Modula-3
runtime system. The problem is still not "solved", but the
situation makes more sense.
Eventually, when we work on architectures where VM_PROT_READ
is orthogonal to VM_PROT_EXEC, we will have to revisit this
issue carefully (esp. regarding security issues).
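A small userland sketch (not from the commit) of the mmap/break mix
described above. With VM_PROT_READ implying VM_PROT_EXEC, the sbrk()ed
and mmap()ed regions end up with matching protections, giving the kernel
a chance to coalesce adjacent map entries rather than keeping one per
operation:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
            /* Grow the data segment the traditional way (VM_PROT_ALL). */
            (void)sbrk(64 * 1024);

            /* Anonymous mmap() with PROT_READ|PROT_WRITE: since PROT_READ is
             * treated as implying PROT_EXEC, the protection matches the break
             * region, so adjacent map entries may be coalesced. */
            (void)mmap(NULL, 64 * 1024, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);

            return (0);
    }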
maps. Additionally, eliminate the map->hint distortion
associated with useracc. That may or may not be the "right"
thing to do -- but time will tell.
Submitted by: Alan Cox <alc@cs.rice.edu> (in part)
also add the missing declaration of forkpty() to libutil.h (a usage
sketch follows below).
BTW, the calling interface for login(3) is crude. Some better
abstraction is needed, perhaps similar to logwtmp(3).
2.2 candidate, but I'll wait for the spelling police first. :)
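A minimal usage sketch for the now-declared forkpty(3), assuming
<libutil.h> and -lutil; error handling is trimmed and the spawned
command ("date") is arbitrary:

    #include <sys/types.h>
    #include <libutil.h>        /* forkpty() declaration lives here */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            int master;
            pid_t pid;
            char buf[128];
            ssize_t n;

            pid = forkpty(&master, NULL, NULL, NULL);  /* no name/termios/winsize */
            if (pid == -1) {
                    perror("forkpty");
                    return (1);
            }
            if (pid == 0) {
                    /* Child: stdin/stdout/stderr are the slave side of the pty. */
                    execlp("date", "date", (char *)NULL);
                    _exit(1);
            }
            /* Parent: copy the child's output from the master side. */
            while ((n = read(master, buf, sizeof(buf))) > 0)
                    (void)write(STDOUT_FILENO, buf, (size_t)n);
            return (0);
    }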
in a different location. (Sigh, the initial import gratuitously
changed the directory structure here, rendering the vendor branch a
little useless.)
Note: the French message catalog needs updating. For now, I've simply
appended the English messages. NB: French message #123 was already
wrong; whoever is going to deal with this, please correct it.
First, read-ahead clustering is now done on a per-file-descriptor basis
rather than a per-vnode basis, which allows multiple processes reading
the same file to take advantage of read-ahead clustering (see the sketch
below). Second, large reads previously still went through the ramp-up
algorithm; that was bogus, and now we read the entire "chunk" off the
disk in one operation. The read-ahead clustering algorithm should also
use less CPU than the previous one (I hope :-)).
NOTE: LKMs MUST BE REBUILT!!!
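A hypothetical sketch (not the actual kernel code) of the per-descriptor
idea: the sequential-read counter lives in per-open-file state rather
than in the vnode, so each process reading the file ramps up its own
read-ahead window. The struct and function names are made up:

    /* Stand-in for the per-open-file state (the kernel's struct file). */
    struct openfile {
            long    of_nextoff;     /* offset we expect the next read at */
            int     of_seqcount;    /* how sequential this descriptor has been */
    };

    /* Called on each read(); returns a read-ahead amount in blocks. */
    static int
    readahead_blocks(struct openfile *fp, long offset, int nblocks)
    {
            if (offset == fp->of_nextoff) {
                    /* Sequential access: ramp the window up, within a cap. */
                    if (fp->of_seqcount < 127)
                            fp->of_seqcount++;
            } else {
                    /* Random access: start the ramp-up over. */
                    fp->of_seqcount = 0;
            }
            fp->of_nextoff = offset + (long)nblocks;
            return (fp->of_seqcount);
    }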