case has very little to do with the output size being larger than
INT_MAX.
2. The new #include of <limits.h> was disordered.
3. The new declaration of `on' was disordered (integer types go together).
4. Testing an unsigned value for > 0 was fishy.
Submitted by: bde
instead of Single Unix; thanks, Bruce, for explaining, I did not
realize a standards war was there.
But now, fix the n == 0 case to not return an error and fix the check
for too big n.
Things left to do: check for overflow in arguments.
The final word is Bruce's quote:
C9x specifies the BSD4.4-Lite behaviour:
[#3] ... Thus, the
null-terminated output has been completely written if and
only if the returned value is less than n.
It means that if we do not have any null-terminated output, as for
n == 0, we can't return a value less than n, so we are forced to return
a value equal to n, i.e. 0 (a small sketch of checking the returned
value against n follows below).
The next good thing is glibc compatibility, of course.
2) Do the check for too big n in a machine-independent way.
3) Minor optimization assuming EOF is < 0.
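A minimal sketch of the truncation check implied by the C9x wording
quoted above (the buffer size and format here are arbitrary
illustration values):

    #include <stdio.h>

    int
    main(void)
    {
        char buf[16];
        int r = snprintf(buf, sizeof buf, "pid=%d", 12345);

        /* Per the quoted wording: the null-terminated output has been
           completely written if and only if the returned value is less
           than n (here, sizeof buf). */
        if (r >= 0 && (size_t)r < sizeof buf)
            printf("complete: \"%s\"\n", buf);
        else
            printf("truncated or error (returned %d)\n", r);
        return (0);
    }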
The main argument is that it is impossible to determine whether %n was
evaluated or not when snprintf returns 0, because that can happen for
both n == 0 and n == 1.
Although EOF here is a good indication of the end of processing, if n
is decreased in the loop...
Since it is already assumed in many places that EOF *is* negative, e.g.
from the Single Unix specs for snprintf
"return ... a negative value if an output error was encountered"
this does not make the situation worse.
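A tiny illustration of that ambiguity (note that a present-day C99
snprintf returns the would-be length, 1, for both calls, so this only
demonstrates the behaviour being argued about here):

    #include <stdio.h>

    int
    main(void)
    {
        char buf[4];
        int count = -1;

        /* Under the behaviour discussed above, both calls return 0, so
           the return value alone cannot tell whether %n was evaluated
           and `count' updated. */
        int r1 = snprintf(buf, 1, "a%n", &count);   /* only the NUL fits */
        int r0 = snprintf(buf, 0, "a%n", &count);   /* nothing written */

        printf("r1=%d r0=%d count=%d\n", r1, r0, count);
        return (0);
    }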
to pass not more than the buffer size to the %n argument; the old
variant always assumed an infinite buffer.
%n is for actually transmitted characters, not for planned ones.
"return the number of bytes needed, rather the number used"
According to Single Unix specs:
Upon successful completion, these functions return the number of bytes
transmitted excluding the terminating null
1) if the buffer size is smaller than the arguments' size, return the
buffer size, not the arguments' size as before.
2) if the buffer size is 0, return 0, not EOF as before.
(now it is compatible with the Linux and Apache implementations too).
NOTE: the Single Unix specs say:
If the value of n {buffer size} is zero on a call to snprintf(), an
unspecified value less than 1 is returned.
It means we can't return EOF, since EOF can in general take *any*
value, not necessarily one < 1.  A better variant would be to return -1
(it is less than 1 and distinct from the n == 1 case), but the -1 value
is already occupied by EOF in our implementation, so we couldn't
distinguish a true I/O error in that case.  So 0 is the only possible
value here that still conforms to the Single Unix specs.
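A small sketch exercising the two cases above; on a present-day C99
libc both calls return the full formatted length (6) rather than the
values described in this entry, so the printed numbers depend on the
implementation:

    #include <stdio.h>

    int
    main(void)
    {
        char buf[4];
        int r;

        /* Output longer than the buffer: per this entry the return
           value is based on the buffer size, not the arguments' size. */
        r = snprintf(buf, sizeof buf, "%s", "abcdef");
        printf("truncated case: returned %d, buf=\"%s\"\n", r, buf);

        /* Zero-size buffer: per this entry the call returns 0, not EOF. */
        r = snprintf(buf, 0, "%s", "abcdef");
        printf("zero-size case: returned %d\n", r);

        return (0);
    }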
on systems where long doubles are just doubles. FreeBSD hasn't
been such a system since it started using gcc-2.5 many years ago.
The fix is of low quality. It loses precision.
scanf() of long doubles doesn't seem to be used much, but gdb-4.16
uses %Lg format in its expression parser if it thinks that the
system supports printf'ing of long doubles. The symptom was that
floating point literals were usually interpreted to be 0.0.
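For reference, a minimal example of the kind of use gdb makes of the L
modifier (the literal and variable names are made up):

    #include <stdio.h>

    int
    main(void)
    {
        long double ld = 0.0L;

        /* Before the fix described above, literals scanned this way
           tended to come back as 0.0. */
        if (sscanf("2.5", "%Lg", &ld) == 1)
            printf("parsed %Lg\n", ld);
        return (0);
    }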
and forgot what I was trying to do originally and accidentally zapped
a feature. :-] The problem is that we are converting a counted buffer in
a malloc pool into a null terminated C-style string. I was calling realloc
originally to shrink the buffer to the desired size. If realloc failed, we
still returned the valid buffer - the only thing wrong was it was a tad
too large. The previous commit disabled this.
This commit now handles the three cases:
1: the buffer is exactly right for the null byte to terminate the
string (we don't call realloc).
2: it's got h.left = 0, so we must expand it to make room. If realloc
fails here, it's fatal.
3: if there's too much room, we realloc to shrink it - a failed realloc
is not fatal, we use the original buffer which is still valid.
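A rough sketch of those three cases; the handle type and field names
(buf, len, left) are hypothetical stand-ins for the real ones:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct hbuf {
        char   *buf;            /* counted buffer from the malloc pool */
        size_t  len;            /* bytes used */
        size_t  left;           /* bytes still free at the end */
    };

    char *
    terminate(struct hbuf *h)
    {
        char *p;

        if (h->left == 0) {
            /* Case 2: no room for the NUL, must expand; a failure here
               is fatal. */
            if ((p = realloc(h->buf, h->len + 1)) == NULL)
                return (NULL);
            h->buf = p;
        } else if (h->left > 1) {
            /* Case 3: too much room, try to shrink; a failed realloc
               is not fatal, the original (merely oversized) buffer is
               still valid. */
            if ((p = realloc(h->buf, h->len + 1)) != NULL)
                h->buf = p;
        }
        /* Case 1 (and after the reallocs above): exactly enough room
           left for the terminating NUL. */
        h->buf[h->len] = '\0';
        return (h->buf);
    }

    int
    main(void)
    {
        struct hbuf h;

        /* A 16-byte pool holding 5 counted bytes: case 3 (too much room). */
        if ((h.buf = malloc(16)) == NULL)
            return (1);
        memcpy(h.buf, "hello", 5);
        h.len = 5;
        h.left = 16 - 5;

        printf("%s\n", terminate(&h));
        free(h.buf);
        return (0);
    }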
so that all these makefiles can be used to build libc_r too.
Added .if ${LIB} == "c" tests to restrict man page builds to libc
to avoid needlessly building them with libc_r too.
Split libc Makefile into Makefile and Makefile.inc to allow the
libc_r Makefile to include Makefile.inc too.
- 0 was returned instead of EOF when an input failure occurred while
skipping white-space after 0 assignments. This fixes PR2606. The
diagnosis in PR2606 is wrong.
- EOF was returned instead of 0 when an input failure occurred after
zero assignments and nonzero suppressed assignments.
- EOF was spelled -1.
This should be in 2.2.
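A tiny illustration of the two situations being distinguished; per the
standard, an input failure before any conversion yields EOF, while a
matching failure yields 0:

    #include <stdio.h>

    int
    main(void)
    {
        int x;

        /* Input failure while skipping white-space, zero assignments:
           should report EOF. */
        int r1 = sscanf("   ", "%d", &x);

        /* Matching failure after zero assignments: should report 0. */
        int r2 = sscanf("abc", "%d", &x);

        printf("r1=%d r2=%d\n", r1, r2);
        return (0);
    }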
a manner consistent with other implementations.  It's done in a way
that adds only a tiny amount of overhead when positional arguments are
not used.
I also have a test program to go with this, but don't know where it belongs
in the tree.
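For illustration (this is not the test program mentioned above), a
minimal use of the positional-argument (%n$) syntax this adds:

    #include <stdio.h>

    int
    main(void)
    {
        /* %2$s takes the second argument, %1$s the first. */
        printf("%2$s, %1$s!\n", "world", "Hello");  /* "Hello, world!" */

        /* The same argument may be referenced more than once. */
        printf("%1$d + %1$d = %2$d\n", 2, 4);       /* "2 + 2 = 4" */
        return (0);
    }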
Submitted-By: Bill Fenner <fenner@FreeBSD.ORG>
This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
in a bunch of man pages.
Use the correct .Bx (BSD UNIX) or .At (AT&T UNIX) macros
instead of explicitly specifying the version in the text
in a bunch of man pages.
refilled) a file that was either line-buffered or unbuffered, all files
were flushed.  According to the code comment, the flush (according to
ANSI) is supposed to happen on write + line-buffered output files, not
_all_ files.
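A small example of where that flush matters; assuming stdout is
line-buffered (e.g. a terminal), the prompt below carries no newline
and only shows up because refilling stdin flushes line-buffered output
files:

    #include <stdio.h>

    int
    main(void)
    {
        char name[64];

        printf("name: ");               /* stays in stdout's line buffer */
        if (scanf("%63s", name) == 1)   /* refilling stdin flushes it */
            printf("hello, %s\n", name);
        return (0);
    }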
Obtained from: OpenBSD / Theo de Raadt, possibly from proven@cygnus.com
- buffer expansions were not working right due to a return code botch.
- signed types instead of size_t's meant somebody else went and put
casts in; I've changed the types to what they should have been.
Added $Id$'s to files that were lacking them (gpalmer), made some
cosmetic changes to conform to style guidelines (bde) and checked
against NetBSD and Lite2 to remove unnecessary divergences (hsu, bde)
One last code cleanup:-
Removed spurious casts in fseek.c and stdio.c.
Added missing function argument in fwalk.c.
Added missing header include in flags.c and rget.c.
Put in casts where int's were being passed as size_t's.
Put in missing prototypes for static functions.
Changed the second args of __sflags() in flags.c and writehook() in
vasprintf.c from char * to const char * to conform to the prototypes.
This directory now compiles with no warnings with -Wall under
gcc-2.6.3 and with considerably fewer warnings than before with the
ultra-pedantic script I used for testing.  (Most of the remaining ones
are due to const poisoning).
The usual stuff, adding missing function prototypes, argument types,
return values, etc.
This directory now compiles with no warnings with -Wall on gcc2.6.3!
The usual stuff, adding missing function prototypes, argument types,
return values, etc. In mktemp.c, convert pid from u_int to pid_t, and
get rid of "extern int errno".