used, and caused a reference to an uninitialised variable (state).
I think I've fixed it now, but since nothing in the tree seems to use it,
I'm not sure.
under -current. The actual preparation of the next track will now be
deferred until just before the first write operation. Otherwise,
opening the device with write intent will cause the execution of
commands that are illegal in `limited command set mode' (i.e., after
the write channel has been opened).
While I was at it, I cleaned up the worm_open() function a bit.
Removed the volume overflow pre-check in worm_strategy(). It was
time-consuming, and rather useless in many cases anyway (with the size
being reported for the entire volume only), so we might as well let
the actual SCSI command fail instead, where it will properly be reported
as EIO.
Partially submitted by & discussed with: jmz
fix PR#3618 weren't sufficient, since malloc() can block, allowing
net interrupts in and leading to the same problem mentioned in the
PR (a panic). The order of operations has been changed so that this
is no longer a problem (a sketch of the reordering follows below).
Needs to be brought into the 2.2.x branch.
PR: 3618
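A minimal kernel-style sketch of the reordering described above; the data
structure, the function, and the use of splnet()/splx() here are illustrative
assumptions, not the code this commit actually touched:

    /*
     * Hypothetical sketch only.  The point of the fix: malloc() with
     * M_WAITOK may sleep, which lets net interrupts run; if the
     * allocation happens while a shared structure is half-updated,
     * the interrupt handler can find it inconsistent and panic.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>

    struct entry {
            struct entry    *e_next;
            /* payload would follow */
    };

    struct entry_list {
            struct entry    *el_head;
    };

    static void
    add_entry(struct entry_list *list)
    {
            struct entry *e;
            int s;

            /* New order: do the allocation that can block *before*
             * touching anything an interrupt handler might look at. */
            e = malloc(sizeof(*e), M_TEMP, M_WAITOK);

            /* Only then mask net interrupts for the short, non-sleeping
             * list update. */
            s = splnet();
            e->e_next = list->el_head;
            list->el_head = e;
            splx(s);
    }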
instead of Single Unix; thanks to Bruce for explaining, I had not realized
a standards war was going on there.
But now, fix the n == 0 case to not return an error, and fix the check for
too big n.
Things left to do: check for overflow in the arguments.
The final word is Bruce's quote:
C9x specifies the BSD4.4-Lite behaviour:
[#3] ... Thus, the
null-terminated output has been completely written if and
only if the returned value is less than n.
It means that if we do not have any null-terminated output, as for n == 0,
we can't return a value less than n, so we are forced to return a value
equal to n, i.e. 0.
The next good thing is glibc compatibility, of course.
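As a generic illustration of the quoted rule (not code from this commit),
a caller can test for complete output exactly the way C9x intends:

    #include <stdio.h>

    int
    main(void)
    {
            char buf[8];
            int ret = snprintf(buf, sizeof buf, "pid=%d", 12345);

            /* Per the C9x wording quoted above: the null-terminated
             * output has been completely written if and only if the
             * returned value is less than n.  With n == 0 there is no
             * null-terminated output, so the return can never be below
             * n there, hence the forced return of n, i.e. 0. */
            if (ret >= 0 && (size_t)ret < sizeof buf)
                    printf("complete: \"%s\"\n", buf);
            else
                    printf("truncated or error (ret=%d)\n", ret);
            return (0);
    }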
2) Check for too big n in a machine-independent way (sketched below).
3) Minor optimization assuming EOF is < 0.
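One way to picture item 2) is to compare n against INT_MAX rather than
against some machine-dependent constant, since the return type is int.
This is purely a hypothetical sketch, not necessarily how the committed
check is written:

    #include <limits.h>
    #include <stdarg.h>
    #include <stdio.h>

    /* Hypothetical wrapper: a size_t n larger than INT_MAX cannot be
     * represented in the int return value, so reject it up front in a
     * machine-independent way. */
    int
    checked_snprintf(char *buf, size_t n, const char *fmt, ...)
    {
            va_list ap;
            int ret;

            if (n > INT_MAX)
                    return (EOF);

            va_start(ap, fmt);
            ret = vsnprintf(buf, n, fmt, ap);
            va_end(ap);
            return (ret);
    }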
The main argument is that it is impossible to determine whether %n was
evaluated or not when snprintf returns 0, because that can happen for both
n == 0 and n == 1. Although EOF here is a good indication of the end of
the process, if n is decreased in the loop...
Since it is already assumed in many places that EOF *is* negative, e.g.
in the Single Unix spec for snprintf:
"return ... a negative value if an output error was encountered"
this does not make the situation worse.
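The ambiguity can be seen directly with a generic illustration (not code
from the commit); under the transmitted-bytes behaviour argued about here
both calls return 0, while a C9x-style implementation would return the
needed length (3) in both cases instead:

    #include <stdio.h>

    int
    main(void)
    {
            char buf[4];
            int a, b;

            /* With n == 1 only the terminating NUL is stored; with
             * n == 0 nothing is stored at all.  A caller looking only
             * at a 0 return therefore cannot tell whether any %n in
             * the format was evaluated. */
            a = snprintf(buf, 1, "abc");
            b = snprintf(buf, 0, "abc");
            printf("a=%d b=%d\n", a, b);
            return (0);
    }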
to pass not more than the buffer size to the %n argument; the old variant
always assumed an infinite buffer.
%n is for actually transmitted characters, not for planned ones.
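A small, generic illustration of that distinction (not code from the
commit); the value stored through %n depends on which of the two
behaviours an implementation follows:

    #include <stdio.h>

    int
    main(void)
    {
            char buf[8];
            int cnt = 0;

            /* Only 7 characters plus the NUL fit in buf.  Under the
             * behaviour argued for above, %n reports the characters
             * actually transmitted into buf; an implementation counting
             * "planned" output stores the full untruncated length. */
            snprintf(buf, sizeof buf, "0123456789%n", &cnt);
            printf("buf=\"%s\" cnt=%d\n", buf, cnt);
            return (0);
    }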
"return the number of bytes needed, rather the number used"
According to Single Unix specs:
Upon successful completion, these functions return the number of bytes
transmitted excluding the terminating null
1) If the buffer size is smaller than the size the arguments require,
return the buffer size, not the required size as before.
2) If the buffer size is 0, return 0, not EOF as before.
(It is now compatible with the Linux and Apache implementations too.)
NOTE: the Single Unix specs say:
If the value of n {buffer size} is zero on a call to snprintf(), an
unspecified value less than 1 is returned.
It means we can't return EOF, since EOF can take *any* value in general,
not necessarily one < 1. A better variant would be to return -1 (it is less
than 1 and distinct from the n == 1 case), but the value -1 is already
occupied by EOF in our implementation, so we couldn't distinguish a true
I/O error in that case. So 0 here is the only possible value still
conforming to the Single Unix specs.
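As a usage sketch of the convention described in the NOTE (not the library
code itself): with a zero buffer size the call returns 0, and a negative
return stays reserved for a genuine output error, so a caller can still
tell the two apart. (A C9x-style implementation would return the needed
length, 5, instead of 0 here.)

    #include <stdio.h>

    int
    main(void)
    {
            char buf[1];
            int r = snprintf(buf, 0, "hello");

            if (r < 0)
                    printf("true output error\n");
            else
                    printf("nothing transmitted, r=%d\n", r);
            return (0);
    }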