Builtins (including variable assignments without a command word), function
calls and redirected compound commands need to restore file descriptors
to their original state after execution. This is handled by allocating a
redirtab structure. These mallocs and frees show up heavily in pmcstat.
Only allocate a redirtab if there are actually redirections and maintain a
count of how many levels of REDIR_PUSH there are without redirtabs.
A simple loop without external programs like
sh -c 'i=0; w=$(printf %0100d 7); while [ "$i" -lt 1000000 ]; do
i=$((i+1)); done'
is over 25% faster on an amd64 bhyve VM.
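A minimal sketch of the bookkeeping (structure and helper names are simplified,
not the actual sh code): an empty REDIR_PUSH only bumps a counter, and each
allocated redirtab records how many empty levels it covers, so pops still match
pushes one-for-one.

#include <stdlib.h>

struct redirtab {
    struct redirtab *next;
    int empty_below;     /* empty REDIR_PUSH levels directly under this one */
    int renamed[10];     /* saved fds, -1 if the fd was not touched */
};

static struct redirtab *redirlist;  /* stack of levels that did redirect */
static int empty_redirs;            /* empty levels above the stack top */

static void
push_redirs(int have_redirections)
{
    struct redirtab *rt;
    int i;

    if (!have_redirections) {
        empty_redirs++;             /* no malloc, just count the level */
        return;
    }
    rt = malloc(sizeof(*rt));       /* sh itself would use ckmalloc() */
    for (i = 0; i < 10; i++)
        rt->renamed[i] = -1;
    rt->empty_below = empty_redirs; /* levels this entry also covers */
    empty_redirs = 0;
    rt->next = redirlist;
    redirlist = rt;
}

static void
pop_redirs(void)
{
    struct redirtab *rt;

    if (empty_redirs > 0) {         /* top level had no redirections */
        empty_redirs--;
        return;
    }
    rt = redirlist;
    /* ... dup2() the fds saved in rt->renamed[] back and close them ... */
    redirlist = rt->next;
    empty_redirs = rt->empty_below;
    free(rt);
}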
Although <&0 does nothing, it is a redirection affecting standard input and
should therefore disable the </dev/null redirection implicit in a background
command.
Replace the RESET blocks with regular functions and a reset() function that
calls them all.
The code generation tool that collected them (mkinit) is unusual and does not
appear to provide much benefit. I do not think isolating the knowledge about
which modules need to
be reset is worth an almost 500-line build tool and wider scope for
variables used by the reset functions.
Also, relying on reset functions is often wrong: the cleanup should be done
in exception handlers so that no stale state remains after 'command eval'
and the like.
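The resulting shape, roughly (function names here are illustrative): each
module keeps an ordinary reset function with a normal prototype, and reset()
just calls them, which is all the generated code amounted to.

/* in redir.c, for example */
void
resetredir(void)
{
    /* drop saved fds from redirections that were being set up */
}

/* in input.c, for example */
void
resetinput(void)
{
    /* pop any partially read input files back to the top level */
}

/* called from the exception handling code */
void
reset(void)
{
    resetredir();
    resetinput();
    /* ... one call per module that used to carry a RESET block ... */
}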
Use non-blocking I/O to write as much as the pipe will accept (often 64K,
but it can be as little as 4K), avoiding the need for the ugly PIPESIZE
constant. If PIPESIZE was set too high, a deadlock would occur.
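A stand-alone sketch of that write path (not the shell's own helpers): flip
O_NONBLOCK on the write end, write until the kernel refuses more, and report
how much was taken so the caller knows whether anything is left for a child
process to finish.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Write as much of buf as the pipe will accept without blocking;
 * returns the number of bytes actually written. */
static size_t
write_nonblocking(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    ssize_t n;
    int flags = fcntl(fd, F_GETFL);

    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    while (done < len) {
        n = write(fd, buf + done, len - done);
        if (n == -1 && errno == EINTR)
            continue;
        if (n <= 0)
            break;          /* EAGAIN: the pipe buffer is full */
        done += n;
    }
    fcntl(fd, F_SETFL, flags);
    return (done);          /* caller forks a writer only if done < len */
}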
Expand here documents at the same point other redirections are expanded but
use a non-fork subshell environment (like simple command substitutions) for
compatibility. Substitution errors result in an empty here document, as
before.
As a result, a fork is avoided for short (<4K) expanded here documents.
Unexpanded here documents (with a quoted end marker after <<) are not affected
by this change; they already forked only when larger than 4K.
Side effects:
* Order of expansion is slightly different.
* Slow expansions are not executed in parallel with the redirected command.
* A non-fork subshell environment is subtly different from a forked process.
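Roughly the control flow, with a hypothetical expand_heredoc_text() standing in
for the shell's expansion machinery: the text is expanded inside a
saved-and-restored shell state, and an expansion error is caught and turned
into an empty document.

#include <setjmp.h>

static jmp_buf heredoc_error;

/* Hypothetical helper: perform $..., $(...) and arithmetic expansion on
 * the here-document text; longjmp(heredoc_error, 1) on an error. */
extern char *expand_heredoc_text(const char *text);

static const char *
expanded_heredoc(const char *text)
{
    const char *result;

    /* ... enter a non-fork subshell environment: save the shell state
     * so the expansion cannot leak changes into the parent ... */
    if (setjmp(heredoc_error) == 0)
        result = expand_heredoc_text(text);
    else
        result = "";        /* substitution error: empty here document */
    /* ... leave the subshell environment: restore the saved state ... */
    return (result);
}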
Scripts without a #! magic number are called "shell procedures" in the source.
If execve() failed with [ENOEXEC], the shell would reinitialize itself
and execute the program as a script. This requires a fair amount of code
which is not frequently used (most scripts have a #! magic number).
Therefore just execute a new instance of sh (_PATH_BSHELL) to run the
script.
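A simplified sketch of that fallback (argument handling is illustrative and
error handling is omitted): on [ENOEXEC], build a new argument vector with
_PATH_BSHELL in front and the script as its first operand, and exec that
instead.

#include <errno.h>
#include <paths.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void
tryexec(char *cmd, char **argv, char **envp)
{
    char **shargv;
    int argc;

    execve(cmd, argv, envp);            /* returns only on failure */
    if (errno != ENOEXEC)
        return;

    for (argc = 0; argv[argc] != NULL; argc++)
        ;
    shargv = malloc((argc + 2) * sizeof(*shargv));
    if (shargv == NULL)
        return;
    shargv[0] = _PATH_BSHELL;           /* argv[0] for the new shell */
    shargv[1] = cmd;                    /* the script to run */
    memcpy(shargv + 2, argv + 1, argc * sizeof(*shargv)); /* args + NULL */
    execve(_PATH_BSHELL, shargv, envp); /* returns only on failure */
    free(shargv);
}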
- Redirecting fds that were not open before kept two copies of the
redirected file.
sh -c '{ :; } 7>/dev/null; fstat -p $$; true'
(both fd 7 and 10 remained open)
- File descriptors used to restore things after redirection were not
set close-on-exec; instead, they were explicitly closed before executing
a program normally and before executing a shell procedure. The latter
must remain, but the former is replaced by close-on-exec.
sh -c 'exec 7</; { exec fstat -p $$; } 7>/dev/null; true'
(fd 10 remained open)
The examples above are simpler than the testsuite because I do not want to
use fstat or procstat in the testsuite.
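A sketch of the save/restore pair under the new scheme (helper names are
illustrative, and the real code may set the flag separately from the dup): the
saved copy lives at or above fd 10 with close-on-exec set, and an fd that was
not open before is remembered as such, so only one copy of the redirected file
remains.

#include <fcntl.h>
#include <unistd.h>

/* Save fd before redirecting it; returns the saved copy (>= 10, with
 * close-on-exec set) or -1 if fd was not open in the first place. */
static int
save_fd(int fd)
{
    return (fcntl(fd, F_DUPFD_CLOEXEC, 10));
}

/* Undo the redirection of fd using the value save_fd() returned. */
static void
restore_fd(int saved, int fd)
{
    if (saved == -1)
        close(fd);          /* fd was not open before: just close it */
    else {
        dup2(saved, fd);    /* put the original back on fd ... */
        close(saved);       /* ... and drop the saved copy */
    }
}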
* exception handlers are now run with interrupts disabled, which avoids
many race conditions
* fix some cases where a SIGINT aborted only one command and the script
continued, in particular when the SIGINT caused an EINTR error that trumped
the interrupt.
Example:
sh -c 'echo < /some/fifo; echo This should not be printed'
The fifo should not have writers. When pressing ctrl+c to abort the open,
the shell used to continue with the next command.
Example:
sh -c '/bin/echo < /some/fifo; echo This should not be printed'
Similar. Note, however, that this particular case did not and does not work
in interactive mode with job control enabled.
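A sketch of the EINTR case (intpending and onint() loosely follow the source;
open_redirect() is hypothetical): after an interrupted system call, a pending
SIGINT is raised as an exception instead of being reported as an ordinary
error, so the whole script is aborted.

#include <errno.h>
#include <fcntl.h>
#include <signal.h>

static volatile sig_atomic_t intpending;   /* set by the SIGINT handler */

extern void onint(void);    /* longjmp to the enclosing exception handler */

static int
open_redirect(const char *path, int flags)
{
    int fd;

    while ((fd = open(path, flags, 0666)) == -1 && errno == EINTR) {
        if (intpending)
            onint();        /* ctrl+c: abort the script, not just this open */
    }
    return (fd);
}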
Formerly, it was possible for the file to be created between the check whether
it existed and the open; the contents would then be lost.
Because this must use O_EXCL, noclobber > will no longer create a file through
a symlink. This agrees with the behaviour of other shells.
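A simplified sketch of the noclobber open (the real redirection code handles
more cases): O_CREAT|O_EXCL makes the existence check and the creation one
atomic step, and since O_EXCL refuses to follow a symlink, > cannot create a
file through one.

#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

static int
noclobber_open(const char *path)
{
    struct stat sb;

    if (stat(path, &sb) == -1)
        /* Does not exist (or is a dangling symlink): create it
         * atomically; O_EXCL fails if something appears in between. */
        return (open(path, O_WRONLY | O_CREAT | O_EXCL, 0666));
    if (!S_ISREG(sb.st_mode))
        /* Devices, fifos etc. may be opened, but never truncated. */
        return (open(path, O_WRONLY));
    errno = EEXIST;         /* refuse to overwrite an existing regular file */
    return (-1);
}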
Approved by: ed (mentor) (implicit)
should slightly reduce the number of system calls in critical portions of
the shell, and select a more efficient path through the fdalloc code.
Reviewed by: bde
- Removed dead declarations
- Made objects that should have been declared as static, static.
The changes use STATIC instead of static, following the existing
convention in the rest of the code.
Approved by: schweikh (mentor)
MFC after: 2 weeks
o Old-style K&R declarations have been converted to new C89 style
o register has been removed
o prototype for main() has been removed (gcc3 makes it an error)
o int main(int argc, char *argv[]) is the preferred main definition.
o Attempt to not break style(9) conformance for declarations more than
they already are.
o Change
      int
      foo() {
              ...
  to
      int
      foo(void)
      {
              ...
This will make a number of things easier in the future, as well as (finally!)
avoiding the Id-smashing problem which has plagued developers for so long.
Boy, I'm glad we're not using sup anymore. This update would have been
insane otherwise.
Merge of parallel duplicate work by Steve Price and myself. :-]
There are some changes to the build that are my fault... mkinit.c was
trying (poorly) to duplicate some of the work that make(1) is designed to
do. The Makefile hackery is my fault too; the depend list was incomplete
because of some explicit OBJS+= entries, so mkdep wasn't picking up their
source file #includes.
This closes a pile of /bin/sh PRs, but not all of them...
Submitted by: Steve Price <steve@bonsai.hiwaay.net>, peter
o fix brokenness for 1>&5 redirection, where `5' was an invalid file
descriptor but no error message was generated
o fix brokenness for the redirect-to/from-myself case