While the STACKSIZE macro is indeed problematic on some systems, the commits
were wrong to shrink il[] and cstk[]: they need to be the same size as
p_stack[], because all three are accessed with the same index, ps.tos.
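To illustrate the point (the size and the push function below are invented;
only the array and index names come from indent): the three stacks are
parallel arrays selected by one shared index, so they must all be able to
hold the same number of entries.

#include <stdio.h>

#define STACKSIZE 256               /* illustrative value only */

static int p_stack[STACKSIZE];
static int il[STACKSIZE];           /* same size as p_stack[] ... */
static int cstk[STACKSIZE];         /* ... and so is this one */
static int tos = -1;                /* shared top-of-stack index */

static void
push(int symbol, int indentation, int case_indent)
{
    ++tos;                          /* one index selects the slot in */
    p_stack[tos] = symbol;          /* all three arrays, so shrinking */
    il[tos] = indentation;          /* only some of them would turn a */
    cstk[tos] = case_indent;        /* valid push into an overflow */
}

int
main(void)
{
    push(1, 8, 0);
    printf("%d %d %d %d\n", tos, p_stack[tos], il[tos], cstk[tos]);
    return 0;
}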
Remove the procedural code that did the scanning; it was faulty and didn't
support more complex constants such as the hexadecimal floating constant
0x1p-61. Replace it with a finite state machine expressed as a transition
table. The table was rewritten by hand from lx's output, given the relevant
parts of the grammar expressed as regular expressions.
lx is Katherine Flavel's lexer generator, currently available at
https://github.com/katef/libfsm; the parts of the grammar were taken from
http://quut.com/c/ANSI-C-grammar-l-2011.html and extended to support binary
integer constants, a popular GCC extension.
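Here is a minimal, self-contained sketch of the technique only, not indent's
actual table: the states, character classes and transitions below are
invented for illustration and cover just decimal and hexadecimal integers,
a small subset of what the real table handles.

#include <ctype.h>
#include <stdio.h>

enum cls { C_ZERO, C_DIGIT, C_HEXPRE, C_HEXDIG, C_OTHER, NCLS };
enum st  { S_START, S_ZERO, S_DEC, S_HEXPRE, S_HEX, S_BAD, NST };

static enum cls
classify(int ch)
{
    if (ch == '0')
        return C_ZERO;
    if (isdigit(ch))
        return C_DIGIT;
    if (ch == 'x' || ch == 'X')
        return C_HEXPRE;
    if (isxdigit(ch))
        return C_HEXDIG;
    return C_OTHER;
}

/* transition[state][class] is the next state; S_BAD means "stop scanning" */
static const enum st transition[NST][NCLS] = {
    /*            ZERO     DIGIT   HEXPRE    HEXDIG  OTHER */
    [S_START]  = {S_ZERO,  S_DEC,  S_BAD,    S_BAD,  S_BAD},
    [S_ZERO]   = {S_DEC,   S_DEC,  S_HEXPRE, S_BAD,  S_BAD},
    [S_DEC]    = {S_DEC,   S_DEC,  S_BAD,    S_BAD,  S_BAD},
    [S_HEXPRE] = {S_HEX,   S_HEX,  S_BAD,    S_HEX,  S_BAD},
    [S_HEX]    = {S_HEX,   S_HEX,  S_BAD,    S_HEX,  S_BAD},
    [S_BAD]    = {S_BAD,   S_BAD,  S_BAD,    S_BAD,  S_BAD},
};

/* Scan the numeric constant starting at s and return its length. */
static size_t
scan_number(const char *s)
{
    enum st state = S_START;
    size_t i = 0;

    while (s[i] != '\0') {
        enum st next = transition[state][classify((unsigned char)s[i])];
        if (next == S_BAD)
            break;
        state = next;
        i++;
    }
    return i;
}

int
main(void)
{
    printf("%zu\n", scan_number("0x1f + x"));   /* prints 4 */
    return 0;
}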
Reported by: bde
It's clearer now when a variable represents a toggleable command line option.
Many options were stored in the parser's state structure, so fix that as well.
There exist multi-platform programs that check indent's version in order to
know what they can expect from it. GNU indent provides that via --version,
so implement the same option here.
With -lpl, code surrounded by parentheses in continuation lines is lined up
even if it would extend past the right margin.
With -nlpl (the default), such a line that would extend past the right
margin is moved left to keep it within the margin, if that does not require
placing it to the left of the prevailing indentation level.
These switches have no effect if -nlp is selected.
Submitted by: Tom Lane
With -lp, if a line has an opening paren which is not closed on that line,
then continuation lines will be lined up to start at the character position
just after the opening paren.
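For illustration (the code fragment is made up, and the exact -nlp
indentation depends on the configured continuation indent):

/* with -nlp, the continuation line gets the usual continuation indent */
p1 = first_procedure(second_procedure(p2, p3),
    third_procedure(p4, p5));

/* with -lp, it starts in the column just after the opening paren */
p1 = first_procedure(second_procedure(p2, p3),
                     third_procedure(p4, p5));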
Submitted by: Tom Lane
Rewrite the macros so that they take a parameter. Consumers use it to signal
how much room in the buffer they need; this lets them do the check once, when
the required space is known, instead of once per loop iteration.
Also take the parameter value into consideration when resizing the buffer;
the requested space may be larger than the constant 400 bytes that the
previous version used, so the new size is now the sum of those two values.
On the consumer side, don't copy strings byte by byte - use memcpy().
Deduplicate code that copied base 2, base 8 and base 16 literals.
Don't advance the e_token pointer once the token has been copied into
s_token. This allows easy calculation of the token's length.
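A minimal, self-contained sketch of the pattern; the buffer struct,
check_size(), buf_grow() and SLACK are invented for illustration, with the
400-byte slack merely mirroring the constant mentioned above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {
    char *start;                    /* first byte of the buffer */
    char *end;                      /* one past the last byte written */
    char *limit;                    /* one past the last usable byte */
};

#define SLACK 400                   /* extra room added on every resize */

/* Make sure at least "need" more bytes can be written at b->end. */
#define check_size(b, need) do {                                \
    if ((b)->end + (need) > (b)->limit)                         \
        buf_grow((b), (need) + SLACK);                          \
} while (0)

static void
buf_grow(struct buf *b, size_t extra)
{
    size_t used = (size_t)(b->end - b->start);
    size_t size = (size_t)(b->limit - b->start) + extra;
    char *p = realloc(b->start, size);

    if (p == NULL)
        abort();
    b->start = p;
    b->end = p + used;
    b->limit = p + size;
}

int
main(void)
{
    struct buf tok;
    const char *literal = "0x1p-61";
    size_t len = strlen(literal);

    if ((tok.start = tok.end = malloc(1)) == NULL)
        abort();
    tok.limit = tok.start + 1;

    check_size(&tok, len + 1);      /* one check for the whole token */
    memcpy(tok.end, literal, len);  /* copy in one go, not per byte */
    tok.end += len;
    *tok.end = '\0';

    printf("%s (%zu bytes)\n", tok.start, (size_t)(tok.end - tok.start));
    free(tok.start);
    return 0;
}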
The troff output in indent was invented at Sun and the online documentation
for some post-SunOS operating system includes this:
The usual way to get a troffed listing is with the command
indent -troff program.c | troff -mindent
The indent manual page in FreeBSD 1.0 already lacks that information and
troff -mindent complains about not being able to find the macro file.
It seems that the file did exist on SunOS and was supposed to be imported
into 4.3BSD together with the feature, but that has never happened.
Removal of troff output support simplifies a lot of indent's code.
vgrind(1) seems to be a promising replacement.
It was a shorthand for checking whether ps.procname is a non-empty string;
the same can be done with ps.procname[0], which avoids the need to update
is_procname after every call to lexi().
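A tiny illustration of the idea (the struct here is reduced to the one field
that matters; everything else is made up): the emptiness of the stored name
answers the same question the flag used to answer, so there is no separate
flag to keep in sync.

#include <stdio.h>
#include <string.h>

struct parser_state {
    char procname[100];             /* "" when no name has been seen */
};

int
main(void)
{
    struct parser_state ps;

    ps.procname[0] = '\0';                              /* nothing seen yet */
    printf("have name: %d\n", ps.procname[0] != '\0');  /* prints 0 */

    strcpy(ps.procname, "main");                        /* the lexer stored a name */
    printf("have name: %d\n", ps.procname[0] != '\0');  /* prints 1 */
    return 0;
}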
The trick is to copy everything from the start of the line into the buffer
that stores newlines and comments until indent finds a brace or an else.
pr_comment() will use that information to calculate the original indentation
of the boxed comment.
This requires storing two pieces of information: the real start of the
buffer (sc_buf) and the start of the comment (save_com).
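A self-contained sketch of that calculation; the variable names are only
loosely modelled on sc_buf and save_com, and a real implementation would
expand tabs when turning the offset into a column:

#include <stdio.h>
#include <string.h>

int
main(void)
{
    char sc_buf[256];
    const char *line = "    /*- a boxed comment */";
    const char *save_com;

    /* copy everything from the start of the line into the save buffer */
    strcpy(sc_buf, line);

    /* remember where the comment itself begins within that copy */
    save_com = strstr(sc_buf, "/*");

    /* the distance between the two pointers is the original indentation */
    printf("original indentation: %d character(s)\n",
        (int)(save_com - sc_buf));
    return 0;
}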
lexi() reads the input stream and categorizes the next token. indent will
sometimes buffer up a sequence of tokens in order to rearrange them. That is
needed for properly cuddling else or placing braces correctly according to
the chosen style (KNF vs Allman) when comments are around. The loop that
buffers tokens up uses lexi() to decide if it's time to stop buffering. Then
the temporary buffer is used to feed lexi() the same tokens again, this time
for normal processing.
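For illustration, the two brace and else arrangements referred to above (the
code itself is made up):

/* KNF/K&R: the brace stays on the if line and the else is cuddled */
if (cond) {
    f();
} else {
    g();
}

/* Allman: braces, and the else, each on a line of their own */
if (cond)
{
    f();
}
else
{
    g();
}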
The problem is that lexi(), apart from recognizing the token, can change
a lot of information about the current state, for example ps.last_nl,
ps.keyword and buf_ptr. It also abandons leading whitespace, which is needed
mainly for comment-related considerations. So calling lexi() while tokens
are being buffered up and categorized can change the state before those
tokens are read again for normal processing, which may easily change the
interpretation of the current state and lead to incorrect output.
To work around the problems:
1) copy the whitespace into the save_com buffer so that it will be read
again when processed
2) trick lexi() into modifying a temporary copy of the parser state instead
of the original.
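A self-contained sketch of workaround 2; all of the names here are invented
(indent itself does this with its struct parser_state and lexi()), but the
shape is the same: hand the tokenizer a throwaway copy of its state while
tokens are only being categorized, so nothing leaks into the state used by
the later, real pass.

#include <stdio.h>

struct lexer_state {
    const char *buf_ptr;            /* next character to read */
    int last_nl;                    /* did we just see a newline? */
};

/* Toy tokenizer: consumes one character and updates the state. */
static int
lex_one(struct lexer_state *st)
{
    int ch = (unsigned char)*st->buf_ptr++;

    st->last_nl = (ch == '\n');
    return ch;
}

/* Categorize the next token without disturbing the real state. */
static int
peek_one(const struct lexer_state *st)
{
    struct lexer_state transient = *st;     /* the lexer scribbles on the copy */

    return lex_one(&transient);
}

int
main(void)
{
    struct lexer_state st = { "a\nb", 0 };

    printf("peek: %c\n", peek_one(&st));    /* 'a'; st is untouched */
    printf("read: %c\n", lex_one(&st));     /* 'a'; st now advances */
    return 0;
}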
"while (...)" and "else" or "{"
* Don't flush newlines - there can be multiple of them and they can happen
before a token that isn't else or {. Instead, always store them in save_com.
* Don't dump the buffer's contents on a newline; that assumed there is only
one comment before else or {.
* Avoid producing surplus newlines, especially before else when -ce is on.
* When -bl is on, don't treat { as a comment (this was implemented by
falling through from "case lbrace:" to "case comment:").
This commit fixes the above, but exposes another bug and thus breaks several
other tests. Another commit will make them pass again.
My previous indent(1) commit accidentally broke the -pcs option (which adds a
space between a function name and the opening parenthesis in function calls)
by copying all but one of the conditions from an if clause. Reinstate the
missing condition.
Add a regression test to lower the chances of breaking it again.
Correct a comment with description of what the option does.
If the current token is an opening parenthesis, it's either a function call
(or sizeof or offsetof) or a declaration. The former doesn't need a space
before the parenthesis.
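For illustration (made-up call):

foo (arg);      /* -pcs: space between the name and the parenthesis */
foo(arg);       /* -npcs (the default): no space */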
Mainly focus on files that use the BSD 2-Clause license; however, the tool I
was using misidentified many licenses, so this was mostly a manual,
error-prone task.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well-known
open source licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
No functional change intended.
The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well-known
open source licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.
Special thanks to Wind River for providing access to "The Duke of
Highlander" tool: an older (2014) run over FreeBSD tree was useful as a
starting point.
Initially, only tag files that use BSD 4-Clause "Original" license.
RelNotes: yes
Differential Revision: https://reviews.freebsd.org/D13133
Non-tests/... changes:
- Add HAS_TESTS= to Makefiles with libraries and programs to enable iteration
and propagate the appropriate environment down to *.test.mk.
tests/... changes:
- Add appropriate support Makefile.inc's to set HAS_TESTS in a minimal manner,
since tests/... is a special subdirectory tree compared to the others.
MFC after: 2 months
MFC with: r322511
Reviewed by: arch (silence), testing (silence)
Differential Revision: D12014
Instead of using a non-configurable ".BAK" suffix, respect the
SIMPLE_BACKUP_SUFFIX environment variable, which is also used by patch(1).
This simplifies cleanup operations in some patch/indent workflows.
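A minimal sketch of the lookup; the helper name backup_suffix() is invented,
and only the environment variable and the ".BAK" fallback come from the
change itself:

#include <stdio.h>
#include <stdlib.h>

static const char *
backup_suffix(void)
{
    const char *suffix = getenv("SIMPLE_BACKUP_SUFFIX");

    /* fall back to the historical ".BAK" when the variable is unset or empty */
    return (suffix != NULL && suffix[0] != '\0') ? suffix : ".BAK";
}

int
main(void)
{
    printf("program.c%s\n", backup_suffix());
    return 0;
}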
Reviewed by: cem (earlier version), emaste, pstef
Approved by: emaste (mentor)
Differential Revision: https://reviews.freebsd.org/D10921
directories to SUBDIR.${MK_TESTS} idiom
This is being done to pave the way for future work (and homogeneity) in
^/projects/make-check-sandbox.
No functional change intended.
MFC after: 1 week