An additional one coming from http://www.research.att.com/~gsf/testregex/
was not added; at some point the entire AT&T regression test harness
should be imported here.
But that would also mean commitment to fix the uncovered errors.
PR: 130504
Submitted by: Chris Kuklewicz
that belong in a character class, and (2) one for matching all
the characters *not* in a character class.
Submitted by: Mark B, mkbucc at gmail.com
MFC after: 3 days
inadvertently match a negative char in the RE being compiled.
This fixes compilation of "\376" (as an ERE) and "\376\376" (as a BRE).
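For illustration, this is the kind of sign-extension collision involved;
the sentinel name and code below are hypothetical, not the actual parser
internals:

    #include <stdio.h>

    /* Hypothetical negative "out of band" value, standing in for the
     * internal markers the parser compares characters against. */
    #define SENTINEL (-2)

    int
    main(void)
    {
        char c = '\376';    /* 0xFE, which is -2 where char is signed */

        /* Sign extension makes this comparison true on such platforms, */
        if ((int)c == SENTINEL)
            printf("high-bit char collides with the sentinel\n");

        /* while converting through unsigned char keeps the values apart. */
        if ((int)(unsigned char)c != SENTINEL)
            printf("casting through unsigned char avoids the collision\n");
        return (0);
    }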
PR: 84740
MFC after: 1 week
reading past 'stop' in various places when converting multibyte characters.
Reading too far caused truncation to go undetected when it should have
been caught, eventually causing regexec() to loop infinitely with certain
combinations of patterns and strings in multibyte locales.
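The shape of the fix is to hand the conversion function only the bytes up
to 'stop', so a truncated sequence is reported instead of being completed
from memory past the buffer. A minimal sketch, not the actual macros used
in the code:

    #include <wchar.h>

    /*
     * Convert the multibyte character starting at p without reading at or
     * past stop.  Returns the number of bytes consumed, or (size_t)-1 on
     * an invalid or truncated sequence.  Illustrative only.
     */
    static size_t
    step_one(const char *p, const char *stop, wchar_t *wc, mbstate_t *mbs)
    {
        size_t clen;

        clen = mbrtowc(wc, p, (size_t)(stop - p), mbs);
        if (clen == (size_t)-1 || clen == (size_t)-2) {
            /* Invalid (-1) or truncated (-2): report failure rather
             * than reading past stop to complete the character. */
            return ((size_t)-1);
        }
        if (clen == 0)        /* embedded NUL still consumes one byte */
            clen = 1;
        return (clen);
    }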
PR: 74020
MFC after: 4 weeks
multibyte character support:
- In CHadd(), avoid writing past the end of the character set bitmap when
the opposite-case counterparts of wide characters with values less than
NC have values greater than or equal to NC (see the sketch below).
- In CHaddtype(), fix a braino that caused alphabetic characters to be
added to all character classes! (but only with REG_ICASE)
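A rough sketch of the CHadd() bounds check mentioned above; NC and the
structure layout here are illustrative, not the real definitions:

    #include <wctype.h>

    #define NC 256                    /* illustrative bitmap coverage */

    struct cset_sketch {
        unsigned char bmp[NC / 8];    /* membership of the first NC chars */
        /* ... vectors of wide chars, ranges and classes omitted ... */
    };

    /*
     * Add ch (and, for REG_ICASE, its case counterparts) to the set.  A
     * counterpart that falls outside the bitmap must go into the wide
     * character vector instead of being written past the end of bmp[].
     */
    static void
    chadd_sketch(struct cset_sketch *cs, wint_t ch, int icase)
    {
        wint_t cases[2];
        int i;

        if (ch < NC)
            cs->bmp[ch >> 3] |= 1 << (ch & 7);
        /* else: append ch to the wide character vector (omitted) */

        if (!icase)
            return;
        cases[0] = towlower(ch);
        cases[1] = towupper(ch);
        for (i = 0; i < 2; i++) {
            if (cases[i] == ch)
                continue;
            if (cases[i] < NC)        /* the missing bounds check */
                cs->bmp[cases[i] >> 3] |= 1 << (cases[i] & 7);
            /* else: append to the wide character vector (omitted) */
        }
    }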
PR: 71367
idea is that we perform multibyte->wide character conversion while parsing
and compiling, then convert byte sequences to wide characters when they're
needed for comparison and stepping through the string during execution.
As with tr(1), the main complication is to efficiently represent sets of
characters in bracket expressions. The old bitmap representation is replaced
by a bitmap for the first 256 characters combined with a vector of individual
wide characters, a vector of character ranges (for [A-Z] etc.), and a vector
of character classes (for [[:alpha:]] etc.).
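A sketch of what such a combined representation and its membership test
can look like; the names and exact layout here are illustrative and do
not match the real headers:

    #include <stdbool.h>
    #include <wchar.h>
    #include <wctype.h>

    struct range { wint_t min, max; };

    struct charset {
        unsigned char bmp[256 / 8];    /* the first 256 characters */
        wint_t       *wides;           /* individual wide characters */
        unsigned int  nwides;
        struct range *ranges;          /* [A-Z] and the like */
        unsigned int  nranges;
        wctype_t     *types;           /* [[:alpha:]] and the like */
        unsigned int  ntypes;
        bool          invert;          /* [^...] */
    };

    static bool
    charset_in(const struct charset *cs, wint_t wc)
    {
        bool found = false;
        unsigned int i;

        if (wc < 256)
            found = (cs->bmp[wc >> 3] & (1 << (wc & 7))) != 0;
        for (i = 0; !found && i < cs->nwides; i++)
            found = (wc == cs->wides[i]);
        for (i = 0; !found && i < cs->nranges; i++)
            found = (wc >= cs->ranges[i].min && wc <= cs->ranges[i].max);
        for (i = 0; !found && i < cs->ntypes; i++)
            found = iswctype(wc, cs->types[i]);
        return (cs->invert ? !found : found);
    }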
One other point of interest is that although the Boyer-Moore algorithm had
to be disabled in the general multibyte case, it is still enabled for UTF-8
because of its self-synchronizing nature. This greatly speeds up matching
by reducing the number of multibyte conversions that need to be done.
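Self-synchronizing here means that a scan which lands in the middle of a
UTF-8 sequence can find a character boundary from the local bytes alone,
because continuation bytes always have the form 10xxxxxx. A minimal
illustration, not taken from the library:

    /* Back up from p to the first byte of the UTF-8 sequence containing
     * it.  At most a few continuation bytes (10xxxxxx) need to be
     * skipped, so no global knowledge of the string is required. */
    static const unsigned char *
    utf8_sync(const unsigned char *start, const unsigned char *p)
    {
        while (p > start && (*p & 0xc0) == 0x80)
            p--;
        return (p);
    }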
Only warnings that could be fixed without changing the generated object
code and without restructuring the source code have been handled.
Reviewed by: /sbin/md5
access an array beyond its length. This only happens in the last iteration of
a loop, and the value fetched is not used then, so the bug is a relatively
innocent one. Fix this by not fetching any value on the last iteration of said
loop.
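In generic form (this is not the actual libc loop), the pattern and its
fix look like this:

    #include <stddef.h>

    static long
    sum_adjacent_pairs(const int *a, size_t n)
    {
        long sum = 0;
        size_t i;

        /*
         * Buggy shape: fetches a[i + 1] on every pass, including the
         * last one where it is one element past the end, even though
         * the value is never used there:
         *
         *     for (i = 0; i < n; i++) {
         *         int next = a[i + 1];
         *         if (i < n - 1)
         *             sum += a[i] + next;
         *     }
         */

        /* Fixed shape: do not fetch anything on the last iteration. */
        for (i = 0; i + 1 < n; i++)
            sum += a[i] + a[i + 1];
        return (sum);
    }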
Submitted by: MKI <mki@mozone.net>
MFC after: 1 week
Avoid using parenthesis enclosure macros (.Pq and .Po/.Pc) with plain text.
Not only does this slow down mdoc(7) processing significantly, but it also
has an undesired (in this case) effect of disabling hyphenation within the
entire enclosed block.
of the processing of the recursion, "scan" would be pointing to O_CH
(or O_QUEST), which would then be interpreted as being the end character
for altoffset().
We avoid this by properly increasing scan before leaving the switch.
Without this, something like (a?b?)?cc would result in a g->moffset of
1 instead of 2.
I added a case to the soon-to-be-imported regex(3) test code to catch
this error.
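An illustrative standalone check of the visible behaviour (not the test
case referred to above): with a correct offset, the leftmost match of
this pattern against "abcc" still starts at the very first character.

    #include <assert.h>
    #include <regex.h>

    int
    main(void)
    {
        regex_t re;
        regmatch_t pm;

        assert(regcomp(&re, "(a?b?)?cc", REG_EXTENDED) == 0);
        assert(regexec(&re, "abcc", 1, &pm, 0) == 0);
        /* POSIX leftmost-longest: the match covers all of "abcc". */
        assert(pm.rm_so == 0 && pm.rm_eo == 4);
        regfree(&re);
        return (0);
    }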
string may be found (from the beginning of the pattern), the point
at which must is found minus that offset may actually point to some
place before the start of the text.
In that case, clamp start back to the beginning of the text.
Alternatively, this could be tested for in the preceding if, but it
did not occur to me. :-)
Caught by: regex(3) test code
use a CHAR_MIN-based array, like elsewhere in the code.
Remove a number of unused variables (some due to the above change, one
that was left after a number of optimizing steps through the source).
Brucified by: bde
previous commits.
At the time we search the pattern for the "must" string, we now compute
the longest offset from the beginning of the pattern at which the must
string might be found. If that offset is found to be infinite (through
use of "+" or "*"), we set it to -1 to disable the heuristics applied
later.
After we are done with pre-matching, we use that offset and the point in
the text at which the must string was found to compute the earliest
point at which the pattern might be found.
Special care should be taken here. The variable "start" is passed to the
automata-processing functions fast() and slow() to indicate the point in
the text at which they should start working from. The real beginning of
the text is passed in a struct match variable m, which is used to check
for anchors. That variable, though, is initialized with "start", so we
must not adjust "start" before "m" is properly initialized.
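In outline, the computation looks like the sketch below; the names are
illustrative, and, per the note above, the adjusted value must only be
installed after the struct match has been initialized with the original
start.

    #include <stddef.h>

    /*
     * Earliest point at which the pattern can match, given where the
     * "must" string was found (must_ptr) and the largest offset of that
     * string from the start of the pattern (moffset, -1 when unbounded).
     * Illustrative only; not the actual engine.c code.
     */
    static const char *
    earliest_start(const char *text, const char *must_ptr, int moffset)
    {
        if (must_ptr == NULL || moffset < 0)
            return (text);                /* heuristic disabled */
        if (must_ptr - text < moffset)    /* would point before the text */
            return (text);
        return (must_ptr - moffset);
    }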
Simple tests showed a speed increase of 100% to 400%, but they were
biased in that regexec() was called for the whole file instead of line
by line, and parenthesized subexpressions were not searched for.
This change adds a single integer to the size of the "guts" structure,
and does not change the ABI.
Further improvements possible:
Since the speed increase observed here is so huge, one intuitive
optimization would be to introduce a bias in the function that computes
the "must" string so as to prefer a smaller string with a finite offset
over a larger one with an infinite offset. Tests have shown this to be a
bad idea, though, as the cost of false pre-matches far outweighs the
benefits of a must offset, even in biased situations.
A number of other improvements suggest themselves, though:
* identify the cases where the pattern is identical to the must
string, and avoid entering fast() and slow() in these cases.
* compute the maximum offset from the must string to the end of
the pattern, and use that to set the point at which fast() and
slow() should give up trying to find a match and return to
pre-matching.
* return all the way to pre-matching if a "match" was found and
later invalidated by back reference processing. Since back
references are evil and should be avoided anyway, this is of
little use.
The BM algorithm works by scanning the pattern from right to left,
and jumping as many characters as viable based on the text's mismatched
character and the pattern's already matched suffix.
This typically enables us to test only a fraction of the text's characters,
but performs worse than the straightforward method for small
patterns. Because of this, the BM algorithm will only be used if the
pattern size is at least 4 characters.
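For reference, a sketch of the bad-character half of such a scan
(Horspool's simplification); the real code additionally keeps a
match-jump table driven by the already matched suffix, and none of the
names below are the actual ones:

    #include <limits.h>
    #include <stddef.h>

    static const char *
    bm_search(const char *text, size_t tlen, const char *pat, size_t plen)
    {
        size_t charjump[UCHAR_MAX + 1];
        size_t i, pos;

        if (plen == 0 || plen > tlen)
            return (NULL);

        /* Characters absent from the pattern allow a full-length jump;
         * others jump by the distance from their rightmost occurrence
         * (excluding the last position) to the end of the pattern. */
        for (i = 0; i <= UCHAR_MAX; i++)
            charjump[i] = plen;
        for (i = 0; i < plen - 1; i++)
            charjump[(unsigned char)pat[i]] = plen - 1 - i;

        pos = 0;
        while (pos + plen <= tlen) {
            /* Compare the pattern right to left at this position. */
            for (i = plen; i > 0 && pat[i - 1] == text[pos + i - 1]; i--)
                continue;
            if (i == 0)
                return (text + pos);    /* pre-match found */
            /* Jump based on the text character under the pattern's
             * last position. */
            pos += charjump[(unsigned char)text[pos + plen - 1]];
        }
        return (NULL);
    }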
Notice that this pre-matching is done on the largest substring of the
regular expression that _must_ be present in the text for a successful
match to be possible at all.
For instance, "(xyzzy|grues)" will yield a null "must" substring, and,
therefore, not benefit from the BM algorithm at all. Because of the
lack of intelligence of the algorithm that finds the "must" string,
things like "charjump|matchjump" will also yield a null string. To
optimize that, "(char|match)jump" should be used.
The setup time (at regcomp()) for the BM algorithm will most likely
outweigh any benefits for one-time matches. Given the slow regex(3)
we have, this is unlikely to be even perceptible, though.
The size of a regex_t structure is increased by 2*sizeof(char*) +
256*sizeof(int) + strlen(must)*sizeof(int). This is all inside the
regex_t's "guts", which is allocated dynamically by regcomp(). If
allocation of either of the two tables fails, the other one is freed.
In this case, the straightforward algorithm is used for pre-matching.
Tests exercising the code path affected have shown a speed increase of
50% for "must" strings of length four or five.
API and ABI remain unchanged by this commit.
The patch submitted on the PR was not used, as it was non-functional.
PR: 14342
track.
The $Id$ line is normally at the bottom of the main comment block in the
man page, separated from the rest of the manpage by an empty comment,
like so:
.\" $Id$
.\"
If the immediately preceding comment is a @(#) format ID marker, then
the $Id$ will line up underneath it with no intervening blank lines.
Otherwise, an additional blank line is inserted.
Approved by: bde
In some cases, replace "if (a == NULL) a = malloc(x); else a =
realloc(a, x);" with a simple reallocf(a, x). Per ANSI C, realloc(NULL, x)
is equivalent to malloc(x), so this is guaranteed to behave the same,
with reallocf() additionally freeing the old block if the resize fails.
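For reference, reallocf(3) is declared in <stdlib.h> on FreeBSD and frees
the old block when the resize fails, so the usual temporary-pointer dance
is unnecessary:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        char *buf = NULL;
        size_t len = 64;

        /* realloc(NULL, len) acts as malloc(len), and reallocf() keeps
         * that property, so no separate NULL check is needed. */
        if ((buf = reallocf(buf, len)) == NULL) {
            /* Had buf pointed at an older block, it would already have
             * been freed here. */
            perror("reallocf");
            return (1);
        }
        free(buf);
        return (0);
    }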
I've been running these on my system here w/o ill effects for some
time. However, the CTM-express is at part 6 of 34 for the CAM
changes, so I've not been able to do a build world with the CAM in the
tree with these changes. Shouldn't impact anything, but...
so that all these makefiles can be used to build libc_r too.
Added .if ${LIB} == "c" tests to restrict man page builds to libc
to avoid needlessly building them with libc_r too.
Split libc Makefile into Makefile and Makefile.inc to allow the
libc_r Makefile to include Makefile.inc too.
in a bunch of man pages.
Use the correct .Bx (BSD UNIX) or .At (AT&T UNIX) macros
instead of explicitly specifying the version in the text
in a bunch of man pages.