string may be found (from the beginning of the pattern), the point
at which must is found minus that offset may actually point to some
place before the start of the text.
In that case, make start = start (i.e., leave it unchanged).
Alternatively, this could be tested for in the preceding if, but it
did not occur to me. :-)
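In code, the fix amounts to a clamp; the following is a minimal sketch, with names like dp and moffset modeled on the library's internals rather than quoted from them:

    /*
     * Clamp the heuristic starting point: dp is where the "must"
     * string was found, moffset the maximum offset of that string
     * within the pattern (-1 when the heuristic is disabled).  The
     * result never precedes the old start.
     */
    static const char *
    adjust_start(const char *start, const char *dp, int moffset)
    {
            if (dp == NULL || moffset < 0)
                    return (start);
            return (dp - moffset < start ? start : dp - moffset);
    }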
Caught by: regex(3) test code
Use a CHAR_MIN-based array, like elsewhere in the code.
Remove a number of unused variables (some due to the above change, one
that was left after a number of optimizing steps through the source).
Brucified by: bde
previous commits.
At the time we search the pattern for the "must" string, we now compute
the longest offset from the beginning of the pattern at which the must
string might be found. If that offset is found to be infinite (through
use of "+" or "*"), we set it to -1 to disable the heuristics applied
later.
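The scanner itself is not shown here; as a purely illustrative sketch (offset_add and its parameters are hypothetical, not the library's actual code), the accumulation behaves like this:

    /*
     * Accumulate the worst-case offset while walking the pattern.
     * An unbounded repetition ("+" or "*") makes the offset
     * infinite, encoded as -1 so the later heuristic is skipped.
     */
    static int
    offset_add(int offset, int piece_maxlen, int piece_unbounded)
    {
            if (offset == -1 || piece_unbounded)
                    return (-1);
            return (offset + piece_maxlen);
    }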
After we are done with pre-matching, we use that offset and the point in
the text at which the must string was found to compute the earliest
point at which the pattern might be found.
Special care should be taken here. The variable "start" is passed to the
automata-processing functions fast() and slow() to indicate the point in
the text from which they should start working. The real beginning of
the text is passed in a struct match variable m, which is used to check
for anchors. That variable, though, is initialized with "start", so we
must not adjust "start" before "m" is properly initialized.
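A condensed, hypothetical sketch of that ordering (struct match_sketch and its fields are stand-ins for the real struct match, not its actual declaration):

    struct match_sketch {
            const char *offp;       /* true beginning of the text */
            const char *endp;       /* end of the text */
    };

    /*
     * m must be seeded with the real text origin before "start" is
     * nudged forward, or anchor ("^") checks would see the wrong
     * beginning.  Returns the adjusted starting point.
     */
    static const char *
    setup_and_adjust(struct match_sketch *m, const char *start,
        const char *stop, const char *dp, int moffset)
    {
            m->offp = start;        /* before any adjustment */
            m->endp = stop;
            if (dp != NULL && moffset > -1 && dp - moffset > start)
                    start = dp - moffset;
            return (start);
    }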
Simple tests showed a speed increase of 100% to 400%, but they were
biased in that regexec() was called for the whole file instead of line
by line, and parenthesized subexpressions were not searched for.
This change adds a single integer to the size of the "guts" structure,
and does not change the ABI.
Further improvements possible:
Since the speed increase observed here is so huge, one intuitive
optimization would be to introduce a bias in the function that computes
the "must" string so as to prefer a smaller string with a finite offset
over a larger one with an infinite offset. Tests have shown this to be a
bad idea, though, as the cost of false pre-matches far outweighs the
benefits of a must offset, even in biased situations.
A number of other improvements suggest themselves, though:
* identify the cases where the pattern is identical to the must
string, and avoid entering fast() and slow() in these cases.
* compute the maximum offset from the must string to the end of
the pattern, and use that to set the point at which fast() and
slow() should give up trying to find a match and then return
to pre-matching.
* return all the way to pre-matching if a "match" was found and
later invalidated by back reference processing. Since back
references are evil and should be avoided anyway, this is of
little use.
The BM algorithm works by scanning the pattern from right to left,
and jumping as many characters as viable based on the text's mismatched
character and the pattern's already matched suffix.
This typically enables us to test only a fraction of the text's characters,
but performs worse than the straightforward method for small
patterns. Because of this, the BM algorithm will only be used if the
pattern size is at least 4 characters.
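For reference, here is a simplified, self-contained sketch of the technique using only the mismatched-character rule (the Boyer-Moore-Horspool variant); the actual commit also builds the matched-suffix table:

    #include <limits.h>
    #include <stddef.h>

    /*
     * Return a pointer to the first occurrence of must[0..m) in
     * text[0..n), or NULL.  Comparison runs right to left, and the
     * text character aligned with the pattern's last position
     * decides how far to jump on a mismatch.
     */
    static const char *
    bm_pre_match(const char *text, size_t n, const char *must, size_t m)
    {
            size_t skip[UCHAR_MAX + 1];
            size_t i, j;

            if (m == 0 || m > n)
                    return (NULL);
            for (i = 0; i <= UCHAR_MAX; i++)
                    skip[i] = m;            /* unseen char: jump past it */
            for (i = 0; i < m - 1; i++)
                    skip[(unsigned char)must[i]] = m - 1 - i;

            for (i = 0; i + m <= n; i += skip[(unsigned char)text[i + m - 1]]) {
                    for (j = m; j > 0 && text[i + j - 1] == must[j - 1]; j--)
                            continue;       /* scan right to left */
                    if (j == 0)
                            return (text + i);
            }
            return (NULL);
    }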
Notice that this pre-matching is done on the largest substring of the
regular expression that _must_ be present in the text for a successful
match to be possible at all.
For instance, "(xyzzy|grues)" will yield a null "must" substring, and,
therefore, not benefit from the BM algorithm at all. Because of the
lack of intelligence of the algorithm that finds the "must" string,
things like "charjump|matchjump" will also yield a null string. To
optimize that, "(char|match)jump" should be used.
The setup time (at regcomp()) for the BM algorithm will most likely
outweigh any benefits for one-time matches. Given the slow regex(3)
we have, this is unlikely to be even perceptible, though.
The size of a regex_t structure is increased by 2*sizeof(char*) +
256*sizeof(int) + strlen(must)*sizeof(int). This is all inside the
regex_t's "guts", which is allocated dynamically by regcomp(). If
allocation of either of the two tables fails, the other one is freed.
In this case, the straightforward algorithm is used for pre-matching.
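A sketch of that fallback, with the struct and field names as illustrative assumptions (the real code also biases charjump by CHAR_MIN, as noted in an earlier commit):

    #include <limits.h>
    #include <stdlib.h>

    struct guts_sketch {            /* stand-in for the regex_t guts */
            int *charjump;
            int *matchjump;
    };

    /*
     * Allocate both BM jump tables; if either allocation fails, free
     * the other and leave both NULL, so pre-matching falls back to
     * the straightforward linear scan.
     */
    static void
    alloc_bm_tables(struct guts_sketch *g, size_t mustlen)
    {
            g->charjump = malloc((CHAR_MAX - CHAR_MIN + 1) * sizeof(int));
            g->matchjump = malloc(mustlen * sizeof(int));
            if (g->charjump == NULL || g->matchjump == NULL) {
                    free(g->charjump);
                    free(g->matchjump);
                    g->charjump = NULL;
                    g->matchjump = NULL;
            }
    }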
Tests exercising the code path affected have shown a speed increase of
50% for "must" strings of length four or five.
API and ABI remain unchanged by this commit.
The patch submitted on the PR was not used, as it was non-functional.
PR: 14342