Commit Graph

8 Commits

Daniel C. Sobral
c5e125bbbf Deal with the signed/unsigned char issue in a more proper manner. We
use a CHAR_MIN-based array, as elsewhere in the code.
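
A minimal sketch of the idiom, assuming a dynamically allocated table
(the helper name is hypothetical):

    #include <limits.h>
    #include <stdlib.h>

    /*
     * Allocate a table indexable by any char value, signed or unsigned.
     * Biasing the pointer by -CHAR_MIN makes table[c] valid for the
     * whole range CHAR_MIN..UCHAR_MAX; free with free(table + CHAR_MIN).
     */
    int *
    alloc_char_table(void)
    {
        int *table = malloc((UCHAR_MAX - CHAR_MIN + 1) * sizeof(int));

        if (table == NULL)
            return (NULL);
        return (table - CHAR_MIN);
    }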

Remove a number of unused variables (some due to the above change, one
that was left after a number of optimizing steps through the source).

Brucified by: bde
2000-07-07 07:46:36 +00:00
Daniel C. Sobral
e6a886d8db Enhance the optimization provided by pre-matching. Fix style bugs
in previous commits.

At the time we search the pattern for the "must" string, we now compute
the longest offset from the beginning of the pattern at which the must
string might be found. If that offset is found to be infinite (through
use of "+" or "*"), we set it to -1 to disable the heuristics applied
later.

After we are done with pre-matching, we use that offset and the point in
the text at which the must string was found to compute the earliest
point at which the pattern might be found.

Special care should be taken here. The variable "start" is passed to the
automata-processing functions fast() and slow() to indicate the point in
the text from which they should start working. The real beginning of the
text is passed in a struct match variable m, which is used to check for
anchors. That variable, though, is initialized with "start", so we
must not adjust "start" before "m" is properly initialized.
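
As an illustrative sketch, with hypothetical names for the stored
offset (moffset) and the location of the pre-match hit (dp):

    /*
     * Given where pre-matching found the "must" string (dp) and the
     * maximum offset of that string from the pattern start (moffset,
     * -1 when unbounded), compute the earliest point in the text at
     * which the whole pattern can begin.
     */
    static const char *
    earliest_start(const char *start, const char *dp, int moffset)
    {
        if (moffset > -1 && dp - moffset > start)
            return (dp - moffset);
        return (start);
    }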

Simple tests showed speed increases of 100% to 400%, but they were
biased in that regexec() was called for the whole file instead of line
by line, and parenthesized subexpressions were not searched for.

This change adds a single integer to the size of the "guts" structure,
and does not change the ABI.

Further improvements possible:

Since the speed increase observed here is so huge, one intuitive
optimization would be to introduce a bias in the function that computes
the "must" string so as to prefer a smaller string with a finite offset
over a larger one with an infinite offset. Tests have shown this to be a
bad idea, though, as the cost of false pre-matches far outweighs the
benefits of a must offset, even in biased situations.

A number of other improvements suggest themselves, though:

	* identify the cases where the pattern is identical to the must
	string, and avoid entering fast() and slow() in these cases.

	* compute the maximum offset from the must string to the end of
	the pattern, and use that to set the point at which fast() and
	slow() should give up trying to find a match and then return to
	pre-matching.

	* return all the way to pre-matching if a "match" was found and
	later invalidated by back reference processing. Since back
	references are evil and should be avoided anyway, this is of
	little use.
2000-07-02 10:58:07 +00:00
Daniel C. Sobral
6049d9f0eb Add the Boyer-Moore algorithm to the pre-matching test.
The BM algorithm works by scanning the pattern from right to left and
jumping over as many characters as possible, based on the text's
mismatched character and the pattern's already-matched suffix.

This typically enables us to test only a fraction of the text's
characters, but it performs worse than the straightforward method for
small patterns. Because of this, the BM algorithm will only be used if
the pattern is at least 4 characters long.
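
For a feel of the mechanics, here is a self-contained sketch of the
bad-character half of the search (the Horspool simplification of BM;
the actual code also builds the matched-suffix table, omitted here,
and the function name is hypothetical):

    #include <limits.h>
    #include <stddef.h>

    static const char *
    bm_prematch(const char *text, size_t tlen, const char *pat, size_t plen)
    {
        size_t skip[UCHAR_MAX + 1];
        size_t i, j;

        if (plen == 0 || plen > tlen)
            return (NULL);

        /* A character absent from the pattern shifts past all of it. */
        for (i = 0; i <= UCHAR_MAX; i++)
            skip[i] = plen;
        /* Characters inside the pattern permit smaller shifts. */
        for (i = 0; i < plen - 1; i++)
            skip[(unsigned char)pat[i]] = plen - 1 - i;

        for (i = 0; i + plen <= tlen;
            i += skip[(unsigned char)text[i + plen - 1]]) {
            /* Compare the pattern right to left at this alignment. */
            for (j = plen; j > 0 && text[i + j - 1] == pat[j - 1]; j--)
                continue;
            if (j == 0)
                return (text + i);      /* "must" string found */
        }
        return (NULL);
    }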

Notice that this pre-matching is done on the largest substring of the
regular expression that _must_ be present in the text for a successful
match to be possible at all.

For instance, "(xyzzy|grues)" will yield a null "must" substring, and,
therefore, not benefit from the BM algorithm at all. Because of the
lack of intelligence of the algorithm that finds the "must" string,
things like "charjump|matchjump" will also yield a null string. To
optimize that, "(char|match)jump" should be used.

The setup time (at regcomp()) for the BM algorithm will most likely
outweigh any benefits for one-time matches. Given the slow regex(3)
we have, this is unlikely to be even perceptible, though.

The size of a regex_t structure is increased by 2*sizeof(char*) +
256*sizeof(int) + strlen(must)*sizeof(int). This is all inside the
regex_t's "guts", which is allocated dynamically by regcomp(). If
allocation of either of the two tables fails, the other one is freed.
In that case, the straightforward algorithm is used for pre-matching.
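
The all-or-nothing allocation can be sketched as follows (the struct
and function names are illustrative, not the actual "guts" layout):

    #include <stdlib.h>
    #include <string.h>

    struct bm_tables {
        int *charjump;   /* 256-entry bad-character table   */
        int *matchjump;  /* strlen(must)-entry suffix table */
    };

    /* Allocate both BM tables or neither, so that a NULL charjump
     * cleanly selects the plain pre-matching scan. */
    static void
    bm_alloc(struct bm_tables *t, const char *must)
    {
        t->charjump = malloc(256 * sizeof(int));
        t->matchjump = malloc(strlen(must) * sizeof(int));
        if (t->charjump == NULL || t->matchjump == NULL) {
            free(t->charjump);          /* free(NULL) is a no-op */
            free(t->matchjump);
            t->charjump = t->matchjump = NULL;
        }
    }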

Tests exercising the code path affected have shown a speed increase of
50% for "must" strings of length four or five.

API and ABI remain unchanged by this commit.

The patch submitted with the PR was not used, as it was non-functional.

PR: 14342
2000-06-29 04:48:34 +00:00
Andrey A. Chernov
b5363c4a3b Use locale for character classes instead of hardcoded values
Misc 8-bit cleanup
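
A minimal illustration of the idea, with a hypothetical helper (the
point is that ctype(3) honors the current locale where hardcoded ASCII
ranges do not):

    #include <ctype.h>

    /* A hardcoded test such as
     *     (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
     * misclassifies accented letters in 8-bit locales. */
    static int
    is_letter(int c)
    {
        return (isalpha((unsigned char)c));
    }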
1996-08-11 11:42:03 +00:00
Jordan K. Hubbard
51295a4d3e General -Wall warning cleanup, part I.
Submitted-By: Kent Vander Velden <graphix@iastate.edu>
1996-07-12 18:57:58 +00:00
Poul-Henning Kamp
16252f1166 More cleanup.
Uhm, I also forgot: I took "EXTRA_SANITY" out of malloc.c
1995-10-22 14:40:55 +00:00
Rodney W. Grimes
6c06b4e2aa Remove trailing whitespace.
1995-05-30 05:51:47 +00:00
Rodney W. Grimes
58f0484fa2 BSD 4.4 Lite Lib Sources
1994-05-27 05:00:24 +00:00