Remove CVS from the base system.

Discussed with:	many
Reviewed by:	peter, zi
Approved by:	core
Eitan Adler 2013-06-15 20:29:07 +00:00
parent 65c7c08716
commit 1cbff2a999
257 changed files with 34 additions and 228896 deletions

ObsoleteFiles.inc

@@ -38,6 +38,33 @@
# xargs -n1 | sort | uniq -d;
# done
# 20130614: remove CVS from base
OLD_FILES+=usr/bin/cvs
OLD_FILES+=usr/bin/cvsbug
OLD_FILES+=usr/share/examples/cvs/contrib/README
OLD_FILES+=usr/share/examples/cvs/contrib/clmerge
OLD_FILES+=usr/share/examples/cvs/contrib/cln_hist
OLD_FILES+=usr/share/examples/cvs/contrib/commit_prep
OLD_FILES+=usr/share/examples/cvs/contrib/cvs2vendor
OLD_FILES+=usr/share/examples/cvs/contrib/cvs_acls
OLD_FILES+=usr/share/examples/cvs/contrib/cvscheck
OLD_FILES+=usr/share/examples/cvs/contrib/cvscheck.man
OLD_FILES+=usr/share/examples/cvs/contrib/cvshelp.man
OLD_FILES+=usr/share/examples/cvs/contrib/descend.man
OLD_FILES+=usr/share/examples/cvs/contrib/easy-import
OLD_FILES+=usr/share/examples/cvs/contrib/intro.doc
OLD_FILES+=usr/share/examples/cvs/contrib/log
OLD_FILES+=usr/share/examples/cvs/contrib/log_accum
OLD_FILES+=usr/share/examples/cvs/contrib/mfpipe
OLD_FILES+=usr/share/examples/cvs/contrib/rcs-to-cvs
OLD_FILES+=usr/share/examples/cvs/contrib/rcs2log
OLD_FILES+=usr/share/examples/cvs/contrib/rcslock
OLD_FILES+=usr/share/examples/cvs/contrib/sccs2rcs
OLD_FILES+=usr/share/info/cvs.info.gz
OLD_FILES+=usr/share/info/cvsclient.info.gz
OLD_FILES+=usr/share/man/man1/cvs.1.gz
OLD_FILES+=usr/share/man/man5/cvs.5.gz
OLD_FILES+=usr/share/man/man8/cvsbug.8.gz
# 20130417: nfs fha moved from nfsserver to nfs
OLD_FILES+=usr/include/nfsserver/nfs_fha.h
# 20130411: new clang import which bumps version from 3.2 to 3.3.
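The OLD_FILES entries above are consumed by the top-level cleanup targets when a system is upgraded past this change. As a minimal sketch (assuming a stock source tree in /usr/src and a freshly installed world), the stale CVS files can be listed and removed with:

	cd /usr/src
	make check-old        # list obsolete files, directories and libraries
	make delete-old       # remove obsolete files and directories (prompts per file)
	make delete-old-libs  # remove obsolete libraries once nothing still uses them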

UPDATING

@@ -31,6 +31,10 @@ NOTE TO PEOPLE WHO THINK THAT FreeBSD 10.x IS SLOW:
disable the most expensive debugging functionality run
"ln -s 'abort:false,junk:false' /etc/malloc.conf".)
20130615:
CVS has been removed from the base system. An exact copy
of the code is available from the devel/cvs port.
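	A minimal sketch of restoring a cvs binary after this change,
	assuming a standard ports tree under /usr/ports (or pkg(8)
	configured with its default package repository):

	cd /usr/ports/devel/cvs && make install clean
	pkg install cvs    (alternative, where binary packages are in use)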
20130613:
Some people report the following error after the switch to bmake:

contrib/cvs/AUTHORS

@@ -1,90 +0,0 @@
Authors of GNU CVS
The conflict-resolution algorithms and much of the administrative file
definitions of CVS were based on the original package written by Dick Grune
at Vrije Universiteit in Amsterdam <dick@cs.vu.nl>, and posted to
comp.sources.unix in the volume 6 release sometime in 1986. This original
version was a collection of shell scripts. I am thankful that Dick made
his work available.
Brian Berliner from Prisma, Inc. (now at Sun Microsystems, Inc.)
<berliner@sun.com> converted the original CVS shell scripts into reasonably
fast C and added many, many features to support software release control
functions. See the manual page in the "man" directory. A copy of the
USENIX article presented at the Winter 1990 USENIX Conference, Washington
D.C., is included in the "doc" directory.
Jeff Polk from BSDI <polk@bsdi.com> converted the CVS 1.2
sources into much more readable and maintainable C code. He also added a
whole lot of functionality and modularity to the code in the process.
See the bottom of the NEWS file (from about 1992).
david d `zoo' zuhn <zoo@armadillo.com> contributed the working base code
for CVS 1.4 Alpha. His work carries on from work done by K. Richard Pixley
and others at Cygnus Support. The CVS 1.4 upgrade is due in large part to
Zoo's efforts.
David G. Grubbs <dgg@odi.com> contributed the CVS "history" and "release"
commands. As well as the ever-so-useful "-n" option of CVS which tells CVS
to show what it would do, without actually doing it. He also contributed
support for the .cvsignore file.
The Free Software Foundation (GNU) contributed most of the portability
framework that CVS now uses. This can be found in the "configure" script,
the Makefile's, and basically most of the "lib" directory.
K. Richard Pixley, Cygnus Support <rich@cygnus.com> contributed many bug
fixes/enhancements as well as completing early reviews of the CVS 1.3 manual
pages.
Roland Pesch, then of Cygnus Support <roland@wrs.com> contributed
brand new cvs(1) and cvs(5) manual pages. Thanks to him for saving us
from poor use of our language!
Paul Sander, HaL Computer Systems, Inc. <paul@hal.com> wrote and
contributed the code in lib/sighandle.c. I added support for POSIX, BSD,
and non-POSIX/non-BSD systems.
Jim Kingdon and others at Cygnus Support <info@cygnus.com> wrote the
remote repository access code.
Larry Jones and Derek Price <derek@ximbiot.com> have been maintaining and
enhancing CVS for some years. Mark D. Baushke <mdb@gnu.org> came on in
2003.
Conrad Pino <Conrad@Pino.com> began maintaining the Windows port in 2004.
There have been many, many contributions not listed here. Consult the
individual ChangeLog files in each directory for a more complete idea.
In addition to the above contributors, the following Beta testers
deserve special mention for their support. This is only a partial
list; if you have helped in this way and would like to be listed, let
bug-cvs know (as described in the Cederqvist manual).
Mark D. Baushke <mdb@cisco.com>
Per Cederqvist <ceder@signum.se>
J.T. Conklin <jtc@cygnus.com>
Vince DeMarco <vdemarco@fdcsrvr.cs.mci.com>
Paul Eggert <eggert@twinsun.com>
Lal George <george@research.att.com>
Dean E. Hardi <Dean.E.Hardi@ccmail.jpl.nasa.gov>
Mike Heath <mike@pencom.com>
Jim Kingdon <kingdon@cygnus.com>
Bernd Leibing <bernd.leibing@rz.uni-ulm.de>
Benedict Lofstedt <benedict@tusc.com.au>
Dave Love <d.love@dl.ac.uk>
Robert Lupton the Good <rhl@astro.princeton.edu>
Tom McAliney <tom@hilco.com>
Eberhard Mattes <mattes@azu.informatik.uni-stuttgart.de>
Jim Meyering <meyering@comco.com>
Thomas Mohr <mohr@lts.sel.alcatel.de>
Thomas Nilsson <thoni@softlab.se>
Raye Raskin <raye.raskin@lia.com>
Harlan Stenn <harlan@landmark.com>
Gunnar Tornblom <gunnar.tornblom@senet.abb.se>
Greg A. Woods <woods@planix.com>
Many contributors have added code to the "contrib" directory. See the
README file there for a list of what is available. There is also a
contributed GNU Emacs CVS-mode in tools/pcl-cvs.

contrib/cvs/BUGS

@@ -1,100 +0,0 @@
See the Cederqvist manual (cvs.texinfo) for information on how to
report bugs (and what will happen to your bug reports if you do).
The following is a list of some of the known bugs. It may or may not
be comprehensive. We would dearly love for people to volunteer to
help us keep it up to date (for starters, if you notice any
inaccuracies, please let bug-cvs know as described in the Cederqvist
manual). There are some other reported bugs in MINOR-BUGS; the
difference, at least in theory, is that those bugs are less serious.
* For platform-specific information (in some cases including known
bugs), see README.VMS, windows-NT/README, or os2/README. There is no
similar file for the unix-like operating systems (not yet, at least).
This file also might contain some platform-specific bugs.
* If your login name contains a space or various other characters
(particularly an issue on Windows), CVS will have trouble (it will
write invalid RCS files, probably). The fix would be to have CVS
change such characters to underscores before writing them to the RCS
file. Furthermore, the LOGNAME or USER environment variables usually
won't override the system login name, so this can be hard to work
around.
* If you specify the -w global option to client/server CVS, it only
overrides a CVSREAD environment variable set on the client, not a
CVSREAD variable which was set on the server (for example, in .bashrc
when the server was run via rsh). The fix of course will be to
provide a "Option-read-write" request which sends -w, in addition to
"Global_option -r" which sends -r.
* Symbolic links to files will not work with or without LockDir. In the
repository, you should avoid using symbolic links to files since this issue
can cause data loss. Symlinks are only a problem when writing files. If your
repository does not allow any write access, symlinks are not a problem.
* Symbolic links to directories will not work with LockDir. In the
repository, you should avoid using symbolic links to directories if
you intend to use LockDir as the correct directory will NOT be locked
by CVS during write. Directory symlinks are not recommended, but should work
as long as LockDir is not being used. Symlinks are only a problem when
writing files. If your repository does not allow any write access, symlinks
are never a problem, whether or not LockDir is in use.
* The -m option to "cvs add" does not work with client/server CVS.
CVS will accept the option, but it won't actually set the
file's description.
* cvs update walks into a user's work directory if there's a directory
of the same name in the repository even if the user's directory
doesn't yet have a CVS admin sub-directory. This can greatly confuse
users who try to add the same directory at nearly the same time.
* From: "Charles M. Hannum" <mycroft@ai.mit.edu>
To: info-cvs@prep.ai.mit.edu
Subject: Still one more bug
Date: Sat, 25 Feb 1995 17:01:15 -0500
mycroft@duality [1]; cd /usr/src/lib/libc
mycroft@duality [1]; cvs diff -C2 '-D1 day ago' -Dnow
cvs server: Diffing .
cvs server: Diffing DB
cvs [server aborted]: could not chdir to DB: No such file or directory
mycroft@duality [1];
`DB' is an old directory, which no longer has files in it, and is
removed automatically when I use the `-P' option to checkout.
This error doesn't occur when run locally.
P.S. Is anyone working on fixing these bugs?
* CVS does not always seem to be waiting until the next filesystem timestamp
quantum after commits. So far this has only shown up in testing under the BSDI
OS. The symptoms are that occasionally CVS will not notice that modified files
are modified, though the file must be modified within a short time after the
commit, probably milliseconds or seconds, for this symptom to be noticed. One
suspected cause is that one of the calls to sleep_past() is being called with
an incorrect value, though this does not explain why symptoms have only been
noticed under BSDI.
* Status
/*-------.
| Stable |
`-------*/
/*-------------------------.
| Sane for full scale use. |
`-------------------------*/

contrib/cvs/COPYING

@@ -1,251 +0,0 @@
[I have snipped the snail mail address of the FSF because it has
changed in the past and is likely to change again. The current
address should be at http://www.gnu.org/]
GNU GENERAL PUBLIC LICENSE
Version 1, February 1989
Copyright (C) 1989 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The license agreements of most software companies try to keep users
at the mercy of those companies. By contrast, our General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. The
General Public License applies to the Free Software Foundation's
software and to any other program whose authors commit to using it.
You can use it for your programs, too.
When we speak of free software, we are referring to freedom, not
price. Specifically, the General Public License is designed to make
sure that you have the freedom to give away or sell copies of free
software, that you receive source code or can get it if you want it,
that you can change the software or use pieces of it in new free
programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of a such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must tell them their rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any program or other work which
contains a notice placed by the copyright holder saying it may be
distributed under the terms of this General Public License. The
"Program", below, refers to any such program or work, and a "work based
on the Program" means either the Program or any work containing the
Program or a portion of it, either verbatim or with modifications. Each
licensee is addressed as "you".
1. You may copy and distribute verbatim copies of the Program's source
code as you receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice and
disclaimer of warranty; keep intact all the notices that refer to this
General Public License and to the absence of any warranty; and give any
other recipients of the Program a copy of this General Public License
along with the Program. You may charge a fee for the physical act of
transferring a copy.
2. You may modify your copy or copies of the Program or any portion of
it, and copy and distribute such modifications under the terms of Paragraph
1 above, provided that you also do the following:
a) cause the modified files to carry prominent notices stating that
you changed the files and the date of any change; and
b) cause the whole of any work that you distribute or publish, that
in whole or in part contains the Program or any part thereof, either
with or without modifications, to be licensed at no charge to all
third parties under the terms of this General Public License (except
that you may choose to grant warranty protection to some or all
third parties, at your option).
c) If the modified program normally reads commands interactively when
run, you must cause it, when started running for such interactive use
in the simplest and most usual way, to print or display an
announcement including an appropriate copyright notice and a notice
that there is no warranty (or else, saying that you provide a
warranty) and that users may redistribute the program under these
conditions, and telling the user how to view a copy of this General
Public License.
d) You may charge a fee for the physical act of transferring a
copy, and you may at your option offer warranty protection in
exchange for a fee.
Mere aggregation of another independent work with the Program (or its
derivative) on a volume of a storage or distribution medium does not bring
the other work under the scope of these terms.
3. You may copy and distribute the Program (or a portion or derivative of
it, under Paragraph 2) in object code or executable form under the terms of
Paragraphs 1 and 2 above provided that you also do one of the following:
a) accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of
Paragraphs 1 and 2 above; or,
b) accompany it with a written offer, valid for at least three
years, to give any third party free (except for a nominal charge
for the cost of distribution) a complete machine-readable copy of the
corresponding source code, to be distributed under the terms of
Paragraphs 1 and 2 above; or,
c) accompany it with the information you received as to where the
corresponding source code may be obtained. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form alone.)
Source code for a work means the preferred form of the work for making
modifications to it. For an executable file, complete source code means
all the source code for all modules it contains; but, as a special
exception, it need not include source code for modules which are standard
libraries that accompany the operating system on which the executable
file runs, or for standard header files or definitions files that
accompany that operating system.
4. You may not copy, modify, sublicense, distribute or transfer the
Program except as expressly provided under this General Public License.
Any attempt otherwise to copy, modify, sublicense, distribute or transfer
the Program is void, and will automatically terminate your rights to use
the Program under this License. However, parties who have received
copies, or rights to use copies, from you under this General Public
License will not have their licenses terminated so long as such parties
remain in full compliance.
5. By copying, distributing or modifying the Program (or any work based
on the Program) you indicate your acceptance of this license to do so,
and all its terms and conditions.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the original
licensor to copy, distribute or modify the Program subject to these
terms and conditions. You may not impose any further restrictions on the
recipients' exercise of the rights granted herein.
7. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of the license which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
the license, you may choose any version ever published by the Free Software
Foundation.
8. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
Appendix: How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to humanity, the best way to achieve this is to make it
free software which everyone can redistribute and change under these
terms.
To do so, attach the following notices to the program. It is safest to
attach them to the start of each source file to most effectively convey
the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 19yy <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 1, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) 19xx name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License. Of course, the
commands you use may be called something other than `show w' and `show
c'; they could even be mouse-clicks or menu items--whatever suits your
program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the
program `Gnomovision' (a program to direct compilers to make passes
at assemblers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
That's all there is to it!

contrib/cvs/COPYING.LIB

@@ -1,484 +0,0 @@
[I have snipped the snail mail address of the FSF because it has
changed in the past and is likely to change again. The current
address should be at http://www.gnu.org/]
GNU LIBRARY GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1991 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the library GPL. It is
numbered 2 because it goes with version 2 of the ordinary GPL.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Library General Public License, applies to some
specially designated Free Software Foundation software, and to any
other libraries whose authors decide to use it. You can use it for
your libraries, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if
you distribute copies of the library, or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link a program with the library, you must provide
complete object files to the recipients so that they can relink them
with the library, after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
Our method of protecting your rights has two steps: (1) copyright
the library, and (2) offer you this license which gives you legal
permission to copy, distribute and/or modify the library.
Also, for each distributor's protection, we want to make certain
that everyone understands that there is no warranty for this free
library. If the library is modified by someone else and passed on, we
want its recipients to know that what they have is not the original
version, so that any problems introduced by others will not reflect on
the original authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that companies distributing free
software will individually obtain patent licenses, thus in effect
transforming the program into proprietary software. To prevent this,
we have made it clear that any patent must be licensed for everyone's
free use or not licensed at all.
Most GNU software, including some libraries, is covered by the ordinary
GNU General Public License, which was designed for utility programs. This
license, the GNU Library General Public License, applies to certain
designated libraries. This license is quite different from the ordinary
one; be sure to read it in full, and don't assume that anything in it is
the same as in the ordinary license.
The reason we have a separate public license for some libraries is that
they blur the distinction we usually make between modifying or adding to a
program and simply using it. Linking a program with a library, without
changing the library, is in some sense simply using the library, and is
analogous to running a utility program or application program. However, in
a textual and legal sense, the linked executable is a combined work, a
derivative of the original library, and the ordinary General Public License
treats it as such.
Because of this blurred distinction, using the ordinary General
Public License for libraries did not effectively promote software
sharing, because most developers did not use the libraries. We
concluded that weaker conditions might promote sharing better.
However, unrestricted linking of non-free programs would deprive the
users of those programs of all benefit from the free status of the
libraries themselves. This Library General Public License is intended to
permit developers of non-free programs to use free libraries, while
preserving your freedom as a user of such programs to change the free
libraries that are incorporated in them. (We have not seen how to achieve
this as regards changes in header files, but we have achieved it as regards
changes in the actual functions of the Library.) The hope is that this
will lead to faster development of free libraries.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, while the latter only
works together with the library.
Note that it is possible for a library to be covered by the ordinary
General Public License rather than by this special one.
GNU LIBRARY GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library which
contains a notice placed by the copyright holder or other authorized
party saying it may be distributed under the terms of this Library
General Public License (also called "this License"). Each licensee is
addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also compile or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
c) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
d) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the source code distributed need not include anything that is normally
distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Library General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
Appendix: How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).
To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the library's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Library General Public
License as published by the Free Software Foundation; either
version 2 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Library General Public License for more details.
You should have received a copy of the GNU Library General Public
License along with this library; if not, write to the Free
Software Foundation, Inc.
Also add information on how to contact you by electronic and paper mail.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the library, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the
library `Frob' (a library for tweaking knobs) written by James Random Hacker.
<signature of Ty Coon>, 1 April 1990
Ty Coon, President of Vice
That's all there is to it!

File diff suppressed because it is too large


@@ -1,700 +0,0 @@
Thu Sep 15 14:19:21 1994 david d `zoo' zuhn <zoo@monad.armadillo.com>
* Makefile.in: define TEXI2DVI, add it to FLAGS_TO_PASS; remove
old comments about parameters for DEFS
Wed Jul 13 21:54:46 1994 david d `zoo' zuhn (zoo@monad.armadillo.com)
* contrib/rcs-to-cvs: rewritten for Bourne shell (thanks to David
MacKenzie <djm@cygnus.com>)
Wed Jul 13 21:48:38 1994 Ken Raeburn (raeburn@cujo.cygnus.com)
* Makefile.in: Deleted line consisting of only whitespace; it
confuses some versions of make.
Mon Jan 24 12:26:47 1994 david d zuhn (zoo@monad.armadillo.com)
* configure.in: check for <sys/select.h> and <ndbm.h>
* Makefile.in: define YACC and not BISON
Sat Dec 18 00:52:04 1993 david d zuhn (zoo@monad.armadillo.com)
* config.h.in: handle HAVE_SYS_WAIT_H, HAVE_ERRNO_H
* configure.in: check for memmove, <errno.h>
* Makefile.in (VPATH): don't use $(srcdir), but @srcdir@ instead
* configure.in (AC_HAVE_HEADERS): check for <sys/wait.h>
Mon Nov 29 15:05:43 1993 K. Richard Pixley (rich@sendai.cygnus.com)
* lib/Makefile.in, src/Makefile.in (CFLAGS): default to -g.
* src/log.c (log_fileproc): if a file has been added, but not
committed, then say so rather than reporting that nothing is
known.
* src/sanity.el: update for emacs-19.
* src/RCS-patches, src/README-rm-add: update for rcs-5.6.6.
* src/Makefile.in: removed some gratuitous diffs from cvs-1.3.
* src/cvsrc.c: strdup -> xstrdup, malloc -> xmalloc, comment about
fgets lossage.
* configure, configure.in, Makefile.in: support man and doc
directories and info and dvi targets.
* doc/cvs.texinfo: comment out include of gpl.texinfo.
* doc/Makefile.in: added dvi & info targets.
* doc/cvsclient.texi: added @setfilename.
* lib/Makefile.in: remove some extraneous diffs against the
patched cvs-1.3.
* doc/Makefile.in, man/Makefile.in: update for autoconf.
Fri Nov 19 12:56:34 1993 K. Richard Pixley (rich@sendai.cygnus.com)
* Many files: added configure.in, updated configure based on
autoconf.
Tue Jun 1 17:02:41 1993 david d `zoo' zuhn (zoo at cirdan.cygnus.com)
* configure: add support for alloca and sys/select.h
Wed May 19 19:34:48 1993 Jim Kingdon (kingdon@lioth.cygnus.com)
* cvs-format.el: Don't set c-tab-always-indent.
Mon Mar 22 23:25:33 1993 david d `zoo' zuhn (zoo at cirdan.cygnus.com)
* Makefile.in: installcheck: recurse into src directory to run tests
Mon Jan 18 17:21:16 1993 K. Richard Pixley (rich@rtl.cygnus.com)
* Makefile.in (check): recur into src directory in order to pick
up the sanity check.
Thu Dec 17 19:41:22 1992 david d `zoo' zuhn (zoo at cirdan.cygnus.com)
* Makefile.in: added blank 'dvi' target
Tue Apr 7 15:55:25 1992 Brian Berliner (berliner at sun.com)
* Changes between CVS 1.3 Beta-3 and official CVS 1.3!
* A new shell script is provided, "./cvsinit", which can be run at
install time to help setup your $CVSROOT area. This can greatly
ease your entry into CVS usage.
* The INSTALL file has been updated to include the machines on
which CVS has compiled successfully. I think CVS 1.3 is finally
portable. Thanks to all the Beta testers!
* Support for the "editinfo" file was contributed. This file
(located in $CVSROOT/CVSROOT) can be used to specify a special
"editor" to run on a per-directory basis within the repository,
instead of the usual user's editor. As such, it can verify that
the log message entered by the user is of the appropriate form
(contains a bugid and test validation, for example).
* The manual pages cvs(1) and cvs(5) have been updated.
* The "mkmodules" command now informs you when your modules file
has duplicate entries.
* The "add" command now preserves any per-directory sticky tag when
you add a new directory to your checked-out sources.
* The "admin" command is now a fully recursive interface to the
"rcs" program which operates on your checked-out sources. It no
longer requires you to specify the full path to the RCS file.
* The per-file sticky tags can now be effectively removed with
"cvs update -A file", even if you had checked out the whole
directory with a per-directory sticky tag. This allows a great
deal of flexibility in managing the revisions that your checked-out
sources are based upon (both per-directory and per-file sticky
tags).
* The "cvs -n commit" command now works, to show which files are
out-of-date and will cause the real commit to fail, or which files
will fail any pre-commit checks. Also, the "cvs -n import ..."
command will now show you what it would've done without actually
doing it.
* Doing "cvs commit modules" to checkin the modules file will no
properly run the "mkmodules" program (assuming you have setup your
$CVSROOT/CVSROOT/modules file to do so).
* The -t option in the modules file (which specifies a program to
run when you do a "cvs rtag" operation on a module) now gets the
symbolic tag as the second argument when invoked.
* When the source repository is locked by another user, that user's
login name will be displayed as the holder of the lock.
* Doing "cvs checkout module/file.c" now works even if
module/file.c is in the Attic (has been removed from main-line
development).
* Doing "cvs commit */Makefile" now works as one would expect.
Rather than trying to commit everything recursively, it will now
commit just the files specified.
* The "cvs remove" command is now fully recursive. To schedule a
file for removal, all you have to do is "rm file" and "cvs rm".
With no arguments, "cvs rm" will schedule all files that have been
physically removed for removal from the source repository at the
next "cvs commit".
* The "cvs tag" command now prints "T file" for each file that was
tagged by this invocation and "D file" for each file that had the
tag removed (as with "cvs tag -d").
* The -a option has been added to "cvs rtag" to force it to clean
up any old, matching tags for files that have been removed (in the
Attic) that may not have been touched by this tag operation. This
can help keep a consistent view with your tag, even if you re-use
it frequently.
Sat Feb 29 16:02:05 1992 Brian Berliner (berliner at sun.com)
* Changes between CVS 1.3 Beta-2 and CVS 1.3 Beta-3
* Many portability fixes, thanks to all the Beta testers! With any
luck, this Beta release will compile correctly on most anything.
Hey, what are we without our dreams.
* CVS finally has support for doing isolated development on a
branch off the current (or previous!) revisions. This is also
extremely nice for generating patches for previously released
software while development is progressing on the next release.
Here's an example of creating a branch to fix a patch with the 2.0
version of the "foo" module, even though we are already well into
the 3.0 release. Do:
% cvs rtag -b -rFOO_2_0 FOO_2_0_Patch foo
% cvs checkout -rFOO_2_0_Patch foo
% cd foo
[[ hack away ]]
% cvs commit
A physical branch will be created in the RCS file only when you
actually commit the change. As such, forking development at some
random point in time is extremely light-weight -- requiring just a
symbolic tag in each file until a commit is done. To fork
development at the currently checked out sources, do:
% cvs tag -b Personal_Hack
% cvs update -rPersonal_Hack
[[ hack away ]]
% cvs commit
Now, if you decide you want the changes made in the Personal_Hack
branch to be merged in with other changes made in the main-line
development, you could do:
% cvs commit # to make Personal_Hack complete
% cvs update -A # to update sources to main-line
% cvs update -jPersonal_Hack # to merge Personal_Hack
to update your checked-out sources, or:
% cvs checkout -jPersonal_Hack module
to checkout a fresh copy.
To support this notion of forked development, CVS reserves
all even-numbered branches for its own use. In addition, CVS
reserves the ".0" and ".1" branches. So, if you intend to do your
own branches by hand with RCS, you should use odd-numbered branches
starting with ".3", as in "1.1.3", "1.1.5", 1.2.9", ....
* The "cvs commit" command now supports a fully functional -r
option, allowing you to commit your changes to a specific numeric
revision or symbolic tag with full consistency checks. Numeric
tags are useful for bringing your sources all up to some revision
level:
% cvs commit -r2.0
For symbolic tags, you can only commit to a tag that references a
branch in the RCS file. One created by "cvs rtag -b" or from
"cvs tag -b" is appropriate (see below).
* Roland Pesch <pesch@cygnus.com> and K. Richard Pixley
<rich@cygnus.com> were kind enough to contribute two new manual
pages for CVS: cvs(1) and cvs(5). Most of the new CVS 1.3 features
are now documented, with the exception of the new branch support
added to commit/rtag/tag/checkout/update.
* The -j options of checkout/update have been added. The "cvs join"
command has been removed.
With one -j option, CVS will merge the changes made between the
resulting revision and the revision that it is based on (e.g., if
the tag refers to a branch, CVS will merge all changes made in
that branch into your working file).
With two -j options, CVS will merge in the changes between the two
respective revisions. This can be used to "remove" a certain delta
from your working file. E.g., If the file foo.c is based on
revision 1.6 and I want to remove the changes made between 1.3 and
1.5, I might do:
% cvs update -j1.5 -j1.3 foo.c # note the order...
In addition, each -j option can contain an optional date
specification which, when used with branches, can limit the chosen
revision to one within a specific date. An optional date is
specified by adding a colon (:) to the tag, as in:
-jSymbolic_Tag:Date_Specifier
An example might be what "cvs import" tells you to do when you have
just imported sources that have conflicts with local changes:
% cvs checkout -jTAG:yesterday -jTAG module
which tells CVS to merge in the changes made to the branch
specified by TAG in the last 24 hours. If this is not what is
intended, replace "yesterday" with whatever date specification is
appropriate, like:
% cvs checkout -jTAG:'1 week ago' -jTAG module
* "cvs diff" now supports the special tags "BASE" and "HEAD". So,
the command:
% cvs diff -u -rBASE -rHEAD
will effectively show the changes made by others (in unidiff
format) that will be merged into your working sources with your
next "cvs update" command. "-rBASE" resolves to the revision that
your working file is based on. "-rHEAD" resolves to the current
head of the branch or trunk that you are working on.
* The -P option of "cvs checkout" now means to Prune empty
directories, as with "update". The default is to not remove empty
directories. However, if you do "checkout" with any -r options, -P
will be implied. I.e., checking out with a tag will cause empty
directories to be pruned automatically.
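For example (a sketch, reusing the branch tag from the example above):
% cvs checkout -P foo
% cvs checkout -rFOO_2_0_Patch foo
The second command prunes empty directories even though -P was not
given, because a -r option was used.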
* The new file INSTALL describes how to install CVS, including
detailed descriptions of interfaces to "configure".
* The example loginfo file in examples/loginfo has been updated to
use the perl script included in contrib/log.pl. The nice thing
about this log program is that it records the revision numbers of
your change in the log message.
Example files for commitinfo and rcsinfo are now included in the
examples directory.
* All "#if defined(__STDC__) && __STDC__ == 1" lines have been
changed to be "#if __STDC__" to fix some problems with the former.
* The lib/regex.[ch] files have been updated to the 1.3 release of
the GNU regex package.
* The ndbm emulation routines included with CVS 1.3 Beta-2 in the
src/ndbm.[ch] files have been moved into the src/myndbm.[ch] files
to avoid any conflict with the system <ndbm.h> header file. If
you had a previous CVS 1.3 Beta release, you will want to "cvs
remove ndbm.[ch]" from your copy of CVS as well.
* "cvs add" and "cvs remove" are a bit more verbose, telling you
what to do to add/remove your file permanently.
* We no longer mess with /dev/tty in "commit" and "add".
* More things are quiet with the -Q option set.
* New src/config.h option: If CVS_BADROOT is set, CVS will not
allow people really logged in as "root" to commit changes.
* "cvs diff" exits with a status of 0 if there were no diffs, 1 if
there were diffs, and 2 if there were errors.
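A sketch of testing that status from a Bourne-shell script ("file.c"
is just a placeholder name):
if cvs diff file.c > /dev/null; then
    echo "no differences"
else
    echo "differences found (or trouble)"
fi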
* "cvs -n diff" is now supported so that you can still run diffs
even while in the middle of committing files.
* Handling of the CVS/Entries file is now much more robust.
* The default file ignore list now includes "*.so".
* "cvs import" did not expand '@' in the log message correctly. It
does now. Also, import now uses the ignore file facility
correctly.
Import will now tell you whether there were conflicts that need to
be resolved, and how to resolve them.
* "cvs log" has been changed so that you can "log" things that are
not a part of the current release (in the Attic).
* If you don't change the editor message on commit, CVS now prompts
you with the choice:
!)reuse this message unchanged for remaining dirs
which allows you to tell CVS that you have no intention of changing
the log message for the remainder of the commit.
* It is no longer necessary to have CVSROOT set if you are using
the -H option to get Usage information on the commands.
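For example, the following works now even with CVSROOT unset (a sketch):
% cvs -H commit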
* Command argument changes:
checkout: -P handling changed as described above.
New -j option (up to 2 can be specified)
for doing rcsmerge kind of things on
checkout.
commit: -r option now supports committing to a
numeric or symbolic tag, with some
restrictions. Full consistency checks will
be done.
Added "-f logfile" option, which tells
commit to glean the log message from the
specified file, rather than invoking the
editor.
rtag: Added -b option to create a branch tag,
useful for creating a patch for a previous
release, or for forking development.
tag: Added -b option to create a branch tag,
useful for creating a patch for a previous
release, or for forking development.
update: New -j option (up to 2 can be specified)
for doing rcsmerge kind of things on
update.
Thu Jan 9 10:51:35 MST 1992 Jeff Polk (polk at BSDI.COM)
* Changes between CVS 1.3 Beta-1 and CVS 1.3 Beta-2
* Thanks to K. Richard Pixley at Cygnus we now have function
prototypes in all the files
* Some small changes to configure for portability. There have
been other portability problems submitted that have not been fixed
(Brian will be working on those). Additionally all __STDC__
tests have been modified to check __STDC__ against the constant 1
(this is what the Second edition of K&R says must be true).
* Lots of additional error checking for forked processes (run_exec)
(thanks again to K. Richard Pixley)
* Lots of miscellaneous bug fixes - including but certainly not
limited to:
various commit core dumps
various update core dumps
bogus results from status with numeric sticky tags
commitprog used freed memory
Entries file corruption caused by No_Difference
commit to revision broken (now works if branch exists)
ignore file processing broken for * and !
ignore processing didn't handle memory reasonably
miscellaneous bugs in the recursion processor
file descriptor leak in ParseInfo
CVSROOT.adm->CVSROOT rename bug
lots of lint fixes
* Reformatted all the code in src (with GNU indent) and then
went back and fixed prototypes, etc since indent gets confused. The
rationale is that it is better to do it sooner than later and now
everything is consistent and will hopefully stay that way.
The basic options to indent were: "-bad -bbb -bap -cdb -d0 -bl -bli0
-nce -pcs -cs -cli4 -di1 -nbc -psl -lp -i4 -ip4 -c41" and then
miscellaneous formatting fixes were applied. Note also that the
"-nfc1" or "-nfca" may be appropriate in files where comments have
been carefully formatted (e.g., modules.c).
Sat Dec 14 20:35:22 1991 Brian Berliner (berliner at sun.com)
* Changes between CVS 1.2 and CVS 1.3 Beta are described here.
* Lots of portability work. CVS now uses the GNU "configure"
script to dynamically determine the features provided by your
system. It probably is not foolproof, but it is better than
nothing. Please let me know of any portability problems. Some
file names were changed to fit within 14-characters.
* CVS has a new RCS parser that is much more flexible and
extensible. It should read all known RCS ",v" format files.
* Most of the commands now are fully recursive, rather than just
operating on the current directory alone. This includes "commit",
which makes it real easy to do an "atomic" commit of all the
changes made to a CVS hierarchy of sources. Most of the commands
also correctly handle file names that are in directories other than
".", including absolute path names. Commands now accept the "-R"
option to force recursion on (though it is always the default now)
and the "-l" option to force recursion off, doing just "." and not
any sub-directories.
* CVS supports many of the features provided with the RCS 5.x
distribution - including the new "-k" keyword expansion options. I
recommend using RCS 5.x (5.6 is the current official RCS version)
and GNU diff 1.15 (or later) distributions with CVS.
* Checking out files with symbolic tags/dates is now "sticky", in
that CVS remembers the tag/date used for each file (and directory)
and will use that tag/date automatically on the next "update" call.
This stickiness also holds for files checked out with the new
RCS 5.x "-k" options.
* The "cvs diff" command now recognizes all of the rcsdiff 5.x
options. Unidiff format is available by installing the GNU
diff 1.15 distribution.
* The old "CVS.adm" directories created on checkout are now called
"CVS" directories, to look more like "RCS" and "SCCS". Old CVS.adm
directories are automagically converted to CVS directories. The
old "CVSROOT.adm" directory within the source repository is
automagically changed into a "CVSROOT" directory as well.
* Symbolic links in the source repository are fully supported ONLY
if you use RCS 5.6 or later and (of course) your system supports
symlinks.
* A history database has been contributed which maintains the
history of certain CVS operations, as well as providing a wide array
of querying options.
* The "cvs" program has a "-n" option which can be used with the
"update" command to show what would be updated without actually
doing the update, like: "cvs -n update". All usage statements
have been cleaned up and made more verbose.
* The module database parsing has been rewritten. The new format
is compatible with the old format, but with much more
functionality. It allows modules to be created that grab pieces or
whole directories from various parts of your source
repository. Module-relative specifications are also correctly
recognized now, like "cvs checkout module/file.c".
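A hypothetical sketch of entries in the modules file (all names are
made up; see the manual pages for the full option list):
foo      proj/foo
foodocs  proj/foo README INSTALL
world    -a foo foodocs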
* A configurable template can be specified such that on a "commit",
certain directories can supply a template that the user must fill in
before completing the commit operation.
* A configurable pre-commit checking program can be specified which
will run to verify that a "commit" can happen. This feature can be
used to restrict certain users from changing certain pieces of the
source repository, or denying commits to the entire source
repository.
* The new "cvs export" command is much like "checkout", but
establishes defaults suitable for exporting code to others (expands
out keywords, forces the use of a symbolic tag, and does not create
"CVS" directories within the checked out sources.
* The new "cvs import" command replaces the deprecated "checkin"
shell script and is used to import sources into CVS control. It is
also much faster for the first-time import. Some algorithmic
improvements have also been made to reduce the number of
conflicting files on next-time imports.
* The new "cvs admin" command is basically an interface to the
"rcs" program. (Not yet implemented very well).
* Signal handling (on systems with BSD or POSIX signals) is much
improved. Interrupting CVS now works with a single interrupt!
* CVS now invokes RCS commands by direct fork/exec rather than
calling system(3). This improves performance by removing a call to
the shell to parse the arguments.
* Support for the .cvsignore file has been contributed. CVS will
now show "unknown" files as "? filename" as the result of an "update"
command. The .cvsignore file can be used to add files to the
current list of ignored files so that they won't show up as unknown.
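A hypothetical .cvsignore in a working directory might simply list
extra patterns, one per line:
*.o
*.a
core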
* Command argument changes:
cvs: Added -l to turn off history logging.
Added -n to show what would be done without actually
doing anything.
Added -q/-Q for quiet and really quiet settings.
Added -t to show debugging trace.
add: Added -k to allow RCS 5.x -k options to be specified.
admin: New command; an interface to rcs(1).
checkout: Added -A to reset sticky tags/date/options.
Added -N to not shorten module paths.
Added -R option to force recursion.
Changed -p (prune empty directories) to -P option.
Changed -f option; forcing tags match is now default.
Added -p option to checkout module to standard output.
Added -s option to cat the modules db with status.
Added -d option to checkout in the specified directory.
Added -k option to use RCS 5.x -k support.
commit: Removed -a option; use -l instead.
Removed -f option.
Added -l option to disable recursion.
Added -R option to force recursion.
If no files specified, commit is recursive.
diff: Now recognizes all RCS 5.x rcsdiff options.
Added -l option to disable recursion.
Added -R option to force recursion.
history: New command; displays info about CVS usage.
import: Replaces "checkin" shell script; imports sources
under CVS control. Ignores files on the ignore
list (see -I option or .cvsignore description above).
export: New command; like "checkout", but w/special options
turned on by default to facilitate exporting sources.
join: Added -B option to join from base of the branch;
join now defaults to only joining with the top two
revisions on the branch.
Added -k option for RCS 5.x -k support.
log: Supports all RCS 5.x options.
Added -l option to disable recursion.
Added -R option to force recursion.
patch: Changed -f option; forcing tags match is now default.
Added -c option to force context-style diffs.
Added -u option to support unidiff-style diffs.
Added -V option to support RCS specific-version
keyword expansion formats.
Added -R option to force recursion.
remove: No option changes. It's a bit more verbose.
rtag: Equivalent to the old "cvs tag" command.
No option changes. It's a lot faster for re-tag.
status: New output formats with more information.
Added -l option to disable recursion.
Added -R option to force recursion.
Added -v option to show symbolic tags for files.
tag: Functionality changed to tag checked out files
rather than modules; use "rtag" command to get the
old "cvs tag" behaviour.
update: Added -A to reset sticky tags/date/options.
Changed -p (prune empty directories) to -P option.
Changed -f option; forcing tags match is now default.
Added -p option to checkout module to standard output.
Added -I option to add files to the ignore list.
Added -R option to force recursion.
Major Contributors:
* Jeff Polk <polk@bsdi.com> rewrote most of the grody code of CVS
1.2. He made just about everything dynamic (by using malloc),
added a generic hashed list manager, re-wrote the modules database
parsing in a compatible - but extended way, generalized directory
hierarchy recursion for virtually all the commands (including
commit!), generalized the loginfo file to be used for pre-commit
checks and commit templates, wrote a new and flexible RCS parser,
fixed an uncountable number of bugs, and helped in the design of
future CVS features. If there's anything gross left in CVS, it's
probably my fault!
* David G. Grubbs <dgg@ksr.com> contributed the CVS "history" and
"release" commands. As well as the ever-so-useful "-n" option of
CVS which tells CVS to show what it would do, without actually
doing it. He also contributed support for the .cvsignore file.
* Paul Sander, HaL Computer Systems, Inc. <paul@hal.com> wrote and
contributed the code in lib/sighandle.c. I added support for
POSIX, BSD, and non-POSIX/non-BSD systems.
* Free Software Foundation contributed the "configure" script and
other compatibility support in the "lib" directory, which will help
make CVS much more portable.
* Many others have contributed bug reports and enhancement requests.
Some have even submitted actual code which I have not had time yet
to integrate into CVS. Maybe for the next release.
* Thanks to you all!
Wed Feb 6 10:10:58 1991 Brian Berliner (berliner at sun.com)
* Changes from CVS 1.0 Patchlevel 1 to CVS 1.0 Patchlevel 2; also
known as "Changes from CVS 1.1 to CVS 1.2".
* Major new support with this release is the ability to use the
recently-posted RCS 5.5 distribution with CVS 1.2. See below for
other assorted bug-fixes that have been thrown in.
* ChangeLog (new): Added Emacs-style change-log file to CVS 1.2
release. Chronological description of changes between releases.
* README: Small fixes to installation instructions. My email
address is now "berliner@sun.com".
* src/Makefile: Removed "rcstime.h". Removed "depend" rule.
* src/partime.c: Updated to RCS 5.5 version with hooks for CVS.
* src/maketime.c: Updated to RCS 5.5 version with hooks for CVS.
* src/rcstime.h: Removed from the CVS 1.2 distribution.
Thanks to Paul Eggert <eggert@twinsun.com> for these changes.
* src/checkin.csh: Support for RCS 5.5 parsing.
Thanks to Paul Eggert <eggert@twinsun.com> for this change.
* src/collect_sets.c (Collect_Sets): Be quieter if "-f" option is
specified. When checking out files on-top-of other files that CVS
doesn't know about, run a diff in the hopes that they are really
the same file before aborting.
* src/commit.c (branch_number): Fix for RCS 5.5 parsing.
Thanks to Paul Eggert <eggert@twinsun.com> for this change.
* src/commit.c (do_editor): Bug fix - fprintf missing argument
which sometimes caused core dumps.
* src/modules.c (process_module): Properly NULL-terminate
update_dir[] in all cases.
* src/no_difference.c (No_Difference): The wrong RCS revision was
being registered in certain (strange) cases.
* src/patch.c (get_rcsdate): New algorithm. No need to call
maketime() any longer.
Thanks to Paul Eggert <eggert@twinsun.com> for this change.
* src/patchlevel.h: Increased patch level to "2".
* src/subr.c (isdir, islink): Changed to compare stat mode bits
correctly.
* src/tag.c (tag_file): Added support for following symbolic links
that are in the master source repository when tagging. Made tag
somewhat quieter in certain cases.
* src/update.c (update_process_lists): Unlink the user's file if it
was put on the Wlist, meaning that the user's file is not modified
and its RCS file has been removed by someone else.
* src/update.c (update): Support for "cvs update dir" to correctly
just update the argument directory "dir".
* src/cvs.h: Fixes for RCS 5.5 parsing.
* src/version_number.c (Version_Number): Fixes for parsing RCS 5.5
and older RCS-format files.
Thanks to Paul Eggert <eggert@twinsun.com> for these changes.
* src/version_number.c (Version_Number): Bug fixes for "-f" option.
Bug fixes for parsing with certain branch numbers. RCS
revision/symbol parsing is much more solid now.
Wed Feb 14 10:01:33 1990 Brian Berliner (berliner at sun.com)
* Changes from CVS 1.0 Patchlevel 0 to CVS 1.0 Patchlevel 1; also
known as "Changes from CVS 1.0 to CVS 1.1".
* src/patch.c (get_rcsdate): Portability fix. Replaced call to
timelocal() with call to maketime().
Mon Nov 19 23:15:11 1990 Brian Berliner (berliner at prisma.com)
* Sent CVS 1.0 release to comp.sources.unix moderator and FSF.
* Special thanks to Dick Grune <dick@cs.vu.nl> for his work on the
1986 version of CVS and making it available to the world. Dick's
version is available on uunet.uu.net in the
comp.sources.unix/volume6/cvs directory.
@(#)ChangeLog 1.17 92/04/10

View File

@ -1,30 +0,0 @@
CVS Development Policies
This file, DEVEL-CVS, contains the policies by which the CVS
development group operates. Also see the HACKING file.
----------------------------------------------------------------------
Policies regarding the CVS source repository:
By checking items into the repository, developers agree to permit
distribution of such items under the terms of the GNU Public License.
----------------------------------------------------------------------
Procedure for dealing with people who want to be developers:
People who want checkin access are first requested to send
patches and have them reviewed by a developer. If they submit some
good ones (preferably over a period of time, to demonstrate sustained
interest), then one of the developers can ask the devel-cvs mailing
list whether it is OK to make this person a developer (after first
sending the prospective developer a copy of this file and then having
the prospective developer say they want to be a developer). If there
are no objections, the person will be made a developer.
----------------------------------------------------------------------
Policy regarding checkout-only access:
Checkout-only access to the CVS repository is available to all, on an
anonymous basis (no need for registration or other complications).
The exact technical mechanisms by which it is available are not
covered by this policy.

File diff suppressed because it is too large

View File

@ -1,22 +0,0 @@
$FreeBSD$
*/*.com
*/*.dep
*/*.dsp
*/*.mak
*/.cvsignore
.cvsignore
README.VMS
build.com
cvs.spec*
cvsnt.*
doc/*.info*
doc/*.pdf
doc/*.ps
doc/texinfo.tex
emx
lib/getdate.c
os2
vms
windows-NT
ylwrap
zlib

View File

@ -1,45 +0,0 @@
$FreeBSD$
MAINTAINER= peter@FreeBSD.org
This directory contains the virgin CVS source on the vendor branch. Do
not under any circumstances commit new versions onto the mainline; new
versions or official-patch versions must be imported.
To prepare a new cvs dist for import, extract it into a fresh directory;
then delete the files and directories listed in FREEBSD-Xlist.
CVS is imported from its top level directory something like this:
cvs -n import src/contrib/cvs CVSHOME v<version>
The -n option is "don't do anything" so you can see what is about to happen
first. Remove it when it looks ok.
The initial import was done with:
cvs import src/contrib/cvs CVSHOME v1_11_22
When new versions are imported, cvs will give instructions on how to merge
the local and vendor changes when/if conflicts arise.
The developers can be reached at: <devel-cvs@nongnu.org>. Local changes
that are suitable for public consumption should be submitted for inclusion
in future releases.
peter@freebsd.org - 20 Aug 1996
obrien@freebsd.org - 12 Jan 2008
Current local changes:
- CVS_LOCAL_BRANCH_NUM environment variable support for choosing the
magic branch number. (for CVSup local-commit support)
- CVSREADONLYFS environment variable and global option -R to enable
no-locking readonly mode (eg: cvs repo is a cdrom or mirror)
- the verify message script can edit the submitted log message.
- CVSROOT/options file
- Variable keyword expansion controls including custom keywords.
- $ CVSHeader$ keyword - like Header, but with $CVSROOT stripped off.
- 'CVS_OPTIONS' environmental variable support.
- Allow -D with -r on checkout.
- Support for "diff -j", allowing tag:date based diffs.
- iso8601 option keyword.
- Comprehensive "-T" CVS/Template support.
- We use the cvs.1 manpage from man/, not the official one in doc/

View File

@ -1,11 +0,0 @@
$FreeBSD$
src/buffer.c
src/commit.c
src/filesubr.c
src/import.c
src/login.c
src/mkmodules.c
src/patch.c
src/rcscmds.c
src/recurse.c
contrib/sccs2rcs.in

View File

@ -1,256 +0,0 @@
How to write code for CVS
* License of CVS
CVS is Copyright (C) 1986-2006 The Free Software Foundation, Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 1, or (at your option)
any later version.
More details are available in the COPYING file but, in simplified
terms, this means that any distributed modifications you make to
this software must also be released under the GNU General Public
License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
* Source
Patches against the development version of CVS are most likely to be accepted:
$ cvs -z3 -d:pserver:anonymous@cvs.sv.nongnu.org:/sources/cvs co ccvs
See the Savannah sources page <http://savannah.nongnu.org/cvs/?group=cvs> for
more information.
* Compiler options
If you are using GCC, you'll want to configure with -Wall, which can
detect many programming errors. This is not the default because it
might cause spurious warnings, but at least on some machines, there
should be no spurious warnings. For example:
$ CFLAGS="-g -Wall" ./configure
Configure is not very good at remembering this setting; it will get
wiped out whenever you do a ./config.status --recheck, so you'll need
to use:
$ CFLAGS="-g -Wall" ./config.status --recheck
* Backwards Compatibility
Only bug fixes are accepted into the stable branch. New features should be
applied to the trunk.
Unless it is inextricable from a bug fix, CVS's output (to stdout/stderr)
should not be changed on the stable branch, in order to best support scripts
and other tools which parse CVS's output. It is ok to change output between
feature releases (on the trunk), though such changes should be noted in the
NEWS file.
Changes in the way CVS responds to command line options, config options, etc.
should be accompanied by deprecation warnings for an entire stable series of
releases before being changed permanently, if at all possible.
* Indentation style
CVS mostly uses a consistent indentation style which looks like this:
void
foo (arg)
char *arg;
{
if (arg != NULL)
{
bar (arg);
baz (arg);
}
switch (c)
{
case 'A':
aflag = 1;
break;
}
}
The file cvs-format.el contains settings for emacs and the NEWS file
contains a set of options for the indent program which I haven't tried
but which are correct as far as I know. You will find some code which
does not conform to this indentation style; the plan is to reindent it
as those sections of the code are changed (one function at a time,
perhaps).
In a submitted patch it is acceptable to refrain from changing the
indentation of large blocks of code to minimize the size of the patch;
the person checking in such a patch should reindent it.
* Portability
The general rule for portability is that it is only worth including
portability cruft for systems on which people are actually testing and
using new CVS releases. Without testing, CVS will fail to be portable
for any number of unanticipated reasons.
The current consequence of that general rule seems to be that if it
is in ANSI C and it is in SunOS4 (using /bin/cc), generally it is OK
to use it without ifdefs (for example, assert() and void *, as long as
you add more casts to and from void * than ANSI requires, but not
function prototypes). Such constructs are generally portable enough,
including to NT, OS/2, VMS, etc.
* Run-time behaviors
Use assert() to check "can't happen" conditions internal to CVS. We
realize that there are functions in CVS which instead return NULL or
some such value (thus confusing the meaning of such a returned value),
but we want to fix that code. Of course, bad input data, a corrupt
repository, bad options, etc., should always print a real error
message instead.
Do not use arbitrary limits (such as PATH_MAX) except perhaps when the
operating system or some external interface requires it. We spent a
lot of time getting rid of them, and we don't want to put them back.
If you find any that we missed, please report it as with other bugs.
In most cases such code will create security holes (for example, for
anonymous readonly access via the CVS protocol, or if a WWW cgi script
passes client-supplied arguments to CVS).
Although this is a long-term goal, it also would be nice to move CVS
in the direction of reentrancy. This reduces the size of the data
segment and will allow a multi-threaded server if that is desirable.
It is also useful to write the code so that it can easily be made
reentrant later. For example, if you need to pass data from a
Parse_Info caller to its callproc, you need a static variable. But
use a single pointer so that when Parse_Info is fixed to pass along a
void * argument, then the code can easily use that argument.
* Coding standards in general
Generally speaking the GNU coding standards are mostly used by CVS
(but see the exceptions mentioned above, such as indentation style,
and perhaps an exception or two we haven't mentioned). This is the
file standards.text at the GNU FTP sites.
* Regenerating Build Files
On UNIX, if you wish to change the Build files, you will need Autoconf and
Automake.
Some combinations of Automake and Autoconf versions may break the
CVS build if file timestamps aren't set correctly and people don't
have the same versions the developers do, so the rules to run them
automatically aren't included in the generated Makefiles unless you run
configure with the --enable-maintainer-mode option.
The CVS Makefiles and configure script were built using Automake 1.10 and
Autoconf 2.61, respectively.
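A minimal sketch of a configure run with those rules enabled:
$ ./configure --enable-maintainer-mode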
There is a known bug in Autoconf 2.57 that will prevent the configure
scripts it generates from working on some platforms. Other combinations of
autotool versions may or may not work. If you get other versions to work,
please send a report to <bug-cvs@nongnu.org>.
* Writing patches (strategy)
Only some kinds of changes are suitable for inclusion in the
"official" CVS. Bugfixes, where CVS's behavior contradicts the
documentation and/or expectations that everyone agrees on, should be
OK (strategically). For features, the desirable attributes are that
the need is clear and that they fit nicely into the architecture of
CVS. Is it worth the cost (in terms of complexity or any other
tradeoffs involved)? Are there better solutions?
If the design is not yet clear (which is true of most features), then
the design is likely to benefit from more work and community input.
Make a list of issues, or write documentation including rationales for
how one would use the feature. Discuss it with coworkers, a
newsgroup, or a mailing list, and see what other people think.
Distribute some experimental patches and see what people think. The
intention is to arrive at some kind of rough community consensus before
changing the "official" CVS. Features like zlib, encryption, and
the RCS library have benefitted from this process in the past.
If longstanding CVS behavior, that people may be relying on, is
clearly deficient, it can be changed, but only slowly and carefully.
For example, the global -q option was introduced in CVS 1.3 but the
command -q options, which the global -q replaced, were not removed
until CVS 1.6.
* Writing patches (tactics)
When you first distribute a patch it may be suitable to just put forth
a rough patch, or even just an idea. But before the end of the
process the following should exist:
- ChangeLog entry (see the GNU coding standards for details).
- Changes to the NEWS file and cvs.texinfo, if the change is a
user-visible change worth mentioning.
- Somewhere, a description of what the patch fixes (often in
comments in the code, or maybe the ChangeLog or documentation).
- Most of the time, a test case (see TESTS). It can be quite
frustrating to fix a bug only to see it reappear later, and adding
the case to the testsuite, where feasible, solves this and other
problems. See the TESTS file for notes on writing new tests.
If you solve several unrelated problems, it is generally easier to
consider the desirability of the changes if there is a separate patch
for each issue. Use context diffs or unidiffs for patches.
Include words like "I grant permission to distribute this patch under
the terms of the GNU Public License" with your patch. By sending a
patch to bug-cvs@nongnu.org, you implicitly grant this permission.
Submitting a patch to bug-cvs is the way to reach the people who have
signed up to receive such submissions (including CVS developers), but
there may or may not be much (or any) response. If you want to pursue
the matter further, you are probably best off working with the larger
CVS community. Distribute your patch as widely as desired (mailing
lists, newsgroups, web sites, whatever). Write a web page or other
information describing what the patch is for. It is neither practical
nor desirable for all/most contributions to be distributed through the
"official" (whatever that means) mechanisms of CVS releases and CVS
developers. Now, the "official" mechanisms do try to incorporate
those patches which seem most suitable for widespread usage, together
with test cases and documentation. So if a patch becomes sufficiently
popular in the CVS community, it is likely that one of the CVS
developers will eventually try to do something with it. But dealing
with the CVS developers may be the last step of the process rather
than the first.
* What is the schedule for the next release?
There isn't one. That is, upcoming releases are not announced (or
even hinted at, really) until the feature freeze which is
approximately 2 weeks before the final release (at this time test
releases start appearing and are announced on info-cvs). This is
intentional, to avoid a last minute rush to get new features in.
* Mailing lists
In addition to the mailing lists listed in the README file, developers should
take particular note of the following mailing lists:
bug-cvs: This is the list which users are requested to send bug reports
to. General CVS development and design discussions also take place on
this list.
info-cvs: This list is intended for user questions, but general CVS
development and design discussions sometimes take place on this list.
cvs-cvs: The only messages sent to this list are sent
automatically, via the CVS `loginfo' mechanism, when someone
checks something in to the master CVS repository.
cvs-test-results: The only messages sent to this list are sent
automatically, daily, by a script which runs "make check"
and "make remotecheck" on the master CVS sources.
To subscribe to any of these lists, send mail to <list>-request@nongnu.org
or visit http://savannah.nongnu.org/mail/?group=cvs and follow the instructions
for the list you wish to subscribe to.

View File

@ -1,517 +0,0 @@
-------------------------------------------------------------------------------
CVS is Copyright (C) 1986-2006 The Free Software Foundation, Inc.
CVS is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 1, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-------------------------------------------------------------------------------
Now back to our regularly scheduled program:
Please read the README file before reading this INSTALL file. Then, to
install CVS:
First you need to obtain and install the CVS executables. If you got
a distribution which contains executables, consult the installation
instructions for that distribution. If you got source code, do not
panic. On many platforms building CVS from source code is a
straightforward process requiring no programming knowledge. See the
section BUILDING FROM SOURCE CODE at the end of this file, which
includes a list of platforms which have been tested.
-------------------------------------------------------------------------------
1) Take a look at the CVS documentation, if desired. For most
purposes you want doc/cvs.texinfo, also known as _Version Management
with CVS_ by Per Cederqvist et al. Looking at it might be as simple
as "info cvs" but this will depend on your installation; see README
for more details.
See what CVS can do for you, and if it fits your environment (or can
possibly be made to fit your environment). If things look good,
continue on. Alternately, just give CVS a try first then figure out
what it is good for.
2) Set the CVSROOT environment variable to where you want to put your
source repository. See the "Setting up the repository" section of
the Cederqvist manual for details, but the quick summary is just to
pick some directory. We'll use /src/master as an example. For
users of a POSIX shell (sh/bash/ksh) on unix, the following
commands can be placed in the user's ~/.profile or ~/.bash_profile
file, or in the site-wide /etc/profile:
CVSROOT=/src/master; export CVSROOT
For C shell users on unix place the following commands in the
user's ~/.cshrc, ~/.login, or /etc/cshrc file:
setenv CVSROOT /src/master
For Windows users, supposing the repository will be in
d:\src\master, place the following line in c:\autoexec.bat. On
Windows 95, autoexec.bat might not already exist. In that case,
just create a new file containing the following line.
set CVSROOT=:local:d:\src\master
If these environment variables are not already set in your current
shell, set them now by typing the above line at the command prompt
(or source the login script you just edited).
The instructions for the remaining steps assume that you have set
the CVSROOT environment variable.
3) Create the master source repository. Again, the details are in
the "Setting up the repository" section of cvs.texinfo; the
one-line summary is:
$ cvs init
In this and subsequent examples we use "$" to indicate the command
prompt; do not type the "$".
4) It might be a good idea to jump right in and put some sources or
documents directly under CVS control. From within the top-level
directory of your source tree, run the following commands:
$ cvs import -m "test distribution" ccvs CVS_DIST CVS-TEST
(Those last three items are, respectively, a repository location, a
"vendor tag", and a "release tag". You don't need to understand
them yet, but read the section "Starting new projects" in the
Cederqvist manual for details).
5) Having done step 4, one should be able to checkout a fresh copy of the
sources you just imported and hack away at the sources with the
following command:
$ cd
$ cvs checkout ccvs
This will make the directory "ccvs" in your current directory and
populate it with the appropriate files and directories.
6) You may wish to customize the various administrative files, in particular
modules. See the Cederqvist manual for details.
7) Read the NEWS file to see what's new.
8) Hack away.
-------------------------------------------------------------------------------
BUILDING FROM SOURCE CODE
Tested platforms
CVS has been tested on the following platforms. The most recent
version of CVS reported to have been tested is indicated, but more
recent versions of CVS probably will work too. Please send updates to
this list to bug-cvs@nongnu.org (doing so in the form of a diff
to this file, or at least exact suggested text, is encouraged).
"tested" means, at a minimum, that CVS compiles and appears to work on
simple (manual) testing. In many cases it also means "make check"
and/or "make remotecheck" passes, but we don't try to list the
platforms for which that is true.
Alpha:
DEC Alpha running OSF/1 version 1.3 using cc (about 1.4A2)
DEC Alpha running OSF/1 version 2.0 (1.8)
DEC Alpha running OSF/1 version 2.1 (about 1.4A2)
DEC Alpha running OSF/1 version 3.0 (1.5.95) (footnote 7)
DEC Alpha running OSF/1 version 3.2 (1.9)
Alpha running alpha-dec-osf4.0 (1.10)
DEC Alpha running Digital UNIX v4.0C using gcc 2.7.2.2 (1.9.14)
DEC Alpha running VMS 6.2 (1.8.85 client-only)
Alpha running NetBSD 1.2E (1.10)
Cray:
J90 (CVS 970215 snapshot)
T3E (CVS 970215 snapshot)
HPPA:
HP 9000/710 running HP-UX 8.07A using gcc (about 1.4A2)
HPPA running HP-UX 9 (1.8)
HPPA 1.1 running HP-UX A.09.03 (1.5.95) (footnote 8)
HPPA 1.1 running HP-UX A.09.04 (1.7.1)
HPPA running HP-UX 9.05 (1.9)
HPPA running HP-UX 10.01 (1.7)
HPPA running HP-UX 10.20 (1.10.7)
HPPA running HP-UX 11.11 (1.11.13) (footnote 12)
HPPA 2.0 running HP-UX 10.20 (1.10.9) (footnote 13)
NextSTEP 3.3 (1.7)
i386 family:
Solaris 2.4 using gcc (about 1.4A2)
Solaris 2.6 (1.9)
UnixWare v1.1.1 using gcc (about 1.4A2)
Unixware 2.1 (1.8.86)
Unixware 7 (1.9.29)
ISC 4.0.1 (1.8.87)
Linux (kernel 1.2.x) (1.8.86)
Linux (kernel 2.0.x, RedHat 4.2) (1.10)
Linux (kernel 2.0.x, RedHat 5.x) (1.10)
Linux (kernel 2.2.x, RedHat 6.x) (1.10.8)
Linux (kernel 2.2.x, RedHat 7.x) (1.11)
BSDI 4.0 (1.10.7)
FreeBSD 2.1.5-stable (1.8.87)
NextSTEP 3.3 (1.7)
SCO Unix 3.2.4.2, gcc 2.7.2 (1.8.87) (footnote 4)
SCO OpenServer 5.0.5 (1.10.2)
Sequent DYNIX/ptx4.0 (1.10 or so) (remove -linet)
Sequent Dynix/PTX 4.1.4 (1.9.20 or so + patches)
Lynx 2.3.0 080695 (1.6.86) (footnote 9)
Windows NT 3.51 (1.8.86 client; 1.8.3 local)
Windows NT 3.51 service pack 4 (1.9)
Windows NT 3.51 service pack 5 (1.9) -- DOES NOT WORK (footnote 11)
Windows NT 4.0 (1.9 client and local)
Windows NT 4.0 (1.11 client and local - build & test, but no test suite)
Windows 95 (1.9 client and local)
QNX (1.9.1 + patches for strippath() and va_list)
OS/2 Version 3 using IBM C/C++ Tools 2.01 (1.8.86 + patches, client)
OS/2 Version 3 using EMX 0.9c (1.9.22, client)
OS/2 Version 3 using Watcom version ? (? - has this been tested?)
m68k:
Sun 3 running SunOS 4.1.1_U1 w/ bundled K&R /usr/5bin/cc (1.8.86+)
NextSTEP 3.3p1 (1.8.87)
Lynx 2.3.0 062695 (1.6.86) (footnote 9)
NetBSD/mac68k (1.9.28)
m88k:
Data General AViiON running dgux 5.4R2.10 (1.5)
Data General AViiON running dgux 5.4R3.10 (1.7.1)
Harris Nighthawk 5800 running CX/UX 7.1 (1.5) (footnote 6)
MIPS:
DECstation running Ultrix 4.2a (1.4.90)
DECstation running Ultrix 4.3 (1.10)
SGI running Irix 4.0.5H using gcc and cc (about 1.4A2) (footnote 2)
SGI running Irix 5.3 (1.10)
SGI running Irix 6.2 using SGI MIPSpro 6.2 and beta 7.2 compilers (1.9)
SGI running Irix-6.2 (1.9.8)
SGI running IRIX 6.4 (1.10)
SGI running IRIX 6.5 (1.10.7)
Siemens-Nixdorf RM600 running SINIX-Y (1.6)
PowerPC or RS/6000:
IBM RS/6000 running AIX 3.1 using gcc and cc (1.6.86)
IBM RS/6000 running AIX 3.2.5 (1.8)
IBM RS/6000 running AIX 4.1 (1.9)
IBM RS/6000 running AIX 4.3 (1.10.7)
Lynx 2.3.1 120495 (1.6.86) (footnote 9)
Lynx 2.5 (1.9) (footnote 10)
Linux DR3 GENERIC #6 (1.10.5.1) (presumably LinuxPPC too)
Mac OS X ALL (footnote 14)
Mac OS X Darwin 6.6 Darwin Kernel Version 6.6 (1.11.1p1)
Mac OS X Darwin 5.5 Darwin Kernel Version 5.5 (1.11.6) (footnote 12)
Mac OS X Darwin 5.5 Darwin Kernel Version 5.5 (1.12.1) (footnote 12)
SPARC:
Sun SPARC running SunOS 4.1.x (1.10)
Sun SPARCstation 10 running Solaris 2.3 using gcc and cc (about 1.4A2)
Sun SPARCstation running Solaris 2.4 using gcc and cc (about 1.5.91)
Sun SPARC running Solaris 2.5 (1.8.87)
Sun SPARC running Solaris 2.5.1 using gcc 2.7.2.2 (1.9.14)
Sun SPARC running Solaris 2.6 (1.10.7)
Sun UltraSPARC running Solaris 2.6 using gcc 2.8.1 (1.10)
NextSTEP 3.3 (1.7)
Sun SPARC running Linux 2.0.17, gcc 2.7.2 (1.8.87)
Sun UltraSPARC running Solaris 2.8 using gcc 2.95.3
VAX:
VAX running VMS 6.2 (1.9+patches, client-only)
(see README.VMS for information on necessary hacks).
(footnote 2)
Some Irix 4.0 systems may core dump in malloc while running
CVS. We believe this is a bug in the Irix malloc. You can
workaround this bug by linking with "-lmalloc" if necessary.
(about 1.4A2).
(footnote 4) Comment out the include of sys/time.h in src/server.c. (1.4.93)
You also may have to make sure TIME_WITH_SYS_TIME is undef'ed.
(footnote 6) Build in ucb universe with COFF compiler tools. Put
/usr/local/bin first in PATH while doing a configure, make
and install of GNU diffutils-2.7, rcs-5.7, then cvs-1.5.
(footnote 7) Manoj Srivastava <srivasta@pilgrim.umass.edu> reports
success with this configure command:
CC=cc CFLAGS='-O2 -Olimit 2000 -std1' ./configure --verbose alpha-dec-osf
(footnote 8) Manoj Srivastava <srivasta@pilgrim.umass.edu> reports
success with this configure command:
CC=cc CFLAGS='+O2 -Aa -D_HPUX_SOURCE' ./configure --verbose hppa1.1-hp-hpux
(footnote 9)
Had to configure with ./configure --host=<arch>-lynx.
In src/cvs.h, protected the waitpid prototype with ifdef _POSIX_SOURCE.
(I might try building with gcc -mposix -D_POSIX_SOURCE.)
LynxOS has <dirent.h>, but you don't want to use it.
You want to use <sys/dir.h> instead.
So after running configure I had to undef HAVE_DIRENT_H and
define HAVE_SYS_DIR_H.
(footnote 10)
Had to compile with "make LIBS=-lbsd" (to get gethostbyname
and getservbyname).
(footnote 11)
when I do a `cvs init' I get this message:
ci: 'RCS/loginfo,v' is not a regular file
ci: RCS/loginfo,v: Invalid argument
cvs [init aborted]: failed to checkin n:/safe/CVSROOT/loginfo
(footnote 12)
Need to `configure --without-gssapi' unless you have installed Kerberos 5
libraries on the system yourself. For some reason Apple ships OS X with
the Kerberos 5 headers installed and not the libraries, which confuses the
current configure script. Some HP, BSD, & Sun boxes have similar problems.
(footnote 13)
A build under HP PA-RISC 2.0 will probably not run under PA-RISC 1.1
unless "+DAportable" is added to the HP ANSI cc compiler flags.
(footnote 14)
Because of the case-insensitive file system on Mac OS X, you cannot build
CVS directly from a checkout from CVS. The name of the built executable,
`cvs', conflicts with name of the CVS administration directory, `CVS'.
The work-around is to build the executable from a build directory separate
from the source directory. i.e.:
cvs co ccvs; cd ccvs
mkdir build; cd build
../configure && make
-------------------------------------------------------------------------------
Building from source code under Unix:
1) Run "configure":
$ ./configure
You can specify an alternate destination to override the default with
the --prefix option:
$ ./configure --prefix=/usr/local/gnu
or some path that is more appropriate for your site. The default prefix
value is "/usr/local", with binaries in sub-directory "bin", manual
pages in sub-directory "man", and libraries in sub-directory "lib".
A normal build of CVS will create an executable which supports
local, server, or client CVS (if you don't know the difference,
it is described in the Repository chapter of doc/cvs.texinfo). If
you do not intend to use client or server CVS, you may want to
prevent these features from being included in the executable you
build. You can do this with the --disable-client and
--disable-server options:
$ ./configure --disable-client --disable-server
Typically this can reduce the size of the executable by around 30%.
If you are building CVS with the server enabled, you can disable
server flow control using the --disable-server-flow-control option.
If you are working with a large remote repository and a 'cvs
checkout' is swamping your network and memory, enable flow control.
You will end up with even less probability of a consistent checkout
(see Concurrency in cvs.texinfo), but CVS doesn't try to guarantee
that anyway. The master server process will monitor how far it is
getting behind; if it reaches the high water mark, it will signal
the child process to stop generating data when convenient (i.e., no
locks are held, currently at the beginning of a new directory).
Once the buffer has drained sufficiently to reach the low water
mark, it will be signalled to start again. You may override the
default high/low water marks here too by passing
'<lowwater>,<highwater>', in bytes, as an argument to
--enable-server-flow-control. The low water mark defaults to one
megabyte and the high water mark defaults to two megabytes.
$ ./configure --enable-server-flow-control=1M,2M
The --with-tmpdir argument to configure may be used to set a
specific directory for use as a default temporary directory. If not
set, configure will pick the first directory it finds which it has
read, write, and execute permissions to from $TMPDIR, $TMP, $TEMP,
/tmp, and /var/tmp, in that order. Failing that, it will use /tmp.
The --with-umask argument to configure can be used to change
the default umask used by the CVS server executable.
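For example (a sketch; both values are purely illustrative, and the
umask is assumed to be given in octal):
$ ./configure --with-tmpdir=/var/tmp --with-umask=002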
Unlike previous versions of CVS, you do not need to install RCS
or GNU diff.
If you are using gcc and are planning to modify CVS, you may want to
configure with -Wall; see the file HACKING for details.
If you have Kerberos 4 installed, you can specify the location of
the header files and libraries using the --with-krb4=DIR option.
DIR should be a directory with subdirectories include and lib
holding the Kerberos 4 header files and libraries, respectively.
The default value is /usr/kerberos.
If you want to enable support for encryption over Kerberos, use
the --enable-encryption option. This option is disabled by
default.
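For example (a sketch using the default Kerberos 4 location mentioned
above):
$ ./configure --with-krb4=/usr/kerberos --enable-encryption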
If you want to disable automatic dependency tracking in the makefiles,
use the '--disable-dependency-tracking' option:
$ ./configure --disable-dependency-tracking
This avoids problems on some platforms. See the note at the end of this
file on BSD.
Try './configure --help' for further information on its usage.
NOTE ON CVS's USE OF NDBM:
By default, CVS uses some built-in ndbm emulation code to allow
CVS to work in a heterogeneous environment. However, if you have
a very large modules database, this may not work well. In that case
you will need to supply the --disable-cvs-ndbm option to configure so
that the real ndbm(3) libraries are used instead. If you do this, the
following comments apply. If
not, you may safely skip these comments.
If you configure CVS to use the real ndbm(3) libraries and
you do not have them installed in a "normal" place, you will
probably want to get the GNU version of ndbm (gdbm) and install
that before running the CVS configure script. Be aware that the
GDBM 1.5 release does NOT install the <ndbm.h> header file included
with the release automatically. You may have to install it by hand.
If you configure CVS to use the ndbm(3) libraries, you cannot
compile CVS with GNU cc (gcc) on Sun-4 SPARC systems. However, gcc
2.0 may have fixed this limitation if -fpcc-struct-return is
defined. When using gcc on other systems to compile CVS, you *may*
need to specify the -fpcc-struct-return option to gcc (you will
*know* you have to if "cvs checkout" core dumps in some ndbm
function). You can do this as follows:
$ CC='gcc -fpcc-struct-return' ./configure
for sh, bash, and ksh users and:
% setenv CC 'gcc -fpcc-struct-return'
% ./configure
for csh and tcsh users.
END OF NOTE FOR NDBM GUNK.
2) Try to build it:
$ make
This will (hopefully) make the needed CVS binaries within the
"src" directory. If something fails for your system, and you want
to submit a bug report, you may wish to include your
"config.status" file, your host type, operating system and
compiler information, make output, and anything else you think
will be helpful.
3) Run the regression tests (optional).
You may also wish to validate the correctness of the new binary by
running the regression tests. If they succeed, that is nice to
know. However, if they fail, it doesn't tell you much. Often it
will just be a problem with running the tests on your machine,
rather than a problem with CVS. Unless you will have the time to
determine which of the two it is in case of failure, you might
want to save yourself the time and just not run the tests.
If you want to run the tests, see the file TESTS for more information.
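If you do decide to run them, the targets used elsewhere in this
distribution are "check" and "remotecheck"; a sketch, run from the
top of the build tree:
$ make check
$ make remotecheck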
4) Install the binaries/documentation:
$ make install
Depending on your installation's configuration, you may need to be
root to do this.
-------------------------------------------------------------------------------
Detailed information about your interaction with "configure":
The "configure" script and its interaction with its options and the
environment is described here. For more detailed documentation about
"configure", please run `./configure --help' or refer to the GNU Autoconf
documentation.
Supported options are:
--srcdir=DIR Useful for compiling on many different
machines sharing one source tree.
--prefix=DIR The root of where to install the
various pieces of CVS (/usr/local).
--exec_prefix=DIR If you want executables in a
host-dependent place and shared
things in a host-independent place.
The following environment variables override configure's default
behaviour:
CC If not set, tries to use gcc first,
then cc. Also tries to use "-g -O"
as options, backing down to -g
alone if that doesn't work.
INSTALL If not set, tries to use "install", then
"./install-sh" as a final choice.
RANLIB If not set, tries to determine if "ranlib"
is available, choosing "echo" if it doesn't
appear to be.
YACC If not set, tries to determine if "bison"
is available, choosing "yacc" if it doesn't
appear to be.
-------------------------------------------------------------------------------
Building from source code under Windows NT/95/98/2000:
You may find interesting information in windows-NT/README.
* Using Microsoft Visual C++ 5.x (this is currently broken - someone with
MSVC++ 5.x needs to regenerate the project files, but the builds using `nmake'
below will work).
1) Using Microsoft Visual C++ 5.x, open the project `cvsnt.dsw',
in the top directory of the CVS distribution. If you have an older
version of Visual C++, take a look at windows-NT/README.
2) Choose "Build cvs.exe" from the "Project" menu.
3) MSVC will place the executable file cvs.exe in WinRel, or whatever
your target directory is.
* From the top level directory, with MSVC++ 6.0 installed, something like the
following also works:
C:\> vcvars32
C:\> nmake /f cvsnt.mak CFG="cvsnt - Win32 Debug"
* Using the Cygwin development environment <http://cygwin.com>, Windows clients
and servers can be built using the instructions for building on UNIX. For
deploying the CVS server on Windows NT, see the `cygrunsrv' executable that
comes with Cygwin.
* You might also try <http://wincvs.org> & <http://www.cvsnt.org>.
-------------------------------------------------------------------------------
Building from source code under other platforms:
For OS/2, see os2/README and emx/README.
For VMS, see README.VMS
Mac OS X: Builds fine, just like UNIX.
For older versions of Mac OS, you might try <http://wincvs.org>.
For a Java client, see jCVS (which is a separate package from CVS
itself, but which might be preferable to the Macintosh port mentioned
above, for example).
-------------------------------------------------------------------------------

View File

@ -1,61 +0,0 @@
Low-priority bugs go here. Actually, almost every documented bug is
"low-priority"--in the sense that if it is documented, it means no one
has gotten around to fixing it.
* "cvs update -ko -p -r REV file" doesn't seem to pay attention to the
'-ko', at least in client/server mode. A simple work around is to
temporarily change the db file with "cvs admin -ko file", then switch
it back to the original modes after the checkout (probably '-kkv').
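A sketch of that work around, using the placeholders from above
("file.out" is also just a placeholder):
cvs admin -ko file
cvs update -p -r REV file > file.out
cvs admin -kkv file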
* "cvs status" has a difference in its output between local and
client/server mode. Namely there's a tab character followed by a
ctime(3)-style date string at the end of the "Working revision:"
field.
* commands which don't work in a local working directory should probably
ignore any CVS/Root values and revert to using CVSROOT alone. The
current use of CVS/Root can be very confusing if you forget you're in
a working directory for a remote module -- something that's very easy
to do since CVS hides the client operation very well, esp. for
commands which fail for this reason. The only clue might be the word
"server" in a message such as this:
cvs server: cannot find module `patch' - ignored
* cvs init may give a strange error at times:
ttyp4:<woods@clapton> $ cvs -d /local/src-CVS init
cvs [init aborted]: cannot open CVS/Root: No such file or directory
but it seemed to work just the same.... Note that at the time CVSROOT
was set to point to a CVS server using the ":server:" option.
* If a ~/CVS/Root file exists on the server and you are using rsh to
connect to the server, CVS may lose its mind (this was reported in
May 1995 and I suspect the symptoms have changed, but I have no
particular reason to think the bug is fixed -kingdon, Sep 96).
* (Jeff Johnson <jbj@jbj.org>)
I tried a "cvs status -v" and received the following:
? CVS
? programs/CVS
? tests/CVS
cvs server: Examining .
===================================================================
File: Install.dec Status: Up-to-date
...
I claim that CVS dirs should be ignored.
(This reportedly happens if "cvs add CVS" (or "cvs add *")
is followed by "cvs status", in client/server mode - CVS 1.9).
* On remote checkout, files don't have the right time/date stamps in
the CVS/Entries files. Doesn't look like the C/S protocol has any
way to send this information along (according to cvsclient.texi).
Perhaps we can spiff it up a bit by using the conflict field for the
stamp on the checkout/update command. Please note that this really
doesn't do very much for us even if we get it done.
* Does the function that lists the available modules in the repository
belong under the "checkout" function? Perhaps it is more logically
grouped with the "history" function or we should create a new "info"
function?

View File

@ -1,58 +0,0 @@
## Process this file with automake to produce Makefile.in
# Master Makefile for the GNU Concurrent Versions System.
# Copyright (C) 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994,
# 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003
# Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
## Subdirectories to run make in for the primary targets.
# Unix source subdirs, where we'll want to run lint and etags:
# This is a legacy variable from before Automake
USOURCE_SUBDIRS = lib zlib diff src
# All other subdirs:
SUBDIRS = $(USOURCE_SUBDIRS) man doc contrib tools \
windows-NT os2 emx vms
EXTRA_DIST = \
.cvsignore \
BUGS \
ChangeLog.zoo \
DEVEL-CVS \
FAQ \
HACKING \
MINOR-BUGS \
PROJECTS \
README.VMS \
TESTS \
build.com \
cvs-format.el \
cvsnt.dep \
cvsnt.dsp \
cvsnt.dsw \
cvsnt.mak \
cvs.spec \
mktemp.sh
## MAINTAINER Targets
.PHONY: localcheck remotecheck
localcheck remotecheck: all
cd src && $(MAKE) $(AM_MAKEFLAGS) "$@"
.PHONY: doc
doc:
cd doc && $(MAKE) $(AM_MAKEFLAGS) "$@"
# for backwards compatibility with the old makefiles
.PHONY: realclean
realclean: maintainer-clean

View File

@ -1,675 +0,0 @@
# Makefile.in generated by automake 1.10 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
# Master Makefile for the GNU Concurrent Versions System.
# Copyright (C) 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994,
# 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003
# Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = .
DIST_COMMON = README $(am__configure_deps) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(srcdir)/config.h.in \
$(srcdir)/cvs.spec.in $(top_srcdir)/configure \
$(top_srcdir)/emx/Makefile.in $(top_srcdir)/os2/Makefile.in \
$(top_srcdir)/zlib/Makefile.in AUTHORS COPYING COPYING.LIB \
ChangeLog INSTALL NEWS TODO compile depcomp install-sh \
mdate-sh missing mkinstalldirs ylwrap
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.in
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno config.status.lineno
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = config.h
CONFIG_CLEAN_FILES = cvs.spec emx/Makefile os2/Makefile zlib/Makefile
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-dvi-recursive install-exec-recursive \
install-html-recursive install-info-recursive \
install-pdf-recursive install-ps-recursive install-recursive \
installcheck-recursive installdirs-recursive pdf-recursive \
ps-recursive uninstall-recursive
RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \
distclean-recursive maintainer-clean-recursive
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
{ test ! -d $(distdir) \
|| { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -fr $(distdir); }; }
DIST_ARCHIVES = $(distdir).tar.gz $(distdir).tar.bz2
GZIP_ENV = --best
distuninstallcheck_listfiles = find . -type f -print
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CSH = @CSH@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EDITOR = @EDITOR@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
KRB4 = @KRB4@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAINT = @MAINT@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
MKTEMP = @MKTEMP@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PERL = @PERL@
PR = @PR@
PS2PDF = @PS2PDF@
RANLIB = @RANLIB@
ROFF = @ROFF@
SENDMAIL = @SENDMAIL@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TEXI2DVI = @TEXI2DVI@
VERSION = @VERSION@
YACC = @YACC@
YFLAGS = @YFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
ac_prefix_program = @ac_prefix_program@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
includeopt = @includeopt@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
with_default_rsh = @with_default_rsh@
with_default_ssh = @with_default_ssh@
# Unix source subdirs, where we'll want to run lint and etags:
# This is a legacy variable from b4 Automake
USOURCE_SUBDIRS = lib zlib diff src
# All other subdirs:
SUBDIRS = $(USOURCE_SUBDIRS) man doc contrib tools \
windows-NT os2 emx vms
EXTRA_DIST = \
.cvsignore \
BUGS \
ChangeLog.zoo \
DEVEL-CVS \
FAQ \
HACKING \
MINOR-BUGS \
PROJECTS \
README.VMS \
TESTS \
build.com \
cvs-format.el \
cvsnt.dep \
cvsnt.dsp \
cvsnt.dsw \
cvsnt.mak \
cvs.spec \
mktemp.sh
all: config.h
$(MAKE) $(AM_MAKEFLAGS) all-recursive
.SUFFIXES:
am--refresh:
@:
$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --gnu '; \
cd $(srcdir) && $(AUTOMAKE) --gnu \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
cd $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
config.h: stamp-h1
@if test ! -f $@; then \
rm -f stamp-h1; \
$(MAKE) $(AM_MAKEFLAGS) stamp-h1; \
else :; fi
stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status
@rm -f stamp-h1
cd $(top_builddir) && $(SHELL) ./config.status config.h
$(srcdir)/config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
cd $(top_srcdir) && $(AUTOHEADER)
rm -f stamp-h1
touch $@
distclean-hdr:
-rm -f config.h stamp-h1
cvs.spec: $(top_builddir)/config.status $(srcdir)/cvs.spec.in
cd $(top_builddir) && $(SHELL) ./config.status $@
emx/Makefile: $(top_builddir)/config.status $(top_srcdir)/emx/Makefile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
os2/Makefile: $(top_builddir)/config.status $(top_srcdir)/os2/Makefile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
zlib/Makefile: $(top_builddir)/config.status $(top_srcdir)/zlib/Makefile.in
cd $(top_builddir) && $(SHELL) ./config.status $@
# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
# To change the values of `make' variables: instead of editing Makefiles,
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.
$(RECURSIVE_TARGETS):
@failcom='exit 1'; \
for f in x $$MAKEFLAGS; do \
case $$f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
esac; \
done; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
dot_seen=yes; \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| eval $$failcom; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"
$(RECURSIVE_CLEAN_TARGETS):
@failcom='exit 1'; \
for f in x $$MAKEFLAGS; do \
case $$f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
esac; \
done; \
dot_seen=no; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| eval $$failcom; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: tags-recursive $(HEADERS) $(SOURCES) config.h.in $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS) config.h.in $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) config.h.in $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) config.h.in $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
$(am__remove_distdir)
test -d $(distdir) || mkdir $(distdir)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -d "$(distdir)/$$subdir" \
|| $(MKDIR_P) "$(distdir)/$$subdir" \
|| exit 1; \
distdir=`$(am__cd) $(distdir) && pwd`; \
top_distdir=`$(am__cd) $(top_distdir) && pwd`; \
(cd $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$top_distdir" \
distdir="$$distdir/$$subdir" \
am__remove_distdir=: \
am__skip_length_check=: \
distdir) \
|| exit 1; \
fi; \
done
-find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r $(distdir)
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
$(am__remove_distdir)
dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2
$(am__remove_distdir)
dist-tarZ: distdir
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__remove_distdir)
dist-shar: distdir
shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
$(am__remove_distdir)
dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__remove_distdir)
dist dist-all: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2
$(am__remove_distdir)
# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bunzip2 -c $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
GZIP=$(GZIP_ENV) gunzip -c $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
esac
chmod -R a-w $(distdir); chmod a+w $(distdir)
mkdir $(distdir)/_build
mkdir $(distdir)/_inst
chmod a-w $(distdir)
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& cd $(distdir)/_build \
&& ../configure --srcdir=.. --prefix="$$dc_install_base" \
$(DISTCHECK_CONFIGURE_FLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) dvi \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck
$(am__remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x'
distuninstallcheck:
@cd $(distuninstallcheck_dir) \
&& test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-recursive
all-am: Makefile config.h
installdirs: installdirs-recursive
installdirs-am:
install: install-recursive
install-exec: install-exec-recursive
install-data: install-data-recursive
uninstall: uninstall-recursive
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive
clean-am: clean-generic mostlyclean-am
distclean: distclean-recursive
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f Makefile
distclean-am: clean-am distclean-generic distclean-hdr distclean-tags
dvi: dvi-recursive
dvi-am:
html: html-recursive
info: info-recursive
info-am:
install-data-am:
install-dvi: install-dvi-recursive
install-exec-am:
install-html: install-html-recursive
install-info: install-info-recursive
install-man:
install-pdf: install-pdf-recursive
install-ps: install-ps-recursive
installcheck-am:
maintainer-clean: maintainer-clean-recursive
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-recursive
mostlyclean-am: mostlyclean-generic
pdf: pdf-recursive
pdf-am:
ps: ps-recursive
ps-am:
uninstall-am:
.MAKE: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) install-am \
install-strip
.PHONY: $(RECURSIVE_CLEAN_TARGETS) $(RECURSIVE_TARGETS) CTAGS GTAGS \
all all-am am--refresh check check-am clean clean-generic \
ctags ctags-recursive dist dist-all dist-bzip2 dist-gzip \
dist-shar dist-tarZ dist-zip distcheck distclean \
distclean-generic distclean-hdr distclean-tags distcleancheck \
distdir distuninstallcheck dvi dvi-am html html-am info \
info-am install install-am install-data install-data-am \
install-dvi install-dvi-am install-exec install-exec-am \
install-html install-html-am install-info install-info-am \
install-man install-pdf install-pdf-am install-ps \
install-ps-am install-strip installcheck installcheck-am \
installdirs installdirs-am maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-generic pdf \
pdf-am ps ps-am tags tags-recursive uninstall uninstall-am
.PHONY: localcheck remotecheck
localcheck remotecheck: all
cd src && $(MAKE) $(AM_MAKEFLAGS) "$@"
.PHONY: doc
doc:
cd doc && $(MAKE) $(AM_MAKEFLAGS) "$@"
# for backwards compatibility with the old makefiles
.PHONY: realclean
realclean: maintainer-clean
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

File diff suppressed because it is too large

View File

@ -1,53 +0,0 @@
This is a list of projects for CVS. In general, unlike the things in
the TODO file, these need more analysis to determine if and how
worthwhile each task is.
I haven't gone through TODO, but it's likely that it has entries that
are actually more appropriate for this list.
0. Improved Efficiency
* CVS uses a single doubly linked list/hash table data structure for
all of its lists. Since the back links are only used for deleting
list nodes it might be beneficial to use singly linked lists or a
tree structure. Most likely, a single list implementation will not
be appropriate for all uses.
One easy change would be to remove the "type" field from the list
and node structures. I have found it to be of very little use when
debugging, and each instance eats up a word of memory. This can add
up and be a problem on memory-starved machines.
Profiles have shown that on fast machines like the Alpha, fsortcmp()
is one of the hot spots.
* Dynamically allocated character strings are created, copied, and
destroyed throughout CVS. The overhead of malloc()/strcpy()/free()
needs to be measured. If significant, it could be minimized by using a
reference counted string "class".
* File modification time is stored as a character string. It might be
worthwhile to use a time_t internally if the time to convert a time_t
(from struct stat) to a string is greater than the time to convert a
ctime style string (from the entries file) to a time_t. time_t is
a machine-dependent type (although it's pretty standard on UN*X
systems), so we would have to have different conversion routines.
Profiles show that both operations are called about the same number
of times.
* stat() is one of the largest performance bottlenecks on systems
without the 4.4BSD filesystem. By splitting information out of
the filesystem (perhaps the "rename database") we should be
able to improve performance.
* Parsing RCS files is very expensive. This might be unnecessary if
RCS files are only used as containers for revisions, and tag,
revision, and date information was available in easy to read
(and modify) indexes. This becomes very apparent with files
with several hundred revisions.
1. Improved testsuite/sanity check script
* Need to use a code coverage tool to determine how much the sanity
script tests, and fill in the holes.
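One possible way to start on this, assuming GCC and gcov (the flags
and paths below are only a sketch, not part of the existing build
machinery):
    $ ./configure CFLAGS="-g -O0 -fprofile-arcs -ftest-coverage"
    $ make && make check
    $ cd src && gcov *.c      # produces annotated *.gcov files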

View File

@ -1,144 +0,0 @@
CVS Kit
Copyright (C) 1986-2006 Free Software Foundation, Inc.
Portions Copyright (C) 1998-2006 Derek Price,
& Ximbiot <http://ximbiot.com>.
Portions Copyright (C) 1993-1994 Brian Berliner.
Portions Copyright (C) 1992 Brian Berliner and Jeff Polk.
Portions Copyright (C) 1989-1992 Brian Berliner.
All Rights Reserved
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 1, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
-------------------------------------------------------------------------------
Welcome to CVS!
If you have problems or think you have found a bug in CVS, see the
section BUGS in the CVS manual (also known as Version Management with
CVS by Per Cederqvist et al, or cvs.texinfo--see below for details).
If you are thinking of submitting changes to CVS, see the
file HACKING.
Please consult the INSTALL file for information on tested
configurations. If you have a comment about an already tested
configuration, or have tried CVS on a new configuration, please let us
know as described in INSTALL. Free software only works if we all help
out.
Finally, we cannot guarantee that this release will not completely wipe out
all of your work from your system. We do some simple testing before each
release, but you are completely on your own. We recommend testing this
release on a source repository that is not critical to your work. THIS
SOFTWARE IS SUPPLIED COMPLETELY "AS IS". NO WARRANTY....
Thanks for your support!
-The CVS Team
-------------------------------------------------------------------------------
What Is CVS?
CVS is a version control system, which allows you to keep old versions
of files (usually source code), keep a log of who, when, and why
changes occurred, etc., like RCS or SCCS. It handles multiple
developers, multiple directories, triggers to enable/log/control
various operations, and can work over a wide area network. The
following tasks are not included; they can be done in conjunction with
CVS but will tend to require some script-writing and software other
than CVS: bug-tracking, build management (that is, make and make-like
tools), and automated testing.
And a whole lot more. See the manual for more information.
-------------------------------------------------------------------------------
Notes to people upgrading from a previous release of CVS:
See the NEWS file for a description of features new in this version.
See the Compatibility section of the manual for information on
compatibility between CVS versions. The quick summary is that as long
as you are not using the optional watch features, there are no
compatibility problems with CVS 1.5 or later.
-------------------------------------------------------------------------------
Verifying the Integrity of Downloads:
The official CVS source and binary releases are signed by the CVS maintainer
who generated them. This does not imply any sort of warranty, but it does mean
that you can verify that the file you downloaded did, in fact, come from a CVS
maintainer.
The OpenPGP keys of the CVS maintainers who have submitted them are in the KEYS
file of the CVS distribution and are also available from many OpenPGP key
servers. It is recommended that you verify the key fingerprints against an
external source, however you obtain the key.
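For example, with GnuPG (the release file name below is only
illustrative):
    $ gpg --import KEYS
    $ gpg --verify cvs-X.Y.Z.tar.bz2.sig cvs-X.Y.Z.tar.bz2
gpg should report a good signature from one of the keys in the KEYS
file; check that key's fingerprint against an external source as
described above.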
-------------------------------------------------------------------------------
Installation:
Please read the INSTALL file for installation instructions. Brief summary:
$ ./configure
$ make
(run the regression tests if desired)
$ make install
(create a repository if you don't already have one)
The documentation is in the doc subdirectory. cvs.texinfo is the main
manual; cvs.info* and cvs.ps are the info and postscript versions,
respectively, generated from cvs.texinfo. The postscript version is
for US letter size paper; we do this not because we consider this size
"better" than A4, but because we believe that the US letter version
will print better on A4 paper than the other way around. If you want a
version formatted for A4, add the line @afourpaper near the start of
cvs.texinfo and re-generate cvs.ps using TeX.
-------------------------------------------------------------------------------
* How do I get up-to-date information and information about other
versions of CVS?
See also
http://cvs.nongnu.org
http://www.cvsnt.org
Anyone can add themselves to the following mailing lists:
bug-cvs: This is the list which users are requested to send bug reports
to. General CVS development and design discussions also tend to take
place on this list.
info-cvs: This list is intended for user questions, including general
help requests.
cvs-announce: CVS release announcements and other major
announcements about the project are sent to this list.
cvs-announce-binaries: Announcements are made to this list
when binaries for various platforms are built and initially
posted for download.
To subscribe to any of these lists, send mail to <list>-request@nongnu.org
or visit http://savannah.nongnu.org/mail/?group=cvs and follow the instructions
for the list you wish to subscribe to.
The newsgroup for CVS (and other configuration management systems) is
comp.software.config-mgmt. The gnu.cvs.help newsgroup is a 2-way mirror
of the info-cvs@nongnu.org mailing list and gnu.cvs.bug is similarly a 2-way
mirror of bug-cvs@nongnu.org.
-------------------------------------------------------------------------------
Credits: See the AUTHORS file.

View File

@ -1,240 +0,0 @@
To run the tests:
$ make check
Note that if your /bin/sh doesn't support shell functions, you'll
have to try something like this, where "/bin/sh5" is replaced by the
pathname of a shell which handles normal shell functions:
$ make SHELL=/bin/sh5 check
Also note that you must be logged in as a regular user, not root.
WARNING: This test can take quite a while to run, esp. if your
disks are slow or over-loaded.
The tests work in /tmp/cvs-sanity (which the tests create) by default.
If for some reason you want them to work in a different directory, you
can set the TESTDIR environment variable to the desired location
before running them.
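For example (the path is arbitrary):
    $ TESTDIR=/var/tmp/cvs-sanity make check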
The tests use a number of tools (awk, expr, id, tr, etc.) that are not
required for running CVS itself. In most cases, the standard vendor-
supplied versions of these tools work just fine, but there are some
exceptions -- expr in particular is heavily used and many vendor
versions are deficient in one way or another. Note that some vendors
provide multiple versions of tools (typically an ancient, traditional
version and a new, standards-conforming version), so you may already
have a usable version even if the default version isn't. If you don't
have a suitable tool, you can probably get one from the GNU Project (see
http://www.gnu.org). At this writing, expr and id are both part of the
GNU shellutils package, tr is part of the GNU textutils package, and awk
is part of the GNU gawk package. The test script tries to verify that
the tools exist and are usable; if not, it tries to find the GNU
versions and use them instead. If it can't find the GNU versions
either, it will print an error message and, depending on the severity of
the deficiency, it may exit. There are environment variables you can
set to use a particular version of a tool -- see the test script
(src/sanity.sh) for details.
Some of the tests use fairly long command lines -- this usually isn't a
problem, but if you have a very short command line length limit (or a
lot of environment variables), you may run into trouble. Also, some of
the tests expect your local timezone to be an integral number of hours
from UTC -- if you usually use a fractional timezone, use a different
(integral) timezone when running the tests to avoid spurious failures.
If running the tests produces the output "FAIL:" followed by the name
of the test that failed, then the details on the failure are in the
file check.log. If it says "exit status is " followed by a number,
then the exit status of the command under test was not what the test
expected. If it says "** expected:" followed by a regular expression
followed by "** got:" followed by some text, then the regular
expression is the output which the test expected, and the text is the
output which the command under test actually produced. In some cases
you'll have to look closely to see how they differ.
If output from "make remotecheck" is out of order compared to what is
expected (for example,
a
b
cvs foo: this is a demo
is expected and
a
cvs foo: this is a demo
b
is output), this is probably a well-known bug in the CVS server
(search for "out-of-order" in src/server.c for a comment explaining
the cause). It is a real pain in running the testsuite, but if you
are lucky and/or your machine is fast and/or lightly loaded, you won't
run into it. Running the tests again might succeed if the first run
failed in this manner.
For more information on what goes in check.log, and how the tests are
run in general, you'll have to read sanity.sh. Depending on just what
you are looking for, and how familiar you are with the Bourne shell
and regular expressions, it will range from relatively straightforward
to obscure.
If you choose to submit a bug report based on tests failing, be
aware that, as with all bug reports, you may or may not get a
response, and your odds might be better if you include enough
information to reproduce the bug, an analysis of what is going
wrong (if you have the time to provide one), etc. The check.log
file is the first place to look.
ABOUT STDOUT AND STDERR
***********************
The sanity.sh test framework combines stdout and stderr and for tests
to pass requires that output appear in the given order. Some people
suggest that ordering between stdout and stderr should not be
required, or to put it another way, that the out-of-order bug referred
to above, and similar behaviors, should be considered features, or at
least tolerable. The reasoning behind the current behavior is that
having the output appear in a certain order is the correct behavior
for users using CVS interactively--that users get confused if the
order is unpredictable.
ABOUT TEST FRAMEWORKS
*********************
People periodically suggest using dejagnu or some other test
framework. A quick look at sanity.sh should make it clear that there
are indeed reasons to be dissatisfied with the status quo. Ideally a
replacement framework would achieve the following:
1. Widely portable, including to a wide variety of unices, NT, Win95,
OS/2, VMS, probably DOS and Win3, etc.
2. Nicely match extended regular expressions of unlimited length.
3. Be freely redistributable, and if possible already the kind of
thing people might have already installed. The harder it is to get
and install the framework, the less people will run the tests.
The various contenders are:
* Bourne shell and GNU expr (the status quo). Falls short on #1
(we've only tried unix and NT, although MKS might help with other DOS
mutants). #3 is pretty good (the main dependency is GNU expr which is
fairly widely available).
* Bourne shell with a new regexp matcher we would distribute with
CVS. This means maintaining a regexp matcher and the makefiles which
go with it. Not clearly a win over Bourne shell and GNU expr.
* Bourne shell, and use sed to remove variable portions of output, and
thus produce a form that can be compared with cmp or diff (this
sidesteps the need for a full regular expression matcher as mentioned
in #2 above). The C News tests are said to work this way. This would
appear to rely on variable portions of output having a certain syntax
and might spuriously recognize them out of context (this issue needs
more investigation; it isn't clear how big a problem it is in
practice). Same portability issues as the other choices based on the
Bourne shell.
* Dejagnu. This is overkill; most of dejagnu is either unnecessary
(e.g. libraries for communicating with target boards) or undesirable
(e.g. the code which stats every file in sight to find the tests). On
the plus side, dejagnu is probably closer than any of the other
choices to having everything which is needed already there.
* Write our own small framework directly in tcl and distribute with
CVS. The tests would look much like dejagnu tests, but we'd avoid the
unnecessary baggage. The only dependency would be on tcl (that is,
wish).
* perl or python or <any other serious contenders here?>
It is worth thinking about how to:
a. include spaces in arguments which we pass to the program under
test (sanity.sh dotest cannot do this; see test rcs-9 for a
workaround).
b. pass stdin to the program under test (sanity.sh, again, handles
this by bypassing dotest).
c. have a send-expect type dialog with the program under test
(e.g. see server-7 or pserver-4 which want to talk the CVS
protocol, or the many tests which need to answer the prompt of "cvs
release", e.g. deep-5).
ABOUT ADDING YOUR OWN TESTS
***************************
As stated in the HACKING file, patches are not accepted without documentation
and tests. Many people seem to be scared off by the large size of the
sanity.sh script, but it is not really very complicated.
You can probably ignore most of the beginning of the script. This section
just sets some environment variables and finds the tools the script needs to
run.
There is one main loop you can find by grepping for "The big loop". This loop
repeatedly calls a case statement where the individual cases are of the form:
testname)
...
;;
If you add a complete new test be sure to add it into the default list of tests
(grep for 'tests=' near the beginning of the script) as well as the case
statement. During debugging, be aware that the sanity.sh usage allows for a '-f
testname' option to continue through the default list "from" a particular test
as well as interpreting everything in argv past the required options as test
names to run individual tests.
Within each major test section, individual tests usually look like:
dotest testname-subtestname "shell command" "optionally multiline regexp"
Tests should always start in $testdir and create a subdirectory to operate in
and remove their cruft and end back in $testdir. The dotest functions output
failure messages and exit if the shell command exits with the wrong exit code or
its stdout/stderr output doesn't match the regexp. There are a few dotest
variations, most notably dotest_fail for expected non-zero exit codes.
Other than that the script is mostly vanilla Bourne shell. There are a few
constructs used for versatility and portability. You can grep for the ones I
miss, but here are a few important ones. I'm leaving off long explanations
after the first few since it probably gives you the idea and the data is in
sanity.sh.
Note that the boolean variables contain shell commands which return true or
false when executed and are intended to be used like,
"if $remote; then ... ; else ... ; fi"
* $testdir = the directory this test is taking place in
(CVSROOT=$testdir/cvsroot or
CVSROOT=:fork:$testdir/cvsroot)
* $testcvs = full path to the cvs executable we are testing
* $PLUS = expr-dependent uninterpreted '+' since this can vary
* $DOTSTAR = expr-dependent _interpreted_ .* since some exprs don't
match EOL
* $username = the username of the user running the tests
* $username8 = the first 8 characters of $username, output by some
system and CVS commands
* $anyusername = regexp to match any valid system or CVS username
* $hostname = regexp to match a hostname
* $PROG = regexp to match progname in CVS error messages
* $remote = ':' (true) or 'false', depending on whether the script is
running with a remote CVSROOT
* $keep = ':' (true) or 'false'. When set, the first test run will
leave any files and directories it created in $testdir and
exit when complete.
And, of course, some characters like '.' in regexps need to be '\' escaped when
you mean them literally. Some characters may be interpreted by the shell,
e.g. backquotes and '$', are usually either escaped or replaced with '.'.
dotest adds the final '$' anchor to the regexp itself and all the expr
implementations I know of implicitly supply the start anchor ('^').
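Putting the pieces above together, a deliberately trivial sketch of
what a new test section might look like (the test name and commands
are made up for illustration; crib from existing tests for anything
more involved):
    mynewtest)
      mkdir mynewtest-dir; cd mynewtest-dir
      # check out the (empty) top level, expecting no output
      dotest mynewtest-1 "${testcvs} -q co -l ." ''
      # a quiet update of an up-to-date tree should also be silent
      dotest mynewtest-2 "${testcvs} -q update" ''
      cd ..
      rm -r mynewtest-dir
      ;;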
If you only make a few mistakes, the work is, of course, still usable, though we
may send the patch back to you for repair. :)

View File

@ -1,862 +0,0 @@
The "TODO" file! -*-Indented-Text-*-
38. Think hard about using RCS state information to allow one to checkin
a new vendor release without having it be accessed until it has been
integrated into the local changes.
39. Think about a version of "cvs update -j" which remembers what from
that other branch is already merged. This has pitfalls--it could
easily lead to invisible state which could confuse users very
rapidly--but having to create a tag or some such mechanism to keep
track of what has been merged is a pain. Take a look at PRCS 1.2.
PRCS 1.0 was particularly bad the way it handled the "invisible
state", but 1.2 is significantly better.
52. SCCS has a feature that I would *love* to see in CVS, as it is very
useful. One may make a private copy of SCCS suid to a particular user,
so other users in the authentication list may check files in and out of
a project directory without mucking about with groups. Is there any
plan to provide a similar functionality to CVS? Our site (and, I'd
imagine, many other sites with large user bases) has decided against
having the user-groups feature of unix available to the users, due to
perceived administrative, technical and performance headaches. A tool
such as CVS with features that provide group-like functionality would
be a huge help.
62. Consider using revision controlled files and directories to handle the
new module format -- consider a cvs command front-end to
add/delete/modify module contents, maybe.
63. The "import" and vendor support commands (co -j) need to be documented
better.
66. Length of the CVS temporary files must be limited to 14 characters for
stupid System-V support, as well as the length of the CVS.adm files.
72. Consider re-design of the module -t options to use the file system more
intuitively.
73. Consider an option (in .cvsrc?) to automatically add files that are new
and specified to commit.
79. Might be nice to have some sort of interface to Sun's Translucent
(?) File System and tagged revisions.
82. Maybe the import stuff should allow an arbitrary revision to be
specified.
84. Improve the documentation about administration of the repository and
how to add/remove files and the use of symbolic links.
85. Make symbolic links a valid thing to put under version control.
Perhaps use one of the tag fields in the RCS file? Note that we
can only support symlinks that are relative and within the scope of
the sources being controlled.
93. Need to think hard about release and development environments. Think
about execsets as well.
98. If diff3 bombs out (too many differences) cvs then thinks that the file
has been updated and is OK to be committed even though the file
has not yet been merged.
100. Checked out files should have revision control support. Maybe.
102. Perhaps directory modes should be propagated on all import check-ins.
Not necessarily uid/gid changes.
103. setuid/setgid on files is suspect.
104. cvs should recover nicely on unreadable files/directories.
105. cvs should have administrative tools to allow for changing permissions
and modes and what not. In particular, this would make cvs a
more attractive alternative to rdist.
107. It should be possible to specify a list of symbolic revisions to
checkout such that the list is processed in reverse order looking for
matches within the RCS file for the symbolic revision. If there is
not a match, the next symbolic rev on the list is checked, and so on,
until all symbolic revs are exhausted. This would allow one to, say,
checkout "4.0" + "4.0.3" + "4.0.3Patch1" + "4.0.3Patch2" to get the
most recent 4.x stuff. This is usually handled by just specifying the
right release_tag, but most people forget to do this.
108. If someone creates a whole new directory (i.e. adds it to the cvs
repository) and you happen to have a directory in your source farm by
the same name, when you do your cvs update -d it SILENTLY does
*nothing* to that directory. At least, I think it was silent;
certainly, it did *not* abort my cvs update, as it would have if the
same thing had happened with a file instead of a directory.
109. I had gotten pieces of the sys directory in the past but not a
complete tree. I just did something like:
cvs get *
Where sys was in * and got the message
cvs get: Executing 'sys/tools/make_links sys'
sh: sys/tools/make_links: not found
I suspect this is because I didn't have the file in question,
but I do not understand how I could fool it into getting an
error. I think a later cvs get sys seemed to work so perhaps
something is amiss in handling multiple arguments to cvs get?
119. When importing a directory tree that is under SCCS/RCS control,
consider an option to have import checkout the SCCS/RCS files if
necessary. (This is if someone wants to import something which
is in RCS or SCCS without preserving the history, but makes sure
they do get the latest versions. It isn't clear to me how useful
that is -kingdon, June 1996).
122. If Name_Repository fails, it currently causes CVS to die completely. It
should instead return NULL and have the caller do something reasonable
(??? -what is reasonable? I'm not sure there is a real problem here.
-kingdon, June 1996).
123. Add a flag to import to not build vendor branches for local code.
(See `importb' tests in src/sanity.sh for more details).
124. Anyway, I thought you might want to add something like the following
to the cvs man pages:
BUGS
The sum of the sizes of a module key and its contents are
limited. See ndbm(3).
126. Do an analysis to see if CVS is forgetting to close file descriptors.
Especially when committing many files (more than the open file limit
for the particular UNIX).
127. Look at *info files; they should all be quiet if the files are not
there. Should be able to point at a RCS directory and go.
130. cvs diff with no -r arguments does not need to look up the current RCS
version number since it only cares about what's in the Entries file.
This should make it much faster.
It should ParseEntries itself and access the entries list much like
Version_TS does (sticky tags and sticky options may need to be
supported here as well). Then it should only diff the things that
have the wrong time stamp (the ones that look modified).
134. Make a statement about using hard NFS mounts to your source
repository. Look into checking NULL fgets() returns with ferror() to
see if an error had occurred. (we should be checking for errors, quite
aside from NFS issues -kingdon, June 1996).
137. Some sites might want CVS to fsync() the RCS ,v file to protect
against nasty hardware errors. There is a slight performance hit with
doing so, though, so it should be configurable in the .cvsrc file.
Also, along with this, we should look at the places where CVS itself
could be a little more synchronous so as not to lose data.
[[ I've done some of this, but it could use much more ]]
138. Some people have suggested that CVS use a VPATH-like environment
variable to limit the amount of sources that need to be duplicated for
sites with giant source trees and no disk space.
141. Import should accept modules as its directory argument. If we're
going to implement this, we should think hard about how modules
might be expanded and how to handle those cases.
143. Update the documentation to show that the source repository is
something far away from the files that you work on. (People who
come from an RCS background are used to their `repository' being
_very_ close to their working directory.)
144. Have cvs checkout look for the environment variable CVSPREFIX
(or CVSMODPREFIX or some such). If it's set, then when looking
up an alias in the modules database, first look it up with the
value of CVSPREFIX attached, and then look for the alias itself.
This would be useful when you have several projects in a single
repository. You could have aliases abc_src and xyz_src and
tell people working on project abc to put "setenv CVSPREFIX abc_"
in their .cshrc file (or equivalent for other shells).
Then they could do "cvs co src" to get a copy of their src
directory, not xyz's. (This should create a directory called
src, not abc_src.)
145. After you create revision 1.1.1.1 in the previous scenario, if
you do "cvs update -r1 filename" you get revision 1.1, not
1.1.1.1. It would be nice to get the later revision. Again,
this restriction comes from RCS and is probably hard to
change in CVS. Sigh.
|"cvs update -r1 filename" does not tell RCS to follow any branches. CVS
|tries to be consistent with RCS in this fashion, so I would not change
|this. Within CVS we do have the flexibility of extending things, like
|making a revision of the form "-r1HEAD" find the most recent revision
|(branch or not) with a "1." prefix in the RCS file. This would get what
|you want maybe.
This would be very useful. Though I would prefer an option
such as "-v1" rather than "-r1HEAD". This option might be
used quite often.
146. The merging of files should be controlled via a hook so that programs
other than "rcsmerge" can be used, like Sun's filemerge or emacs's
emerge.el. (but be careful in making this work client/server--it means
doing the interactive merging at the end after the server is done).
(probably best is to have CVS do the non-interactive part and
tell the user about where the files are (.#foo.c.working and
.#foo.c.1.5 or whatever), so they can do the interactive part at
that point -kingdon, June 1996).
149. Maybe there should be an option to cvs admin that allows a user to
change the Repository/Root file with some degree of error checking?
Something like "cvs admin reposmv /old/path /new/pretty/path". Before
it does the replace it check to see that the files
/new/pretty/path/<dir>/<files> exist.
The obvious cases are where one moves the repository to another
machine or directory. But there are other cases, like where the
user might want to change from :pserver: to :ext:, use a different
server (if there are two server machines which share the
repository using a networked file system), etc.
The status quo is a bit of a mess (as of, say, CVS 1.9). It is
that the -d global option has two moderately different uses. One
is to use a totally different repository (in which case we'd
probably want to give an error if it disagreed with CVS/Root, as
CVS 1.8 and earlier did). The other is the "reposmv"
functionality above (in which the two repositories really are the
same, and we want to update the CVS/Root files). In CVS 1.9 and
1.10, -d rewrites the CVS/Root file (but not in subdirectories).
This behavior was not particularly popular and has been since
reverted.
This whole area is a rather bad pile of individual decisions which
accumulated over time, some of them probably bad decisions with
hindsight. But we didn't get into this mess overnight, and we're
not going to get out of it overnight (that is, we need to come up
with a replacement behavior, document what parts of the status
quo are deprecated, probably circulate some unofficial patches, &c).
(this item originally added 2 Feb 1992 but revised since).
150. I have a customer request for a way to specify log message per
file, non-interactively before the commit, such that a single, fully
recursive commit prompts for one commit message, and concatenates the
per file messages for each file. In short, one commit, one editor
session, log messages allowed to vary across files within the commit.
Also, the per file messages should be allowed to be written when the
files are changed, which may predate the commit considerably.
A new command seems appropriate for this. The state can be saved in the
CVS directory. I.e.,
% cvs message foo.c
Enter log message for foo.c
>> fixed an uninitialized variable
>> ^D
The text is saved as CVS/foo.c,m (or some such name) and commit
is modified to append (prepend?) the text (if found) to the log
message specified at commit time. Easy enough. (having cvs
commit be non-interactive takes care of various issues like
whether to connect to the server before or after prompting for a
message (see comment in commit.c at call to start_server). Also
would clean up the kludge for what to do with the message from
do_editor if the up-to-date check fails (see commit.c client code).
I'm not sure about the part above about having commit prompt
for an overall message--part of the point is having commit
non-interactive and somehow combining messages seems like (excess?)
hair.
Would be nice to do this so it allows users more flexibility in
specifying messages per-directory ("cvs message -l") or per-tree
("cvs message") or per-file ("cvs message foo.c"), and fixes the
incompatibility between client/server (per-tree) and
non-client/server (per-directory).
A few interesting issues with this: (1) if you do a cvs update or
some other operation which changes the working directory, do you
need to run "cvs message" again (it would, of course, bring up
the old message which you could accept)? Probably yes, after all
merging in some conflicts might change the situation. (2) How do
you change the stored messages if you change your mind before the
commit (probably run "cvs message" again, as hinted in (1))?
151. Also, is there a flag I am missing that allows replacing Ulrtx_Build
by Ultrix_build? I.E. I would like a tag replacement to be a one step
operation rather than a two step "cvs rtag -r Ulrtx_Build Ultrix_Build"
followed by "cvs rtag -d Ulrtx_Build"
152. The "cvs -n" option does not work as one would expect for all the
commands. In particular, for "commit" and "import", where one would
also like to see what it would do, without actually doing anything.
153. There should be some command (maybe I just haven't figured out
which one...) to import a source directory which is already
RCS-administered without losing all prior RCS gathered data.
Thus, it would have to examine the RCS files and choose a
starting version and branch higher than previous ones used.
(Check out rcs-to-cvs and see if it addresses this issue.)
154. When committing the modules file, a pre-commit check should be done to
verify the validity of the new modules file before allowing it to be
committed.
155. The options for "cvs history" are mutually exclusive, even though
useful queries can be done if they are not, as in specifying both
a module and a tag. A workaround is to specify the module, then
run the output through grep to only display lines that begin with
T, which are tag lines. (Better perhaps if we redesign the whole
"history" business -- check out doc/cvs.texinfo for the entire
rant.)
156. Also, how hard would it be to allow continuation lines in the
{commit,rcs,log}info files? It would probably be useful with all of
the various flags that are now available, or if somebody has a lot of
files to put into a module.
158. If I do a recursive commit and find that the same RCS file is checked
out (and modified!) in two different places within my checked-out
files (but within the realm of a single "commit"), CVS will commit the
first change, then overwrite that change with the second change. We
should catch this (typically unusual) case and issue an appropriate
diagnostic and die.
160. The checks that the commit command does should be extended to make
sure that the revision that we will lock is not already locked by
someone else. Maybe it should also lock the new revision if the old
revision was already locked by the user as well, thus moving the lock
forward after the commit.
163. The rtag/tag commands should have an option that removes the specified
tag from any file that is in the attic. This allows one to re-use a
tag (like "Mon", "Tue", ...) all the time and still have it tag the
real main-line code.
165. The "import" command will create RCS files automatically, but will
screw-up when trying to create long file names on short file name
file systems. Perhaps import should be a bit more cautious.
166. There really needs to be a "Getting Started" document which describes
some of the new CVS philosophies. Folks coming straight from SCCS or
RCS might be confused by "cvs import". Also need to explain:
- How one might setup their $CVSROOT
- What all the tags mean in an "import" command
- Tags are important; revision numbers are not
170. Is there an "info" file that can be invoked when a file is checked out, or
updated ? What I want to do is to advise users, particularly novices, of
the state of their working source whenever they check something out, as
a sanity check.
For example, I've written a perl script which tells you what branch you're
on, if any. Hopefully this will help guard against mistaken checkins to
the trunk, or to the wrong branch. I suppose I can do this in
"commitinfo", but it'd be nice to advise people before they edit their
files.
It would also be nice if there was some sort of "verboseness" switch to
the checkout and update commands that could turn this invocation of the
script off, for mature users.
173. Need generic date-on-branch handling. Currently, many commands
allow both -r and -D, but that's problematic for commands like diff
that interpret that as two revisions rather than a single revision.
Checkout and update -j takes tag:date which is probably a better
solution overall.
174. I would like to see "cvs release" modified so that it only removes files
which are known to CVS - all the files in the repository, plus those which
are listed in .cvsignore. This way, if you do leave something valuable in
a source tree you can "cvs release -d" the tree and your non-CVS goodies
are still there. If a user is going to leave non-CVS files in their source
trees, they really should have to clean them up by hand.
175. And, in the feature request department, I'd dearly love a command-line
interface to adding a new module to the CVSROOT/modules file.
176. If you use the -i flag in the modules file, you can control access
to source code; this is a Good Thing under certain circumstances. I
just had a nasty thought, and on experiment discovered that the
filter specified by -i is _not_ run before a cvs admin command; as
this allows a user to go behind cvs's back and delete information
(cvs admin -o1.4 file) this seems like a serious problem.
177. We've got some external vendor source that sits under a source code
hierarchy, and when we do a cvs update, it gets wiped out because
its tag is different from the "main" distribution. I've tried to
use "-I" to ignore the directory, as well as .cvsignore, but this
doesn't work.
179. "cvs admin" does not log its actions with loginfo, nor does it check
whether the action is allowed with commitinfo. It should.
180. "cvs edit" should show you who is already editing the files,
probably (that is, do "cvs editors" before executing, or some
similar result). (But watch out for what happens if the network
is down!).
182. There should be a way to show log entries corresponding to
changes from tag "foo" to tag "bar". "cvs log -rfoo:bar" doesn't cut
it, because it erroneously shows the changes associated with the
change from the revision before foo to foo. I'm not sure that is ever
a useful or logical behavior ("cvs diff -r foo -r bar" gets this
right), but is compatibility an issue? See
http://www.cyclic.com/cvs/unoff-log.txt for an unofficial patch.
183. "cvs status" should report on Entries.Static flag and CVS/Tag (how?
maybe a "cvs status -d" to give directory status?). There should also
be more documentation of how these get set and how/when to re-set them.
184. Would be nice to implement the FreeBSD MD5-based password hash
algorithm in pserver. For more info see "6.1. DES, MD5, and Crypt" in
the FreeBSD Handbook, and src/lib/libcrypt/crypt.c in the FreeBSD
sources. Certainly in the context of non-unix servers this algorithm
makes more sense than the traditional unix crypt() algorithm, which
suffers from export control problems.
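For reference, a minimal sketch of what using that hash could look like where
the system crypt() already supports it; the salt and the tiny program are made
up for illustration and this is not existing pserver code:

    #include <stdio.h>
    #include <unistd.h>          /* crypt() lives here on the BSDs */
    #if defined __GLIBC__
    # include <crypt.h>          /* glibc declares it here; link with -lcrypt */
    #endif

    int
    main (void)
    {
        /* A "$1$" salt prefix selects the MD5-based hash where the system
           crypt() supports it; "mysalt" is an arbitrary example salt.  */
        const char *hash = crypt ("secret", "$1$mysalt$");
        printf ("%s\n", hash ? hash : "(crypt failed)");
        return 0;
    }

On systems whose crypt() lacks the MD5 scheme, pserver would need to carry its
own copy of the algorithm, which is the point of the item.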
185. A frequent complaint is that keyword expansion causes conflicts
when merging from one branch to another. The first step is
documenting CVS's existing features in this area--what happens with
various -k options in various places? The second step is thinking
about whether there should be some new feature and if so how it should
be designed. For example, here is one thought:
rcs' co command needs a new -k option. The new option should expand
$Log entries without expanding $Revision entries. This would
allow cvs to use rcsmerge in such a way that joining branches into
main lines would neither generate extra collisions on revisions nor
drop log lines.
The details of this are out of date (CVS no longer invokes "co", and
any changes in this area would be done by bypassing RCS rather than
modifying it), but even as to the general idea, I don't have a clear
idea about whether it would be good (see what I mean about the need
for better documentation? I work on CVS full-time, and even I don't
understand the state of the art on this subject).
186. There is a frequent discussion of multisite features.
* There may be some overlap with the client/server CVS, which is good
especially when there is a single developer at each location. But by
"multisite" I mean something in which each site is more autonomous, to
one extent or another.
* Vendor branches are the closest thing that CVS currently has for
multisite features. They have fixable drawbacks (such as poor
handling of added and removed files), and more fundamental drawbacks
(when you import a vendor branch, you are importing a set of files,
not importing any knowledge of their version history outside the
current repository).
* One approach would be to require checkins (or other modifications to
the repository) to succeed at a write quorum of sites (51%) before
they are allowed to complete. To work well, the network should be
reliable enough that one can typically get to that many sites. When a
server which has been out of touch reconnects, it would want to update
its data before doing anything else. Any of the servers can service
all requests locally, except perhaps for a check that they are
up-to-date. The way this differs from a run-of-the-mill distributed
database is that if one only allows reversible operations via this
mechanism (exclude "cvs admin -o", "cvs tag -d", &c), then each site
can back up the others, such that failures at one site, including
something like deleting all the sources, can be recovered from. Thus
the sites need not trust each other as much as for many shared
databases, and the system may be resilient to many types of
organizational failures. Sometimes I call this design the
"CVScluster" design.
* Another approach is a master/slave one. Checkins happen at the
master site, and slave sites need to check whether their local
repository is up to date before relying on its information.
* Another approach is to have each site own a particular branch. This
one is the most tolerant of flaky networks; if checkins happen at each
site independently there is no particular problem. The big question
is whether merges happen only manually, as with existing CVS branches,
or whether there is a feature whereby there are circumstances in which
merges from one branch to the other happen automatically (for example,
the case in which the branches have not diverged). This might be a
legitimate question to ask even quite aside from multisite features.
187. Might want to separate out usage error messages and help
messages. The problem now is that if you specify an invalid option,
for example, the error message is lost among all the help text. In
the new regime, the error message would be followed by a one-line
message directing people to the appropriate help option ("cvs -H
<command>" or "cvs --help-commands" or whatever, according to the
situation). I'm not sure whether this change would be controversial
(as defined in HACKING), so there might be a need for further
discussion or other actions other than just coding.
188. Option parsing and .cvsrc has at least one notable limitation.
If you want to set a global option only for some CVS commands, there
is no way to do it (for example, if one wants to set -q only for
"rdiff"). I am told that the "popt" package from RPM
(http://www.rpm.org) could solve this and other problems (for example,
if the syntax of option stuff in .cvsrc is similar to RPM, that would
be great from a user point of view). It would at least be worth a
look (it also provides a cleaner API than getopt_long).
Another issue which may or may not be related is the issue of
overriding .cvsrc from the command line. The cleanest solution might
be to have options in mutually exclusive sets (-l/-R being a current
example, but --foo/--no-foo is a better way to name such options). Or
perhaps there is some better solution.
189. Renaming files and directories is a frequently discussed topic.
Some of the problems with the status quo:
a. "cvs annotate" cannot operate on both the old and new files in a
single run. You need to run it twice, once for the new name and once
for the old name.
b. "cvs diff" (or "cvs diff -N") shows a rename as a removal of the
old file and an addition of the new one. Some people would like to
see the differences between the file contents (but then how would we
indicate the fact that the file has been renamed? Certainly the
notion that "patch(1)" has of renames is as a removal and addition).
c. "cvs log" should be able to show the changes between two
tags/dates, even in the presence of adds/removes/renames (I'm not sure
what the status quo is on this; see also item #182).
d. Renaming directories is way too hard.
Implementations:
It is perhaps premature to try to design implementation details
without answering some of the above questions about desired behaviors,
but several general implementations get mentioned.
i. No fundamental changes (for example, a "cvs rename" command which
operated on directories could still implement the current recommended
practice for renaming directories, which is to rename each of the
files contained therein via an add and a remove). One thing to note
that the status quo gets right is proper merges, even with adds and
removals (Well, mostly right at least. There are a *LOT* of different
cases; see the testsuite for some of them).
ii. Rename database. In this scheme the files in the repository
would have some arbitrary name, and then a separate rename database
would indicate the current correspondence between the filename in the
working directory and the actual storage. As far as I know this has
never been designed in detail for CVS.
iii. A modest change in which the RCS files would contain some
information such as "renamed from X" or "renamed to Y". That is, this
would be generally similar to the log messages which are suggested
when one renames via an add and a removal, but would be
computer-parseable. I don't think anyone has tried to flesh out any
details here either.
It is interesting to note that in solution ii. version numbers in the
"new file" start where the "old file" left off, while in solutions
i. and iii., version numbers restart from 1.1 each time a file is
renamed. Except perhaps in the case where we rename a file from foo
to bar and then back to foo. I'll shut up now.
Regardless of the method we choose, we need to address how renames
affect existing CVS behaviors. For example, what happens when you
rename a file on a branch but not the trunk and then try to merge the
two? What happens when you rename a file on one branch and delete it
on another and try to merge the two?
Ideally, we'd come up with a way to parameterize the problem and
simply write up a lookup table to determine the correct behavior.
190. The meaning of the -q and -Q global options is very ad hoc;
there is no clear definition of which messages are suppressed by them
and which are not. Here is a classification of the current meanings
of -q; I don't know whether anyone has done a similar investigation of
-Q:
a. The "warm fuzzies" printed upon entering each directory (for
example, "cvs update: Updating sdir"). The need for these messages
may be decreased now that most of CVS uses ->fullname instead of
->file in messages (a project which is *still* not 100% complete,
alas). However, the issue of whether CVS can offer status as it
runs is an important one. Of course from the command line it is
hard to do this well and one ends up with options like -q. But
think about emacs, jCVS, or other environments which could flash you
the latest status line so you can see whether the system is working
or stuck.
b. Other cases where the message just offers information (rather
than an error) and might be considered unnecessarily verbose. These
have a certain point to them, although it isn't really clear whether
it should be the same option as the warm fuzzies or whether it is
worth the conceptual hair:
add.c: scheduling %s `%s' for addition (may be an issue)
modules.c: %s %s: Executing '%s' (I can see how that might be noise,
but...)
remove.c: scheduling `%s' for removal (analogous to the add.c one)
update.c: Checking out %s (hmm, that message is a bit on the noisy side...)
(but the similar message in annotate is not affected by -q).
c. Suppressing various error messages. This is almost surely
bogus.
commit.c: failed to remove tag `%s' from `%s' (Questionable.
Rationale might be that we already printed another message
elsewhere but why would it be necessary to avoid
the extra message in such an uncommon case?)
commit.c: failed to commit dead revision for `%s' (likewise)
remove.c: file `%s' still in working directory (see below about rm
-f analogy)
remove.c: nothing known about `%s' (looks dubious to me, especially in
the case where the user specified it explicitly).
remove.c: removed `%s' (seems like an obscure enough case that I fail
to see the appeal of being cryptically concise here).
remove.c: file `%s' already scheduled for removal (now it is starting
to look analogous to the infamous rm -f option).
rtag.c: cannot find tag `%s' in `%s' (more rm -f like behavior)
rtag.c: failed to remove tag `%s' from `%s' (ditto)
tag.c: failed to remove tag %s from %s (see above about whether RCS_*
has already printed an error message).
tag.c: couldn't tag added but un-commited file `%s' (more rm -f
like behavior)
tag.c: skipping removed but un-commited file `%s' (ditto)
tag.c: cannot find revision control file for `%s' (ditto, but at first
glance seems even worse, as this would seem to be a "can't happen"
condition)
191. Storing RCS files, especially binary files, takes rather more
space than it could, typically.
- The virtue of the status quo is that it is simple to implement.
Of course it is also simplest in terms of dealing with compatibility.
- Just storing the revisions as separate gzipped files is a common
technique. It also is pretty simple (no new algorithms, CVS
already has zlib around). Of course for some files (such as files
which are already compressed) the gzip step won't help, but
something which can at least sometimes avoid rewriting the entire
RCS file for each new revision would, I would think, be a big
speedup for large files.
- Josh MacDonald has written a tool called xdelta which produces
differences (that is, sufficient information to transform the old
to the new) which looks for common sequences of bytes, like RCS
currently does, but which is not based on lines. This seems to do
quite well for some kinds of files (e.g. FrameMaker documents,
text files), and not as well for others (anything which is already
compressed, executables). xdelta 1.10 also is faster than GNU diff.
- Karl Fogel has thought some about using a difference technique
analogous to fractal compression (see the comp.compression FAQ for
more on fractal compression, including at least one patent to
watch for; I don't know how analogous Karl's ideas are to the
techniques described there).
- Quite possibly want some documented interface by which a site can
plug in their choice of external difference programs (with the
ability to choose the program based on filename, magic numbers,
or some such).
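As a rough illustration of the "separate gzipped files" idea above, here is a
minimal C sketch assuming zlib (link with -lz); store_revision_gzipped() is a
made-up helper, not anything that exists in CVS:

    #include <stddef.h>
    #include <zlib.h>

    /* Hypothetical helper: store one revision's contents in its own gzipped
       file.  Returns 0 on success, -1 on error.  */
    static int
    store_revision_gzipped (const char *path, const char *data, size_t len)
    {
        gzFile out = gzopen (path, "wb");
        if (out == NULL)
            return -1;
        if (gzwrite (out, data, (unsigned) len) != (int) len)
        {
            gzclose (out);
            return -1;
        }
        return gzclose (out) == Z_OK ? 0 : -1;
    }

As noted above, this only wins for files that are not already compressed.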
192. "cvs update" using an absolute pathname does not work if the
working directory is not a CVS-controlled directory with the correct
CVSROOT. For example, the following will fail:
cd /tmp
cvs -d /repos co foo
cd /
cvs update /tmp/foo
It is possible to read the CVSROOT from the administrative files in
the directory specified by the absolute pathname argument to update.
In that case, the last command above would be equivalent to:
cd /tmp/foo
cvs update .
This can be problematic, however, if we ask CVS to update two
directories with different CVSROOTs. Currently, CVS has no way of
changing CVSROOT mid-stream. Consider the following:
cd /tmp
cvs -d /repos1 co foo
cvs -d /repos2 co bar
cd /
cvs update /tmp/foo /tmp/bar
To make that example work, we need to think hard about:
- where and when CVSROOT-related variables get set
- who caches said variables for later use
- how the remote protocol should be extended to handle sending a new
repository mid-stream
- how the client should maintain connections to a variety of servers
in a single invocation.
Because those issues are hairy, I suspect that having a change in
CVSROOT be an error would be a better move.
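For what it's worth, reading the root out of the administrative files is the
easy part; a minimal sketch assuming the usual CVS/Root layout
(root_for_directory() is a hypothetical helper, not an existing CVS function):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: read the repository root recorded in DIR/CVS/Root.
       Returns 0 and fills ROOT on success, -1 if the file cannot be read.  */
    static int
    root_for_directory (const char *dir, char *root, size_t rootlen)
    {
        char path[1024];
        FILE *fp;

        snprintf (path, sizeof path, "%s/CVS/Root", dir);
        fp = fopen (path, "r");
        if (fp == NULL)
            return -1;
        if (fgets (root, (int) rootlen, fp) == NULL)
        {
            fclose (fp);
            return -1;
        }
        fclose (fp);
        root[strcspn (root, "\n")] = '\0';   /* strip the trailing newline */
        return 0;
    }

The hard part is what to do when two arguments name working directories with
different roots, hence the suggestion above to just make that an error.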
193. The client relies on timestamps to figure out whether a file is
(maybe) modified. If something goes awry, then it ends up sending
entire files to the server to be checked, and this can be quite slow
especially over a slow network. A couple of things that can happen:
(a) other programs, like make, use timestamps, so one ends up needing
to do "touch foo" and otherwise messing with timestamps, (b) changing
the timezone offset (e.g. summer vs. winter or moving a machine)
should work on unix, but there may be problems with non-unix.
Possible solutions:
a. Store a checksum for each file in CVS/Entries or some such
place. What to do about hash collisions is interesting: using a
checksum, like MD5, large enough to "never" have collisions
probably works in practice (of course, if there is a collision then
all hell breaks loose because that code path was not tested, but
given the tiny, tiny probability of that I suppose this is only an
aesthetic issue).
b. I'm not thinking of others, except storing the whole file in
CVS/Base, and I'm sure using twice the disk space would be
unpopular.
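A minimal sketch of option (a), using OpenSSL's MD5 routines for brevity
(link with -lcrypto; CVS carries its own md5.c, so this is purely
illustrative, and file_seems_modified() is a made-up helper):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>

    /* Hypothetical helper: compute the MD5 of FILE as a hex string and
       compare it with the checksum recorded (say, in CVS/Entries).
       Returns 1 if the file appears modified, 0 if not, -1 on error.  */
    static int
    file_seems_modified (const char *file, const char *recorded_md5)
    {
        unsigned char digest[MD5_DIGEST_LENGTH];
        char hex[2 * MD5_DIGEST_LENGTH + 1];
        unsigned char buf[8192];
        size_t n;
        MD5_CTX ctx;
        int i;
        FILE *fp = fopen (file, "rb");

        if (fp == NULL)
            return -1;
        MD5_Init (&ctx);
        while ((n = fread (buf, 1, sizeof buf, fp)) > 0)
            MD5_Update (&ctx, buf, n);
        fclose (fp);
        MD5_Final (digest, &ctx);
        for (i = 0; i < MD5_DIGEST_LENGTH; i++)
            sprintf (hex + 2 * i, "%02x", digest[i]);
        return strcmp (hex, recorded_md5) != 0;
    }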
194. CVS does not separate the "metadata" from the actual revision
history; it stores them both in the RCS files. Metadata means tags
and header information such as the number of the head revision.
Storing the metadata separately could speed up "cvs tag" enormously,
which is a big problem for large repositories. It could also probably
make CVS's locking much less in the way (see comment in do_recursion
about "two-pass design").
195. Many people using CVS over a slow link are interested in whether
the remote protocol could be any more efficient with network
bandwidth. This item is about one aspect of that--how the server
sends a new version of a file the client has a different version of,
or vice versa.
a. Cases in which the status quo already sends a diff. For most text
files, this is probably already close to optimal. For binary files,
and anomalous (?) text files (e.g. those in which it would help to do
moves, as well as adds and deletes), it might be worth looking into other
difference algorithms (see item #191).
b. Cases in which the status quo does not send a diff (e.g. "cvs
commit").
b1. With some frequency, people suggest rsync or a similar algorithm
(see ftp://samba.anu.edu.au/pub/rsync/). This could speed things up,
and in some ways involves the most minimal changes to the default CVS
paradigm. There are some downsides though: (1) there is an extra
network turnaround, (2) the algorithm needs to transmit some data to
discover what difference type programs can discover locally (although
this is only about 1% of the size of the files).
b2. If one is willing to require that users use "cvs edit" before
editing a file on the client side (in some cases, a development
environment like emacs can make this fairly easy), then the Modified
request in the protocol could be extended to allow the client to just
send differences instead of entire files. In the degenerate case
(e.g. "cvs diff" without arguments) the required network traffic is
reduced to zero, and the client need not even contact the server.
197. Analyze the difference between CVS_UNLINK & unlink_file. As far as I
can tell, unlink_file aborts in noexec mode and CVS_UNLINK does not. I'm not
sure it would be possible to remove even the use of temp files in noexec mode,
but most unlinks should probably be using unlink_file and not CVS_UNLINK.
198. Remove references to deprecated cvs_temp_name function.
199. Add test for login & logout functionality, including support for
backwards compatibility with old CVSROOTs.
200. Make a 'cvs add' without write access a non-fatal error so that
the user's Entries file is updated and future 'cvs diffs' will work
properly. This should ease patch submission.
201. cvs_temp_file should be creating temporary files in a privately owned
subdirectory of temp due to security issues on some systems.
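A minimal sketch of what that could look like on a POSIX system, using
mkdtemp() and mkstemp(); cvs_private_temp_file() is a made-up name, not the
existing cvs_temp_file interface:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: create a temp file inside a freshly made 0700
       directory under $TMPDIR (or /tmp), so other users cannot race on the
       name.  Returns an open fd and fills NAME, or -1 on error.  */
    static int
    cvs_private_temp_file (char *name, size_t namelen)
    {
        const char *tmpdir = getenv ("TMPDIR");
        char dirtmpl[1024];

        if (tmpdir == NULL)
            tmpdir = "/tmp";
        snprintf (dirtmpl, sizeof dirtmpl, "%s/cvs-XXXXXX", tmpdir);
        if (mkdtemp (dirtmpl) == NULL)       /* private 0700 directory */
            return -1;
        snprintf (name, namelen, "%s/cvsXXXXXX", dirtmpl);
        return mkstemp (name);               /* file inside it, mode 0600 */
    }

The private directory means other users cannot pre-create or symlink the file
name, which is the security issue the item refers to.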
202. Enable rdiff to accept most diff options. Make rdiff output look
like diff's. Make CVS diff garbage go to stderr and only standard diff
output go to stdout.
203. Add val-tags additions to the tagging code. Don't remove the
update additions since val-tags could still be used as a cache when the
repository was imported from elsewhere (the tags weren't applied with a
version which wrote val-tags).
204. Add test case for compression. A buf_shutdown error using compression
wasn't caught by the test suite.
205. There are lots of cases where trailing slashes on directory names
and other non-canonical paths confuse CVS. Most of the cases that do
work are handled on an ad-hoc basis. We need to come up with a coherent
strategy to address path canonicalization and apply it consistently.
208. Merge enhancements to the diff package back into the original GNU source.
209. Go through this file and try to:
a. Verify that items are still valid.
b. Create test cases for valid items when they don't exist.
c. Remove fixed and no longer applicable items.
210. Explain to sanity.sh how to deal with paths with spaces and other odd
characters in them.
211. Make sanity.sh run under the Win32 bash (cygwin) and maybe other Windex
environments (e.g. DGSS or whatever the MSVC portability environment is called).
212. Autotestify (see autoconf source) sanity.sh.
213. Examine desirability of updating the regex library (regex.{c,h}) to the
more recent versions that come with glibc and emacs. It might be worth waiting
for the emacs folks to get their act together and merge their changes into the
glibc version.
214. Make options.h options configure script options instead.
215. Add reditors and rwatchers commands.
- Is an r* command abstraction layer possible here for the commands
where this makes sense? Would it be simpler? It seems to me the
major operational differences lie in the file list construction.
218. Fix "checkout -d ." in client/server mode.
221. Handle spaces in file/directory names. (Most, if not all, of the
internal infrastructure already handles them correctly, but most of the
administrative file interfaces do not.)
223. Internationalization support. This probably means using some kind
of universal character set (ISO 10646?) internally and converting on
input and output, which opens the locale can of worms.
224. Better timezone handling. Many people would like to see times
output in local time rather than UTC, but that's tricky since the
conversion from internal form is currently done by the server who has no
idea what the user's timezone even is, let alone the rules for
converting to it.
- On the contrary, I think the MT server response should be easily adaptable
for this purpose. It is defined in cvsclient.texi as processed by the client
if it knows how and printed to stdout otherwise. A "time" tag or the like
could be the usual CVS server UTC time string. An old client could just print
the time in UTC and a new client would know that it could convert the time to a
local time string according to the localization settings before printing it.
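The client-side conversion itself is straightforward; a rough sketch, assuming
a fixed "YYYY-MM-DD HH:MM:SS" UTC string from the server and the common but
non-standard timegm() (strptime() may also need feature-test macros on some
systems); none of this is existing CVS code:

    #include <string.h>
    #include <time.h>

    /* Hypothetical helper: convert a UTC time string from the server into
       the user's local time zone for display.  Returns 0 on success, -1 on
       parse failure.  */
    static int
    utc_to_local (const char *utc, char *out, size_t outlen)
    {
        struct tm tm;
        time_t t;

        memset (&tm, 0, sizeof tm);
        if (strptime (utc, "%Y-%m-%d %H:%M:%S", &tm) == NULL)
            return -1;
        t = timegm (&tm);                       /* interpret as UTC */
        strftime (out, outlen, "%Y-%m-%d %H:%M:%S %Z", localtime (&t));
        return 0;
    }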
225. Add support for --allow-root to server command.
227. 'cvs release' should use the CVS/Root in the directory being released
when such is specified rather than $CVSROOT. In my work directory with no CVS
dir, a release of subdirectories causes the released projects to be tested
against my $CVSROOT environment variable, which often isn't correct but which
can complete without generating error messages if the project also exists in
the other CVSROOT. This happens a lot with my copies of the ccvs project.
228. Consider adding -d to commit ala ci.
229. Improve the locking code to use a random delay with exponential
backoff ala Ethernet and separate the notification interval from the
wait interval.
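A minimal sketch of the kind of delay loop this has in mind; the constants and
the lock_try()/lock_wait_message() callbacks are made up for illustration:

    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical retry loop: random jitter with exponential backoff, and
       the "waiting for someone's lock" message printed on its own, much
       longer interval than the polling interval.  */
    static int
    obtain_lock_with_backoff (int (*lock_try) (void),
                              void (*lock_wait_message) (void))
    {
        unsigned int delay = 1;          /* seconds; doubles up to a cap */
        unsigned int waited = 0;

        while (!lock_try ())
        {
            unsigned int jitter = (unsigned int) (rand () % (delay + 1));

            if (waited >= 30)            /* notify far less often than we poll */
            {
                lock_wait_message ();
                waited = 0;
            }
            sleep (delay + jitter);
            waited += delay + jitter;
            if (delay < 32)
                delay *= 2;
        }
        return 0;
    }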
230. Support for options like compression as part of the CVSROOT might be
nice. This should be fairly easy to implement now using the method options.
234. Noop commands should be logged in the history file. Information can
still be obtained with noop commands, for instance via `cvs -n up -p', and
paranoid admins might appreciate this. Similarly, perhaps diff operations
should be logged.

View File

@ -1,361 +0,0 @@
/* This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details. */
AC_DEFUN([ACX_WITH_GSSAPI],[
#
# Use --with-gssapi[=DIR] to enable GSSAPI support.
#
# defaults to enabled with DIR in default list below
#
# Search for /SUNHEA/ and read the comments about this default below.
#
AC_ARG_WITH(
[gssapi],
AC_HELP_STRING(
[--with-gssapi],
[GSSAPI directory (default autoselects)]), ,
[with_gssapi=yes])dnl
dnl
dnl FIXME - cache withval and obliterate later cache values when options change
dnl
#
# Try to locate a GSSAPI installation if no location was specified, assuming
# GSSAPI was enabled (the default).
#
if test -n "$acx_gssapi_cv_gssapi"; then
# Granted, this is a slightly ugly way to print this info, but the
# AC_CHECK_HEADER used in the search for a GSSAPI installation makes using
# AC_CACHE_CHECK worse
AC_MSG_CHECKING([for GSSAPI])
else :; fi
AC_CACHE_VAL([acx_gssapi_cv_gssapi], [
if test x$with_gssapi = xyes; then
# --with but no location specified
# assume a gssapi.h or gssapi/gssapi.h locates our install.
#
# This isn't always strictly true. For instance Solaris 7's SUNHEA (header)
# package installs gssapi.h whether or not the necessary libraries are
# installed. I'm still not sure whether to consider this a bug. The long
way around is to not consider GSSAPI installed unless gss_import_name is
# found, but that brings up a lot of other hassles, like continuing to let
# gcc & ld generate the error messages when the user uses --with-gssapi=dir
# as a debugging aid. The short way around is to disable GSSAPI by default,
# but I think Sun users have been faced with this for awhile and I haven't
# heard many complaints.
acx_gssapi_save_CPPFLAGS=$CPPFLAGS
for acx_gssapi_cv_gssapi in yes /usr/kerberos /usr/cygnus/kerbnet no; do
if test x$acx_gssapi_cv_gssapi = xno; then
break
fi
if test x$acx_gssapi_cv_gssapi = xyes; then
AC_MSG_CHECKING([for GSSAPI])
AC_MSG_RESULT([])
else
CPPFLAGS="$acx_gssapi_save_CPPFLAGS -I$acx_gssapi_cv_gssapi/include"
AC_MSG_CHECKING([for GSSAPI in $acx_gssapi_cv_gssapi])
AC_MSG_RESULT([])
fi
unset ac_cv_header_gssapi_h
unset ac_cv_header_gssapi_gssapi_h
unset ac_cv_header_krb5_h
AC_CHECK_HEADERS([gssapi.h gssapi/gssapi.h krb5.h])
if (test "$ac_cv_header_gssapi_h" = yes ||
test "$ac_cv_header_gssapi_gssapi_h" = yes) &&
test "$ac_cv_header_krb5_h" = yes; then
break
fi
done
CPPFLAGS=$acx_gssapi_save_CPPFLAGS
else
acx_gssapi_cv_gssapi=$with_gssapi
fi
AC_MSG_CHECKING([for GSSAPI])
])dnl
AC_MSG_RESULT([$acx_gssapi_cv_gssapi])
#
# Set up GSSAPI includes for later use. We don't bother to check for
# $acx_gssapi_cv_gssapi=no here since that will be caught later.
#
if test x$acx_gssapi_cv_gssapi = xyes; then
# no special includes necessary
GSSAPI_INCLUDES=""
else
# GSSAPI at $acx_gssapi_cv_gssapi (could be 'no')
GSSAPI_INCLUDES=" -I$acx_gssapi_cv_gssapi/include"
fi
#
# Get the rest of the information CVS needs to compile with GSSAPI support
#
if test x$acx_gssapi_cv_gssapi != xno; then
# define HAVE_GSSAPI and set up the includes
AC_DEFINE([HAVE_GSSAPI], ,
[Define if you have GSSAPI with Kerberos version 5 available.])
includeopt=$includeopt$GSSAPI_INCLUDES
# locate any other headers
acx_gssapi_save_CPPFLAGS=$CPPFLAGS
CPPFLAGS=$CPPFLAGS$GSSAPI_INCLUDES
dnl We don't use HAVE_KRB5_H anywhere, but including it here might make it
dnl easier to spot errors by reading configure output
AC_CHECK_HEADERS([gssapi.h gssapi/gssapi.h gssapi/gssapi_generic.h krb5.h])
# And look through them for GSS_C_NT_HOSTBASED_SERVICE or its alternatives
AC_CACHE_CHECK(
[for GSS_C_NT_HOSTBASED_SERVICE],
[acx_gssapi_cv_gss_c_nt_hostbased_service],
[
acx_gssapi_cv_gss_c_nt_hostbased_service=no
if test "$ac_cv_header_gssapi_h" = "yes"; then
AC_EGREP_HEADER(
[GSS_C_NT_HOSTBASED_SERVICE], [gssapi.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=yes],
[
AC_EGREP_HEADER(
[gss_nt_service_name], [gssapi.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=gss_nt_service_name])
])
fi
if test $acx_gssapi_cv_gss_c_nt_hostbased_service = no &&
test "$ac_cv_header_gssapi_gssapi_h" = "yes"; then
AC_EGREP_HEADER(
[GSS_C_NT_HOSTBASED_SERVICE], [gssapi/gssapi.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=yes],
[
AC_EGREP_HEADER([gss_nt_service_name], [gssapi/gssapi.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=gss_nt_service_name])
])
else :; fi
if test $acx_gssapi_cv_gss_c_nt_hostbased_service = no &&
test "$ac_cv_header_gssapi_gssapi_generic_h" = "yes"; then
AC_EGREP_HEADER(
[GSS_C_NT_HOSTBASED_SERVICE], [gssapi/gssapi_generic.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=yes],
[
AC_EGREP_HEADER(
[gss_nt_service_name], [gssapi/gssapi_generic.h],
[acx_gssapi_cv_gss_c_nt_hostbased_service=gss_nt_service_name])
])
else :; fi
])
if test $acx_gssapi_cv_gss_c_nt_hostbased_service != yes &&
test $acx_gssapi_cv_gss_c_nt_hostbased_service != no; then
# don't define for yes since that means it already means something and
# don't define for no since we'd rather the compiler catch the error
# It's debatable whether we'd prefer that the compiler catch the error
# - it seems our estranged developer is more likely to be familiar with
# the intricacies of the compiler than with those of autoconf, but by
# the same token, maybe we'd rather alert them to the fact that most
# of the support they need to fix the problem is installed if they can
# simply locate the appropriate symbol.
AC_DEFINE_UNQUOTED(
[GSS_C_NT_HOSTBASED_SERVICE],
[$acx_gssapi_cv_gss_c_nt_hostbased_service],
[Define to an alternative value if GSS_C_NT_HOSTBASED_SERVICE isn't defined
in the gssapi.h header file. MIT Kerberos 1.2.1 requires this. Only relevant
when using GSSAPI.])
else :; fi
CPPFLAGS=$acx_gssapi_save_CPPFLAGS
# Expect the libs to be installed parallel to the headers
#
# We could try once with and once without, but I'm not sure it's worth the
# trouble.
if test x$acx_gssapi_cv_gssapi != xyes; then
if test -z "$LIBS"; then
LIBS="-L$acx_gssapi_cv_gssapi/lib"
else
LIBS="-L$acx_gssapi_cv_gssapi/lib $LIBS"
fi
else :; fi
dnl What happens if we want to enable, say, krb5 and some other GSSAPI
dnl authentication method at the same time?
#
# Some of the order below is particular due to library dependencies
#
#
# des Heimdal K 0.3d, but Heimdal seems to be set up such
# that it could have been installed from elsewhere.
#
AC_SEARCH_LIBS([des_set_odd_parity], [des])
#
# com_err Heimdal K 0.3d
#
# com_err MIT K5 v1.2.2-beta1
#
AC_SEARCH_LIBS([com_err], [com_err])
#
# asn1 Heimdal K 0.3d -lcom_err
#
AC_SEARCH_LIBS([initialize_asn1_error_table_r], [asn1])
#
# resolv required, but not installed by Heimdal K 0.3d
#
# resolv MIT K5 1.2.2-beta1
# Linux 2.2.17
#
AC_SEARCH_LIBS([__dn_expand], [resolv])
#
# crypto Needed by gssapi under FreeBSD 5.4
#
AC_SEARCH_LIBS([RC4], [crypto])
#
# crypt Needed by roken under FreeBSD 4.6.
#
AC_SEARCH_LIBS([crypt], [crypt])
#
# roken Heimdal K 0.3d -lresolv
# roken FreeBSD 4.6 -lcrypt
#
AC_SEARCH_LIBS([roken_gethostbyaddr], [roken])
#
# k5crypto MIT K5 v1.2.2-beta1
#
AC_SEARCH_LIBS([valid_enctype], [k5crypto])
#
# gen ? ? ? Needed on Irix 5.3 with some
# Irix 5.3 version of Kerberos. I'm not
# sure which since Irix didn't
# get any testing this time
# around. Original comment:
#
# This is necessary on Irix 5.3, in order to link against libkrb5 --
# there, an_to_ln.o refers to things defined only in -lgen.
#
AC_SEARCH_LIBS([compile], [gen])
#
# krb5 ? ? ? -lgen -l???
# Irix 5.3
#
# krb5 MIT K5 v1.1.1
#
# krb5 MIT K5 v1.2.2-beta1 -lcrypto -lcom_err
# Linux 2.2.17
#
# krb5 MIT K5 v1.2.2-beta1 -lcrypto -lcom_err -lresolv
#
# krb5 Heimdal K 0.3d -lasn1 -lroken -ldes
#
AC_SEARCH_LIBS([krb5_free_context], [krb5])
#
# gss This may be the only lib needed under HP-UX, so find it
# first.
#
# gssapi_krb5 Only lib needed with MIT K5 v1.2.1, so find it first in
# order to prefer MIT Kerberos. If both MIT & Heimdal
# Kerberos are installed and in the path, this will leave
# some of the libraries above in LIBS unnecessarily, but
# no one would ever do that, right?
#
# gss HP-UX ???
#
# gssapi_krb5 MIT K5 v1.2.2-beta1 -lkrb5
#
# gssapi Heimdal K 0.3d -lkrb5
#
AC_SEARCH_LIBS([gss_import_name], [gss gssapi_krb5 gssapi])
fi
])dnl
# size_max.m4 serial 2
dnl Copyright (C) 2003 Free Software Foundation, Inc.
dnl This file is free software, distributed under the terms of the GNU
dnl General Public License. As a special exception to the GNU General
dnl Public License, this file may be distributed as part of a program
dnl that contains a configuration script generated by Autoconf, under
dnl the same distribution terms as the rest of that program.
dnl From Bruno Haible.
AC_DEFUN([gl_SIZE_MAX],
[
AC_CHECK_HEADERS(stdint.h)
dnl First test whether the system already has SIZE_MAX.
AC_MSG_CHECKING([for SIZE_MAX])
result=
AC_EGREP_CPP([Found it], [
#include <limits.h>
#if HAVE_STDINT_H
#include <stdint.h>
#endif
#ifdef SIZE_MAX
Found it
#endif
], result=yes)
if test -z "$result"; then
dnl Define it ourselves. Here we assume that the type 'size_t' is not wider
dnl than the type 'unsigned long'.
dnl The _AC_COMPUTE_INT macro works up to LONG_MAX, since it uses 'expr',
dnl which is guaranteed to work from LONG_MIN to LONG_MAX.
_AC_COMPUTE_INT([~(size_t)0 / 10], res_hi,
[#include <stddef.h>], result=?)
_AC_COMPUTE_INT([~(size_t)0 % 10], res_lo,
[#include <stddef.h>], result=?)
_AC_COMPUTE_INT([sizeof (size_t) <= sizeof (unsigned int)], fits_in_uint,
[#include <stddef.h>], result=?)
if test "$fits_in_uint" = 1; then
dnl Even though SIZE_MAX fits in an unsigned int, it must be of type
dnl 'unsigned long' if the type 'size_t' is the same as 'unsigned long'.
AC_TRY_COMPILE([#include <stddef.h>
extern size_t foo;
extern unsigned long foo;
], [], fits_in_uint=0)
fi
if test -z "$result"; then
if test "$fits_in_uint" = 1; then
result="$res_hi$res_lo"U
else
result="$res_hi$res_lo"UL
fi
else
dnl Shouldn't happen, but who knows...
result='~(size_t)0'
fi
fi
AC_MSG_RESULT([$result])
if test "$result" != yes; then
AC_DEFINE_UNQUOTED([SIZE_MAX], [$result],
[Define as the maximum value of type 'size_t', if the system doesn't define it.])
fi
])
# xsize.m4 serial 3
dnl Copyright (C) 2003-2004 Free Software Foundation, Inc.
dnl This file is free software, distributed under the terms of the GNU
dnl General Public License. As a special exception to the GNU General
dnl Public License, this file may be distributed as part of a program
dnl that contains a configuration script generated by Autoconf, under
dnl the same distribution terms as the rest of that program.
AC_DEFUN([gl_XSIZE],
[
dnl Prerequisites of lib/xsize.h.
AC_REQUIRE([gl_SIZE_MAX])
AC_REQUIRE([AC_C_INLINE])
AC_CHECK_HEADERS(stdint.h)
])

938
contrib/cvs/aclocal.m4 vendored
View File

@ -1,938 +0,0 @@
# generated automatically by aclocal 1.10 -*- Autoconf -*-
# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
# 2005, 2006 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
m4_if(m4_PACKAGE_VERSION, [2.61],,
[m4_fatal([this file was generated for autoconf 2.61.
You have another version of autoconf. If you want to use that,
you should regenerate the build system entirely.], [63])])
# Copyright (C) 2002, 2003, 2005, 2006 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_AUTOMAKE_VERSION(VERSION)
# ----------------------------
# Automake X.Y traces this macro to ensure aclocal.m4 has been
# generated from the m4 files accompanying Automake X.Y.
# (This private macro should not be called outside this file.)
AC_DEFUN([AM_AUTOMAKE_VERSION],
[am__api_version='1.10'
dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
dnl require some minimum version. Point them to the right macro.
m4_if([$1], [1.10], [],
[AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
])
# _AM_AUTOCONF_VERSION(VERSION)
# -----------------------------
# aclocal traces this macro to find the Autoconf version.
# This is a private macro too. Using m4_define simplifies
# the logic in aclocal, which can simply ignore this definition.
m4_define([_AM_AUTOCONF_VERSION], [])
# AM_SET_CURRENT_AUTOMAKE_VERSION
# -------------------------------
# Call AM_AUTOMAKE_VERSION and _AM_AUTOCONF_VERSION so they can be traced.
# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
[AM_AUTOMAKE_VERSION([1.10])dnl
_AM_AUTOCONF_VERSION(m4_PACKAGE_VERSION)])
# AM_AUX_DIR_EXPAND -*- Autoconf -*-
# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
# $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to
# `$srcdir', `$srcdir/..', or `$srcdir/../..'.
#
# Of course, Automake must honor this variable whenever it calls a
# tool from the auxiliary directory. The problem is that $srcdir (and
# therefore $ac_aux_dir as well) can be either absolute or relative,
# depending on how configure is run. This is pretty annoying, since
# it makes $ac_aux_dir quite unusable in subdirectories: in the top
# source directory, any form will work fine, but in subdirectories a
# relative path needs to be adjusted first.
#
# $ac_aux_dir/missing
# fails when called from a subdirectory if $ac_aux_dir is relative
# $top_srcdir/$ac_aux_dir/missing
# fails if $ac_aux_dir is absolute,
# fails when called from a subdirectory in a VPATH build with
# a relative $ac_aux_dir
#
# The reason of the latter failure is that $top_srcdir and $ac_aux_dir
# are both prefixed by $srcdir. In an in-source build this is usually
# harmless because $srcdir is `.', but things will break when you
# start a VPATH build or use an absolute $srcdir.
#
# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
# iff we strip the leading $srcdir from $ac_aux_dir. That would be:
# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
# MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH. The drawback is that using absolute paths prevents a
# configured tree from being moved without reconfiguration.
AC_DEFUN([AM_AUX_DIR_EXPAND],
[dnl Rely on autoconf to set up CDPATH properly.
AC_PREREQ([2.50])dnl
# expand $ac_aux_dir to an absolute path
am_aux_dir=`cd $ac_aux_dir && pwd`
])
# AM_CONDITIONAL -*- Autoconf -*-
# Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005, 2006
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 8
# AM_CONDITIONAL(NAME, SHELL-CONDITION)
# -------------------------------------
# Define a conditional.
AC_DEFUN([AM_CONDITIONAL],
[AC_PREREQ(2.52)dnl
ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])],
[$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
AC_SUBST([$1_TRUE])dnl
AC_SUBST([$1_FALSE])dnl
_AM_SUBST_NOTMAKE([$1_TRUE])dnl
_AM_SUBST_NOTMAKE([$1_FALSE])dnl
if $2; then
$1_TRUE=
$1_FALSE='#'
else
$1_TRUE='#'
$1_FALSE=
fi
AC_CONFIG_COMMANDS_PRE(
[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
AC_MSG_ERROR([[conditional "$1" was never defined.
Usually this means the macro was only invoked conditionally.]])
fi])])
# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 9
# There are a few dirty hacks below to avoid letting `AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all its
# C support machinery. Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...
# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "GCJ", or "OBJC".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl
ifelse([$1], CC, [depcc="$CC" am_compiler_list=],
[$1], CXX, [depcc="$CXX" am_compiler_list=],
[$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
[$1], UPC, [depcc="$UPC" am_compiler_list=],
[$1], GCJ, [depcc="$GCJ" am_compiler_list='gcc3 gcc'],
[depcc="$$1" am_compiler_list=])
AC_CACHE_CHECK([dependency style of $depcc],
[am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
# We make a subdir and do the tests there. Otherwise we can end up
# making bogus files that we don't know about and never remove. For
# instance it was reported that on HP-UX the gcc test will end up
# making a dummy file named `D' -- because `-MD' means `put the output
# in D'.
mkdir conftest.dir
# Copy depcomp to subdir because otherwise we won't find it if we're
# using a relative directory.
cp "$am_depcomp" conftest.dir
cd conftest.dir
# We will build objects and dependencies in a subdirectory because
# it helps to detect inapplicable dependency modes. For instance
# both Tru64's cc and ICC support -MD to output dependencies as a
# side effect of compilation, but ICC will put the dependencies in
# the current directory while Tru64 will put them in the object
# directory.
mkdir sub
am_cv_$1_dependencies_compiler_type=none
if test "$am_compiler_list" = ""; then
am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp`
fi
for depmode in $am_compiler_list; do
# Setup a source with many dependencies, because some compilers
# like to wrap large dependency lists on column 80 (with \), and
# we should not choose a depcomp mode which is confused by this.
#
# We need to recreate these files for each test, as the compiler may
# overwrite some of them when testing with obscure command lines.
# This happens at least with the AIX C compiler.
: > sub/conftest.c
for i in 1 2 3 4 5 6; do
echo '#include "conftst'$i'.h"' >> sub/conftest.c
# Using `: > sub/conftst$i.h' creates only sub/conftst1.h with
# Solaris 8's {/usr,}/bin/sh.
touch sub/conftst$i.h
done
echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
case $depmode in
nosideeffect)
# after this tag, mechanisms are not by side-effect, so they'll
# only be used when explicitly requested
if test "x$enable_dependency_tracking" = xyes; then
continue
else
break
fi
;;
none) break ;;
esac
# We check with `-c' and `-o' for the sake of the "dashmstdout"
# mode. It turns out that the SunPro C++ compiler does not properly
# handle `-M -o', and we need to detect this.
if depmode=$depmode \
source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \
depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
$SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \
>/dev/null 2>conftest.err &&
grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 &&
${MAKE-make} -s -f confmf > /dev/null 2>&1; then
# icc doesn't choke on unknown options, it will just issue warnings
# or remarks (even with -Werror). So we grep stderr for any message
# that says an option was ignored or not supported.
# When given -MP, icc 7.0 and 7.1 complain thusly:
# icc: Command line warning: ignoring option '-M'; no argument required
# The diagnosis changed in icc 8.0:
# icc: Command line remark: option '-MP' not supported
if (grep 'ignoring option' conftest.err ||
grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
am_cv_$1_dependencies_compiler_type=$depmode
break
fi
fi
done
cd ..
rm -rf conftest.dir
else
am_cv_$1_dependencies_compiler_type=none
fi
])
AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type])
AM_CONDITIONAL([am__fastdep$1], [
test "x$enable_dependency_tracking" != xno \
&& test "$am_cv_$1_dependencies_compiler_type" = gcc3])
])
# AM_SET_DEPDIR
# -------------
# Choose a directory name for dependency files.
# This macro is AC_REQUIREd in _AM_DEPENDENCIES
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])
# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE(dependency-tracking,
[ --disable-dependency-tracking speeds up one-time build
--enable-dependency-tracking do not reject slow dependency extractors])
if test "x$enable_dependency_tracking" != xno; then
am_depcomp="$ac_aux_dir/depcomp"
AMDEPBACKSLASH='\'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])dnl
_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
])
# Generate code to set up dependency tracking. -*- Autoconf -*-
# Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
#serial 3
# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[for mf in $CONFIG_FILES; do
# Strip MF so we end up with the name of the file.
mf=`echo "$mf" | sed -e 's/:.*$//'`
# Check whether this is an Automake generated Makefile or not.
# We used to match only the files named `Makefile.in', but
# some people rename them; so instead we look at the file content.
# Grep'ing the first line is not enough: some people post-process
# each Makefile.in and add a new line on top of each file to say so.
# Grep'ing the whole file is not good either: AIX grep has a line
# limit of 2048, but all sed's we know understand at least 4000.
if sed 10q "$mf" | grep '^#.*generated by automake' > /dev/null 2>&1; then
dirpart=`AS_DIRNAME("$mf")`
else
continue
fi
# Extract the definition of DEPDIR, am__include, and am__quote
# from the Makefile without running `make'.
DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
test -z "$DEPDIR" && continue
am__include=`sed -n 's/^am__include = //p' < "$mf"`
test -z "am__include" && continue
am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
# When using ansi2knr, U may be empty or an underscore; expand it
U=`sed -n 's/^U = //p' < "$mf"`
# Find all dependency output files, they are included files with
# $(DEPDIR) in their names. We invoke sed twice because it is the
# simplest approach to changing $(DEPDIR) to its actual value in the
# expansion.
for file in `sed -n "
s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do
# Make sure the directory exists.
test -f "$dirpart/$file" && continue
fdir=`AS_DIRNAME(["$file"])`
AS_MKDIR_P([$dirpart/$fdir])
# echo "creating $dirpart/$file"
echo '# dummy' > "$dirpart/$file"
done
done
])# _AM_OUTPUT_DEPENDENCY_COMMANDS
# AM_OUTPUT_DEPENDENCY_COMMANDS
# -----------------------------
# This macro should only be invoked once -- use via AC_REQUIRE.
#
# This code is only required when automatic dependency tracking
# is enabled. FIXME. This creates each `.P' file that we will
# need in order to bootstrap the dependency handling code.
AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
[AC_CONFIG_COMMANDS([depfiles],
[test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
[AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"])
])
# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 8
# AM_CONFIG_HEADER is obsolete. It has been replaced by AC_CONFIG_HEADERS.
AU_DEFUN([AM_CONFIG_HEADER], [AC_CONFIG_HEADERS($@)])
# Do all the work for Automake. -*- Autoconf -*-
# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
# 2005, 2006 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 12
# This macro actually does too much. Some checks are only needed if
# your package does certain things. But this isn't really a big deal.
# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
# AM_INIT_AUTOMAKE([OPTIONS])
# -----------------------------------------------
# The call with PACKAGE and VERSION arguments is the old style
# call (pre autoconf-2.50), which is being phased out. PACKAGE
# and VERSION should now be passed to AC_INIT and removed from
# the call to AM_INIT_AUTOMAKE.
# We support both call styles for the transition. After
# the next Automake release, Autoconf can make the AC_INIT
# arguments mandatory, and then we can depend on a new Autoconf
# release and drop the old call support.
AC_DEFUN([AM_INIT_AUTOMAKE],
[AC_PREREQ([2.60])dnl
dnl Autoconf wants to disallow AM_ names. We explicitly allow
dnl the ones we care about.
m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
AC_REQUIRE([AC_PROG_INSTALL])dnl
if test "`cd $srcdir && pwd`" != "`pwd`"; then
# Use -I$(srcdir) only when $(srcdir) != ., so that make's output
# is not polluted with repeated "-I."
AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
# test to see if srcdir already configured
if test -f $srcdir/config.status; then
AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
fi
fi
# test whether we have cygpath
if test -z "$CYGPATH_W"; then
if (cygpath --version) >/dev/null 2>/dev/null; then
CYGPATH_W='cygpath -w'
else
CYGPATH_W=echo
fi
fi
AC_SUBST([CYGPATH_W])
# Define the identity of the package.
dnl Distinguish between old-style and new-style calls.
m4_ifval([$2],
[m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
AC_SUBST([PACKAGE], [$1])dnl
AC_SUBST([VERSION], [$2])],
[_AM_SET_OPTIONS([$1])dnl
dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,,
[m4_fatal([AC_INIT should be called with package and version arguments])])dnl
AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl
_AM_IF_OPTION([no-define],,
[AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package])
AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl
# Some tools Automake needs.
AC_REQUIRE([AM_SANITY_CHECK])dnl
AC_REQUIRE([AC_ARG_PROGRAM])dnl
AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version})
AM_MISSING_PROG(AUTOCONF, autoconf)
AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version})
AM_MISSING_PROG(AUTOHEADER, autoheader)
AM_MISSING_PROG(MAKEINFO, makeinfo)
AM_PROG_INSTALL_SH
AM_PROG_INSTALL_STRIP
AC_REQUIRE([AM_PROG_MKDIR_P])dnl
# We need awk for the "check" target. The system "awk" is bad on
# some platforms.
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([AC_PROG_MAKE_SET])dnl
AC_REQUIRE([AM_SET_LEADING_DOT])dnl
_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
[_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
[_AM_PROG_TAR([v7])])])
_AM_IF_OPTION([no-dependencies],,
[AC_PROVIDE_IFELSE([AC_PROG_CC],
[_AM_DEPENDENCIES(CC)],
[define([AC_PROG_CC],
defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl
AC_PROVIDE_IFELSE([AC_PROG_CXX],
[_AM_DEPENDENCIES(CXX)],
[define([AC_PROG_CXX],
defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJC],
[_AM_DEPENDENCIES(OBJC)],
[define([AC_PROG_OBJC],
defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl
])
])
# When config.status generates a header, we must update the stamp-h file.
# This file resides in the same directory as the config header
# that is generated. The stamp files are numbered to have different names.
# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
# loop where config.status creates the headers, so we can generate
# our stamp files there.
AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
[# Compute $1's index in $config_headers.
_am_stamp_count=1
for _am_header in $config_headers :; do
case $_am_header in
$1 | $1:* )
break ;;
* )
_am_stamp_count=`expr $_am_stamp_count + 1` ;;
esac
done
echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count])
# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_SH
# ------------------
# Define $install_sh.
AC_DEFUN([AM_PROG_INSTALL_SH],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
install_sh=${install_sh-"\$(SHELL) $am_aux_dir/install-sh"}
AC_SUBST(install_sh)])
# Copyright (C) 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 2
# Check whether the underlying file-system supports filenames
# with a leading dot. For instance MS-DOS doesn't.
AC_DEFUN([AM_SET_LEADING_DOT],
[rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
am__leading_dot=.
else
am__leading_dot=_
fi
rmdir .tst 2>/dev/null
AC_SUBST([am__leading_dot])])
# Add --enable-maintainer-mode option to configure. -*- Autoconf -*-
# From Jim Meyering
# Copyright (C) 1996, 1998, 2000, 2001, 2002, 2003, 2004, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 4
AC_DEFUN([AM_MAINTAINER_MODE],
[AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles])
dnl maintainer-mode is disabled by default
AC_ARG_ENABLE(maintainer-mode,
[ --enable-maintainer-mode enable make rules and dependencies not useful
(and sometimes confusing) to the casual installer],
USE_MAINTAINER_MODE=$enableval,
USE_MAINTAINER_MODE=no)
AC_MSG_RESULT([$USE_MAINTAINER_MODE])
AM_CONDITIONAL(MAINTAINER_MODE, [test $USE_MAINTAINER_MODE = yes])
MAINT=$MAINTAINER_MODE_TRUE
AC_SUBST(MAINT)dnl
]
)
AU_DEFUN([jm_MAINTAINER_MODE], [AM_MAINTAINER_MODE])
# Check to see how 'make' treats includes. -*- Autoconf -*-
# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 3
# AM_MAKE_INCLUDE()
# -----------------
# Check to see how make treats includes.
AC_DEFUN([AM_MAKE_INCLUDE],
[am_make=${MAKE-make}
cat > confinc << 'END'
am__doit:
@echo done
.PHONY: am__doit
END
# If we don't find an include directive, just comment out the code.
AC_MSG_CHECKING([for style of include used by $am_make])
am__include="#"
am__quote=
_am_result=none
# First try GNU make style include.
echo "include confinc" > confmf
# We grep out `Entering directory' and `Leaving directory'
# messages which can occur if `w' ends up in MAKEFLAGS.
# In particular we don't look at `^make:' because GNU make might
# be invoked under some other name (usually "gmake"), in which
# case it prints its new name instead of `make'.
if test "`$am_make -s -f confmf 2> /dev/null | grep -v 'ing directory'`" = "done"; then
am__include=include
am__quote=
_am_result=GNU
fi
# Now try BSD make style include.
if test "$am__include" = "#"; then
echo '.include "confinc"' > confmf
if test "`$am_make -s -f confmf 2> /dev/null`" = "done"; then
am__include=.include
am__quote="\""
_am_result=BSD
fi
fi
AC_SUBST([am__include])
AC_SUBST([am__quote])
AC_MSG_RESULT([$_am_result])
rm -f confinc confmf
])
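For reference, the same probe can be reproduced by hand; a minimal sketch, assuming some make is on PATH and reusing scratch file names like the macro's own.
# GNU make understands "include", BSD make wants ".include"; recipe lines
# need a literal tab, hence printf.
printf 'am__doit:\n\t@echo done\n.PHONY: am__doit\n' > confinc
echo 'include confinc' > confmf
make -s -f confmf 2>/dev/null | grep -v 'ing directory'   # "done" => GNU-style include
echo '.include "confinc"' > confmf
make -s -f confmf 2>/dev/null                             # "done" => BSD-style include
rm -f confinc confmf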
# Copyright (C) 1999, 2000, 2001, 2003, 2004, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 5
# AM_PROG_CC_C_O
# --------------
# Like AC_PROG_CC_C_O, but changed for automake.
AC_DEFUN([AM_PROG_CC_C_O],
[AC_REQUIRE([AC_PROG_CC_C_O])dnl
AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([compile])dnl
# FIXME: we rely on the cache variable name because
# there is no other way.
set dummy $CC
ac_cc=`echo $[2] | sed ['s/[^a-zA-Z0-9_]/_/g;s/^[0-9]/_/']`
if eval "test \"`echo '$ac_cv_prog_cc_'${ac_cc}_c_o`\" != yes"; then
# Losing compiler, so override with the script.
# FIXME: It is wrong to rewrite CC.
# But if we don't then we get into trouble of one sort or another.
# A longer-term fix would be to have automake use am__CC in this case,
# and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
CC="$am_aux_dir/compile $CC"
fi
dnl Make sure AC_PROG_CC is never called again, or it will override our
dnl setting of CC.
m4_define([AC_PROG_CC],
[m4_fatal([AC_PROG_CC cannot be called after AM_PROG_CC_C_O])])
])
# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-
# Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 5
# AM_MISSING_PROG(NAME, PROGRAM)
# ------------------------------
AC_DEFUN([AM_MISSING_PROG],
[AC_REQUIRE([AM_MISSING_HAS_RUN])
$1=${$1-"${am_missing_run}$2"}
AC_SUBST($1)])
# AM_MISSING_HAS_RUN
# ------------------
# Define MISSING if not defined so far and test if it supports --run.
# If it does, set am_missing_run to use it, otherwise, to nothing.
AC_DEFUN([AM_MISSING_HAS_RUN],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([missing])dnl
test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing"
# Use eval to expand $SHELL
if eval "$MISSING --run true"; then
am_missing_run="$MISSING --run "
else
am_missing_run=
AC_MSG_WARN([`missing' script is too old or missing])
fi
])
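A hedged sketch of the resulting substitution (the aux-dir path below is made up): a maintainer tool such as autoconf ends up wrapped by the missing script, which warns instead of aborting when the real program is absent.
# Hypothetical values mirroring what the macros substitute.
MISSING="\${SHELL} /path/to/aux/missing"
am_missing_run="$MISSING --run "
AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"}
eval "$AUTOCONF --version" || echo "autoconf unavailable; the missing script warned instead"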
# Copyright (C) 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_MKDIR_P
# ---------------
# Check for `mkdir -p'.
AC_DEFUN([AM_PROG_MKDIR_P],
[AC_PREREQ([2.60])dnl
AC_REQUIRE([AC_PROG_MKDIR_P])dnl
dnl Automake 1.8 to 1.9.6 used to define mkdir_p. We now use MKDIR_P,
dnl while keeping a definition of mkdir_p for backward compatibility.
dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile.
dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of
dnl Makefile.ins that do not define MKDIR_P, so we do our own
dnl adjustment using top_builddir (which is defined more often than
dnl MKDIR_P).
AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl
case $mkdir_p in
[[\\/$]]* | ?:[[\\/]]*) ;;
*/*) mkdir_p="\$(top_builddir)/$mkdir_p" ;;
esac
])
# Helper functions for option handling. -*- Autoconf -*-
# Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 3
# _AM_MANGLE_OPTION(NAME)
# -----------------------
AC_DEFUN([_AM_MANGLE_OPTION],
[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])
# _AM_SET_OPTION(NAME)
# ------------------------------
# Set option NAME. Presently that only means defining a flag for this option.
AC_DEFUN([_AM_SET_OPTION],
[m4_define(_AM_MANGLE_OPTION([$1]), 1)])
# _AM_SET_OPTIONS(OPTIONS)
# ----------------------------------
# OPTIONS is a space-separated list of Automake options.
AC_DEFUN([_AM_SET_OPTIONS],
[AC_FOREACH([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])
# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
# -------------------------------------------
# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
AC_DEFUN([_AM_IF_OPTION],
[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
# Check to make sure that the build environment is sane. -*- Autoconf -*-
# Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005
# Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 4
# AM_SANITY_CHECK
# ---------------
AC_DEFUN([AM_SANITY_CHECK],
[AC_MSG_CHECKING([whether build environment is sane])
# Just in case
sleep 1
echo timestamp > conftest.file
# Do `set' in a subshell so we don't clobber the current shell's
# arguments. Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
if test "$[*]" = "X"; then
# -L didn't work.
set X `ls -t $srcdir/configure conftest.file`
fi
rm -f conftest.file
if test "$[*]" != "X $srcdir/configure conftest.file" \
&& test "$[*]" != "X conftest.file $srcdir/configure"; then
# If neither matched, then we have a broken ls. This can happen
# if, for instance, CONFIG_SHELL is bash and it inherits a
# broken ls alias from the environment. This has actually
# happened. Such a system could not be considered "sane".
AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
alias in your environment])
fi
test "$[2]" = conftest.file
)
then
# Ok.
:
else
AC_MSG_ERROR([newly created file is older than distributed files!
Check your system clock])
fi
AC_MSG_RESULT(yes)])
# Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_STRIP
# ---------------------
# One issue with vendor `install' (even GNU) is that you can't
# specify the program used to strip binaries. This is especially
# annoying in cross-compiling environments, where the build's strip
# is unlikely to handle the host's binaries.
# Fortunately install-sh will honor a STRIPPROG variable, so we
# always use install-sh in `make install-strip', and initialize
# STRIPPROG with the value of the STRIP variable (set by the user).
AC_DEFUN([AM_PROG_INSTALL_STRIP],
[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
# Installed binaries are usually stripped using `strip' when the user
# runs `make install-strip'. However `strip' might not be the right
# tool to use in cross-compilation environments, therefore Automake
# will honor the `STRIP' environment variable to overrule this program.
dnl Don't test for $cross_compiling = yes, because it might be `maybe'.
if test "$cross_compiling" != no; then
AC_CHECK_TOOL([STRIP], [strip], :)
fi
INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
AC_SUBST([INSTALL_STRIP_PROGRAM])])
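A short sketch of why this matters when cross-compiling (tool name and paths are hypothetical): make install-strip routes installation through install-sh, and install-sh honors STRIPPROG, so the host-appropriate strip gets used.
# Hedged illustration of the equivalent direct invocation.
STRIP=arm-linux-gnu-strip                      # cross strip found by AC_CHECK_TOOL
STRIPPROG="$STRIP" ./install-sh -c -s cvs /usr/local/bin/cvs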
# Copyright (C) 2006 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_SUBST_NOTMAKE(VARIABLE)
# ---------------------------
# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
# This macro is traced by Automake.
AC_DEFUN([_AM_SUBST_NOTMAKE])
# Check how to create a tarball. -*- Autoconf -*-
# Copyright (C) 2004, 2005 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# serial 2
# _AM_PROG_TAR(FORMAT)
# --------------------
# Check how to create a tarball in format FORMAT.
# FORMAT should be one of `v7', `ustar', or `pax'.
#
# Substitute a variable $(am__tar) that is a command
# writing to stdout a FORMAT-tarball containing the directory
# $tardir.
# tardir=directory && $(am__tar) > result.tar
#
# Substitute a variable $(am__untar) that extracts such
# a tarball read from stdin.
# $(am__untar) < result.tar
AC_DEFUN([_AM_PROG_TAR],
[# Always define AMTAR for backward compatibility.
AM_MISSING_PROG([AMTAR], [tar])
m4_if([$1], [v7],
[am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'],
[m4_case([$1], [ustar],, [pax],,
[m4_fatal([Unknown tar format])])
AC_MSG_CHECKING([how to create a $1 tar archive])
# Loop over all known methods to create a tar archive until one works.
_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
_am_tools=${am_cv_prog_tar_$1-$_am_tools}
# Do not fold the above two lines into one, because Tru64 sh and
# Solaris sh will not grok spaces in the rhs of `-'.
for _am_tool in $_am_tools
do
case $_am_tool in
gnutar)
for _am_tar in tar gnutar gtar;
do
AM_RUN_LOG([$_am_tar --version]) && break
done
am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
am__untar="$_am_tar -xf -"
;;
plaintar)
# Must skip GNU tar: if it does not support --format= it doesn't create
# ustar tarball either.
(tar --version) >/dev/null 2>&1 && continue
am__tar='tar chf - "$$tardir"'
am__tar_='tar chf - "$tardir"'
am__untar='tar xf -'
;;
pax)
am__tar='pax -L -x $1 -w "$$tardir"'
am__tar_='pax -L -x $1 -w "$tardir"'
am__untar='pax -r'
;;
cpio)
am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
am__untar='cpio -i -H $1 -d'
;;
none)
am__tar=false
am__tar_=false
am__untar=false
;;
esac
# If the value was cached, stop now. We just wanted to have am__tar
# and am__untar set.
test -n "${am_cv_prog_tar_$1}" && break
# tar/untar a dummy directory, and stop if the command works
rm -rf conftest.dir
mkdir conftest.dir
echo GrepMe > conftest.dir/file
AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
rm -rf conftest.dir
if test -s conftest.tar; then
AM_RUN_LOG([$am__untar <conftest.tar])
grep GrepMe conftest.dir/file >/dev/null 2>&1 && break
fi
done
rm -rf conftest.dir
AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
AC_MSG_RESULT([$am_cv_prog_tar_$1])])
AC_SUBST([am__tar])
AC_SUBST([am__untar])
]) # _AM_PROG_TAR
m4_include([acinclude.m4])
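The am__tar/am__untar contract documented above can be exercised straight from a shell. A minimal sketch using the v7 values (the single-$ variant, i.e. what the macro itself evals during its self-test); the package name is made up.
am__tar='tar chf - "$tardir"'
am__untar='tar xf -'
mkdir -p pkg-1.0 && echo hello > pkg-1.0/file
tardir=pkg-1.0 && eval "$am__tar" > pkg-1.0.tar       # pack the directory
rm -rf pkg-1.0
eval "$am__untar" < pkg-1.0.tar && cat pkg-1.0/file   # unpack; prints "hello"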

View File

@ -1,142 +0,0 @@
#! /bin/sh
# Wrapper for compilers which do not understand `-c -o'.
scriptversion=2005-05-14.22
# Copyright (C) 1999, 2000, 2003, 2004, 2005 Free Software Foundation, Inc.
# Written by Tom Tromey <tromey@cygnus.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
case $1 in
'')
echo "$0: No command. Try \`$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand `-c -o'.
Remove `-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file `INSTALL'.
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "compile $scriptversion"
exit $?
;;
esac
ofile=
cfile=
eat=
for arg
do
if test -n "$eat"; then
eat=
else
case $1 in
-o)
# configure might choose to run compile as `compile cc -o foo foo.c'.
# So we strip `-o arg' only if arg is an object.
eat=1
case $2 in
*.o | *.obj)
ofile=$2
;;
*)
set x "$@" -o "$2"
shift
;;
esac
;;
*.c)
cfile=$1
set x "$@" "$1"
shift
;;
*)
set x "$@" "$1"
shift
;;
esac
fi
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no `-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# `.c' file was seen then we are probably linking. That is also
# ok.
exec "$@"
fi
# Name of file we expect compiler to create.
cofile=`echo "$cfile" | sed -e 's|^.*/||' -e 's/\.c$/.o/'`
# Create the lock directory.
# Note: use `[/.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo "$cofile" | sed -e 's|[/.-]|_|g'`.d
while true; do
if mkdir "$lockdir" >/dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir '$lockdir'; exit 1" 1 2 15
# Run the compile.
"$@"
ret=$?
if test -f "$cofile"; then
mv "$cofile" "$ofile"
elif test -f "${cofile}bj"; then
mv "${cofile}bj" "$ofile"
fi
rmdir "$lockdir"
exit $ret
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:
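A small usage sketch (file names are hypothetical): with a compiler that cannot handle `-c -o' together, the wrapper drops the `-o sub/foo.o', compiles in place, and then moves the object where make expected it.
echo 'int main(void) { return 0; }' > foo.c
mkdir -p sub
./compile cc -c -o sub/foo.o foo.c    # runs "cc -c foo.c", then moves foo.o to sub/foo.o
ls sub/foo.o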

View File

@ -1,520 +0,0 @@
/* config.h.in. Generated from configure.in by autoheader. */
/* Enable AUTH_CLIENT_SUPPORT to enable pserver as a remote access method in
the CVS client (default) */
#undef AUTH_CLIENT_SUPPORT
/* Define if you want to use the password authenticated server. */
#undef AUTH_SERVER_SUPPORT
/* Define if you want CVS to be able to be a remote repository client. */
#undef CLIENT_SUPPORT
/* Define to 1 if the `closedir' function returns void instead of `int'. */
#undef CLOSEDIR_VOID
/* The CVS admin command is restricted to the members of the group
CVS_ADMIN_GROUP. If this group does not exist, all users are allowed to run
CVS admin. To disable the CVS admin command for all users, create an empty
CVS_ADMIN_GROUP by running configure with the --with-cvs-admin-group=
option. To disable access control for CVS admin, run configure with the
--without-cvs-admin-group option in order to comment out the define below.
*/
#undef CVS_ADMIN_GROUP
/* When committing a permanent change, CVS and RCS make a log entry of who
committed the change. If you are committing the change logged in as "root"
(not under "su" or other root-priv giving program), CVS/RCS cannot
determine who is actually making the change. As such, by default, CVS
prohibits changes committed by users logged in as "root". You can disable
checking by passing the "--enable-rootcommit" option to configure or by
commenting out the lines below. */
#undef CVS_BADROOT
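The configure switches named in the two comments above would be used roughly as follows (a sketch; the group name is hypothetical):
./configure --with-cvs-admin-group=cvsadmin   # restrict "cvs admin" to that group
./configure --with-cvs-admin-group=           # empty group: disable "cvs admin" for everyone
./configure --without-cvs-admin-group         # no access control on "cvs admin"
./configure --enable-rootcommit               # allow commits while logged in as root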
/* The default editor to use, if one does not specify the "-e" option to cvs,
or does not have an EDITOR environment variable. If this is not set to an
absolute path to an executable, use the shell to find where the editor
actually is. This allows sites with /usr/bin/vi or /usr/ucb/vi to work
equally well (assuming that their PATH is reasonable). */
#undef EDITOR_DFLT
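Per the comment above, the compiled-in default only applies when neither run-time override is present; for example (editor paths are just examples):
cvs -e /usr/bin/vi commit      # one-off editor via the -e global option
EDITOR=emacs cvs commit        # or via the EDITOR environment variable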
/* Define to enable encryption support. */
#undef ENCRYPTION
/* Define if this executable will be running on case insensitive file systems.
In the client case, this means that it will request that the server pretend
to be case insensitive if it isn't already. */
#undef FILENAMES_CASE_INSENSITIVE
/* When committing or importing files, you must enter a log message. Normally,
you can do this either via the -m flag on the command line, the -F flag on
the command line, or an editor will be started for you. If you like to use
logging templates (the rcsinfo file within the $CVSROOT/CVSROOT directory),
you might want to force people to use the editor even if they specify a
message with -m or -F. Enabling FORCE_USE_EDITOR will cause the -m or -F
message to be appended to the temp file when the editor is started. */
#undef FORCE_USE_EDITOR
/* Define to an alternative value if GSS_C_NT_HOSTBASED_SERVICE isn't defined
in the gssapi.h header file. MIT Kerberos 1.2.1 requires this. Only
relevant when using GSSAPI. */
#undef GSS_C_NT_HOSTBASED_SERVICE
/* Define if you have the connect function. */
#undef HAVE_CONNECT
/* Define if you have the crypt function. */
#undef HAVE_CRYPT
/* Define to 1 if you have the <direct.h> header file. */
#undef HAVE_DIRECT_H
/* Define to 1 if you have the <dirent.h> header file, and it defines `DIR'.
*/
#undef HAVE_DIRENT_H
/* Define to 1 if you have the `dup2' function. */
#undef HAVE_DUP2
/* Define to 1 if you have the <errno.h> header file. */
#undef HAVE_ERRNO_H
/* Define to 1 if you have the `fchdir' function. */
#undef HAVE_FCHDIR
/* Define to 1 if you have the `fchmod' function. */
#undef HAVE_FCHMOD
/* Define to 1 if you have the <fcntl.h> header file. */
#undef HAVE_FCNTL_H
/* Define to 1 if your system has a working POSIX `fnmatch' function. */
#undef HAVE_FNMATCH
/* Define to 1 if you have the <fnmatch.h> header file. */
#undef HAVE_FNMATCH_H
/* Define to 1 if you have the `fork' function. */
#undef HAVE_FORK
/* Define to 1 if you have the `fsync' function. */
#undef HAVE_FSYNC
/* Define to 1 if you have the `ftime' function. */
#undef HAVE_FTIME
/* Define to 1 if you have the `ftruncate' function. */
#undef HAVE_FTRUNCATE
/* Define to 1 if you have the `geteuid' function. */
#undef HAVE_GETEUID
/* Define to 1 if you have the `getgroups' function. */
#undef HAVE_GETGROUPS
/* Define to 1 if you have the `gethostname' function. */
#undef HAVE_GETHOSTNAME
/* Define to 1 if you have the `getopt' function. */
#undef HAVE_GETOPT
/* Define to 1 if you have the `getpagesize' function. */
#undef HAVE_GETPAGESIZE
/* Define if you have the getspnam function. */
#undef HAVE_GETSPNAM
/* Define to 1 if you have the `gettimeofday' function. */
#undef HAVE_GETTIMEOFDAY
/* Define if you have GSSAPI with Kerberos version 5 available. */
#undef HAVE_GSSAPI
/* Define to 1 if you have the <gssapi/gssapi_generic.h> header file. */
#undef HAVE_GSSAPI_GSSAPI_GENERIC_H
/* Define to 1 if you have the <gssapi/gssapi.h> header file. */
#undef HAVE_GSSAPI_GSSAPI_H
/* Define to 1 if you have the <gssapi.h> header file. */
#undef HAVE_GSSAPI_H
/* Define to 1 if you have the `initgroups' function. */
#undef HAVE_INITGROUPS
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define to 1 if you have the <io.h> header file. */
#undef HAVE_IO_H
/* Define if you have MIT Kerberos version 4 available. */
#undef HAVE_KERBEROS
/* Define to 1 if you have the <krb5.h> header file. */
#undef HAVE_KRB5_H
/* Define to 1 if you have the `krb_get_err_text' function. */
#undef HAVE_KRB_GET_ERR_TEXT
/* Define to 1 if you have the `krb' library (-lkrb). */
#undef HAVE_LIBKRB
/* Define to 1 if you have the `krb4' library (-lkrb4). */
#undef HAVE_LIBKRB4
/* Define to 1 if you have the `nsl' library (-lnsl). */
#undef HAVE_LIBNSL
/* Define to 1 if you have the <limits.h> header file. */
#undef HAVE_LIMITS_H
/* Define to 1 if you have the `login' function. */
#undef HAVE_LOGIN
/* Define to 1 if you have the `logout' function. */
#undef HAVE_LOGOUT
/* Define to 1 if you support file names longer than 14 characters. */
#undef HAVE_LONG_FILE_NAMES
/* Define if you have memchr (always for CVS). */
#undef HAVE_MEMCHR
/* Define to 1 if you have the `memmove' function. */
#undef HAVE_MEMMOVE
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the `mkdir' function. */
#undef HAVE_MKDIR
/* Define to 1 if you have the `mknod' function. */
#undef HAVE_MKNOD
/* Define to 1 if you have the `mkstemp' function. */
#undef HAVE_MKSTEMP
/* Define to 1 if you have the `mktemp' function. */
#undef HAVE_MKTEMP
/* Define to 1 if you have a working `mmap' system call. */
#undef HAVE_MMAP
/* Define to 1 if you have the `nanosleep' function. */
#undef HAVE_NANOSLEEP
/* Define to 1 if you have the <ndbm.h> header file. */
#undef HAVE_NDBM_H
/* Define to 1 if you have the <ndir.h> header file, and it defines `DIR'. */
#undef HAVE_NDIR_H
/* Define to 1 if you have the `putenv' function. */
#undef HAVE_PUTENV
/* Define to 1 if you have the `readlink' function. */
#undef HAVE_READLINK
/* Define to 1 if you have the `regcomp' function. */
#undef HAVE_REGCOMP
/* Define to 1 if you have the `regerror' function. */
#undef HAVE_REGERROR
/* Define to 1 if you have the `regexec' function. */
#undef HAVE_REGEXEC
/* Define to 1 if you have the `regfree' function. */
#undef HAVE_REGFREE
/* Define to 1 if you have the `rename' function. */
#undef HAVE_RENAME
/* Define to 1 if you have the `select' function. */
#undef HAVE_SELECT
/* Define if the diff library should use setmode for binary files. */
#undef HAVE_SETMODE
/* Define to 1 if you have the `sigaction' function. */
#undef HAVE_SIGACTION
/* Define to 1 if you have the `sigblock' function. */
#undef HAVE_SIGBLOCK
/* Define to 1 if you have the `sigprocmask' function. */
#undef HAVE_SIGPROCMASK
/* Define to 1 if you have the `sigsetmask' function. */
#undef HAVE_SIGSETMASK
/* Define to 1 if you have the `sigvec' function. */
#undef HAVE_SIGVEC
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define if you have strchr (always for CVS). */
#undef HAVE_STRCHR
/* Define to 1 if you have the `strerror' function. */
#undef HAVE_STRERROR
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the `strstr' function. */
#undef HAVE_STRSTR
/* Define to 1 if you have the `strtoul' function. */
#undef HAVE_STRTOUL
/* Define to 1 if `st_blksize' is member of `struct stat'. */
#undef HAVE_STRUCT_STAT_ST_BLKSIZE
/* Define to 1 if `st_rdev' is member of `struct stat'. */
#undef HAVE_STRUCT_STAT_ST_RDEV
/* Define to 1 if you have the <syslog.h> header file. */
#undef HAVE_SYSLOG_H
/* Define to 1 if you have the <sys/bsdtypes.h> header file. */
#undef HAVE_SYS_BSDTYPES_H
/* Define to 1 if you have the <sys/dir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_DIR_H
/* Define to 1 if you have the <sys/file.h> header file. */
#undef HAVE_SYS_FILE_H
/* Define to 1 if you have the <sys/ndir.h> header file, and it defines `DIR'.
*/
#undef HAVE_SYS_NDIR_H
/* Define to 1 if you have the <sys/param.h> header file. */
#undef HAVE_SYS_PARAM_H
/* Define to 1 if you have the <sys/resource.h> header file. */
#undef HAVE_SYS_RESOURCE_H
/* Define to 1 if you have the <sys/select.h> header file. */
#undef HAVE_SYS_SELECT_H
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/timeb.h> header file. */
#undef HAVE_SYS_TIMEB_H
/* Define to 1 if you have the <sys/time.h> header file. */
#undef HAVE_SYS_TIME_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have <sys/wait.h> that is POSIX.1 compatible. */
#undef HAVE_SYS_WAIT_H
/* Define to 1 if you have the `tempnam' function. */
#undef HAVE_TEMPNAM
/* Define to 1 if you have the `timezone' function. */
#undef HAVE_TIMEZONE
/* Define to 1 if you have the `tzset' function. */
#undef HAVE_TZSET
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Define to 1 if you have the `usleep' function. */
#undef HAVE_USLEEP
/* Define to 1 if you have the <utime.h> header file. */
#undef HAVE_UTIME_H
/* Define to 1 if `utime(file, NULL)' sets file's timestamp to the present. */
#undef HAVE_UTIME_NULL
/* Define to 1 if you have the `valloc' function. */
#undef HAVE_VALLOC
/* Define to 1 if you have the `vfork' function. */
#undef HAVE_VFORK
/* Define to 1 if you have the <vfork.h> header file. */
#undef HAVE_VFORK_H
/* Define to 1 if you have the `vprintf' function. */
#undef HAVE_VPRINTF
/* Define to 1 if you have the `wait3' function. */
#undef HAVE_WAIT3
/* Define to 1 if you have the `waitpid' function. */
#undef HAVE_WAITPID
/* Define to 1 if `fork' works. */
#undef HAVE_WORKING_FORK
/* Define to 1 if `vfork' works. */
#undef HAVE_WORKING_VFORK
/* By default, CVS stores its modules and other such items in flat text files
(MY_NDBM enables this). Turning off MY_NDBM causes CVS to look for a
system-supplied ndbm database library and use it instead. That may speed
things up, but the default setting generally works fine too. */
#undef MY_NDBM
/* Define to 1 if your C compiler doesn't accept -c and -o together. */
#undef NO_MINUS_C_MINUS_O
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Path to the pr utility */
#undef PR_PROGRAM
/* Define to force lib/regex.c to use malloc instead of alloca. */
#undef REGEX_MALLOC
/* Define as the return type of signal handlers (`int' or `void'). */
#undef RETSIGTYPE
/* The default remote shell to use, if one does not specify the CVS_RSH
environment variable. */
#undef RSH_DFLT
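The override mentioned above looks like this at run time (host, repository path, and module are hypothetical; :ext: is the usual rsh/ssh access method):
CVS_RSH=ssh cvs -d :ext:user@cvs.example.org:/home/cvsroot checkout mymodule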
/* If you are working with a large remote repository and a 'cvs checkout' is
swamping your network and memory, define these to enable flow control. You
will end up with even less probability of a consistent checkout (see
Concurrency in cvs.texinfo), but CVS doesn't try to guarantee that anyway.
The master server process will monitor how far it is getting behind, if it
reaches the high water mark, it will signal the child process to stop
generating data when convenient (ie: no locks are held, currently at the
beginning of a new directory). Once the buffer has drained sufficiently to
reach the low water mark, it will be signalled to start again. */
#undef SERVER_FLOWCONTROL
/* The high water mark in bytes for server flow control. Required if
SERVER_FLOWCONTROL is defined, and useless otherwise. */
#undef SERVER_HI_WATER
/* The low water mark in bytes for server flow control. Required if
SERVER_FLOWCONTROL is defined, and useless otherwise. */
#undef SERVER_LO_WATER
/* Define if you want CVS to be able to serve repositories to remote clients.
*/
#undef SERVER_SUPPORT
/* Define as the maximum value of type 'size_t', if the system doesn't define
it. */
#undef SIZE_MAX
/* The default remote shell to use, if one does not specify the CVS_SSH
environment variable. */
#undef SSH_DFLT
/* Define to 1 if the `S_IS*' macros in <sys/stat.h> do not work properly. */
#undef STAT_MACROS_BROKEN
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
#undef TIME_WITH_SYS_TIME
/* Directory used for storing temporary files, if not overridden by
environment variables or the -T global option. There should be little need
to change this (-T is a better mechanism if you need to use a different
directory for temporary files). */
#undef TMPDIR_DFLT
/* The default umask to use when creating or otherwise setting file or
directory permissions in the repository. Must be a value in the range of 0
through 0777. For example, a value of 002 allows group rwx access and world
rx access; a value of 007 allows group rwx access but no world access. This
value is overridden by the value of the CVSUMASK environment variable,
which is interpreted as an octal number. */
#undef UMASK_DFLT
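As the comment says, CVSUMASK overrides the compiled-in default at run time and is read as an octal number; a small example:
CVSUMASK=007; export CVSUMASK          # group rwx, no world access, per the comment above
cvs commit -m 'group-writable repository files'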
/* Define if setmode is required when writing binary data to stdout. */
#undef USE_SETMODE_STDOUT
/* Define if utime requires write access to the file (true on Windows, but not
Unix). */
#undef UTIME_EXPECTS_WRITABLE
/* Define to 1 if on AIX 3.
System headers sometimes define this.
We just want to avoid a redefinition error message. */
#ifndef _ALL_SOURCE
# undef _ALL_SOURCE
#endif
/* Define to 1 if on MINIX. */
#undef _MINIX
/* Define to 2 if the system does not provide POSIX.1 features except with
this defined. */
#undef _POSIX_1_SOURCE
/* Define to 1 if you need to in order for `stat' and other things to work. */
#undef _POSIX_SOURCE
/* Define to force lib/regex.c to define re_comp et al. */
#undef _REGEX_RE_COMP
/* Define to empty if `const' does not conform to ANSI C. */
#undef const
/* We want to always use the GNULIB version of getpass which we have in lib,
so define getpass to something that won't conflict with any existing system
declarations. */
#undef getpass
/* Define to `int' if <sys/types.h> doesn't define. */
#undef gid_t
/* Define to `__inline__' or `__inline' if that's what the C compiler
calls it, or to nothing if 'inline' is not supported under any name. */
#ifndef __cplusplus
#undef inline
#endif
/* Define to `int' if <sys/types.h> does not define. */
#undef mode_t
/* Define to `int' if <sys/types.h> does not define. */
#undef pid_t
/* Define to `unsigned int' if <sys/types.h> does not define. */
#undef size_t
/* Define to `int' if <sys/types.h> doesn't define. */
#undef uid_t
/* Define as `fork' if `vfork' does not work. */
#undef vfork

14785
contrib/cvs/configure vendored

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,765 +0,0 @@
2005-09-01 Derek Price <derek@ximbiot.com>
* cvs_acls.html, cvs_acls.in, log_accum.in: Update links.
2005-09-01 Derek Price <derek@ximbiot.com>
* commit_prep.in, cvs_acls.in, log.in, log_accum.in, mfpipe.in,
pvcs2rcs.in, rcslock.in: Update links and email addresses.
2005-07-12 Derek Price <derek@ximbiot.com>
* clmerge.in, cln_hist.in, commit_prep.in, cvs2vendor.sh, cvs_acls.in,
cvscheck.sh, debug_check_log.sh, descend.sh, log.in, log_accum.in,
mfpipe.in, rcs-to-cvs.sh, rcs2log.sh, rcs2sccs.sh, rcslock.in,
sccs2rcs.in: Add copyright notices.
2005-07-11 Derek Price <derek@ximbiot.com>
* clmerge.in, cln_hist.in, commit_prep.in, cvs2vendor.sh, cvs_acls.in,
cvscheck.sh, debug_check_log.sh, descend.sh, log.in, log_accum.in,
mfpipe.in, rcs-to-cvs.sh, rcs2log.sh, rcs2sccs.sh, rcslock.in,
sccs2rcs.in: Update license notices.
2005-04-14 Derek Price <derek@ximbiot.com>
* commit_prep.in, cvs_acls.in, log.in, log_accum.in, mfpipe.in,
rcslock.in: Enable taint checking and comment. This closes cvshome.org
Issue #224.
2005-04-08 Derek Price <derek@ximbiot.com>
* README: Correct my email address.
2005-01-31 Derek Price <derek@ximbiot.com>
* Makefile.am: Update copyright notices.
2005-01-25 Mark D. Baushke <mdb@cvshome.org>
* cvs_acls.in: New version from
"Peter Connolly" <Peter.Connolly@cnet.com>.
* cvs_acls.html: New file from
"Peter Connolly" <Peter.Connolly@cnet.com>.
* Makefile.am (EXTRA_DIST): Add cvs_acls.html
* Makefile.in: Regenerated.
2004-08-30 Derek Price <derek@ximbiot.com>
* log_accum.in: Changes to suppress warnings under Perl 5.8.5.
(Patch from Jeroen Ruigrok/asmodai <asmodai@wxs.nl>.)
2004-01-30 Derek Price <derek@ximbiot.com>
Close issue #155.
* log_accum.in: Remove unused variables.
(Patch from Ville Skyttä <scop@cvshome.org>.)
2003-10-14 Derek Price <derek@ximbiot.com>
Port to pedantic POSIX 1003.1-2001 hosts, such as Debian GNU/Linux
testing with _POSIX2_VERSION=200112 in the environment.
* cvs2vendor.sh: Work with POSIX sort as well as with
traditional sort.
* rcs2sccs.sh, sccs2rcs.in: Likewise.
(Patch from Paul Eggert <eggert@twinsun.com>.)
2003-09-26 Mark D. Baushke <mdb@cvshome.org>
* sccs2rcs.in: Use @AWK@ to avoid ancient Solaris awk (no support
for the "?" operator). Add support for handling binary SCCS files.
(Suggestion from Allan Schrum <agschrum@mindspring.com>.)
2003-08-06 Derek Price <derek@ximbiot.com>
* commit_prep.in, log_accum.in: Port copious changes from Karl Fogel
and CollabNet. These changes add features, generalize, and organize.
2003-07-07 Larry Jones <lawrence.jones@eds.com>
* rcs2log.1: New file from Paul Eggert <eggert@twinsun.com>
via Eric Seidel <eseidel@apple.com>.
2003-06-20 Derek Price <derek@ximbiot.com>
* Don't call CVS with the -l option since CVS no longer accepts it.
(Suggestion from Matt Doar <matt@trpz.com>.)
2003-05-21 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerate with Automake version 1.7.5.
2003-04-10 Larry Jones <lawrence.jones@eds.com>
* Makefile.in: Regenerated.
2003-03-24 Derek Price <derek@ximbiot.com>
* Makefile.am: Update copyright notice.
* Makefile.in: Regenerated.
2003-02-25 Derek Price <derek@ximbiot.com>
* rcs2log.sh: Import RedHat 8.0's use of mktemp from the CVS 1.11.2
RPM. Use new MKTEMP variable from configure.
* Makefile.in: Regenerated.
2003-02-24 Larry Jones <lawrence.jones@eds.com>
and Donald Sharp <sharpd@cisco.com>
* check_cvs.in: Filenames with funky characters need to be quoted
correctly. Also needed to modify the regex because locked revisions of
files cause the output to differ.
* check_cvs.in: Fixed multiple symlinks in your cvsroot,
improved CVSROOT/CVSROOT handling (Patch from Shlomo Reinstein
<shlomo.reinstein@intel.com>). Fixed retrieving revisions of ,v
files. Added passwd, readers, and writers to list of files to
ignore and sorted list to match the one in src/mkmodules.c.
2002-12-16 Derek Price <derek@ximbiot.com>
* cvs_acls.in: Fix split loop error with Perl 5.8.0.
(Patch from Ville Skyttä <ville.skytta@iki.fi>.)
2002-12-11 Larry Jones <lawrence.jones@eds.com>
* Makefile.am (install-data-local): test -e isn't portable: use -f.
* Makefile.in: Regenerated.
(Reported by Philip Brown <phil@bolthole.com>.)
2002-11-21 Larry Jones <lawrence.jones@eds.com>
* .cvsignore: Add check_cvs.
* check_cvs.in: New script contributed by Donald Sharp.
* Makefile.am (contrib_SCRIPTS): Add check_cvs.
* Makefile.in: Regenerated.
* README: Add check_cvs and other missing scripts, alphabetize.
2002-11-08 Derek Price <derek@ximbiot.com>
* debug_check_log.sh: Simplify some code. Attempt to default to
src/check.log before falling back to ./check.log.
2002-09-24 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerated using Automake 1.6.3.
2002-09-24 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerated.
2002-05-20 Derek Price <oberon@umich.edu>
* cvs_acls.in: Add note about using checkoutlist with avail
in the commentary's INSTALLATION section.
(Original patch from Ville Skyttä <ville.skytta@xemacs.org>.)
2002-04-30 Derek Price <oberon@umich.edu>
* Makefile.in: Regenerated with automake 1.6.
2002-03-21 Derek Price <oberon@umich.edu>
* Makefile.am (install-data-local): Import a patch from RedHat which
was no longer necessary but causes a FIXME to print - maybe someone
will see it and fix it.
* Makefile.in: Regenerated.
2001-12-06 Derek Price <oberon@umich.edu>
* cvs_acls.in: Allow ACL specification based on branch matching.
(Patch from Aaron Voisine <voisine@bytemobile.com>.)
2001-10-16 Derek Price <dprice@collab.net>
* sccs2rcs.in: Replace Y2K bug fix with something more succinct.
(Suggested by SAKAI Hiroaki <sakai.hiroaki@pfu.fujitsu.com>.)
2001-10-16 Derek Price <dprice@collab.net>
* rcs2sccs.in: Fix Y2K bug.
(Patch from SAKAI Hiroaki <sakai.hiroaki@pfu.fujitsu.com>.)
2001-09-06 Larry Jones <larry.jones@sdrc.com>
for Paul Eggert <eggert@twinsun.com>
Sync with revision 1.48 of the GNU Emacs sources. This
incorporates the following changes:
* rcs2log (Help, mainline code): Add new option -L FILE.
(Copyright): Update year.
(LANG, LANGUAGE, LC_ALL, LC_COLLATE, LC_CTYPE, LC_MESSAGES,
LC_NUMERIC, LC_TIME): New shell vars, to make sure we live in the C locale.
(mainline code): Handle nonstandard -u option differently, by
transforming it to standard form. Check for "Working file: ", not
"Working file:". Allow file names with spaces.
(SOH, rlogfile): New shell vars.
(rlogout): Remove. Its old functionality is mostly migrated to rlogfile.
Append ';;' to the last arm of every case statement, for portability to
ancient broken BSD shells.
(logins): Fix bug; was not being computed at all, lowering performance.
(pository): New var. This fixes some bugs where repositories are
remote, or have trailing slashes.
(authors): $llogout is never an empty shell var, so don't worry about that
possibility.
(printlogline, mainline code): Fix bug with SOH's being put into the output.
2001-07-20 Gerd Moellmann <gerd@gnu.org>
* rcs2log: Update copyright notice.
2001-01-03 Paul Eggert <eggert@twinsun.com>
* rcs2log: Avoid security hole allowing attacker to
cause user of rcs2log to overwrite arbitrary files, fixing
a bug reported by Morten Welinder.
Don't put "exit 1" at the end of the exit trap; it's
ineffective in POSIX shells.
2001-09-04 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with automake 1.5.
2001-08-21 Larry Jones <larry.jones@sdrc.com>
* sccs2rcs.in: Fix typo: missing quote.
(Patch submitted by "Mark D. Baushke" <mdb@cvshome.org>.)
2001-08-06 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated.
2001-07-04 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with new Automake release candidate 1.4h.
2001-06-28 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with new version of Automake.
2001-05-30 Derek Price <dprice@collab.net>
* pvcs2cvs.in: Rename to...
* pvcs2rcs.in: here.
* .cvsignore: Add pvcs2rcs.
* Makefile.am (contrib_SCRIPTS): Change pvcs2cvs to pvcs2rcs.
* Makefile.in: Regenerated.
2001-05-29 Derek Price <dprice@collab.net>
patch from Pavel Roskin <proski@gnu.org>
* Makefile.am (install-data-local): Double hash comment in rule since
single hash comments are not portable.
* Makefile.in: Regenerated.
2001-05-29 Derek Price <dprice@collab.net>
* pvcs2cvs.in: New file.
* Makefile.am (contrib_SCRIPTS): Add pvcs2cvs.
* Makefile.in: Regenerated.
2001-05-23 Larry Jones <larry.jones@sdrc.com>
* sccs2rcs.in: No need for grep when you're already using awk.
* sccs2rcs.in: Fix y2k bug correctly.
(Reported by "Hayes, Ted (London)" <HayesRog@exchange.uk.ml.com>.)
2001-04-25 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated using AM 1.4e as of today at 18:10 -0400.
2001-04-16 Derek Price <dprice@collab.net>
* log.pl: Accept new '-V' option for non-verbose status messages.
2001-03-14 Derek Price <derek.price@openavenue.com>
* Makefile.in: Regenerated
2001-01-05 Derek Price <derek.price@openavenue.com>
* contrib/Makefile.am (EXTRA_DIST, SUFFIXES, .pl:, .csh:): Move some
script targets to configure.in - see ../ChangeLog for more
* contrib/clmerge.in: Rename from clmerge.pl
* contrib/cln_hist.in: Rename from cln_hist.pl
* contrib/commit_prep.in: Rename from commit_prep.pl
* contrib/cvs_acls.in: Rename from cvs_acls.pl
* contrib/log.in: Rename from log.pl
* contrib/log_accum.in: Rename from log_accum.pl
* contrib/mfpipe.in: Rename from mfpipe.pl
* contrib/rcslock.in: Rename from rcslock.pl
* contrib/sccs2rcs.in: Rename from scc2rcs.csh
* contrib/clmerge.pl: Rename to clmerge.in
* contrib/cln_hist.pl: Rename to cln_hist.in
* contrib/commit_prep.pl: Rename to commit_prep.in
* contrib/cvs_acls.pl: Rename to cvs_acls.in
* contrib/log.pl: Rename to log.in
* contrib/log_accum.pl: Rename to log_accum.in
* contrib/mfpipe.pl: Rename to mfpipe.in
* contrib/rcslock.pl: Rename to rcslock.in
* contrib/sccs2rcs.csh: Rename to sccs2rcs.in
2000-12-22 Derek Price <derek.price@openavenue.com>
* Makefile.in: Regenerated
2000-12-21 Derek Price <derek.price@openavenue.com>
* Makefile.am: New file needed by Automake
* Makefile.in: Regenerated
2000-12-14 Derek Price <derek.price@openavenue.com>
Thomas Maeder <maeder@glue.ch>
* sccs2rcs.csh: unkludge a Y2k workaround
2000-10-23 Derek Price <derek.price@openavenue.com>
* debug_check_log.sh: added this script for analyzing sanity.sh output
* Makefile.in: add above file to DISTFILES and CONTRIB_PROGS
* .cvsignore: add debug_check_log
2000-09-07 Larry Jones <larry.jones@sdrc.com>
* Makefile.in: Use @bindir@, @libdir@, @infodir@, and @mandir@
from autoconf.
2000-02-25 Larry Jones <larry.jones@sdr.com>
* log.pl: Get committer from command line instead of getlogin
so that client/server works correctly.
* log_accum.pl: Ditto.
2000-01-24 K.J. Paradise <kj@sourcegear.com>
* sccs2rcs.csh: fixed a y2k bug. This was submitted
by Ceri Davies <ceri_davies@isdcorp.com>, and looks
okay to me.
1999-01-19 Graham Stoney <greyham@research.canon.com.au>
* log.pl: The author committed the canonical perl "localtime" Y2K
offence, of printing "19$year" instead of (1900 + $year). Of
course, the result is non-compliance in year 2000. Fix it.
1998-10-14 Jim Kingdon
* ccvs-rsh.pl: Removed; it was not in DISTFILES so it didn't
actually get distributed. I'm going to move it to the web on the
theory that the web is a better place for such things.
* README: Don't mention it.
* Makefile.in (dist-dir, distclean): Remove references to elib.
* elib: Remove this subdirectory and all its contents. It went
with pcl-cvs, which is no longer distributed with CVS.
1998-09-22 Jim Kingdon <kingdon@harvey.cyclic.com>
* pvcs_to_rcs: Removed; it was not in DISTFILES so it didn't
actually get distributed. I'm going to move it to the web on the
theory that the web is a better place for such things.
* README: Don't mention it.
1998-09-10 Jim Kingdon
Check in Paul Eggert <eggert@twinsun.com>'s submission of
1998-08-15. I also ran "cvs admin -ko" on this file so that his
version number would be intact (not an ideal solution, because
people will import it into other repositories, but I don't feel
like hacking the master version).
* rcs2log.sh: Sync with master version at gnu.org.
1998-08-15 Jim Kingdon <kingdon@harvey.cyclic.com>
* README: Don't mention listener, since it was removed a while
ago.
* listen2.c, listen2.mak: Removed; because there is no easy way to
pass a socket (as opposed to file descriptor) from one process to
another on Windows, this isn't a promising approach (at least not
in this form).
* Makefile.in (DISTFILES): Remove them.
* .cvsignore: Remove listen2.ncb listen2.mdp Debug.
1998-05-11 W. Bradley Rubenstein
* log.pl: Check for errors from open and exec.
Sat Feb 21 21:59:45 1998 Ian Lance Taylor <ian@cygnus.com>
* Makefile.in (clean): Change "/bin/rm" to "rm".
Thu Aug 7 22:42:23 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* pvcs_to_rcs: Remove RCS keywords. Remove $Log and move the data
to this ChangeLog (below). Add paragraph that David Martin
emailed along with the script.
Revision 1.6 1997/03/07 16:21:28 divad
Need to explicitly state archive name in PVCS get command for
those cases where the case of the workfile and the case of the
archive file are different (OS/2)
Revision 1.5 1997/03/07 00:31:04 divad
Added capitalized extensions and framemaker files as binaries;
also overriding any path specification for workfiles at PVCS
checkout (most annoying).
Revision 1.4 1997/03/06 21:04:55 divad
Added \n to the end of each comment line to prevent multi-line
comments for a single revision from "merging"
Revision 1.3 1997/03/06 19:50:25 divad
Corrected bug in binary extensions; correcting processing
comment strings with double quotes
Revision 1.2 1997/03/06 17:29:10 divad
Provided list of extensions (rather than using Unix file
command) to determine which files are binary; also printing
version label as they are applied
Revision 1.1 1997/02/26 00:04:29 divad
Perl script to convert pvcs archives to rcs archives
* README: mention pvcs_to_rcs.
* pvcs_to_rcs: New file. This is the file as I got it from David
Martin. Will be checking in the tweaks shortly.
17 May 1997 Jim Kingdon
* listen2.c: Failed attempt at making this do what it was
intended to do. Will need to rethink the approach.
* listen2.mak: The usual involuntary tweaks.
* .cvsignore: Add listen2.ncb listen2.mdp.
Mon May 12 11:59:23 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* listener.c: Removed; see ../ChangeLog for rationale.
10 May 1997 Jim Kingdon
* listen2.c, listen2.mak: New files.
* Makefile.in (DISTFILES): Add them.
* .cvsignore: Add Debug.
Thu Feb 20 22:43:45 1997 David J MacKenzie <djm@va.pubnix.com>
* rcs-to-cvs.sh: Put temporary files in /var/tmp or /usr/tmp
whichever one exists. Just call "vi" not "/usr/ucb/vi".
Mon Feb 17 08:51:37 1997 Greg A. Woods <woods@most.weird.com>
* .cvsignore: added 'cvs2vendor' target from Feb. 12 changes.
* log_accum.pl (build_header): added "Repository:" to the report
header to show the first argument supplied to the script by CVS.
[[this value seems spuriously to be wrong when client is used]]
($hostdomain): correct order of initialization from the Feb. 12
changes.
($modulename): add more commentary about using '-M' to get a
meaningful string here.
Tweak a few other comments from the Feb. 12 changes.
Wed Feb 12 10:27:48 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* cln_hist.pl, commit_prep.pl, cvs2vendor.sh, cvs_acls.pl,
cvscheck.man, cvscheck.sh, cvshelp.man, descend.man, descend.sh,
log_accum.pl, mfpipe.pl, rcs-to-cvs.sh, rcs2log.sh, rcs2sccs.sh,
sccs2rcs.csh: Remove $Id; we decided to get rid of these some
time ago.
Wed Feb 12 00:24:33 1997 Greg A. Woods <woods@most.weird.com>
* cvs2vendor.sh: new script.
* README: noted new cvs2vendor script.
* Makefile.in (DISTFILES): added cvs2vendor.sh.
(CONTRIB_PROGS): added cvs2vendor.
* log_accum.pl (show_wd): new variable, initialized to 0.
- set $show_wd if '-w' option found while parsing @ARGV.
- don't add 'In directory' line to report header unless $show_wd
is set.
(domainname): prepend a leading '.' if none there so that
concatenation with $hostname works (those with a FQDN hostname
*and* a domainname still lose).
(mail_notification): don't set a "From:" header -- the mailer will.
Wed Jan 8 14:48:58 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in, README, log.pl: Remove CVSid; we decided to get rid
of these some time ago.
Thu Jan 2 13:30:56 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in: Remove "675" paragraph; see ../ChangeLog for rationale.
Thu Oct 17 18:28:25 1996 Jim Kingdon <kingdon@harvey.cyclic.com>
* patch-2.1-.new-fix: Removed; it was not in DISTFILES so it never
made it into distributions. It also isn't clear what it has to do
with CVS. It is available from
ftp://ftp.weird.com/pub/patch-2.1-.new-fix
* README: Remove entry for patch-2.1-.new-fix.
Wed Oct 16 10:22:44 1996 Jim Blandy <jimb@totoro.cyclic.com>
* rcs2log.sh: Change date output format to something CVS 1.9
accepts. I think this breaks the Sep 29 change, but I don't have
a copy of CVS 1.5 handy, so I can't find a format that works with
both, and I think it's more important that it work with the
version it's distributed with.
Sat Oct 12 21:18:19 1996 Jim Kingdon <kingdon@harvey.cyclic.com>
* README: Don't mention pcl-cvs; it isn't here any more.
Sun Sep 29 19:45:19 1996 Greg A. Woods <woods@most.weird.com>
* README: add entry for patch-2.1-.new-fix.
* README: re-write the top section a bit.
* patch-2.1-.new-fix: re-generated using fixed "cvs patch" command.
* patch-2.1-.new-fix: new file.
Sun Sep 29 14:25:28 1996 Dave Love <d.love@dl.ac.uk>
* rcs2log.sh (month_data): Make default date format acceptable to
CVS post v1.8 as well as earlier CVSs and RCS.
Message-Id: <199609291546.QAA25531@mserv1.dl.ac.uk>
To: bug-gnu-emacs@prep.ai.mit.edu
Thu Aug 29 11:58:03 1996 Jim Blandy <jimb@totoro.cyclic.com>
* rcs2log: Update FSF address.
* rcs2log: Be more aggressive about finding the author's full
name; try nismatch and ypmatch.
* rcs2log: If the hostname appears not to be fully qualified, see
if domainname provides any useful information.
Fri Aug 16 16:02:36 1996 Norbert Kiesel <nk@col.sw-ley.de>
* Makefile.in (installdirs): support this target
Mon May 6 13:04:57 1996 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in (install): Don't tell user to run cvsinit. It isn't
called cvsinit anymore, and it isn't necessary (repositories are,
and need to be, compatible between cvs versions).
Sun Apr 14 11:30:36 1996 Karl Fogel <kfogel@floss.red-bean.com>
* Removed pcl-cvs/ subdir; see tools/ subdir in the top-level from
now on.
Added elib/ subdir.
* Makefile.in (dist-dir): Removed all references to pcl-cvs/
subdir.
Wed Mar 6 10:20:28 1996 Greg A. Woods <woods@most.weird.com>
* log_accum.pl: ($MAILER): use sendmail directly to allow other
headers to be included
* log_accum.pl (mail_notification): add support to allow setting
of Reply-To and Date header fields in the sent mail; remove $mailto
argument and use the global variable (as with $replyto).
* log_accum.pl: add -R option for mail_notification()'s optional
Reply-To value [default to $login]
Fri Mar 1 01:51:56 1996 Benjamin J. Lee <benjamin@cyclic.com>
* listener.c: added as mentioned in ../README.VMS
Mon Feb 19 13:37:36 1996 Jim Kingdon <kingdon@harvey.cyclic.com>
* README: Don't just tell people "we don't want your script"; tell
them what to do instead.
Thu Feb 1 14:28:16 1996 Karl Fogel <kfogel@floss.red-bean.com>
* Makefile.in (DISTFILES): added `rcs2sccs.sh', as mentioned in
README.
Thu Jan 18 09:39:16 1996 Jim Kingdon <kingdon@harvey.cyclic.com>
* README: Talk about submitting changes to contrib directory.
Tue Nov 14 15:28:25 1995 Greg A. Woods <woods@most.weird.com>
* README: fix some spelling and other typos
* Makefile.in: if I need reminding to run cvsinit....
Tue Nov 14 13:47:40 1995 Greg A. Woods <woods@most.weird.com>
* log_accum.pl:
- Fix 'cvs status' to use global -Qq options
- fix up a couple of comments, incl., my proper address
* log.pl: add a CVSid and fix a couple of comments
Sun Oct 1 02:02:57 1995 Peter Wemm <peter@haywire.dialix.com>
* Makefile.in: supply a suffix rule to deal with .sh "source"
Sat Jul 29 17:29:13 1995 James Kingdon <kingdon@harvey.cyclic.com>
* log.pl: Use global options -Qq, not command options -Qq.
* Makefile.in (install): Look for $(PROGS) and
$(CONTRIB_PROGS) in build dir, not srcdir.
Fri Jul 28 19:48:45 1995 Paul Eggert <eggert@twinsun.com>
* rcs2log.sh: Sync with latest Emacs snapshot.
Thu Jul 27 20:29:30 1995 Jim Blandy <jimb@totoro.cyclic.com>
* rcs2log.sh: import of initial WNT port work
Fri Jul 14 22:38:44 1995 Jim Blandy <jimb@totoro.cyclic.com>
* rcs-to-cvs.sh: Changes from David J. Mackenzie.
Set permissions on new repository files correctly.
Ignore *~ files.
Thu Jul 13 23:04:12 CDT 1995 Jim Meyering (meyering@comco.com)
* Makefile.in (.pl, .csh): *Never* redirect output directly to
the target (usu $@) of a rule. Instead, redirect to a temporary
file, and then move that temporary to the target. I chose to
name temporary files $@-t. Remember to be careful that the length
of the temporary file name not exceed the 14-character limit.
Sun Jul 9 21:16:53 1995 Karl Fogel <kfogel@floss.cyclic.com>
These are actually Greg Woods' changes:
* clmerge.pl, cvscheck.sh, descend.sh, dirfns.shar, rcs-to-cvs.sh,
rcs2log.sh, sccs2rcs.csh: renamed from the corresponding files
sans extensions.
* rcs2sccs.sh: new file.
Sun Jul 9 19:03:00 1995 Greg A. Woods <woods@most.weird.com>
* rcs2log.sh: oops, one more thing that should not have been
there.
- fix interpreter file syntax.
- remove "fix" for separating filenames and comments
* Makefile.in: hmm... thought rcs2log was in RCS-5.7 for some
reason -- it's not, so we'll install it from here....
- fix typo -- that's what you get for re-doing changes by hand!
- updates to support proper transformation and installation of
renamed files (from previous local changes)
* .cvsignore: one more target noted...
* sccs2rcs.csh: set up the interpreter file for updating by
Makefile (from previous local changes)
* log_accum.pl, log.pl, commit_prep.pl:
- set up the interpreter file for updating by Makefile
- various modifications, updates, and enhancements
(from previous local changes)
* rcslock.pl, mfpipe.pl, cvs_acls.pl, cln_hist.pl, clmerge.pl:
- set up the interpreter file for updating by Makefile
(from previous local changes)
- include changes from 1.5 here too, if any
* README:
- remove extensions from filenames to match installed names
(from previous local changes)
* .cvsignore: - added $(CONTRIB_PROGS) (from previous local changes)
Thu Jun 29 10:43:07 1995 James Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in (distclean): Also remove pcl-cvs/Makefile.
Thu Jun 8 15:32:29 1995 Jim Kingdon (kingdon@lioth.cygnus.com)
* intro.doc: Added.
* Makefile.in (DISTFILES): Add intro.doc.
Sat May 27 08:46:00 1995 Jim Meyering (meyering@comco.com)
* Makefile.in (Makefile): Regenerate only Makefile in current
directory when Makefile.in is out of date. Depend on ../config.status.
Mon May 8 13:06:29 1995 Bryan O'Sullivan <bos@serpentine.com>
* README: added an entry for ccvs-rsh.pl.
Sun Apr 30 23:50:32 1995 Bryan O'Sullivan <bos@serpentine.com>
* ccvs-rsh.pl: fixed a typo and added more flexible use of
CVS_PROXY_USER.
Sun Apr 30 14:56:21 1995 Jim Blandy <jimb@totoro.bio.indiana.edu>
* clmerge: Changes from Tom Tromey --- fix bug in date comparison
function.
Sat Apr 29 20:53:08 1995 Bryan O'Sullivan <bos@serpentine.com>
* ccvs-rsh.pl: created. See the file itself for documentation.
* Makefile.in (DISTFILES): added ccvs-rsh.pl to the list of
files to install.
Fri Apr 28 22:32:45 1995 Jim Blandy <jimb@totoro.bio.indiana.edu>
* Makefile.in (DISTFILES): Brought up-to-date with current
directory contents.
(dist-dir): Renamed from dist-dir; use DISTDIR variable, passed
from parent.
Mon Feb 13 13:32:07 1995 Jim Blandy <jimb@totoro.bio.indiana.edu>
* rcs2log: rcs2log was originally in this tree; how did it get
deleted? Anyway, this is the version distributed with Emacs
19.28, hacked to support CVS and Remote CVS.
Mon Jul 26 13:18:23 1993 David J. Mackenzie (djm@thepub.cygnus.com)
* rcs-to-cvs: Rewrite in sh.
Wed Jul 14 21:16:40 1993 David J. Mackenzie (djm@thepub.cygnus.com)
* rcs-to-cvs: Don't source .cshrc or hardcode paths.
Make respository dir if needed. Don't suppress errors
(such as prompts) from co.
Wed Feb 26 18:04:40 1992 K. Richard Pixley (rich@cygnus.com)
* Makefile.in, configure.in: removed traces of namesubdir,
-subdirs, $(subdir), $(unsubdir), some rcs triggers. Forced
copyrights to '92, changed some from Cygnus to FSF.

View File

@ -1,103 +0,0 @@
## Process this file with automake to produce Makefile.in
# Makefile for GNU CVS contributed sources.
# Do not use this makefile directly, but only from `../Makefile'.
#
# Copyright (C) 1986-2005 The Free Software Foundation, Inc.
#
# Portions Copyright (C) 1998-2005 Derek Price, Ximbiot <http://ximbiot.com>,
# and others.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
contribdir = $(pkgdatadir)/contrib
contrib_SCRIPTS = \
check_cvs \
clmerge \
cln_hist \
commit_prep \
cvs2vendor \
cvs_acls \
cvscheck \
debug_check_log \
log \
log_accum \
mfpipe \
pvcs2rcs \
rcs-to-cvs \
rcs2log \
rcslock \
sccs2rcs
contrib_DATA = \
README \
intro.doc
contrib_MANS = \
cvscheck.man
bin_LINKS = \
rcs2log
EXTRA_DIST = \
.cvsignore \
$(contrib_DATA) \
$(contrib_MANS) \
cvs2vendor.sh \
cvscheck.sh \
cvshelp.man \
cvs_acls.html \
debug_check_log.sh \
descend.sh \
descend.man \
dirfns.shar \
rcs-to-cvs.sh \
rcs2log.sh \
rcs2sccs.sh
CLEANFILES = $(bin_SCRIPTS) $(contrib_SCRIPTS)
# we'd rather have a link here than two copies of a script
install-data-local:
: FIXME - this path should be determined dynamically from bindir
: and contribdir
@$(NORMAL_INSTALL)
$(mkinstalldirs) $(DESTDIR)$(bindir)
@list='$(bin_LINKS)'; for p in $$list; do \
echo "test ! -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
echo " && cd $(DESTDIR)$(bindir) && $(LN_S) ../share/$(PACKAGE)/contrib/`echo $$p|sed '$(transform)'` ."; \
(test ! -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'` \
&& cd $(DESTDIR)$(bindir) && $(LN_S) ../share/$(PACKAGE)/contrib/`echo $$p|sed '$(transform)'` .) \
|| (echo "Link creation failed" && if test -f $$p; then \
echo " $(INSTALL_SCRIPT) $$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
$(INSTALL_SCRIPT) $$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
else if test -f $(srcdir)/$$p; then \
echo " $(INSTALL_SCRIPT) $(srcdir)/$$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
$(INSTALL_SCRIPT) $(srcdir)/$$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
else :; fi; fi); \
done
uninstall-local:
@$(NORMAL_UNINSTALL)
list='$(bin_LINKS)'; for p in $$list; do \
rm -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
done
SUFFIXES = .sh
.sh:
rm -f $@
cp $< $@
chmod +x $@
# for backwards compatibility with the old makefiles
realclean: maintainer-clean
.PHONY: realclean

View File

@ -1,497 +0,0 @@
# Makefile.in generated by automake 1.10 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
# Makefile for GNU CVS contributed sources.
# Do not use this makefile directly, but only from `../Makefile'.
#
# Copyright (C) 1986-2005 The Free Software Foundation, Inc.
#
# Portions Copyright (C) 1998-2005 Derek Price, Ximbiot <http://ximbiot.com>,
# and others.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = contrib
DIST_COMMON = README $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
$(srcdir)/check_cvs.in $(srcdir)/clmerge.in \
$(srcdir)/cln_hist.in $(srcdir)/commit_prep.in \
$(srcdir)/cvs_acls.in $(srcdir)/log.in $(srcdir)/log_accum.in \
$(srcdir)/mfpipe.in $(srcdir)/pvcs2rcs.in $(srcdir)/rcs2log.sh \
$(srcdir)/rcslock.in $(srcdir)/sccs2rcs.in ChangeLog
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.in
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES = check_cvs clmerge cln_hist commit_prep cvs_acls \
log log_accum mfpipe pvcs2rcs rcs2log rcslock sccs2rcs
am__installdirs = "$(DESTDIR)$(contribdir)" "$(DESTDIR)$(contribdir)"
contribSCRIPT_INSTALL = $(INSTALL_SCRIPT)
SCRIPTS = $(contrib_SCRIPTS)
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
contribDATA_INSTALL = $(INSTALL_DATA)
DATA = $(contrib_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CSH = @CSH@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EDITOR = @EDITOR@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
KRB4 = @KRB4@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAINT = @MAINT@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
MKTEMP = @MKTEMP@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PERL = @PERL@
PR = @PR@
PS2PDF = @PS2PDF@
RANLIB = @RANLIB@
ROFF = @ROFF@
SENDMAIL = @SENDMAIL@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TEXI2DVI = @TEXI2DVI@
VERSION = @VERSION@
YACC = @YACC@
YFLAGS = @YFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
ac_prefix_program = @ac_prefix_program@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
includeopt = @includeopt@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
with_default_rsh = @with_default_rsh@
with_default_ssh = @with_default_ssh@
contribdir = $(pkgdatadir)/contrib
contrib_SCRIPTS = \
check_cvs \
clmerge \
cln_hist \
commit_prep \
cvs2vendor \
cvs_acls \
cvscheck \
debug_check_log \
log \
log_accum \
mfpipe \
pvcs2rcs \
rcs-to-cvs \
rcs2log \
rcslock \
sccs2rcs
contrib_DATA = \
README \
intro.doc
contrib_MANS = \
cvscheck.man
bin_LINKS = \
rcs2log
EXTRA_DIST = \
.cvsignore \
$(contrib_DATA) \
$(contrib_MANS) \
cvs2vendor.sh \
cvscheck.sh \
cvshelp.man \
cvs_acls.html \
debug_check_log.sh \
descend.sh \
descend.man \
dirfns.shar \
rcs-to-cvs.sh \
rcs2log.sh \
rcs2sccs.sh
CLEANFILES = $(bin_SCRIPTS) $(contrib_SCRIPTS)
SUFFIXES = .sh
all: all-am
.SUFFIXES:
.SUFFIXES: .sh
$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu contrib/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu contrib/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
check_cvs: $(top_builddir)/config.status $(srcdir)/check_cvs.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
clmerge: $(top_builddir)/config.status $(srcdir)/clmerge.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
cln_hist: $(top_builddir)/config.status $(srcdir)/cln_hist.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
commit_prep: $(top_builddir)/config.status $(srcdir)/commit_prep.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
cvs_acls: $(top_builddir)/config.status $(srcdir)/cvs_acls.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
log: $(top_builddir)/config.status $(srcdir)/log.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
log_accum: $(top_builddir)/config.status $(srcdir)/log_accum.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
mfpipe: $(top_builddir)/config.status $(srcdir)/mfpipe.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
pvcs2rcs: $(top_builddir)/config.status $(srcdir)/pvcs2rcs.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
rcs2log: $(top_builddir)/config.status $(srcdir)/rcs2log.sh
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
rcslock: $(top_builddir)/config.status $(srcdir)/rcslock.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
sccs2rcs: $(top_builddir)/config.status $(srcdir)/sccs2rcs.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
install-contribSCRIPTS: $(contrib_SCRIPTS)
@$(NORMAL_INSTALL)
test -z "$(contribdir)" || $(MKDIR_P) "$(DESTDIR)$(contribdir)"
@list='$(contrib_SCRIPTS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
if test -f $$d$$p; then \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " $(contribSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(contribdir)/$$f'"; \
$(contribSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(contribdir)/$$f"; \
else :; fi; \
done
uninstall-contribSCRIPTS:
@$(NORMAL_UNINSTALL)
@list='$(contrib_SCRIPTS)'; for p in $$list; do \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " rm -f '$(DESTDIR)$(contribdir)/$$f'"; \
rm -f "$(DESTDIR)$(contribdir)/$$f"; \
done
install-contribDATA: $(contrib_DATA)
@$(NORMAL_INSTALL)
test -z "$(contribdir)" || $(MKDIR_P) "$(DESTDIR)$(contribdir)"
@list='$(contrib_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(contribDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(contribdir)/$$f'"; \
$(contribDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(contribdir)/$$f"; \
done
uninstall-contribDATA:
@$(NORMAL_UNINSTALL)
@list='$(contrib_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(contribdir)/$$f'"; \
rm -f "$(DESTDIR)$(contribdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(SCRIPTS) $(DATA)
installdirs:
for dir in "$(DESTDIR)$(contribdir)" "$(DESTDIR)$(contribdir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-contribDATA install-contribSCRIPTS \
install-data-local
install-dvi: install-dvi-am
install-exec-am:
install-html: install-html-am
install-info: install-info-am
install-man:
install-pdf: install-pdf-am
install-ps: install-ps-am
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-contribDATA uninstall-contribSCRIPTS \
uninstall-local
.MAKE: install-am install-strip
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-contribDATA install-contribSCRIPTS \
install-data install-data-am install-data-local install-dvi \
install-dvi-am install-exec install-exec-am install-html \
install-html-am install-info install-info-am install-man \
install-pdf install-pdf-am install-ps install-ps-am \
install-strip installcheck installcheck-am installdirs \
maintainer-clean maintainer-clean-generic mostlyclean \
mostlyclean-generic pdf pdf-am ps ps-am uninstall uninstall-am \
uninstall-contribDATA uninstall-contribSCRIPTS uninstall-local
# we'd rather have a link here than two copies of a script
install-data-local:
: FIXME - this path should be determined dynamically from bindir
: and contribdir
@$(NORMAL_INSTALL)
$(mkinstalldirs) $(DESTDIR)$(bindir)
@list='$(bin_LINKS)'; for p in $$list; do \
echo "test ! -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
echo " && cd $(DESTDIR)$(bindir) && $(LN_S) ../share/$(PACKAGE)/contrib/`echo $$p|sed '$(transform)'` ."; \
(test ! -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'` \
&& cd $(DESTDIR)$(bindir) && $(LN_S) ../share/$(PACKAGE)/contrib/`echo $$p|sed '$(transform)'` .) \
|| (echo "Link creation failed" && if test -f $$p; then \
echo " $(INSTALL_SCRIPT) $$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
$(INSTALL_SCRIPT) $$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
else if test -f $(srcdir)/$$p; then \
echo " $(INSTALL_SCRIPT) $(srcdir)/$$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`"; \
$(INSTALL_SCRIPT) $(srcdir)/$$p $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
else :; fi; fi); \
done
uninstall-local:
@$(NORMAL_UNINSTALL)
list='$(bin_LINKS)'; for p in $$list; do \
rm -f $(DESTDIR)$(bindir)/`echo $$p|sed '$(transform)'`; \
done
.sh:
rm -f $@
cp $< $@
chmod +x $@
# for backwards compatibility with the old makefiles
realclean: maintainer-clean
.PHONY: realclean
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,132 +0,0 @@
This "contrib" directory is a place holder for code/scripts sent to me
by contributors around the world. This README file will be kept
up-to-date from release to release. BUT, we must point out that these
contributions are really, REALLY UNSUPPORTED. In fact, we probably
don't even know what some of them really do. We certainly do not
guarantee to have tried them, or ported them to work with this CVS
distribution. If you have questions, your best bet is to contact the
original author, but you should not necessarily expect a reply, since
the author may not be available at the address given.
USE AT YOUR OWN RISK -- and all that stuff.
"Unsupported" also means that no one has volunteered to accept and check
in changes to this directory. So submissions for new scripts to add
here are unlikely to be accepted. Suggested changes to the existing
scripts here conceivably might, but that isn't clear either, unless of
course they come from the original author of the script.
If you have some software that works with CVS that you wish to offer, it
is suggested that you make it available by FTP or HTTP and then announce
it on the info-cvs mailing list.
There is a web page of software related to CVS at the following URL which
would presumably be willing to list your software.
http://www.loria.fr/~molli/cvs-index.html
An attempt at a table of Contents for this directory:
README This file.
check_cvs A perl script to check an entire repository for
corruption.
Contributed by Donald Sharp <sharpd@cisco.com>.
clmerge A perl script to handle merge conflicts in GNU
style ChangeLog files.
Contributed by Tom Tromey <tromey@busco.lanl.gov>.
cln_hist A perl script to compress your
$CVSROOT/CVSROOT/history file, as it can grow quite
large after extended use.
Contributed by David G. Grubbs <dgg@ksr.com>
commit_prep A perl script, to be combined with log_accum.pl, to
log_accum provide for a way to combine the individual log
messages of a multi-directory "commit" into a
single log message, and mail the result somewhere.
Can also do other checks for $Id and that you are
committing the correct revision of the file.
Read the comments carefully.
Contributed by David Hampton <hampton@cisco.com>.
cvs2vendor A shell script to move changes from a repository
that was started without a vendor branch to one
that has a vendor branch.
Contributed by Greg A. Woods <woods@planix.com>.
cvs_acls A perl script that implements Access Control Lists
by using the "commitinfo" hook provided with the
"cvs commit" command.
Contributed by David G. Grubbs <dgg@ksr.com>.
cvscheck Identifies files added, changed, or removed in a
cvscheck.man checked out CVS tree; also notices unknown files.
Contributed by Lowell Skoog <fluke!lowell@uunet.uu.net>
cvshelp.man An introductory manual page written by Lowell Skoog
<fluke!lowell@uunet.uu.net>. It is most likely
out-of-date relative to CVS 1.3, but still may be
useful.
debug_check_log A shell script to help analyze sanity check failures.
Contributed by Derek R. Price <derek@ximbiot.com>.
descend A shell script that can be used to recursively
descend.man descend through a directory. In CVS 1.2, this was
very useful, since many of the commands were not
recursive. In CVS 1.3 (and later), however, most of
the commands are recursive. However, this may still
come in handy.
Contributed by Lowell Skoog <fluke!lowell@uunet.uu.net>
dirfns A shar file which contains some code that might
help your system support opendir/readdir/closedir,
if it does not already.
Copied from the C-News distribution.
intro.doc A user's view of what you need to know to get
started with CVS.
Contributed by <Steven.Pemberton@cwi.nl>.
log A perl script suitable for including in your
$CVSROOT/CVSROOT/loginfo file for logging commit
changes. Includes the RCS revision of the change
as part of the log.
Contributed by Kevin Samborn <samborn@sunrise.com>.
log_accum See commit_prep.
mfpipe Another perl script for logging. Allows you to
pipe the log message to a file and/or send mail
to some alias.
Contributed by John Clyne <clyne@niwot.scd.ucar.edu>.
pvcs2rcs A perl script to convert a PVCS tree to an RCS tree.
rcs-to-cvs Script to import sources that may have been under
RCS control already.
Contributed by Per Cederqvist <ceder@lysator.liu.se>.
rcs2log A shell script to create a ChangeLog-format file
given only a set of RCS files.
Contributed by Paul Eggert <eggert@twinsun.com>.
rcs2sccs A shell script to convert simple RCS files into
SCCS files, originally gleaned off the network
somewhere (originally by "kenc") and modified by
Jerry Jelinek <jerry@rmtc.Central.Sun.COM> and
Brian Berliner <berliner@sun.com> to increase
robustness and add support for one-level of branches.
rcslock A perl script that can be added to your commitinfo
file that tries to determine if your RCS file is
currently locked by someone else, as might be the
case for a binary file.
Contributed by John Rouillard <rouilj@cs.umb.edu>.
sccs2rcs A C-shell script that can convert (some) SCCS files
into RCS files, retaining the info contained in the
SCCS file (like dates, author, and log message).
Contributed by Ken Cox <kenstir@viewlogic.com>.

View File

@ -1,822 +0,0 @@
#! @PERL@ -w
########################################################################
# Copyright (c) 2000, 2001 by Donald Sharp <sharpd@cisco.com>
# All Rights Reserved
#
# Permission is granted to copy and/or distribute this file, with or
# without modifications, provided this notice is preserved.
#
########################################################################
=head1 check_cvs.pl
Script to check the integrity of the Repository
=head1 SYNOPSIS
check_cvs.pl
=head1 DESCRIPTION
This script will search through a repository and determine if
any of the files in it are corrupted.
Please do not run this script inside of the repository itself,
as it will cause it to fail.
Also it currently can only be run over the entire repository,
so only point your CVSROOT at the actual CVSROOT.
=head1 OPTIONS
There are no options.
=head1 EXAMPLES
setenv CVSROOT /release/111/cvs
# To see more verbose output
setenv CVSDEBUGEDIT 1
check_cvs.pl
=head1 SEE ALSO
None
=cut
######################################################################
# MODULES #
######################################################################
use strict;
use File::Find;
use File::Basename;
use File::Path;
use Cwd;
######################################################################
# GLOBALS #
######################################################################
my @list_of_broken_files;
my @extra_files;
my $verbose = 0;
my $total_revisions;
my $total_interesting_revisions;
my $total_files;
my @ignore_files;
######################################################################
# SUBROUTINES #
######################################################################
######################################################################
#
# NAME :
# main
#
# PURPOSE :
# To search the repository for broken files
#
# PARAMETERS :
# NONE
#
# GLOBALS :
# $ENV{ CVSROOT } - The CVS repository to search through
# $ENV{ CVSDEBUGEDIT } - Turn on Debugging.
# @list_of_broken_files - The list of files that need to
# be fixed.
# $verbose - is verbose mode on?
# $total_revisions - The number of revisions considered
# $total_interesting_revisions - The number of revisions used
# $total_files - The total number of files looked at.
#
# RETURNS :
# A list of broken files
#
# COMMENTS :
# Do not run this script inside the repository. Choose
# a nice safe spot (like /tmp) outside of the repository.
#
######################################################################
my $directory_to_look_at;
select (STDOUT); $| = 1; # make unbuffered
$total_revisions = 0;
$total_interesting_revisions = 0;
$total_files = 0;
if( !exists( $ENV{ CVSROOT } ) )
{
die( "The script should be run with the CVSROOT environment variable set" );
}
if( exists( $ENV{ CVSDEBUGEDIT } ) )
{
$verbose = 1;
print( "Verbose Mode Turned On\n" );
}
$directory_to_look_at = $ENV{ CVSROOT };
my $sym_count = 0;
while( -l $directory_to_look_at )
{
$directory_to_look_at = readlink( $directory_to_look_at );
$sym_count += 1;
if( $sym_count > 5 )
{
die( "Encountered too many symlinks for $ENV{ CVSROOT }\n" );
}
}
print( "Processing: $directory_to_look_at\n" ) if( $verbose );
@ignore_files = &get_ignore_files_from_cvsroot( $directory_to_look_at );
find( \&process_file, $directory_to_look_at );
my $num_files = @list_of_broken_files;
print( "List of corrupted files\n" ) if( $num_files > 0 );
foreach my $broken ( @list_of_broken_files )
{
print( "**** File: $broken\n" );
}
$num_files = @extra_files;
print( "List of Files That Don't belong in Repository:\n" ) if( $num_files > 0 );
foreach my $extra ( @extra_files )
{
print( "**** File: $extra\n" );
}
print( "Total Files: $total_files\n" );
print( "Total Revisions: $total_revisions Interesting Revisions: $total_interesting_revisions\n" );
######################################################################
#
# NAME :
# process_file
#
# PURPOSE :
# This function is called by the find function; its purpose
# is to decide whether it is important to look at a file or not.
# We only care about files that have the ,v at the end.
#
# PARAMETERS :
# NONE
#
# GLOBALS :
# $ENV{ CVSROOT } - The CVS repository to search through
#
# RETURNS :
# NONE
#
# COMMENTS :
# NONE
#
######################################################################
sub process_file
{
my $path = $File::Find::name;
$total_files += 1;
$path =~ s/^$directory_to_look_at\///;
print( "\tProcessing File: $path\n" ) if( $verbose );
if( $path =~ /,v$/ )
{
$path =~ s/,v$//;
look_at_cvs_file( $path );
}
elsif( ! -d $File::Find::name )
{
my $save = 0;
foreach my $ignore ( @ignore_files )
{
if( $path =~ /$ignore/ )
{
$save = 1;
last;
}
}
if( !$save )
{
push( @extra_files, $path );
}
}
}
######################################################################
#
# NAME :
# look_at_cvs_file
#
# PURPOSE :
# To decide if a file is broken or not. The algorithm is:
# a) Get the revision history for the file.
# - If that fails the file is broken, save the fact
# and continue processing other files.
# - If that succeeds we have a list of revisions.
# b) For Each revision try to retrieve that version
# - If that fails the file is broken, save the fact
# and continue processing other files.
# c) Continue on
#
# PARAMETERS :
# $file - The file to look at.
#
# GLOBALS :
# NONE
#
# RETURNS :
# NONE
#
# COMMENTS :
# We have to handle Attic files in a special manner.
# Basically remove the Attic from the string if it
# exists at the end of the $path variable.
#
######################################################################
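# Illustrative example (added comment, not part of the original
# script): a file that has been removed from the trunk lives under an
# Attic subdirectory, so a $file argument such as
# "project/Attic/foo.c" is rewritten below to "project/foo.c";
# stripping the "Attic/" component gives the name that "cvs co"
# expects when the individual revisions are checked.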
sub look_at_cvs_file
{
my( $file ) = @_;
my( $name, $path, $suffix ) = fileparse( $file );
if( $path =~ s/Attic\/$// )
{
$file = $path . $name;
}
my $revisions = get_history( $name );
if( !defined( $revisions ) )
{
print( "\t$file is corrupted, this was determined via a cvs log command\n" ) if( $verbose );
push( @list_of_broken_files, $file );
return();
}
my @int_revisions = find_interesting_revisions( @$revisions );
foreach my $revision ( @int_revisions )
{
print( "\t\tLooking at Revision: $revision\n" ) if( $verbose );
if( !check_revision( $file, $revision ) )
{
print( "\t$file is corrupted in revision: $revision\n" ) if( $verbose );
push( @list_of_broken_files, $file );
return();
}
}
}
######################################################################
#
# NAME :
# get_history
#
# PURPOSE :
# To retrieve an array of revision numbers.
#
# PARAMETERS :
# $file - The file to retrieve the revision numbers for
#
# GLOBALS :
# NONE
#
# RETURNS :
# On Success - Reference to the list of revision numbers
# On Failure - undef.
#
# COMMENTS :
# The $_ is saved off because the File::Find functionality
# expects $_ not to have been changed.
# The -N option for the rlog command tells it not to print
# symbolic (tag and branch) names.
#
######################################################################
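# Illustrative rlog output fragment (an assumption about typical RCS
# rlog formatting, added here only to motivate the parsing below):
#
#   ----------------------------
#   revision 1.2
#   date: 1995/04/30 23:50:32;  author: bos;  state: Exp;  lines: +2 -1
#   fixed a typo
#
# A "revision ..." line is only honoured after a dashed separator
# line has been seen, so log messages that happen to begin with the
# word "revision" are not mistaken for real revision numbers.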
sub get_history
{
my( $file ) = @_;
$file =~ s/(["\$`\\])/\\$1/g;
my @revisions;
my $revision;
my $ignore = 1;
my $save_ = $_;
open( FILE, "rlog -N \"$file\" 2>&1 |" ) or die( "unable to run rlog, help" );
while( <FILE> )
{
#rlog outputs a "----" line before the actual revision
#without this we'll pick up peoples comments if they
#happen to start with revision
if( /^----------------------------$/ )
{
$ignore = 0;
next;
}
if( ( !$ignore ) && ( ( $revision ) = m/^revision (\S+)/ ) )
{
push( @revisions, $revision );
$ignore = 1;
}
}
$_ = $save_;
if( !close( FILE ) )
{
return( undef );
}
return( \@revisions );
}
######################################################################
#
# NAME :
# check_revision
#
# PURPOSE :
# Given a file and a revision number ensure that we can
# check out that file
#
# PARAMETERS :
# $file - The file to look at.
# $revision - The revision to look at.
#
# GLOBALS :
# NONE
#
# RETURNS :
# If we can get the File - 1
# If we can not get the File - 0
#
# COMMENTS :
# cvs command line options are as followed:
# -n - Do not run any checkout program as specified by the -o
# option in the modules file
# -p - Put all output to standard out.
# -r - The revision of the file that we would like to look at.
# Please note that cvs will return 0 for being able to successfully
# read the file and 1 for failure to read the file.
#
######################################################################
sub check_revision
{
my( $file, $revision ) = @_;
$file =~ s/(["\$`\\])/\\$1/g;
my $cwd = getcwd();
chdir( "/tmp" );
my $ret_code = 0xffff & system( "cvs co -n -p -r $revision \"$file\" > /dev/null 2>&1" );
chdir( $cwd );
return( 1 ) if ( $ret_code == 0 );
return( 0 );
}
######################################################################
#
# NAME :
# find_interesting_revisions
#
# PURPOSE :
# CVS stores information in a logical manner. We only really
# need to look at some interesting revisions. These are:
# The first version
# And the last version on every branch.
# This is because cvs stores mainline changes as descending deltas.
# E.g., suppose the last version on the mainline is 1.6;
# version 1.6 of the file is stored in toto. version 1.5
# is stored as a diff between 1.5 and 1.6. 1.4 is stored
# as a diff between 1.5 and 1.4.
# branches are stored a little differently. They are
# stored in ascending order. Suppose there is a branch
# on 1.4 of the file. The first branches revision number
# would be 1.4.1.1. This is stored as a diff between
# version 1.4 and 1.4.1.1. The 1.4.1.2 version is stored
# as a diff between 1.4.1.1 and 1.4.1.2. Therefore
# we are only interested in the earliest revision number
# and the highest revision number on a branch.
#
# PARAMETERS :
# @revisions - The list of revisions to find interesting ones
#
# GLOBALS :
# NONE
#
# RETURNS :
# @new_revisions - The list of revisions that we find interesting
#
# COMMENTS :
#
######################################################################
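# Worked example (added comment, not part of the original script):
# given the revisions 1.1 1.2 1.3 1.4 1.4.1.1 1.4.1.2, the interesting
# ones are expected to be 1.1 (extracting the earliest trunk revision
# exercises the whole chain of reverse deltas back from the head) and
# 1.4.1.2 (extracting the branch tip exercises the whole chain of
# forward deltas out from 1.4):
#
#   my @int = find_interesting_revisions(
#       qw(1.1 1.2 1.3 1.4 1.4.1.1 1.4.1.2) );
#   # @int should contain ("1.1", "1.4.1.2"), in no particular order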
sub find_interesting_revisions
{
my( @revisions ) = @_;
my @new_revisions;
my %branch_revision;
my $branch_number;
my $branch_rev;
my $key;
my $value;
START_OVER:
foreach my $revision( @revisions )
{
my $start_over = 0;
( $branch_number, $branch_rev ) = branch_split( $revision );
#if the number of elements in the branch is 1
#and the new branch is less than the old branch
if( elements_in_branch( $branch_number ) == 1 )
{
( $start_over,
%branch_revision ) = find_int_mainline_revision( $branch_number,
$branch_rev,
%branch_revision );
next START_OVER if( $start_over );
}
%branch_revision = find_int_branch_revision( $branch_number,
$branch_rev,
%branch_revision );
}
%branch_revision = remove_duplicate_branches( %branch_revision );
while( ( $key, $value ) = each ( %branch_revision ) )
{
push( @new_revisions, $key . "." . $value );
}
my $nrc;
my $rc;
$rc = @revisions;
$nrc = @new_revisions;
$total_revisions += $rc;
$total_interesting_revisions += $nrc;
print( "\t\tTotal Revisions: $rc Interesting Revisions: $nrc\n" ) if( $verbose );
return( @new_revisions );
}
########################################################################
#
# NAME :
# remove_duplicate_branches
#
# PURPOSE :
# To remove, from the list of branches we are interested in,
# duplication that would cause cvs to check a revision multiple
# times. For instance, revision 1.1.1.1 should be preferred for
# checking over revision 1.1, since v1.1.1.1 can only be retrieved
# by going through v1.1. Therefore we should remove v1.1 from the
# list of branches that are interesting.
#
# PARAMETERS :
# %branch_revisions - The hash of the interesting revisions
#
# GLOBALS :
# NONE
#
# RETURNS :
# %branch_revisions - The hash of the modified interesting revisions
#
# COMMENTS :
# NONE
#
########################################################################
sub remove_duplicate_branches
{
my( %branch_revisions ) = @_;
my $key;
my $value;
my $branch_comp;
my $branch;
RESTART:
{
my @keys = keys( %branch_revisions );
while( ( $key, $value ) = each ( %branch_revisions ) )
{
$branch_comp = $key . "." . $value;
foreach $branch ( @keys )
{
if( $branch eq $key )
{
next;
}
if( elements_in_branch( $branch_comp ) ==
elements_in_branch( $branch ) - 1 )
{
if( $branch =~ /^$branch_comp/ )
{
delete( $branch_revisions{ $key } );
goto RESTART;
}
}
}
}
}
return( %branch_revisions );
}
######################################################################
#
# NAME :
# find_int_branch_revision
#
# PURPOSE :
# To find an interesting branch revision.
# Algorithm:
# If the $branch_number already exists in the interesting branch
# hash and the new $branch_rev is greater than the currently saved
# one, replace it with the new $branch_rev (the code below keeps
# the highest revision seen on each branch);
# else, if the $branch_number doesn't exist in the interesting
# branch hash, just store the $branch_number and $branch_rev.
#
# PARAMETERS :
# $branch_number - The branch that we are looking at
# $branch_rev - The particular revision we are looking
# at on the $branch_number.
# %branch_revision - The hash storing the interesting branches
# and the revisions on them.
#
# GLOBALS :
# NONE
#
# RETURNS :
# %branch_revision - The modified hash that stores interesting
# branches.
#
# COMMENTS :
# NONE
#
######################################################################
sub find_int_branch_revision
{
my( $branch_number, $branch_rev, %branch_revision ) = @_;
if( exists( $branch_revision{ $branch_number } ) )
{
if( $branch_rev > $branch_revision{ $branch_number } )
{
$branch_revision{ $branch_number } = $branch_rev;
}
}
else
{
$branch_revision{ $branch_number } = $branch_rev;
}
return( %branch_revision );
}
######################################################################
#
# NAME :
# find_int_mainline_revision
#
# PURPOSE :
# To find an interesting mainline revision.
# Algorithm:
# If the $branch_number is less than a stored branch number
# with one element in it, then delete the old branch number
# and continue with the next stored one.
# If the $branch_number is greater than such a branch number,
# then return and tell the calling function to skip this
# element, since it is not important.
# If the $branch_number is the same as a stored branch number
# with one element in it, then check whether the $branch_rev is
# less than the stored branch rev; if it is, replace it with the
# new $branch_rev, else ignore the revision.
#
# PARAMETERS :
# $branch_number - The branch that we are looking at
# $branch_rev - The particular revision we are looking
# at on the $branch_number.
# %branch_revision - The hash storing the interesting branches
# and the revisions on them.
#
# GLOBALS :
# NONE
#
# RETURNS :
# ( $skip, %branch_revision ) -
# $skip - 1 if we need to ignore this particular $branch_number
# $branch_rev combo. Else 0.
# %branch_revision - The modified hash that stores interesting
# branches.
#
# COMMENTS :
# NONE
#
######################################################################
sub find_int_mainline_revision
{
my( $branch_number, $branch_rev, %branch_revision ) = @_;
foreach my $key ( keys %branch_revision )
{
if( elements_in_branch( $key ) == 1 )
{
if( $branch_number < $key )
{
delete( $branch_revision{ $key } );
next;
}
if( $branch_number > $key )
{
return( 1, %branch_revision );
}
if( ( exists( $branch_revision{ $branch_number } ) ) &&
( $branch_rev < $branch_revision{ $branch_number } ) )
{
$branch_revision{ $branch_number } = $branch_rev;
return( 1, %branch_revision );
}
}
}
return( 0, %branch_revision );
}
######################################################################
#
# NAME :
# elements_in_branch
#
# PURPOSE :
# Determine the number of elements in a revision number.
# Elements are defined by numbers separated by ".".
# the revision 1.2.3.4 would have 4 elements
# the revision 1.2.4.5.6.7 would have 6 elements
#
# PARAMETERS :
# $branch - The revision to look at.
#
# GLOBALS :
# NONE
#
# RETURNS :
# $count - The number of elements
#
# COMMENTS :
# NONE
#
######################################################################
sub elements_in_branch
{
my( $branch ) = @_;
my @split_rev;
@split_rev = split /\./, $branch;
my $count = @split_rev;
return( $count );
}
######################################################################
#
# NAME :
# branch_split
#
# PURPOSE :
# To split a revision number up into the branch part and
# the revision part. For instance:
# 1.1.1.1 - is split 1.1.1 and 1
# 2.1 - is split 2 and 1
# 1.3.4.5.7.8 - is split 1.3.4.5.7 and 8
#
# PARAMETERS :
# $revision - The revision to look at.
#
# GLOBALS :
# NONE
#
# RETURNS :
# ( $branch, $revision ) -
# $branch - The branch part of the revision number
# $revision - The revision part of the revision number
#
# COMMENTS :
# NONE
#
######################################################################
sub branch_split
{
my( $revision ) = @_;
my $branch;
my $version;
my @split_rev;
my $count;
@split_rev = split /\./, $revision;
my $numbers = @split_rev;
@split_rev = reverse( @split_rev );
$branch = pop( @split_rev );
for( $count = 0; $count < $numbers - 2 ; $count++ )
{
$branch .= "." . pop( @split_rev );
}
return( $branch, pop( @split_rev ) );
}
######################################################################
#
# NAME :
# get_ignore_files_from_cvsroot
#
# PURPOSE :
# Retrieve the list of files from the CVSROOT/ directory
# that should be ignored.
# These are the regular files (e.g., commitinfo, loginfo)
# and those specified in the checkoutlist file.
#
# PARAMETERS :
# The CVSROOT
#
# GLOBALS :
# NONE
#
# RETURNS :
# @ignore - the list of files to ignore
#
# COMMENTS :
# NONE
#
######################################################################
sub get_ignore_files_from_cvsroot {
my( $cvsroot ) = @_;
my @ignore = ( 'CVS\/fileattr$',
'^CVSROOT\/loginfo',
'^CVSROOT\/.#loginfo',
'^CVSROOT\/rcsinfo',
'^CVSROOT\/.#rcsinfo',
'^CVSROOT\/editinfo',
'^CVSROOT\/.#editinfo',
'^CVSROOT\/verifymsg',
'^CVSROOT\/.#verifymsg',
'^CVSROOT\/commitinfo',
'^CVSROOT\/.#commitinfo',
'^CVSROOT\/taginfo',
'^CVSROOT\/.#taginfo',
'^CVSROOT\/cvsignore',
'^CVSROOT\/.#cvsignore',
'^CVSROOT\/checkoutlist',
'^CVSROOT\/.#checkoutlist',
'^CVSROOT\/cvswrappers',
'^CVSROOT\/.#cvswrappers',
'^CVSROOT\/notify',
'^CVSROOT\/.#notify',
'^CVSROOT\/modules',
'^CVSROOT\/.#modules',
'^CVSROOT\/readers',
'^CVSROOT\/.#readers',
'^CVSROOT\/writers',
'^CVSROOT\/.#writers',
'^CVSROOT\/passwd',
'^CVSROOT\/config',
'^CVSROOT\/.#config',
'^CVSROOT\/val-tags',
'^CVSROOT\/.#val-tags',
'^CVSROOT\/history' );
my $checkoutlist_file = "$cvsroot\/CVSROOT\/checkoutlist";
open( CHECKOUTLIST, "<$cvsroot\/CVSROOT\/checkoutlist" )
or die( "Unable to read checkoutlist file: $!\n" );
my @list = <CHECKOUTLIST>;
chomp( @list );
close( CHECKOUTLIST )
or die( "Unable to close checkoutlist file: $!\n" );
foreach my $line( @list )
{
next if( $line =~ /^#/ || $line =~ /^$/ );
if( $line =~ /^\s*(\S*)\s*/ ) { $line = $1 };
push( @ignore, "^CVSROOT\/$line", "^CVSROOT\/\.#$line" );
}
return @ignore;
}

View File

@ -1,164 +0,0 @@
#! @PERL@
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# Merge conflicted ChangeLogs
# tromey Mon Aug 15 1994
# Usage is:
#
# cl-merge [-i] file ...
#
# With -i, it works in place (backups put in a ~ file). Otherwise the
# merged ChangeLog is printed to stdout.
# Please report any bugs to me. I wrote this yesterday, so there are no
# guarantees about its performance. I recommend checking its output
# carefully. If you do send a bug report, please include the failing
# ChangeLog, so I can include it in my test suite.
#
# Tom
# ---
# tromey@busco.lanl.gov Member, League for Programming Freedom
# Sadism and farce are always inexplicably linked.
# -- Alexander Theroux
# Month->number mapping. Used for sorting.
%months = ('Jan', 0,
'Feb', 1,
'Mar', 2,
'Apr', 3,
'May', 4,
'Jun', 5,
'Jul', 6,
'Aug', 7,
'Sep', 8,
'Oct', 9,
'Nov', 10,
'Dec', 11);
# If '-i' is given, do it in-place.
if ($ARGV[0] eq '-i') {
shift (@ARGV);
$^I = '~';
}
$lastkey = '';
$lastval = '';
$conf = 0;
%conflist = ();
$tjd = 0;
# Simple state machine. The states:
#
# 0 Not in conflict. Just copy input to output.
# 1 Beginning an entry. Next non-blank line is key.
# 2 In entry. Entry beginner transitions to state 1.
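# Illustrative input (hypothetical entries, added comment): a
# conflicted region produced by a CVS update might look like
#
#   <<<<<<< ChangeLog
#   Tue Aug 16 10:00:00 1994  Alice Author  <alice@example.com>
#           * foo.c: one change.
#   =======
#   Mon Aug 15 09:00:00 1994  Bob Builder  <bob@example.com>
#           * bar.c: another change.
#   >>>>>>> 1.5
#
# The loop collects each entry keyed by its date line and, when the
# ">>>>" marker is reached, prints the saved entries newest-first
# (reverse sort by clcmp), so both sides of the conflict survive.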
while (<>) {
if (/^<<<</ || /^====/) {
# Start of a conflict.
# Copy last key into array.
if ($lastkey ne '') {
$conflist{$lastkey} = $lastval;
$lastkey = '';
$lastval = '';
}
$conf = 1;
} elsif (/^>>>>/) {
# End of conflict. Output.
# Copy last key into array.
if ($lastkey ne '') {
$conflist{$lastkey} = $lastval;
$lastkey = '';
$lastval = '';
}
foreach (reverse sort clcmp keys %conflist) {
print STDERR "doing $_" if $tjd;
print $_;
print $conflist{$_};
}
$lastkey = '';
$lastval = '';
$conf = 0;
%conflist = ();
} elsif ($conf == 1) {
# Beginning an entry. Skip empty lines. Error if not a real
# beginner.
if (/^$/) {
# Empty line; just skip at this point.
} elsif (/^[MTWFS]/) {
# Looks like the name of a day; assume opener and move to
# "in entry" state.
$lastkey = $_;
$conf = 2;
print STDERR "found $_" if $tjd;
} else {
die ("conflict crosses entry boundaries: $_");
}
} elsif ($conf == 2) {
# In entry. Copy into variable until we see beginner line.
if (/^[MTWFS]/) {
# Entry beginner line.
# Copy last key into array.
if ($lastkey ne '') {
$conflist{$lastkey} = $lastval;
$lastkey = '';
$lastval = '';
}
$lastkey = $_;
print STDERR "found $_" if $tjd;
$lastval = '';
} else {
$lastval .= $_;
}
} else {
# Just copy.
print;
}
}
# Compare ChangeLog time strings like <=>.
#
# 0 1 2 3
# Thu Aug 11 13:22:42 1994 Tom Tromey (tromey@creche.colorado.edu)
# 0123456789012345678901234567890
#
sub clcmp {
# First check year.
$r = substr ($a, 20, 4) <=> substr ($b, 20, 4);
# Now check month.
$r = $months{substr ($a, 4, 3)} <=> $months{substr ($b, 4, 3)} if !$r;
# Now check day.
$r = substr ($a, 8, 2) <=> substr ($b, 8, 2) if !$r;
# Now check time (3 parts).
$r = substr ($a, 11, 2) <=> substr ($b, 11, 2) if !$r;
$r = substr ($a, 14, 2) <=> substr ($b, 14, 2) if !$r;
$r = substr ($a, 17, 2) <=> substr ($b, 17, 2) if !$r;
$r;
}

View File

@ -1,103 +0,0 @@
#! @PERL@
# -*-Perl-*-
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# Contributed by David G. Grubbs <dgg@ksr.com>
#
# Clean up the history file. 10 Record types: MAR OFT WUCG
#
# WUCG records are thrown out.
# MAR records are retained.
# T records: retain only last tag with same combined tag/module.
#
# Two passes: Walk through the first time and remember the
# 1. Last Tag record with same "tag" and "module" names.
# 2. Last O record with unique user/module/directory, unless followed
# by a matching F record.
#
$r = $ENV{"CVSROOT"};
$c = "$r/CVSROOT";
$h = "$c/history";
eval "print STDERR \$die='Unknown parameter $1\n' if !defined \$$1; \$$1=\$';"
while ($ARGV[0] =~ /^(\w+)=/ && shift(@ARGV));
exit 255 if $die; # process any variable=value switches
%tags = ();
%outs = ();
#
# Move history file to safe place and re-initialize a new one.
#
rename($h, "$h.bak");
open(XX, ">$h");
close(XX);
#
# Pass1 -- remember last tag and checkout.
#
open(HIST, "$h.bak");
while (<HIST>) {
next if /^[MARWUCG]/;
# Save whole line keyed by tag|module
if (/^T/) {
@tmp = split(/\|/, $_);
$tags{$tmp[4] . '|' . $tmp[5]} = $_;
}
# Save whole line
if (/^[OF]/) {
@tmp = split(/\|/, $_);
$outs{$tmp[1] . '|' . $tmp[2] . '|' . $tmp[5]} = $_;
}
}
#
# Pass2 -- print out what we want to save.
#
open(SAVE, ">$h.work");
open(HIST, "$h.bak");
while (<HIST>) {
next if /^[FWUCG]/;
# If whole line matches saved (i.e. "last") one, print it.
if (/^T/) {
@tmp = split(/\|/, $_);
next if $tags{$tmp[4] . '|' . $tmp[5]} ne $_;
}
# Save whole line
if (/^O/) {
@tmp = split(/\|/, $_);
next if $outs{$tmp[1] . '|' . $tmp[2] . '|' . $tmp[5]} ne $_;
}
print SAVE $_;
}
#
# Put back the saved stuff
#
system "cat $h >> $h.work";
if (-s $h) {
rename ($h, "$h.interim");
print "history.interim has non-zero size.\n";
} else {
unlink($h);
}
rename ("$h.work", $h);
exit(0);

View File

@ -1,86 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
# Perl filter to handle pre-commit checking of files. This program
# records the last directory where commits will be taking place for
# use by the log_accum.pl script.
#
# IMPORTANT: this script interacts with log_accum, they have to agree
# on the tmpfile name to use. See $LAST_FILE below.
#
# Contributed by David Hampton <hampton@cisco.com>
# Stripped to minimum by Roy Fielding
#
############################################################
$TMPDIR = $ENV{'TMPDIR'} || '/tmp';
$FILE_PREFIX = '#cvs.';
# If we see a "-u $USER" argument, then destructively remove it from the
# argument list, so $ARGV[0] will be the repository dir again, as it
# used to be before we added the -u flag.
if ($ARGV[0] eq '-u') {
shift @ARGV;
$CVS_USERNAME = shift (@ARGV);
}
# This needs to match the corresponding var in log_accum.pl, including
# the appending of the pgrp and username suffixes (see uses of this
# var farther down).
$LAST_FILE = "$TMPDIR/${FILE_PREFIX}lastdir";
sub write_line {
my ($filename, $line) = @_;
# A check of some kind is needed here, but the rules aren't apparent
# at the moment:
# foreach($filename, $line){
# $_ =~ m#^([-\@\w.\#]+)$#;
# $_ = $1;
# }
open(FILE, ">$filename") || die("Cannot open $filename: $!\n");
print(FILE $line, "\n");
close(FILE);
}
#
# Record this directory as the last one checked. This will be used
# by the log_accumulate script to determine when it is processing
# the final directory of a multi-directory commit.
#
$id = getpgrp();
&write_line("$LAST_FILE.$id.$CVS_USERNAME", $ARGV[0]);
exit(0);

View File

@ -1,159 +0,0 @@
#! /bin/sh
#
# Copyright (C) 1997-2005 The Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# cvs2vendor - move revisions from files in A to files in B
#
# The primary reason for this script is to move deltas from a
# non-vendor branched repository onto a fresh vendor branched one,
# skipping the initial checkin on the assumption that it is the same in
# both repositories. This way you can take a project that was moved
# into CVS without the benefit of the vendor branch and for all
# intents and purposes add the vendor branch underneath the existing
# deltas.
#
# This script is also a decent example of repository maintenance using
# raw RCS commands (if I do say so myself! ;-).
#
# Tags are preserved.
#
# The timestamp of the initial vendor branch revision will be adjusted
# to be the same as the 1.1 revision of each source file.
#
# Extra branches in the source directory will cause breakage.
#
# Intermediate files are created in the current working directory
# where this script is started.
#
# Written by Greg A. Woods <woods@planix.com>, based on rcs2sccs
# (retains some of the rlog parsing from it).
#
# The copyright is in the Public Domain.
#
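# Illustrative invocation (the paths are hypothetical, not part of
# the original script): to graft the history of a module that was
# imported without a vendor branch onto a freshly vendor-branched
# copy, one might run, from a scratch directory such as /tmp
# (intermediate files are written to the current directory):
#
#   cvs2vendor /repo/old/project /repo/new/project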
if [ $# -ne 2 ]; then
echo USAGE: $0 srcdir dstdir
exit 2
fi
tsrcdir=$1
tdstdir=$2
revfile=/tmp/cvs2vendor_$$_rev
rm -f $revfile
commentfile=/tmp/cvs2vendor_$$_comment
rm -f $commentfile
if sort -k 1,1 /dev/null 2>/dev/null
then sort_each_field='-k 1 -k 2 -k 3 -k 4 -k 5 -k 6 -k 7 -k 8 -k 9'
else sort_each_field='+0 +1 +2 +3 +4 +5 +6 +7 +8'
fi
srcdirs=`cd $tsrcdir && find . -type d -print | sed 's~^\.[/]*~~'`
# the "" is a trick to get $tsrcdir itself without resorting to '.'
for ldir in "" $srcdirs; do
srcdir=$tsrcdir/$ldir
dstdir=$tdstdir/$ldir
# Loop over every RCS file in srcdir
#
for vfile in $srcdir/*,v; do
# get rid of the ",v" at the end of the name
file=`echo $vfile | sed -e 's/,v$//'`
bfile=`basename $file`
if [ ! -d $dstdir ]; then
echo "making locally added directory $dstdir"
mkdir -p $dstdir
fi
if [ ! -f $dstdir/$bfile,v ]; then
echo "copying locally added file $dstdir/$bfile ..."
cp $vfile $dstdir
continue;
fi
# work on each rev of that file in ascending order
rlog $file | grep "^revision [0-9][0-9]*\." | awk '{print $2}' | sed -e 's/\./ /g' | sort -n -u $sort_each_field | sed -e 's/ /./g' > $revfile
for rev in `cat $revfile`; do
case "$rev" in
1.1)
newdate=`rlog -r$rev $file | grep "^date: " | awk '{printf("%s.%s\n",$2,$3); exit}' | sed -e 's~/~.~g' -e 's/:/./g' -e 's/;//' -e 's/^19//'`
olddate=`rlog -r1.1.1.1 $dstdir/$bfile | grep "^date: " | awk '{printf("%s.%s\n",$2,$3); exit}' | sed -e 's~/~.~g' -e 's/:/./g' -e 's/;//' -e 's/^19//'`
sed "s/$olddate/$newdate/" < $dstdir/$bfile,v > $dstdir/$bfile.x
mv -f $dstdir/$bfile.x $dstdir/$bfile,v
chmod -w $dstdir/$bfile,v
symname=`rlog -h $file | sed -e '1,/^symbolic names:/d' -e 's/[ ]*//g' | awk -F: '$2 == "'"$rev"'" {printf("-n%s:1.1.1.1\n",$1)}'`
if [ -n "$symname" ]; then
echo "tagging $file with $symname ..."
rcs $symname $dstdir/$bfile,v
if [ $? != 0 ]; then
echo ERROR - rcs $symname $dstdir/$bfile,v
exit 1
fi
fi
continue # skip first rev....
;;
esac
# get a lock on the destination local branch tip revision
co -r1 -l $dstdir/$bfile
if [ $? != 0 ]; then
echo ERROR - co -r1 -l $dstdir/$bfile
exit 1
fi
rm -f $dstdir/$bfile
# get file into current dir and get stats
date=`rlog -r$rev $file | grep "^date: " | awk '{printf("%s %s\n",$2,$3); exit}' | sed -e 's/;//'`
author=`rlog -r$rev $file | grep "^date: " | awk '{print $5; exit}' | sed -e 's/;//'`
symname=`rlog -h $file | sed -e '1,/^symbolic names:/d' -e 's/[ ]*//g' | awk -F: '$2 == "'"$rev"'" {printf("-n%s\n",$1)}'`
rlog -r$rev $file | sed -e '/^branches: /d' -e '1,/^date: /d' -e '/^===========/d' | awk '{if ((total += length($0) + 1) < 510) print $0}' > $commentfile
echo "==> file $file, rev=$rev, date=$date, author=$author $symname"
co -p -r$rev $file > $bfile
if [ $? != 0 ]; then
echo ERROR - co -p -r$rev $file
exit 1
fi
# check file into vendor repository...
ci -f -m"`cat $commentfile`" -d"$date" $symname -w"$author" $bfile $dstdir/$bfile,v
if [ $? != 0 ]; then
echo ERROR - ci -f -m"`cat $commentfile`" -d"$date" $symname -w"$author" $bfile $dstdir/$bfile,v
exit 1
fi
rm -f $bfile
# set the default branch to the trunk...
# XXX really only need to do this once....
rcs -b1 $dstdir/$bfile
if [ $? != 0 ]; then
echo ERROR - rcs -b1 $dstdir/$bfile
exit 1
fi
done
done
done
echo cleaning up...
rm -f $commentfile
echo " Conversion Completed Successfully"
exit 0

View File

@ -1,459 +0,0 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>cvs_acls</title>
<link rev="made" href="mailto:root@localhost" />
</head>
<body style="background-color: white">
<p><a name="__index__"></a></p>
<!-- INDEX BEGIN -->
<ul>
<li><a href="#name">Name</a></li>
<li><a href="#synopsis">Synopsis</a></li>
<li><a href="#licensing">Licensing</a></li>
<li><a href="#description">Description</a></li>
<li><a href="#enhancements">Enhancements</a></li>
<ul>
<li><a href="#fixed_bugs">Fixed Bugs</a></li>
<li><a href="#enhancements">Enhancements</a></li>
<li><a href="#todos">ToDoS</a></li>
</ul>
<li><a href="#version_information">Version Information</a></li>
<li><a href="#installation">Installation</a></li>
<li><a href="#format_of_the_cvsacl_file">Format of the cvsacl file</a></li>
<li><a href="#program_logic">Program Logic</a></li>
<ul>
<li><a href="#pseudocode">Pseudocode</a></li>
<li><a href="#sanity_check">Sanity Check</a></li>
</ul>
</ul>
<!-- INDEX END -->
<hr />
<p>
</p>
<h1><a name="name">Name</a></h1>
<p>cvs_acls - Access Control List for CVS</p>
<p>
</p>
<hr />
<h1><a name="synopsis">Synopsis</a></h1>
<p>In 'commitinfo':</p>
<pre>
repository/path/to/restrict $CVSROOT/CVSROOT/cvs_acls [-d][-u $USER][-f &lt;logfile&gt;]</pre>
<p>where:</p>
<pre>
-d turns on debug information
-u passes the client-side userId to the cvs_acls script
-f specifies an alternate filename for the restrict_log file</pre>
<p>In 'cvsacl':</p>
<pre>
{allow.*,deny.*} [|user,user,... [|repos,repos,... [|branch,branch,...]]]</pre>
<p>where:</p>
<pre>
allow|deny - allow: commits are allowed; deny: prohibited
user - userId to be allowed or restricted
repos - file or directory to be allowed or restricted
branch - branch to be allowed or restricted</pre>
<p>See below for examples.</p>
<p>
</p>
<hr />
<h1><a name="licensing">Licensing</a></h1>
<p>cvs_acls - provides access control list functionality for CVS
</p>
<pre>
Copyright (c) 2004 by Peter Connolly &lt;peter.connolly@cnet.com&gt;
All rights reserved.</pre>
<p>This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.</p>
<p>This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.</p>
<p>You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA</p>
<p>
</p>
<hr />
<h1><a name="description">Description</a></h1>
<p>This script--cvs_acls--is invoked once for each directory within a
``cvs commit''. The set of files being committed for that directory as
well as the directory itself, are passed to this script. This script
checks its 'cvsacl' file to see if any of the files being committed
are on the 'cvsacl' file's restricted list. If any of the files are
restricted, then the cvs_acls script passes back an exit code of 1
which disallows the commits for that directory.</p>
<p>Messages are returned to the committer indicating the <a href="#item_file"><code>file(s)</code></a> that
he/she is not allowed to commit. Additionally, a site-specific
set of messages (e.g., contact information) can be included in these
messages.</p>
<p>When a commit is prohibited, log messages are written to a restrict_log
file in $CVSROOT/CVSROOT. This default file can be redirected to
another destination.</p>
<p>The script is triggered from the 'commitinfo' file in $CVSROOT/CVSROOT/.</p>
<p>
</p>
<hr />
<h1><a name="enhancements">Enhancements</a></h1>
<p>This section lists the bug fixes and enhancements added to cvs_acls
that make up the current cvs_acls.</p>
<p>
</p>
<h2><a name="fixed_bugs">Fixed Bugs</a></h2>
<p>This version attempts to get rid of the following bugs from the
original version of cvs_acls:</p>
<ul>
<li><strong><a name="item_files">Multiple entries on an 'cvsacl' line will be matched individually,
instead of requiring that all commit files *exactly* match all
'cvsacl' entries. Commiting a file not in the 'cvsacl' list would
allow *all* files (including a restricted file) to be committed.</a></strong><br />
</li>
[IMO, this basically made the original script unusable for our
situation since any arbitrary combination of committed files could
avoid matching the 'cvsacl's entries.]
<p></p>
<li><strong><a name="item_handle_specific_filename_restrictions_2e_cvs_acls_">Handle specific filename restrictions. cvs_acls didn't restrict
individual files specified in 'cvsacl'.</a></strong><br />
</li>
<li><strong><a name="item_correctly_handle_multiple_2c_specific_filename_res">Correctly handle multiple, specific filename restrictions</a></strong><br />
</li>
<li><strong><a name="item_prohibit_mix_of_dirs_and_files_on_a_single__27cvsa">Prohibit mix of dirs and files on a single 'cvsacl' line
[To simplify the logic and because this would be normal usage.]</a></strong><br />
</li>
<li><strong><a name="item_correctly_handle_a_mixture_of_branch_restrictions_">Correctly handle a mixture of branch restrictions within one work
directory</a></strong><br />
</li>
<li><strong><a name="item__24cvsroot_existence_is_checked_too_late">$CVSROOT existence is checked too late</a></strong><br />
</li>
<li><strong><a name="item_option">Correctly handle the CVSROOT=:local:/... option (useful for
interactive testing)</a></strong><br />
</li>
<li><strong><a name="item_logic">Replacing shoddy ``$universal_off'' logic
(Thanks to Karl-Konig Konigsson for pointing this out.)</a></strong><br />
</li>
</ul>
<p>
</p>
<h2><a name="enhancements">Enhancements</a></h2>
<ul>
<li><strong><a name="item_checks_modules_in_the__27cvsacl_27_file_for_valid_">Checks modules in the 'cvsacl' file for valid files and directories</a></strong><br />
</li>
<li><strong><a name="item_accurately_report_restricted_entries_and_their_mat">Accurately report restricted entries and their matching patterns</a></strong><br />
</li>
<li><strong><a name="item_simplified_and_commented_overly_complex_perl_regex">Simplified and commented overly complex PERL REGEXPs for readability
and maintainability</a></strong><br />
</li>
<li><strong><a name="item_skip_the_rest_of_processing_if_a_mismatch_on_porti">Skip the rest of processing if a mismatch on portion of the 'cvsacl' line</a></strong><br />
</li>
<li><strong><a name="item_file">Get rid of opaque ``karma'' messages in favor of user-friendly messages
that describe which user, <code>file(s)</code> and <code>branch(es)</code> were disallowed.</a></strong><br />
</li>
<li><strong><a name="item_add_optional__27restrict_msg_27_file_for_additiona">Add optional 'restrict_msg' file for additional, site-specific
restriction messages.</a></strong><br />
</li>
<li><strong><a name="item_userid">Take a ``-u'' parameter for $USER from commit_prep so that the script
can do restrictions based on the client-side userId rather than the
server-side userId (usually 'cvs').</a></strong><br />
</li>
(See discussion below on ``Admin Setup'' for more on this point.)
<p></p>
<li><strong><a name="item_added_a_lot_more_debug_trace">Added a lot more debug trace</a></strong><br />
</li>
<li><strong><a name="item_tested_these_restrictions_with_concurrent_use_of_p">Tested these restrictions with concurrent use of pserver and SSH
access to model our transition from pserver to ext access.</a></strong><br />
</li>
<li><strong><a name="item_added_logging_of_restricted_commit_attempts_2e_res">Added logging of restricted commit attempts.
Restricted commits can be sent to a default file:
$CVSROOT/CVSROOT/restrict_log or to one passed to the script
via the -f command parameter.</a></strong><br />
</li>
</ul>
<p>
</p>
<h2><a name="todos">ToDoS</a></h2>
<ul>
<li><strong><a name="item_need_to_deal_with_pserver_2fssh_transition_with_co">Need to deal with pserver/SSH transition with conflicting umasks?</a></strong><br />
</li>
<li><strong><a name="item_use_a_cpan_module_to_handle_command_parameters_2e">Use a CPAN module to handle command parameters.</a></strong><br />
</li>
<li><strong><a name="item_use_a_cpan_module_to_clone_data_structures_2e">Use a CPAN module to clone data structures.</a></strong><br />
</li>
</ul>
<p>
</p>
<hr />
<h1><a name="version_information">Version Information</a></h1>
<p>This is not offered as a fix to the original 'cvs_acls' script since it
differs substantially in goals and methods from the original and there
are probably a significant number of people out there that still require
the original version's functionality.</p>
<p>The original 'avail' file flags of 'avail' and 'unavail' were intentionally
changed to 'allow' and 'deny' because there are enough differences
between the original script's behavior and this one's that we wanted to
make sure that users will rethink their 'cvsacl' file formats before
plugging in this newer script.</p>
<p>Please note that there has been very limited cross-platform testing of
this script!!! (We did not have the time or resources to do exhaustive
cross-platform testing.)</p>
<p>It was developed and tested under Red Hat Linux 9.0 using PERL 5.8.0.
Additionally, it was built and tested under Red Hat Linux 7.3 using
PERL 5.6.1.</p>
<p>$Id: cvs_acls.html,v 1.1.2.2 2005/09/01 13:44:49 dprice Exp $</p>
<p>This version is based on the 1.11.13 version of cvs_acls
<a href="mailto:peter.connolly@cnet.com">peter.connolly@cnet.com</a> (Peter Connolly)</p>
<pre>
Access control lists for CVS. dgg@ksr.com (David G. Grubbs)
Branch specific controls added by voisine@bytemobile.com (Aaron Voisine)</pre>
<p>
</p>
<hr />
<h1><a name="installation">Installation</a></h1>
<p>To use this program, do the following things:</p>
<p>0. Install PERL, version 5.6.1 or 5.8.0.</p>
<p>1. Admin Setup:</p>
<pre>
There are two choices here.</pre>
<pre>
a) The first option is to use the $ENV{&quot;USER&quot;}, server-side userId
(from the third column of your pserver 'passwd' file) as the basis for
your restrictions. In this case, you will (at a minimum) want to set
up a new &quot;cvsadmin&quot; userId and group on the pserver machine.
CVS administrators will then set up their 'passwd' file entries to
run either as &quot;cvs&quot; (for regular users) or as &quot;cvsadmin&quot; (for power
users). Correspondingly, your 'cvsacl' file will only list 'cvs'
and 'cvsadmin' as the userIds in the second column.</pre>
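<p>As a purely illustrative sketch (the login names and password hashes below
are hypothetical placeholders), the pserver 'passwd' entries for option (a)
might map individual logins onto the two run-as userIds like this:</p>
<pre>
  jdoe:$1$xxxxxxxx$xxxxxxxxxxxxxxxxxxxxxx:cvs
  rsmith:$1$xxxxxxxx$xxxxxxxxxxxxxxxxxxxxxx:cvsadmin</pre>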
<pre>
Commentary: A potential weakness of this is that the xinetd
cvspserver process will need to run as 'root' in order to switch
between the 'cvs' and the 'cvsadmin' userIds. Some sysadmins don't
like situations like this and may want to chroot the process.
Talk to them about this point...</pre>
<pre>
b) The second option is to use the client-side userId as the basis for
your restrictions. In this case, all the xinetd cvspserver processes
can run as userId 'cvs' and no 'root' userId is required. If you have
a 'passwd' file that lists 'cvs' as the effective run-time userId for
all your users, then no changes to this file are needed. Your 'cvsacl'
file will use the individual, client-side userIds in its 2nd column.</pre>
<pre>
As long as the userIds in pserver's 'passwd' file match those userIds
that your Linux server knows about, this approach is ideal if you are
planning to move from pserver to SSH access at some later point in time.
Just by switching the CVSROOT var from CVSROOT=:pserver:&lt;userId&gt;... to
CVSROOT=:ext:&lt;userId&gt;..., users can switch over to SSH access without
any other administrative changes. When all users have switched over to
SSH, the inherently insecure xinetd cvspserver process can be disabled.
[<a href="http://ximbiot.com/cvs/manual/cvs-1.11.17/cvs_2.html#SEC32">http://ximbiot.com/cvs/manual/cvs-1.11.17/cvs_2.html#SEC32</a>]</pre>
<pre>
:TODO: The only potential glitch with the SSH approach is the possibility
that each user can have differing umasks that might interfere with one
another, especially during a transition from pserver to SSH. As noted
in the ToDo section, this needs a good strategy and set of tests for that
yet...</pre>
<p>2. Put two lines, as the *only* non-comment lines, in your commitinfo file:</p>
<pre>
ALL $CVSROOT/CVSROOT/commit_prep
ALL $CVSROOT/CVSROOT/cvs_acls [-d][-u $USER ][-f &lt;logfilename&gt;]</pre>
<pre>
where &quot;-d&quot; turns on debug trace
&quot;-u $USER&quot; passes the client-side userId to cvs_acls
&quot;-f &lt;logfilename&gt;&quot; overrides the default filename used to log
restricted commit attempts.</pre>
<pre>
(These are handled in the processArgs() subroutine.)</pre>
<p>If you are using client-side userIds to restrict access to your
repository, make sure that they are in this order since the commit_prep
script is required in order to pass the $USER parameter.</p>
<p>A final note about the repository matching pattern. The example above
uses ``ALL'' but note that this means that the cvs_acls script will run
for each and every commit in your repository. Obviously, in a large
repository this adds up to a lot of overhead that may not be necessary.
A better strategy is to use a repository pattern that is more specific
to the areas that you wish to secure.</p>
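<p>For example, a minimal sketch (the module path below is hypothetical) that
triggers the scripts only for a single secured subtree instead of ALL might
look like:</p>
<pre>
  projects/secure  $CVSROOT/CVSROOT/commit_prep
  projects/secure  $CVSROOT/CVSROOT/cvs_acls -u $USER</pre>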
<p>3. Install this file as $CVSROOT/CVSROOT/cvs_acls and make it executable.</p>
<p>4. Create a file named CVSROOT/cvsacl and optionally add it to
CVSROOT/checkoutlist and check it in. See the CVS manual's
administrative files section about checkoutlist. Typically:</p>
<pre>
$ cvs checkout CVSROOT
$ cd CVSROOT
[ create the cvsacl file, include 'commitinfo' line ]
[ add cvsacl to checkoutlist ]
$ cvs add cvsacl
$ cvs commit -m 'Added cvsacl for use with cvs_acls.' cvsacl checkoutlist</pre>
<p>Note: The format of the 'cvsacl' file is described in detail immediately
below, but here is an important setup point:</p>
<pre>
Make sure to include a line like the following:</pre>
<pre>
deny||CVSROOT/commitinfo CVSROOT/cvsacl
allow|cvsadmin|CVSROOT/commitinfo CVSROOT/cvsacl</pre>
<pre>
that restricts access to commitinfo and cvsacl since this would be one of
the easiest &quot;end runs&quot; around this ACL approach. ('commitinfo' has the
line that executes the cvs_acls script and, of course, all the
restrictions are in 'cvsacl'.)</pre>
<p>5. (Optional) Create a 'restrict_msg' file in the $CVSROOT/CVSROOT directory.
Whenever there is a restricted file or dir message, cvs_acls will look
for this file and, if it exists, print its contents as part of the
commit-denial message. This gives you a chance to print any site-specific
information (e.g., who to call, what procedures to look up,...) whenever
a commit is denied.</p>
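<p>A hypothetical 'restrict_msg' might contain nothing more than local contact
details, for example (the address is a placeholder):</p>
<pre>
  Commits to this part of the repository are limited to release engineering.
  Contact cvs-admin@example.org if you believe you need commit access.</pre>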
<p>
</p>
<hr />
<h1><a name="format_of_the_cvsacl_file">Format of the cvsacl file</a></h1>
<p>The 'cvsacl' file determines whether you may commit files. It contains lines
read from top to bottom, keeping track of whether a given user, repository
and branch combination is ``allowed'' or ``denied.'' The script will assume
``allowed'' on all repository paths until 'allow' and 'deny' rules change
that default.</p>
<p>The normal pattern is to specify a 'deny' rule to turn off
access to ALL users, then follow it with a matching 'allow' rule that will
turn on access for a select set of users. In the case of multiple rules for
the same user, repository and branch, the last one takes precedence.</p>
<p>Blank lines and lines with only comments are ignored. Any other lines not
beginning with ``allow'' or ``deny'' are logged to the restrict_log file.</p>
<p>Lines beginning with ``allow'' or ``deny'' are assumed to be '|'-separated
triples: (All spaces and tabs are ignored in a line.)</p>
<pre>
{allow.*,deny.*} [|user,user,... [|repos,repos,... [|branch,branch,...]]]</pre>
<pre>
1. String starting with &quot;allow&quot; or &quot;deny&quot;.
2. Optional, comma-separated list of usernames.
3. Optional, comma-separated list of repository pathnames.
These are pathnames relative to $CVSROOT. They can be directories or
filenames. A directory name allows or restricts access to all files and
directories below it. One line can have either directories or filenames
but not both.
4. Optional, comma-separated list of branch tags.
If not specified, all branches are assumed. Use HEAD to reference the
main branch.</pre>
<p>Example: (Note: No in-line comments.)</p>
<pre>
# ----- Make whole repository unavailable.
deny</pre>
<pre>
# ----- Except for user &quot;dgg&quot;.
allow|dgg</pre>
<pre>
# ----- Except when &quot;fred&quot; or &quot;john&quot; commit to the
# module whose repository is &quot;bin/ls&quot;
allow|fred, john|bin/ls</pre>
<pre>
# ----- Except when &quot;ed&quot; commits to the &quot;stable&quot;
# branch of the &quot;bin/ls&quot; repository
allow|ed|/bin/ls|stable</pre>
<p>
</p>
<hr />
<h1><a name="program_logic">Program Logic</a></h1>
<p>CVS passes to @ARGV an absolute directory pathname (the repository
appended to your $CVSROOT variable), followed by a list of filenames
within that directory that are to be committed.</p>
<p>The script walks through the 'cvsacl' file looking for matches on
the username, repository and branch.</p>
<p>A username match is simply the user's name appearing in the second
column of the cvsacl line in a space-or-comma-separated list. If
blank, then any user will match.</p>
<p>A repository match:</p>
<ul>
<li><strong><a name="item_each_entry_in_the_modules_section_of_the_current__">Each entry in the modules section of the current 'cvsacl' line is
examined to see if it is a dir or a file. The line must have
either files or dirs, but not both. (To simplify the logic.)</a></strong><br />
</li>
<li><strong><a name="item_if_neither_2c_then_assume_the__27cvsacl_27_file_wa">If neither, then assume the 'cvsacl' file was set up in error and
skip that 'allow' line.</a></strong><br />
</li>
<li><strong><a name="item_if_a_dir_2c_then_each_dir_pattern_is_matched_separ">If a dir, then each dir pattern is matched separately against the
beginning of each of the committed files in @ARGV.</a></strong><br />
</li>
<li><strong><a name="item_if_a_file_2c_then_each_file_pattern_is_matched_exa">If a file, then each file pattern is matched exactly against each
of the files to be committed in @ARGV.</a></strong><br />
</li>
<li><strong><a name="item_repository_and_branch_must_both_match_together_2e_">Repository and branch must BOTH match together. This is to cover
the use case where a user has multiple branches checked out in
a single work directory. Commit files can be from different
branches.</a></strong><br />
</li>
A branch match is either:
<ul>
<li><strong><a name="item_when_no_branches_are_listed_in_the_fourth_column_2">When no branches are listed in the fourth column. (``Match any.'')</a></strong><br />
</li>
<li><strong><a name="item_all_elements_from_the_fourth_column_are_matched_ag">All elements from the fourth column are matched against each of
the tag names for $ARGV[1..$#ARGV] found in the %branches file.</a></strong><br />
</li>
</ul>
<li><strong><a name="item__27allow_27_match_remove_that_match_from_the_tally">'allow' match remove that match from the tally map.</a></strong><br />
</li>
<li><strong><a name="item_restricted">Restricted ('deny') matches are saved in the %repository_matches
table.</a></strong><br />
</li>
<li><strong><a name="item_if_there_is_a_match_on_user_2c_repository_and_bran">If there is a match on user, repository and branch:</a></strong><br />
</li>
<pre>
If repository, branch and user match
if 'deny'
add %repository_matches entries to %restricted_entries
else if 'allow'
remove %repository_matches entries from %restricted_entries</pre>
<li><strong><a name="item_at_the_end_of_all_the__27cvsacl_27_line_checks_2c_">At the end of all the 'cvsacl' line checks, check to see if there
are any entries in the %restricted_entries. If so, then deny the
commit.</a></strong><br />
</li>
</ul>
<p>
</p>
<h2><a name="pseudocode">Pseudocode</a></h2>
<pre>
read CVS/Entries file and create branch{file}-&gt;{branch} hash table
+ for each 'allow' and 'deny' line in the 'cvsacl' file:
| user match?
| - Yes: set $user_match = 1;
| repository and branch match?
| - Yes: add to %repository_matches;
| did user, repository match?
| - Yes: if 'deny' then
| add %repository_matches -&gt; %restricted_entries
| if 'allow' then
| remove %repository_matches &lt;- %restricted_entries
+ end for loop
any saved restrictions?
no: exit,
set exit code allowing commits and exit
yes: report restrictions,
set exit code prohibiting commits and exit</pre>
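<p>The following stand-alone Perl fragment is only a sketch of the
last-match-wins idea described above (it is not part of cvs_acls, and the rule
and file names are made up); it resolves one committed file against a list of
already-parsed directory rules, as in the sanity check below:</p>
<pre>
  #!/usr/bin/perl -w
  use strict;
  # Each rule is [flag, dir]; an empty dir matches everything.
  sub allowed {
      my ($file, @rules) = @_;
      my $ok = 1;                               # default: commits allowed
      for my $rule (@rules) {
          my ($flag, $dir) = @$rule;
          next unless $dir eq '' || $file =~ m{^\Q$dir\E(/|$)};
          $ok = ($flag eq 'allow') ? 1 : 0;     # last matching rule wins
      }
      return $ok;
  }
  # A later file deny trumps an earlier dir allow.
  my $verdict = allowed('java/lib/README',
                        ['allow', 'java/lib'],
                        ['deny',  'java/lib/README']);
  print $verdict ? "commit allowed\n" : "commit denied\n";</pre>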
<p>
</p>
<h2><a name="sanity_check">Sanity Check</a></h2>
<pre>
1) file allow trumps a dir deny
deny||java/lib
allow||java/lib/README
2) dir allow can undo a file deny
deny||java/lib/README
allow||java/lib
3) file deny trumps a dir allow
allow||java/lib
deny||java/lib/README
4) dir deny trumps a file allow
allow||java/lib/README
deny||java/lib
... so last match always takes precedence</pre>
</body>
</html>

View File

@ -1,963 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
=head1 Name
cvs_acls - Access Control List for CVS
=head1 Synopsis
In 'commitinfo':
repository/path/to/restrict $CVSROOT/CVSROOT/cvs_acls [-d][-u $USER][-f <logfile>]
where:
-d turns on debug information
-u passes the client-side userId to the cvs_acls script
-f specifies an alternate filename for the restrict_log file
In 'cvsacl':
{allow.*,deny.*} [|user,user,... [|repos,repos,... [|branch,branch,...]]]
where:
allow|deny - allow: commits are allowed; deny: prohibited
user - userId to be allowed or restricted
repos - file or directory to be allowed or restricted
branch - branch to be allowed or restricted
See below for examples.
=head1 Licensing
cvs_acls - provides access control list functionality for CVS
Copyright (c) 2004 by Peter Connolly <peter.connolly@cnet.com>
All rights reserved.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
=head1 Description
This script--cvs_acls--is invoked once for each directory within a
"cvs commit". The set of files being committed for that directory as
well as the directory itself, are passed to this script. This script
checks its 'cvsacl' file to see if any of the files being committed
are on the 'cvsacl' file's restricted list. If any of the files are
restricted, then the cvs_acls script passes back an exit code of 1
which disallows the commits for that directory.
Messages are returned to the committer indicating the file(s) that
he/she is not allowed to commit. Additionally, a site-specific
set of messages (e.g., contact information) can be included in these
messages.
When a commit is prohibited, log messages are written to a restrict_log
file in $CVSROOT/CVSROOT. This default file can be redirected to
another destination.
The script is triggered from the 'commitinfo' file in $CVSROOT/CVSROOT/.
=head1 Enhancements
This section lists the bug fixes and enhancements added to cvs_acls
that make up the current cvs_acls.
=head2 Fixed Bugs
This version attempts to get rid of the following bugs from the
original version of cvs_acls:
=over 2
=item *
Multiple entries on a 'cvsacl' line will be matched individually,
instead of requiring that all commit files *exactly* match all
'cvsacl' entries. Committing a file not in the 'cvsacl' list would
allow *all* files (including a restricted file) to be committed.
[IMO, this basically made the original script unusable for our
situation since any arbitrary combination of committed files could
avoid matching the 'cvsacl's entries.]
=item *
Handle specific filename restrictions. cvs_acls didn't restrict
individual files specified in 'cvsacl'.
=item *
Correctly handle multiple, specific filename restrictions
=item *
Prohibit mix of dirs and files on a single 'cvsacl' line
[To simplify the logic and because this would be normal usage.]
=item *
Correctly handle a mixture of branch restrictions within one work
directory
=item *
$CVSROOT existence is checked too late
=item *
Correctly handle the CVSROOT=:local:/... option (useful for
interactive testing)
=item *
Replacing shoddy "$universal_off" logic
(Thanks to Karl-Konig Konigsson for pointing this out.)
=back
=head2 Enhancements
=over 2
=item *
Checks modules in the 'cvsacl' file for valid files and directories
=item *
Accurately report restricted entries and their matching patterns
=item *
Simplified and commented overly complex PERL REGEXPs for readability
and maintainability
=item *
Skip the rest of processing if there is a mismatch on a portion of the 'cvsacl' line
=item *
Get rid of opaque "karma" messages in favor of user-friendly messages
that describe which user, file(s) and branch(es) were disallowed.
=item *
Add optional 'restrict_msg' file for additional, site-specific
restriction messages.
=item *
Take a "-u" parameter for $USER from commit_prep so that the script
can do restrictions based on the client-side userId rather than the
server-side userId (usually 'cvs').
(See discussion below on "Admin Setup" for more on this point.)
=item *
Added a lot more debug trace
=item *
Tested these restrictions with concurrent use of pserver and SSH
access to model our transition from pserver to ext access.
=item *
Added logging of restricted commit attempts.
Restricted commits can be sent to a default file:
$CVSROOT/CVSROOT/restrict_log or to one passed to the script
via the -f command parameter.
=back
=head2 ToDoS
=over 2
=item *
Need to deal with pserver/SSH transition with conflicting umasks?
=item *
Use a CPAN module to handle command parameters.
=item *
Use a CPAN module to clone data structures.
=back
=head1 Version Information
This is not offered as a fix to the original 'cvs_acls' script since it
differs substantially in goals and methods from the original and there
are probably a significant number of people out there that still require
the original version's functionality.
The original 'avail' file flags of 'avail' and 'unavail' were intentionally
changed to 'allow' and 'deny' because there are enough differences
between the original script's behavior and this one's that we wanted to
make sure that users will rethink their 'cvsacl' file formats before
plugging in this newer script.
Please note that there has been very limited cross-platform testing of
this script!!! (We did not have the time or resources to do exhaustive
cross-platform testing.)
It was developed and tested under Red Hat Linux 9.0 using PERL 5.8.0.
Additionally, it was built and tested under Red Hat Linux 7.3 using
PERL 5.6.1.
$Id: cvs_acls.in,v 1.4.4.6 2005/09/01 13:44:49 dprice Exp $
This version is based on the 1.11.13 version of cvs_acls
peter.connolly@cnet.com (Peter Connolly)
Access control lists for CVS. dgg@ksr.com (David G. Grubbs)
Branch specific controls added by voisine@bytemobile.com (Aaron Voisine)
=head1 Installation
To use this program, do the following things:
0. Install PERL, version 5.6.1 or 5.8.0.
1. Admin Setup:
There are two choices here.
a) The first option is to use the $ENV{"USER"}, server-side userId
(from the third column of your pserver 'passwd' file) as the basis for
your restrictions. In this case, you will (at a minimum) want to set
up a new "cvsadmin" userId and group on the pserver machine.
CVS administrators will then set up their 'passwd' file entries to
run either as "cvs" (for regular users) or as "cvsadmin" (for power
users). Correspondingly, your 'cvsacl' file will only list 'cvs'
and 'cvsadmin' as the userIds in the second column.
Commentary: A potential weakness of this is that the xinetd
cvspserver process will need to run as 'root' in order to switch
between the 'cvs' and the 'cvsadmin' userIds. Some sysadmins don't
like situations like this and may want to chroot the process.
Talk to them about this point...
b) The second option is to use the client-side userId as the basis for
your restrictions. In this case, all the xinetd cvspserver processes
can run as userId 'cvs' and no 'root' userId is required. If you have
a 'passwd' file that lists 'cvs' as the effective run-time userId for
all your users, then no changes to this file are needed. Your 'cvsacl'
file will use the individual, client-side userIds in its 2nd column.
As long as the userIds in pserver's 'passwd' file match those userIds
that your Linux server knows about, this approach is ideal if you are
planning to move from pserver to SSH access at some later point in time.
Just by switching the CVSROOT var from CVSROOT=:pserver:<userId>... to
CVSROOT=:ext:<userId>..., users can switch over to SSH access without
any other administrative changes. When all users have switched over to
SSH, the inherently insecure xinetd cvspserver process can be disabled.
[http://ximbiot.com/cvs/manual/cvs-1.11.17/cvs_2.html#SEC32]
:TODO: The only potential glitch with the SSH approach is the possibility
that each user can have differing umasks that might interfere with one
another, especially during a transition from pserver to SSH. As noted
in the ToDo section, this needs a good strategy and set of tests for that
yet...
2. Put two lines, as the *only* non-comment lines, in your commitinfo file:
ALL $CVSROOT/CVSROOT/commit_prep
ALL $CVSROOT/CVSROOT/cvs_acls [-d][-u $USER ][-f <logfilename>]
where "-d" turns on debug trace
"-u $USER" passes the client-side userId to cvs_acls
"-f <logfilename"> overrides the default filename used to log
restricted commit attempts.
(These are handled in the processArgs() subroutine.)
If you are using client-side userIds to restrict access to your
repository, make sure that they are in this order since the commit_prep
script is required in order to pass the $USER parameter.
A final note about the repository matching pattern. The example above
uses "ALL" but note that this means that the cvs_acls script will run
for each and every commit in your repository. Obviously, in a large
repository this adds up to a lot of overhead that may not be necessary.
A better strategy is to use a repository pattern that is more specific
to the areas that you wish to secure.
3. Install this file as $CVSROOT/CVSROOT/cvs_acls and make it executable.
4. Create a file named CVSROOT/cvsacl and optionally add it to
CVSROOT/checkoutlist and check it in. See the CVS manual's
administrative files section about checkoutlist. Typically:
$ cvs checkout CVSROOT
$ cd CVSROOT
[ create the cvsacl file, include 'commitinfo' line ]
[ add cvsacl to checkoutlist ]
$ cvs add cvsacl
$ cvs commit -m 'Added cvsacl for use with cvs_acls.' cvsacl checkoutlist
Note: The format of the 'cvsacl' file is described in detail immediately
below, but here is an important setup point:
Make sure to include a line like the following:
deny||CVSROOT/commitinfo CVSROOT/cvsacl
allow|cvsadmin|CVSROOT/commitinfo CVSROOT/cvsacl
that restricts access to commitinfo and cvsacl since this would be one of
the easiest "end runs" around this ACL approach. ('commitinfo' has the
line that executes the cvs_acls script and, of course, all the
restrictions are in 'cvsacl'.)
5. (Optional) Create a 'restrict_msg' file in the $CVSROOT/CVSROOT directory.
Whenever there is a restricted file or dir message, cvs_acls will look
for this file and, if it exists, print its contents as part of the
commit-denial message. This gives you a chance to print any site-specific
information (e.g., who to call, what procedures to look up,...) whenever
a commit is denied.
=head1 Format of the cvsacl file
The 'cvsacl' file determines whether you may commit files. It contains lines
read from top to bottom, keeping track of whether a given user, repository
and branch combination is "allowed" or "denied." The script will assume
"allowed" on all repository paths until 'allow' and 'deny' rules change
that default.
The normal pattern is to specify a 'deny' rule to turn off
access to ALL users, then follow it with a matching 'allow' rule that will
turn on access for a select set of users. In the case of multiple rules for
the same user, repository and branch, the last one takes precedence.
Blank lines and lines with only comments are ignored. Any other lines not
beginning with "allow" or "deny" are logged to the restrict_log file.
Lines beginning with "allow" or "deny" are assumed to be '|'-separated
triples: (All spaces and tabs are ignored in a line.)
{allow.*,deny.*} [|user,user,... [|repos,repos,... [|branch,branch,...]]]
1. String starting with "allow" or "deny".
2. Optional, comma-separated list of usernames.
3. Optional, comma-separated list of repository pathnames.
These are pathnames relative to $CVSROOT. They can be directories or
filenames. A directory name allows or restricts access to all files and
directories below it. One line can have either directories or filenames
but not both.
4. Optional, comma-separated list of branch tags.
If not specified, all branches are assumed. Use HEAD to reference the
main branch.
Example: (Note: No in-line comments.)
# ----- Make whole repository unavailable.
deny
# ----- Except for user "dgg".
allow|dgg
# ----- Except when "fred" or "john" commit to the
# module whose repository is "bin/ls"
allow|fred, john|bin/ls
# ----- Except when "ed" commits to the "stable"
# branch of the "bin/ls" repository
allow|ed|/bin/ls|stable
=head1 Program Logic
CVS passes to @ARGV an absolute directory pathname (the repository
appended to your $CVSROOT variable), followed by a list of filenames
within that directory that are to be committed.
The script walks through the 'cvsacl' file looking for matches on
the username, repository and branch.
A username match is simply the user's name appearing in the second
column of the cvsacl line in a space-or-comma-separated list. If
blank, then any user will match.
A repository match:
=over 2
=item *
Each entry in the modules section of the current 'cvsacl' line is
examined to see if it is a dir or a file. The line must have
either files or dirs, but not both. (To simplify the logic.)
=item *
If neither, then assume the 'cvsacl' file was set up in error and
skip that 'allow' line.
=item *
If a dir, then each dir pattern is matched separately against the
beginning of each of the committed files in @ARGV.
=item *
If a file, then each file pattern is matched exactly against each
of the files to be committed in @ARGV.
=item *
Repository and branch must BOTH match together. This is to cover
the use case where a user has multiple branches checked out in
a single work directory. Commit files can be from different
branches.
A branch match is either:
=over 4
=item *
When no branches are listed in the fourth column. ("Match any.")
=item *
All elements from the fourth column are matched against each of
the tag names for $ARGV[1..$#ARGV] found in the %branches file.
=back
=item *
An 'allow' match removes that match from the tally map.
=item *
Restricted ('deny') matches are saved in the %repository_matches
table.
=item *
If there is a match on user, repository and branch:
If repository, branch and user match
if 'deny'
add %repository_matches entries to %restricted_entries
else if 'allow'
remove %repository_matches entries from %restricted_entries
=item *
At the end of all the 'cvsacl' line checks, check to see if there
are any entries in the %restricted_entries. If so, then deny the
commit.
=back
=head2 Pseudocode
read CVS/Entries file and create branch{file}->{branch} hash table
+ for each 'allow' and 'deny' line in the 'cvsacl' file:
| user match?
| - Yes: set $user_match = 1;
| repository and branch match?
| - Yes: add to %repository_matches;
| did user, repository match?
| - Yes: if 'deny' then
| add %repository_matches -> %restricted_entries
| if 'allow' then
| remove %repository_matches <- %restricted_entries
+ end for loop
any saved restrictions?
no: exit,
set exit code allowing commits and exit
yes: report restrictions,
set exit code prohibiting commits and exit
=head2 Sanity Check
1) file allow trumps a dir deny
deny||java/lib
allow||java/lib/README
2) dir allow can undo a file deny
deny||java/lib/README
allow||java/lib
3) file deny trumps a dir allow
allow||java/lib
deny||java/lib/README
4) dir deny trumps a file allow
allow||java/lib/README
deny||java/lib
... so last match always takes precedence
=cut
$debug = 0; # Set to 1 for debug messages
%repository_matches = (); # hash of match file and pattern from 'cvsacl'
# repository_matches --> [branch, matching-pattern]
# (Used during module/branch matching loop)
%restricted_entries = (); # hash table of restricted commit files (from @ARGV)
# restricted_entries --> branch
# (If user/module/branch all match on an 'deny'
# line, then entries added to this map.)
%branch; # hash table of key: commit file; value: branch
# Built from ".../CVS/Entries" file of directory
# currently being examined
# ---------------------------------------------------------------- get CVSROOT
$cvsroot = $ENV{'CVSROOT'};
die "Must set CVSROOT\n" if !$cvsroot;
if ($cvsroot =~ /:([\/\w]*)$/) { # Filter ":pserver:", ":local:"-type prefixes
$cvsroot = $1;
}
# ------------------------------------------------------------- set file paths
$entries = "CVS/Entries"; # client-side file???
$cvsaclfile = $cvsroot . "/CVSROOT/cvsacl";
$restrictfile = $cvsroot . "/CVSROOT/restrict_msg";
$restrictlog = $cvsroot . "/CVSROOT/restrict_log";
# --------------------------------------------------------------- process args
$user_name = processArgs(\@ARGV);
print("$$ \@ARGV after processArgs is: @ARGV.\n") if $debug;
print("$$ ========== Begin $PROGRAM_NAME for \"$ARGV[0]\" repository. ========== \n") if $debug;
# --------------------------------------------------------------- filter @ARGV
eval "print STDERR \$die='Unknown parameter $1\n' if !defined \$$1; \$$1=\$';"
while ($ARGV[0] =~ /^(\w+)=/ && shift(@ARGV));
exit 255 if $die; # process any variable=value switches
print("$$ \@ARGV after shift processing contains:",join("\, ",@ARGV),".\n") if $debug;
# ---------------------------------------------------------------- get cvsroot
($repository = shift) =~ s:^$cvsroot/::;
grep($_ = $repository . '/' . $_, @ARGV);
print("$$ \$cvsroot is: $cvsroot.\n") if $debug;
print "$$ Repos: $repository\n","$$ ==== ",join("\n$$ ==== ",@ARGV),"\n" if $debug;
$exit_val = 0; # presume good exit value for commit
# ----------------------------------------------------------------------------
# ---------------------------------- create hash table $branch{file -> branch}
# ----------------------------------------------------------------------------
# Here's a typical Entries file:
#
# /checkoutlist/1.4/Wed Feb 4 23:51:23 2004//
# /cvsacl/1.3/Tue Feb 24 23:05:43 2004//
# ...
# /verifymsg/1.1/Fri Mar 16 19:56:24 2001//
# D/backup////
# D/temp////
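#
# The final field of each entry holds the sticky tag, if any: "T" followed by
# a branch tag name (e.g. /Tmybranch). When that field is empty the file is
# on the trunk, which the loop below records as "HEAD" in %branch.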
open(ENTRIES, $entries) || die("Cannot open $entries.\n");
print("$$ File / Branch\n") if $debug;
my $i = 0;
while(<ENTRIES>) {
chop;
next if /^\s*$/; # Skip blank lines
$i = $i + 1;
if (m|
/ # 1st slash
([\w.-]*) # file name -> $1
/ # 2nd slash
.* # revision number
/ # 3rd slash
.* # date and time
/ # 4th slash
.* # keyword
/ # 5th slash
T? # 'T' constant
(\w*) # branch -> #2
|x) {
$branch{$repository . '/' . $1} = ($2) ? $2 : "HEAD";
print "$$ CVS Entry $i: $1/$2\n" if $debug;
}
}
close(ENTRIES);
# ----------------------------------------------------------------------------
# ------------------------------------- evaluate each active line from 'cvsacl'
# ----------------------------------------------------------------------------
open (CVSACL, $cvsaclfile) || exit(0); # It is ok for cvsacl file not to exist
while (<CVSACL>) {
chop;
next if /^\s*\#/; # skip comments
next if /^\s*$/; # skip blank lines
# --------------------------------------------- parse current 'cvsacl' line
print("$$ ==========\n$$ Processing \'cvsacl\' line: $_.\n") if $debug;
($cvsacl_flag, $cvsacl_userIds, $cvsacl_modules, $cvsacl_branches) = split(/[\s,]*\|[\s,]*/, $_);
# ------------------------------ Validate 'allow' or 'deny' line prefix
if ($cvsacl_flag !~ /^allow/ && $cvsacl_flag !~ /^deny/) {
print ("Bad cvsacl line: $_\n") if $debug;
$log_text = sprintf "Bad cvsacl line: %s", $_;
write_restrictlog_record($log_text);
next;
}
# -------------------------------------------------- init loop match flags
$user_match = 0;
%repository_matches = ();
# ------------------------------------------------------------------------
# ---------------------------------------------------------- user matching
# ------------------------------------------------------------------------
# $user_name considered "in user list" if actually in list or is NULL
$user_match = (!$cvsacl_userIds || grep ($_ eq $user_name, split(/[\s,]+/,$cvsacl_userIds)));
print "$$ \$user_name: $user_name \$user_match match flag is: $user_match.\n" if $debug;
if (!$user_match) {
next; # no match, skip to next 'cvsacl' line
}
# ------------------------------------------------------------------------
# ---------------------------------------------------- repository matching
# ------------------------------------------------------------------------
if (!$cvsacl_modules) { # blank module list = all modules
if (!$cvsacl_branches) { # blank branch list = all branches
print("$$ Adding all modules to \%repository_matches; null " .
"\$cvsacl_modules and \$cvsacl_branches.\n") if $debug;
for $commit_object (@ARGV) {
$repository_matches{$commit_object} = [$branch{$commit_object}, $cvsacl_modules];
print("$$ \$repository_matches{$commit_object} = " .
"[$branch{$commit_object}, $cvsacl_modules].\n") if $debug;
}
}
else { # need to check for repository match
@branch_list = split (/[\s,]+/,$cvsacl_branches);
print("$$ Branches from \'cvsacl\' record: ", join(", ",@branch_list),".\n") if $debug;
for $commit_object (@ARGV) {
if (grep($_ eq $branch{$commit_object}, @branch_list)) {
$repository_matches{$commit_object} = [$branch{$commit_object}, $cvsacl_modules];
print("$$ \$repository_matches{$commit_object} = " .
"[$branch{$commit_object}, $cvsacl_modules].\n") if $debug;
}
}
}
}
else {
# ----------------------------------- check every argument combination
# parse 'cvsacl' modules to array
my @module_list = split(/[\s,]+/,$cvsacl_modules);
# ------------- Check all modules in list for either file or directory
my $fileType = "";
if (($fileType = checkFileness(@module_list)) eq "") {
next; # skip bad file types
}
# ---------- Check each combination of 'cvsacl' modules vs. @ARGV files
print("$$ Checking matches for \@module_list: ", join("\, ",@module_list), ".\n") if $debug;
# loop thru all command-line commit objects
for $commit_object (@ARGV) {
# loop thru all modules on 'cvsacl' line
for $cvsacl_module (@module_list) {
print("$$ Is \'cvsacl\': $cvsacl_modules pattern in: \@ARGV " .
"\$commit_object: $commit_object?\n") if $debug;
# Do match of beginning of $commit_object
checkModuleMatch($fileType, $commit_object, $cvsacl_module);
} # end for commit objects
} # end for cvsacl modules
} # end if
print("$$ Matches for: \%repository_matches: ", join("\, ", (keys %repository_matches)), ".\n") if $debug;
# ------------------------------------------------------------------------
# ----------------------------------------------------- setting exit value
# ------------------------------------------------------------------------
if ($user_match && %repository_matches) {
print("$$ An \"$cvsacl_flag\" match on User(s): $cvsacl_userIds; Module(s):" .
" $cvsacl_modules; Branch(es): $cvsacl_branches.\n") if $debug;
if ($cvsacl_flag eq "deny") {
# Add all matches to the hash of restricted modules
foreach $commitFile (keys %repository_matches) {
print("$$ Adding \%repository_matches entry: $commitFile.\n") if $debug;
$restricted_entries{$commitFile} = $repository_matches{$commitFile}[0];
}
}
else {
# Remove all matches from the restricted modules hash
foreach $commitFile (keys %repository_matches) {
print("$$ Removing \%repository_matches entry: $commitFile.\n") if $debug;
delete $restricted_entries{$commitFile};
}
}
}
print "$$ ==== End of processing for \'cvsacl\' line: $_.\n" if $debug;
}
close(CVSACL);
# ----------------------------------------------------------------------------
# --------------------------------------- determine final 'commit' disposition
# ----------------------------------------------------------------------------
if (%restricted_entries) { # any restricted entries?
$exit_val = 1; # don't commit
print("**** Access denied: Insufficient authority for user: '$user_name\' " .
"to commit to \'$repository\'.\n**** Contact CVS Administrators if " .
"you require update access to these directories or files.\n");
print("**** file(s)/dir(s) restricted were:\n\t", join("\n\t",keys %restricted_entries), "\n");
printOptionalRestrictionMessage();
write_restrictlog();
}
elsif (!$exit_val && $debug) {
print "**** Access allowed: Sufficient authority for commit.\n";
}
print "$$ ==== \$exit_val = $exit_val\n" if $debug;
exit($exit_val);
# ----------------------------------------------------------------------------
# -------------------------------------------------------------- end of "main"
# ----------------------------------------------------------------------------
# ----------------------------------------------------------------------------
# -------------------------------------------------------- process script args
# ----------------------------------------------------------------------------
sub processArgs {
# This subroutine is passed a reference to @ARGV.
# If @ARGV contains a "-u" entry, use that as the effective userId. In this
# case, the userId is the client-side userId that has been passed to this
# script by the commit_prep script. (This is why the commit_prep script must
# be placed *before* the cvs_acls script in the commitinfo admin file.)
# Otherwise, pull the userId from the server-side environment.
my $userId = "";
my ($argv) = shift; # pick up ref to @ARGV
my @argvClone = (); # immutable copy for foreach loop
for ($i=0; $i<(scalar @{$argv}); $i++) {
$argvClone[$i]=$argv->[$i];
}
print("$$ \@_ to processArgs is: @_.\n") if $debug;
# Parse command line arguments (file list is seen as one arg)
foreach $arg (@argvClone) {
print("$$ \$arg for processArgs loop is: $arg.\n") if $debug;
# Set $debug flag?
if ($arg eq '-d') {
shift @ARGV;
$debug = 1;
print("$$ \$debug flag set on.\n") if $debug;
print STDERR "Debug turned on...\n";
}
# Passing in a client-side userId?
elsif ($arg eq '-u') {
shift @ARGV;
$userId = shift @ARGV;
print("$$ client-side \$userId set to: $userId.\n") if $debug;
}
# An override for the default restrictlog file?
elsif ($arg eq '-f') {
shift @ARGV;
$restrictlog = shift @ARGV;
}
else {
next;
}
}
# No client-side userId passed? then get from server env
if (!$userId) {
$userId = $ENV{"USER"} if !($userId = $ENV{"LOGNAME"});
print("$$ server-side \$userId set to: $userId.\n") if $debug;
}
print("$$ processArgs returning \$userId: $userId.\n") if $debug;
return $userId;
}
# ----------------------------------------------------------------------------
# --------------------- Check all modules in list for either file or directory
# ----------------------------------------------------------------------------
sub checkFileness {
# Module patterns on the 'cvsacl' record can be files or directories.
# If it's a directory, we pattern-match the directory name from 'cvsacl'
# against the left side of the committed filename to see if the file is in
# that hierarchy. By contrast, files use an explicit match. If the entries
# are neither files nor directories, then the cvsacl file has been set up
# incorrectly; we return a "" and the caller skips that line as invalid.
#
# This function determines whether the entries on the 'cvsacl' record are all
# directories or all files; it cannot be a mixture. This restriction was put in
# to simplify the logic (without taking away much functionality).
my @module_list = @_;
print("$$ Checking \"fileness\" or \"dir-ness\" for \@module_list entries.\n") if $debug;
print("$$ Entries are: ", join("\, ",@module_list), ".\n") if $debug;
my $filetype = "";
for $cvsacl_module (@module_list) {
my $reposDirName = $cvsroot . '/' . $cvsacl_module;
my $reposFileName = $reposDirName . "\,v";
print("$$ In checkFileness: \$reposDirName: $reposDirName; \$reposFileName: $reposFileName.\n") if $debug;
if (((-d $reposDirName) && ($filetype eq "file")) || ((-f $reposFileName) && ($filetype eq "dir"))) {
print("Can\'t mix files and directories on single \'cvsacl\' file record; skipping entry.\n");
print(" Please contact a CVS administrator.\n");
$filetype = "";
last;
}
elsif (-d $reposDirName) {
$filetype = "dir";
print("$$ $reposDirName is a directory.\n") if $debug;
}
elsif (-f $reposFileName) {
$filetype = "file";
print("$$ $reposFileName is a regular file.\n") if $debug;
}
else {
print("***** Item to commit was neither a regular file nor a directory.\n");
print("***** Current \'cvsacl\' line ignored.\n");
print("***** Possible problem with \'cvsacl\' admin file. Please contact a CVS administrator.\n");
$filetype = "";
$text = sprintf("Module entry on cvsacl line: %s is not a valid file or directory.\n", $cvsacl_module);
write_restrictlog_record($text);
last;
} # end if
} # end for
print("$$ checkFileness will return \$filetype: $filetype.\n") if $debug;
return $filetype;
}
# ----------------------------------------------------------------------------
# ----------------------------------------------------- check for module match
# ----------------------------------------------------------------------------
sub checkModuleMatch {
# This subroutine checks for a match between the directory or file pattern
# specified in the 'cvsacl' file (i.e., $cvsacl_modules) versus the commit file
# objects passed into the script via @ARGV (i.e., $commit_object).
# The directory pattern only has to match the beginning portion of the commit
# file's name for a match since all files under that directory are considered
# a match. File patterns must exactly match.
# Since (theoretically, if not normally in practice) a working directory can
# contain a mixture of files from different branches, this routine checks to
# see if there is also a match on branch before considering the file
# comparison a match.
my $match_flag = "";
print("$$ \@_ in checkModuleMatch is: @_.\n") if $debug;
my ($type,$commit_object,$cvsacl_module) = @_;
if ($type eq "file") { # Do exact file match of $commit_object
if ($commit_object eq $cvsacl_module) {
$match_flag = "file";
} # Do dir match at beginning of $commit_object
}
elsif ($commit_object =~ /^$cvsacl_module\//) {
$match_flag = "dir";
}
if ($match_flag) {
print("$$ \$repository: $repository matches \$commit_object: $commit_object.\n") if $debug;
if (!$cvsacl_branches) { # empty branch pattern matches all
print("$$ blank \'cvsacl\' branch matches all commit files.\n") if $debug;
$repository_matches{$commit_object} = [$branch{$commit_object}, $cvsacl_module];
print("$$ \$repository_matches{$commit_object} = [$branch{$commit_object}, $cvsacl_module].\n") if $debug;
}
else { # otherwise check branch hash table
@branch_list = split (/[\s,]+/,$cvsacl_branches);
print("$$ Branches from \'cvsacl\' record: ", join(", ",@branch_list),".\n") if $debug;
if (grep(/$branch{$commit_object}/, @branch_list)) {
$repository_matches{$commit_object} = [$branch{$commit_object}, $cvsacl_module];
print("$$ \$repository_matches{$commit_object} = [$branch{$commit_object}, " .
"$cvsacl_module].\n") if $debug;
}
}
}
}
# ----------------------------------------------------------------------------
# ------------------------------------- print optional restriction message
# ----------------------------------------------------------------------------
sub printOptionalRestrictionMessage {
# This subroutine optionally prints site-specific file restriction information
# whenever a restriction condition is met. If the file 'restrict_msg' does
# not exist, the routine immediately exits. If there is a 'restrict_msg' file
# then all the contents are printed at the end of the standard restriction
# message.
# As seen from examining the definition of $restrictfile, the default filename
# is: $CVSROOT/CVSROOT/restrict_msg.
open (RESTRICT, $restrictfile) || return; # It is ok for restrict_msg file not to exist
while (<RESTRICT>) {
chop;
# print out each line
print("**** $_\n");
}
}
# ----------------------------------------------------------------------------
# ---------------------------------------------------------- write log message
# ----------------------------------------------------------------------------
sub write_restrictlog {
# This subroutine iterates through the list of restricted entries and logs
# each one to the error logfile.
# write each line in @text out separately
foreach $commitfile (keys %restricted_entries) {
$log_text = sprintf "Commit attempt by: %s for: %s on branch: %s",
$user_name, $commitfile, $branch{$commitfile};
write_restrictlog_record($log_text);
}
}
# ----------------------------------------------------------------------------
# ---------------------------------------------------------- write log message
# ----------------------------------------------------------------------------
sub write_restrictlog_record {
# This subroutine receives a scalar string and writes it out to the
# $restrictlog file as a separate line. Each line is prepended with the date
# and time in the format: "2004/01/30 12:00:00 ".
$text = shift;
# return quietly if there is a problem opening the log file.
open(FILE, ">>$restrictlog") || return;
(@time) = localtime();
# write each line in @text out separately
$log_record = sprintf "%04d/%02d/%02d %02d:%02d:%02d %s.\n",
$time[5]+1900, $time[4]+1, $time[3], $time[2], $time[1], $time[0], $text;
print FILE $log_record;
print("$$ restrict_log record being written: $log_record to $restrictlog.\n") if $debug;
close(FILE);
}

View File

@ -1,52 +0,0 @@
.\" Contributed by Lowell Skoog <fluke!lowell@uunet.uu.net>
.TH CVSCHECK LOCAL "4 March 1991" FLUKE
.SH NAME
cvscheck \- identify files added, changed, or removed in a CVS working
directory
.SH SYNOPSIS
.B cvscheck
.SH DESCRIPTION
This command is a housekeeping aid. It should be run in a working
directory that has been checked out using CVS. It identifies files
that have been added, changed, or removed in the working directory, but
not CVS
.BR commit ted.
It also determines whether the files have been CVS
.BR add ed
or CVS
.BR remove d.
For directories, this command determines only whether they have been
.BR add ed.
It operates in the current directory only.
.LP
This command provides information that is available using CVS
.B status
and CVS
.BR diff .
The advantage of
.B cvscheck
is that its output is very concise. It saves you the strain (and
potential error) of interpreting the output of CVS
.B status
and
.BR diff .
.LP
See
.BR cvs (local)
or
.BR cvshelp (local)
for instructions on how to add or remove a file or directory in a
CVS-controlled package.
.SH DIAGNOSTICS
The exit status is 0 if no files have been added, changed, or removed
from the current directory. Otherwise, the command returns a count of
the adds, changes, and deletes.
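.SH EXAMPLE
A hypothetical session (the file names are illustrative only) might look
like this:
.nf
.sp
$ cvscheck
file added: foo.c - not CVS added, not CVS committed
file changed: Makefile - not CVS committed
.sp
.fi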
.SH SEE ALSO
.BR cvs (local),
.BR cvshelp (local)
.SH AUTHOR
Lowell Skoog
.br
Software Technology Group
.br
Technical Computing

View File

@ -1,95 +0,0 @@
#! /bin/sh
#
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# cvscheck - identify files added, changed, or removed
# in CVS working directory
#
# Contributed by Lowell Skoog <fluke!lowell@uunet.uu.net>
#
# This program should be run in a working directory that has been
# checked out using CVS. It identifies files that have been added,
# changed, or removed in the working directory, but not "cvs
# committed". It also determines whether the files have been "cvs
# added" or "cvs removed". For directories, it is only practical to
# determine whether they have been added.
name=cvscheck
changes=0
# If we can't run CVS commands in this directory
cvs status . > /dev/null 2>&1
if [ $? != 0 ] ; then
# Bail out
echo "$name: there is no version here; bailing out" 1>&2
exit 1
fi
# Identify files added to working directory
for file in .* * ; do
# Skip '.' and '..'
if [ $file = '.' -o $file = '..' ] ; then
continue
fi
# If a regular file
if [ -f $file ] ; then
if cvs status $file | grep -s '^From:[ ]*New file' ; then
echo "file added: $file - not CVS committed"
changes=`expr $changes + 1`
elif cvs status $file | grep -s '^From:[ ]*no entry for' ; then
echo "file added: $file - not CVS added, not CVS committed"
changes=`expr $changes + 1`
fi
# Else if a directory
elif [ -d $file -a $file != CVS.adm ] ; then
# Move into it
cd $file
# If CVS commands don't work inside
cvs status . > /dev/null 2>&1
if [ $? != 0 ] ; then
echo "directory added: $file - not CVS added"
changes=`expr $changes + 1`
fi
# Move back up
cd ..
fi
done
# Identify changed files
changedfiles=`cvs diff | egrep '^diff' | awk '{print $3}'`
for file in $changedfiles ; do
echo "file changed: $file - not CVS committed"
changes=`expr $changes + 1`
done
# Identify files removed from working directory
removedfiles=`cvs status | egrep '^File:[ ]*no file' | awk '{print $4}'`
# Determine whether each file has been cvs removed
for file in $removedfiles ; do
if cvs status $file | grep -s '^From:[ ]*-' ; then
echo "file removed: $file - not CVS committed"
else
echo "file removed: $file - not CVS removed, not CVS committed"
fi
changes=`expr $changes + 1`
done
exit $changes

View File

@ -1,561 +0,0 @@
.\" Contributed by Lowell Skoog <fluke!lowell@uunet.uu.net>
.\" Full space in nroff; half space in troff
.de SP
.if n .sp
.if t .sp .5
..
.\" Start a command example
.de XS
.SP
.in +.5i
.ft B
.nf
..
.\" End a command example
.de XE
.fi
.ft P
.in -.5i
.SP
..
.TH CVSHELP LOCAL "17 March 1991" FLUKE
.SH NAME
cvshelp \- advice on using the Concurrent Versions System
.SH DESCRIPTION
This man page is based on experience using CVS.
It is bound to change as we gain more experience.
If you come up with better advice than is found here,
contact the Software Technology
Group and we will add it to this page.
.SS "Getting Started"
Use the following steps to prepare to use CVS:
.TP
\(bu
Take a look at the CVS manual page to see what it can do for you, and
if it fits your environment (or can possibly be made to fit your
environment).
.XS
man cvs
.XE
If things look good, continue on...
.TP
\(bu
Setup the master source repository. Choose a directory with
ample disk space available for source files. This is where the RCS
`,v' files will be stored. Say you choose
.B /src/master
as the root
of your source repository. Make the
.SB CVSROOT.adm
directory in the root of the source repository:
.XS
mkdir /src/master/CVSROOT.adm
.XE
.TP
\(bu
Populate this directory with the
.I loginfo
and
.I modules
files from the
.B "/usr/doc/local/cvs"
directory. Edit these files to reflect your local source repository
environment \- they may be quite small initially, but will grow as
sources are added to your source repository. Turn these files into
RCS controlled files:
.XS
cd /src/master/CVSROOT.adm
ci \-m'Initial loginfo file' loginfo
ci \-m'Initial modules file' modules
.XE
.TP
\(bu
Run the command:
.XS
mkmodules /src/master/CVSROOT.adm
.XE
This will build the
.BR ndbm (3)
file for the modules database.
.TP
\(bu
Remember to edit the
.I modules
file manually when sources are checked
in with
.B checkin
or CVS
.BR add .
A copy of the
.I modules
file for editing can be retrieved with the command:
.XS
cvs checkout CVSROOT.adm
.XE
.TP
\(bu
Have all users of the CVS system set the
.SM CVSROOT
environment variable appropriately to reflect the placement of your
source repository. If the above example is used, the following
commands can be placed in a
.I .login
or
.I .profile
file:
.XS
setenv CVSROOT /src/master
.XE
for csh users, and
.XS
CVSROOT=/src/master; export CVSROOT
.XE
for sh users.
.SS "Placing Locally Written Sources Under CVS Control"
Say you want to place the `whizbang' sources under
CVS control. Say further that the sources have never
been under revision control before.
.TP
\(bu
Move the source hierarchy (lock, stock, and barrel)
into the master source repository:
.XS
mv ~/whizbang $CVSROOT
.XE
.TP
\(bu
Clean out unwanted object files:
.XS
cd $CVSROOT/whizbang
make clean
.XE
.TP
\(bu
Turn every file in the hierarchy into an RCS controlled file:
.XS
descend \-f 'ci \-t/dev/null \-m"Placed under CVS control" \-nV\fR\fIx\fR\fB_\fR\fIy\fR\fB *'
.XE
In this example, the initial release tag is \fBV\fIx\fB_\fIy\fR,
representing version \fIx\fR.\fIy\fR.
.LP
You can use CVS on sources that are already under RCS control.
The following example shows how.
In this example, the source package is called `skunkworks'.
.TP
\(bu
Move the source hierarchy into the master source
repository:
.XS
mv ~/skunkworks $CVSROOT
.XE
.TP
\(bu
Clean out unwanted object files:
.XS
cd $CVSROOT/skunkworks
make clean
.XE
.TP
\(bu
Clean out unwanted working files, leaving only the RCS `,v' files:
.XS
descend \-r rcsclean
.XE
Note: If any working files have been checked out and changed,
.B rcsclean
will fail. Check in the modified working files
and run the command again.
.TP
\(bu
Get rid of
.SB RCS
subdirectories. CVS does not use them.
.XS
descend \-r \-f 'mv RCS/*,v .'
descend \-r \-f 'rmdir RCS'
.XE
.TP
\(bu
Delete any unwanted files that remain in the source hierarchy. Then
make sure all files are under RCS control:
.XS
descend \-f 'ci \-t/dev/null \-m"Placed under CVS control" \-n\fR\fItag\fR\fB *'
.XE
.I tag
is the latest symbolic revision tag that you applied to your package
(if any). Note: This command will probably generate lots of error
messages (for directories and existing RCS files) that you can
ignore.
.SS "Placing a Third-Party Source Distribution Under CVS Control"
The
.B checkin
command checks third-party sources into CVS. The
difference between third-party sources and locally
written sources is that third-party sources must be checked into a
separate branch (called the
.IR "vendor branch" )
of the RCS tree. This makes it possible to merge local changes to
the sources with later releases from the vendor.
.TP
\(bu
Save the original distribution kit somewhere. For example, if the
master source repository is
.B /src/master
the distribution kit could be saved in
.BR /src/dist .
Organize the distribution directory so that each release
is clearly identifiable.
.TP
\(bu
Unpack the package in a scratch directory, for example
.BR ~/scratch .
.TP
\(bu
Create a repository for the package.
In this example, the package is called `Bugs-R-Us 4.3'.
.XS
mkdir $CVSROOT/bugs
.XE
.TP
\(bu
Check in the unpacked files:
.XS
cd ~/scratch
checkin \-m 'Bugs-R-Us 4.3 distribution' bugs VENDOR V4_3
.XE
There is nothing magic about the tag `VENDOR', which is applied to
the vendor branch. You can use whatever tag you want. `VENDOR' is a
useful convention.
.TP
\(bu
Never modify vendor files before checking them in.
Check in the files
.I exactly
as you unpacked them.
If you check in locally modified files, future vendor releases may
wipe out your local changes.
.SS "Working With CVS-Controlled Sources"
To use or edit the sources, you must check out a private copy.
For the following examples, the master files are assumed to reside in
.BR "$CVSROOT/behemoth" .
The working directory is
.BR "~/work" .
See
.BR cvs (local)
for more details on the commands mentioned below.
.TP
.I "To Check Out Working Files
Use CVS
.BR checkout :
.XS
cd ~/work
cvs checkout behemoth
.XE
There is nothing magic about the working directory. CVS will check
out sources anywhere you like. Once you have a working copy of the
sources, you can compile or edit them as desired.
.TP
.I "To Display Changes You Have Made"
Use CVS
.BR diff
to display detailed changes, equivalent to
.BR rcsdiff (local).
You can also use
.BR cvscheck (local)
to list files added, changed, and removed in
the directory, but not yet
.BR commit ted.
You must be in a directory containing working files.
.TP
.I "To Display Revision Information"
Use CVS
.BR log ,
which is equivalent to
.BR rlog (local).
You must be in a directory containing working files.
.TP
.I "To Update Working Files"
Use CVS
.BR update
in a directory containing working files.
This command brings your working files up
to date with changes checked into the
master repository since you last checked out or updated
your files.
.TP
.I "To Check In Your Changes"
Use CVS
.BR commit
in a directory containing working files.
This command checks your changes into the master repository.
You can specify files by name or use
.XS
cvs commit \-a
.XE
to
.B commit
all the files you have changed.
.TP
.I "To Add a File"
Add the file to the working directory.
Use CVS
.B add
to mark the file as added.
Use CVS
.B commit
to add the file to the master repository.
.TP
.I "To Remove a File"
Remove the file from the working directory.
Use CVS
.B remove
to mark the file as removed.
Use CVS
.B commit
to move the file from its current location in the master repository
to the CVS
.IR Attic
directory.
.TP
.I "To Add a Directory"
Add the directory to the working directory.
Use CVS
.B add
to add the directory to the master repository.
.TP
.I "To Remove a Directory"
.br
You shouldn't remove directories under CVS. You should instead remove
their contents and then prune them (using the
.B \-f
and
.B \-p
options) when you
.B checkout
or
.B update
your working files.
.TP
.I "To Tag a Release"
Use CVS
.B tag
to apply a symbolic tag to the latest revision of each file in the
master repository. For example:
.XS
cvs tag V2_1 behemoth
.XE
.TP
.I "To Retrieve an Exact Copy of a Previous Release"
During a CVS
.B checkout
or
.BR update ,
use the
.B \-r
option to retrieve revisions associated with a symbolic tag.
Use the
.B \-f
option to ignore all RCS files that do not contain the
tag.
Use the
.B \-p
option to prune directories that wind up empty because none
of their files matched the tag. Example:
.XS
cd ~/work
cvs checkout \-r V2_1 \-f \-p behemoth
.XE
.SS "Logging Changes"
It is a good idea to keep a change log together with the
sources. As a minimum, the change log should name and describe each
tagged release. The change log should also be under CVS control and
should be tagged along with the sources.
.LP
.BR cvslog (local)
can help. This command logs
changes reported during CVS
.B commit
operations. It automatically
updates a change log file in your working directory. When you are
finished making changes, you (optionally) edit the change log file and
then commit it to the master repository.
.LP
Note: You must edit the change log to describe a new release
and
.B commit
it to the master repository
.I before
.BR tag ging
the release using CVS. Otherwise, the release description will not be
included in the tagged package.
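.LP
For example, assuming the change log is kept in a file named
`CHANGES' (the name is only a convention), the order of operations
when preparing release 2.1 of `behemoth' would be to edit `CHANGES'
to describe the release and then run:
.XS
cvs commit CHANGES
cvs tag V2_1 behemoth
.XE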
.LP
See
.BR cvslog (local)
for more information.
.SS "Merging a Subsequent Third-Party Distribution"
The initial steps in this process are identical to placing a
third-party distribution under CVS for the first time: save the
distribution kit and unpack the package in a scratch directory. From
that point the steps diverge.
The following example considers release 5.0 of the
Bugs-R-Us package.
.TP
\(bu
Check in the sources after unpacking them:
.XS
cd ~/scratch
checkin \-m 'Bugs-R-Us 5.0 distribution' bugs VENDOR V5_0 \\
| tee ~/WARNINGS
.XE
It is important to save the output of
.B checkin
in a file
because it lists the sources that have been locally modified.
It is best to save the file in a different directory (for example,
your home directory). Otherwise,
.B checkin
will try to check it into the master repository.
.TP
\(bu
In your usual working directory, check out a fresh copy of the
distribution that you just checked in.
.XS
cd ~/work
cvs checkout \-r VENDOR bugs
.XE
The
.B checkout
command shown above retrieves the latest revision on the vendor branch.
.TP
\(bu
See the `WARNINGS' file for a list of all locally modified
sources.
For each locally modified source,
look at the differences between
the new distribution and the latest local revision:
.XS
cvs diff \-r \fR\fILocalRev file\fR\fB
.XE
In this command,
.I LocalRev
is the latest
numeric or symbolic revision
on the RCS trunk of
.IR file .
You can use CVS
.B log
to get the revision history.
.TP
\(bu
If your local modifications to a file have been incorporated into
the vendor's distribution, then you should reset the default RCS
branch for that file to the vendor branch. CVS doesn't provide a
mechanism to do this. You have to do it by hand in the master
repository:
.XS
rcs \-bVENDOR \fR\fIfile\fR\fB,v
.XE
.TP
\(bu
If your local modifications need to be merged with the
new distribution, use CVS
.B join
to do it:
.XS
cvs join \-r VENDOR \fR\fIfile\fR\fB
.XE
The resulting file will be placed in your working directory.
Edit it to resolve any overlaps.
.TP
\(bu
Test the merged package.
.TP
\(bu
Commit all modified files to the repository:
.XS
cvs commit \-a
.XE
.TP
\(bu
Tag the repository with a new local tag.
.SS "Applying Patches to Third-Party Sources"
Patches are handled in a manner very similar to complete
third-party distributions. This example considers patches applied to
Bugs-R-Us release 5.0.
.TP
\(bu
Save the patch files together with the distribution kit
to which they apply.
The patch file names should clearly indicate the patch
level.
.TP
\(bu
In a scratch directory, check out the last `clean' vendor copy \- the
highest revision on the vendor branch with
.IR "no local changes" :
.XS
cd ~/scratch
cvs checkout \-r VENDOR bugs
.XE
.TP
\(bu
Use
.BR patch (local)
to apply the patches. You should now have an image of the
vendor's software just as though you had received a complete,
new release.
.TP
\(bu
Proceed with the steps described for merging a subsequent third-party
distribution.
.TP
\(bu
Note: When you get to the step that requires you
to check out the new distribution after you have
checked it into the vendor branch, you should move to a different
directory. Do not attempt to
.B checkout
files in the directory in
which you applied the patches. If you do, CVS will try to merge the
changes that you made during patching with the version being checked
out and things will get very confusing. Instead,
go to a different directory (like your working directory) and
check out the files there.
.SS "Advice to Third-Party Source Hackers"
As you can see from the preceding sections, merging local changes
into third-party distributions remains difficult, and probably
always will. This fact suggests some guidelines:
.TP
\(bu
Minimize local changes.
.I Never
make stylistic changes.
Change makefiles only as much as needed for installation. Avoid
overhauling anything. Pray that the vendor does the same.
.TP
\(bu
Avoid renaming files or moving them around.
.TP
\(bu
Put independent, locally written files like help documents, local
tools, or man pages in a sub-directory called `local-additions'.
Locally written files that are linked into an existing executable
should be added right in with the vendor's sources (not in a
`local-additions' directory).
If, in the future,
the vendor distributes something
equivalent to your locally written files
you can CVS
.B remove
the files from the `local-additions' directory at that time.
.SH SEE ALSO
.BR cvs (local),
.BR checkin (local),
.BR cvslog (local),
.BR cvscheck (local)
.SH AUTHOR
Lowell Skoog
.br
Software Technology Group
.br
Technical Computing

View File

@ -1,201 +0,0 @@
#!/bin/sh
# Copyright (C) 2000-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# This program takes a check.log file generated by a failed run of
# sanity.sh and runs expr on it line by line.  This makes it much easier
# to spot a single failed line in a 100-line test result.
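#
# A typical invocation after a failed test run might look like this
# (the script path here is hypothetical; src/check.log is the default
# input when no file is given):
#
#   sh contrib/check_log.sh -af src/check.log
#
# which compares the logged output against both the primary and the
# alternate pattern, line by line.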
#
#
# Contributed by Derek R. Price <derek.price@openavenue.com>
#
usage ()
{
echo "\
usage: $0 [-afh] [file...]
-a process alternate pattern
-f process first pattern (default)
-h print this text
file files to process (default = check.log)"
}
# Do a line by line match with expr
#
# INPUTS
# $1 = text file name
# $2 = pattern file name
expr_line_by_line ()
{
dcl_line=0
dcl_wrong=
# We assume a newline at the end of the file.  The way sanity.sh
# uses echo to create the log message guarantees this newline, and since
# expr ignores the final newline when the anchor is present, no
# information is lost in the comparison.
while test $dcl_line -lt `wc -l <$1` -a $dcl_line -lt `wc -l <$2`; do
dcl_line=`expr $dcl_line + 1`
if test `sed -ne${dcl_line}p <$1 |wc -c` -eq 1 \
-a `sed -ne${dcl_line}p <$2 |wc -c` -eq 1; then
# This is a workaround for what I am calling a bug in GNU
# expr - it won't match the empty string to the empty
# string. In this case the assumption is that a single
# character is always a newline. Since we already checked
# for the end of the file, we know sed will echo the
# newline.
:
elif expr "`sed -ne${dcl_line}p <$1`" : \
"`sed -ne${dcl_line}p <$2`\$" >/dev/null; then
:
else
echo "$dcl_line: `sed -ne${dcl_line}p <$1`"
echo "$dcl_line: `sed -ne${dcl_line}p <$2`\$"
dcl_wrong="$dcl_wrong $dcl_line"
fi
done
if test `wc -l <$1` -ne `wc -l <$2`; then
echo "output & pattern contain differing number of lines"
elif test -z "$dcl_wrong"; then
echo "no mismatched lines"
else
echo "mismatched lines: $dcl_wrong"
fi
}
# Process a single check.log file
#
# INPUTS
# $1 = filename
process_check_log ()
{
# abort if we can't find any expressions
if grep '^\*\* got: $' <$1 >/dev/null; then
:
else
echo "WARNING: No expressions in file: $1" >&2
echo " Either not a check.log or sanity.sh exited for some other reason," >&2
echo " like bad exit status. Try tail." >&2
return
fi
dcl_exprfiles=""
if grep '^\*\* or: $' <$1 >/dev/null; then
# file contains a second regex
if test $dcl_dofirst -eq 1; then
# get the first pattern
sed -ne '/^\*\* expected: $/,/^\*\* or: $/p' <$1 >/tmp/dcle$$
dcl_exprfiles="$dcl_exprfiles /tmp/dcle$$"
fi
if test $dcl_doalternate -eq 1; then
# get the alternate pattern
sed -ne '/^\*\* or: $/,/^\*\* got: $/p' <$1 >/tmp/dclo$$
dcl_exprfiles="$dcl_exprfiles /tmp/dclo$$"
else
echo "WARNING: Ignoring alternate pattern in file: $1" >&2
fi
else
# file doesn't contain a second regex
if test $dcl_dofirst = 1; then
# get the only pattern
sed -ne '/^\*\* expected: $/,/^\*\* got: $/p' <$1 >/tmp/dcle$$
dcl_exprfiles="$dcl_exprfiles /tmp/dcle$$"
fi
if test $dcl_doalternate -eq 1; then
echo "WARNING: No alternate pattern in file: $1" >&2
fi
fi
# and get the actual output
sed -ne '/^\*\* got: $/,$p' <$1 >/tmp/dclg$$
sed -ne '1D
$D
p' </tmp/dclg$$ >/tmp/dclh$$
mv /tmp/dclh$$ /tmp/dclg$$
# compare the output against each pattern requested
for dcl_f in $dcl_exprfiles; do
sed -ne '1D
$D
p' <$dcl_f >/tmp/dclp$$
mv /tmp/dclp$$ $dcl_f
case $dcl_f in
/tmp/dcle*)
echo "********** $1 : Primary **********"
;;
/tmp/dclo*)
echo "********** $1 : Alternate **********"
;;
esac
expr_line_by_line /tmp/dclg$$ $dcl_f
rm $dcl_f
done
rm /tmp/dclg$$
}
###
### MAIN
###
# set up defaults
dcl_doalternate=0
dcl_dofirst=0
# process options
while getopts afh arg; do
case $arg in
a)
dcl_doalternate=1
;;
f)
dcl_dofirst=1
;;
\?|h)
usage
exit 1
;;
esac
done
# dispose of processed args
shift `expr $OPTIND - 1`
OPTIND=1
# set the default mode
if test $dcl_doalternate -eq 0; then
dcl_dofirst=1
fi
# set default arg
if test $# -eq 0; then
if test -f src/check.log && test -r src/check.log; then
set src/check.log
else
set check.log
fi
fi
for file in "$@"; do
process_check_log $file;
done
exit 0

View File

@ -1,114 +0,0 @@
.TH DESCEND 1 "31 March 1992"
.SH NAME
descend \- walk directory tree and execute a command at each node
.SH SYNOPSIS
.B descend
[
.B \-afqrv
]
.I command
[
.I directory
\&.\|.\|.
]
.SH DESCRIPTION
.B descend
walks down a directory tree and executes a command at each node. It
is not as versatile as
.BR find (1),
but it has a simpler syntax. If no
.I directory
is specified,
.B descend
starts at the current one.
.LP
Unlike
.BR find ,
.B descend
can be told to skip the special directories associated with RCS,
CVS, and SCCS. This makes
.B descend
especially handy for use with these packages. It can be used with
other commands too, of course.
.LP
.B descend
is a poor man's way to make any command recursive. Note:
.B descend
does not follow symbolic links to directories unless they are
specified on the command line.
.SH OPTIONS
.TP 15
.B \-a
.I All.
Descend into directories that begin with '.'.
.TP
.B \-f
.I Force.
Ignore errors during descent. Normally,
.B descend
quits when an error occurs.
.TP
.B \-q
.I Quiet.
Suppress the message `In directory
.IR directory '
that is normally printed during the descent.
.TP
.B \-r
.I Restricted.
Don't descend into the special directories
.SB RCS,
.SB CVS,
.SB CVS.adm,
and
.SB SCCS.
.TP
.B \-v
.I Verbose.
Print
.I command
before executing it.
.SH EXAMPLES
.TP 15
.B "descend ls"
Cheap substitute for `ls -R'.
.TP 15
.B "descend -f 'rm *' tree"
Strip `tree' of its leaves. This command descends the `tree'
directory, removing all regular files. Since
.BR rm (1)
does not remove directories, this command leaves the directory
structure of `tree' intact, but denuded. The
.B \-f
option is required to keep
.B descend
from quitting. You could use `rm \-f' instead.
.TP
.B "descend -r 'co RCS/*'" /project/src/
Check out every RCS file under the directory
.BR "/project/src" .
.TP
.B "descend -r 'cvs diff'"
Perform CVS `diff' operation on every directory below (and including)
the current one.
.SH DIAGNOSTICS
Returns 1 if errors occur (and the
.B \-f
option is not used). Otherwise returns 0.
.SH SEE ALSO
.BR find (1),
.BR rcsintro (1),
.BR cvs (1),
.BR sccs (1)
.SH AUTHOR
Lowell Skoog
.br
Software Technology Group
.br
John Fluke Mfg. Co., Inc.
.SH BUGS
Shell metacharacters in
.I command
may have bizarre effects. In particular, compound commands
(containing ';', '[', and ']' characters) will not work. It is best
to enclose complicated commands in single quotes \(aa\ \(aa.

View File

@ -1,127 +0,0 @@
#! /bin/sh
#
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# descend - walk down a directory tree and execute a command at each node
fullname=$0
name=descend
usage="Usage: $name [-afqrv] command [directory ...]\n
\040\040-a\040\040All: descend into directories starting with '.'\n
\040\040-f\040\040Force: ignore errors during descent\n
\040\040-q\040\040Quiet: don't print directory names\n
\040\040-r\040\040Restricted: don't descend into RCS, CVS, CVS.adm, SCCS directories\n
\040\040-v\040\040Verbose: print command before executing it"
# Scan for options
while getopts afqrv option; do
case $option in
a)
alldirs=$option
options=$options" "-$option
;;
f)
force=$option
options=$options" "-$option
;;
q)
verbose=
quiet=$option
options=$options" "-$option
;;
r)
restricted=$option
options=$options" "-$option
;;
v)
verbose=$option
quiet=
options=$options" "-$option
;;
\?)
/usr/5bin/echo $usage 1>&2
exit 1
;;
esac
done
shift `expr $OPTIND - 1`
# Get command to execute
if [ $# -lt 1 ] ; then
/usr/5bin/echo $usage 1>&2
exit 1
else
command=$1
shift
fi
# If no directory specified, use '.'
if [ $# -lt 1 ] ; then
default_dir=.
fi
# For each directory specified
for dir in $default_dir "$@" ; do
# Spawn sub-shell so we return to starting directory afterward
(cd $dir
# Execute specified command
if [ -z "$quiet" ] ; then
echo In directory `hostname`:`pwd`
fi
if [ -n "$verbose" ] ; then
echo $command
fi
eval "$command" || if [ -z "$force" ] ; then exit 1; fi
# Collect dot file names if necessary
if [ -n "$alldirs" ] ; then
dotfiles=.*
else
dotfiles=
fi
# For each file in current directory
for file in $dotfiles * ; do
# Skip '.' and '..'
if [ "$file" = "." -o "$file" = ".." ] ; then
continue
fi
# If a directory but not a symbolic link
if [ -d "$file" -a ! -h "$file" ] ; then
# If not skipping this type of directory
if [ \( "$file" != "RCS" -a \
"$file" != "SCCS" -a \
"$file" != "CVS" -a \
"$file" != "CVS.adm" \) \
-o -z "$restricted" ] ; then
# Recursively descend into it
$fullname $options "$command" "$file" \
|| if [ -z "$force" ] ; then exit 1; fi
fi
# Else if a directory AND a symbolic link
elif [ -d "$file" -a -h "$file" ] ; then
if [ -z "$quiet" ] ; then
echo In directory `hostname`:`pwd`/$file: symbolic link: skipping
fi
fi
done
) || if [ -z "$force" ] ; then exit 1; fi
done

View File

@ -1,481 +0,0 @@
echo 'directory.3':
sed 's/^X//' >'directory.3' <<'!'
X.TH DIRECTORY 3 imported
X.DA 9 Oct 1985
X.SH NAME
Xopendir, readdir, telldir, seekdir, rewinddir, closedir \- high-level directory operations
X.SH SYNOPSIS
X.B #include <sys/types.h>
X.br
X.B #include <ndir.h>
X.PP
X.SM
X.B DIR
X.B *opendir(filename)
X.br
X.B char *filename;
X.PP
X.SM
X.B struct direct
X.B *readdir(dirp)
X.br
X.B DIR *dirp;
X.PP
X.SM
X.B long
X.B telldir(dirp)
X.br
X.B DIR *dirp;
X.PP
X.SM
X.B seekdir(dirp, loc)
X.br
X.B DIR *dirp;
X.br
X.B long loc;
X.PP
X.SM
X.B rewinddir(dirp)
X.br
X.B DIR *dirp;
X.PP
X.SM
X.B closedir(dirp)
X.br
X.B DIR *dirp;
X.SH DESCRIPTION
XThis library provides high-level primitives for directory scanning,
Xsimilar to those available for 4.2BSD's (very different) directory system.
X.\"The purpose of this library is to simulate
X.\"the new flexible length directory names of 4.2bsd UNIX
X.\"on top of the old directory structure of v7.
XIt incidentally provides easy portability to and from 4.2BSD (insofar
Xas such portability is not compromised by other 4.2/VAX dependencies).
X.\"It allows programs to be converted immediately
X.\"to the new directory access interface,
X.\"so that they need only be relinked
X.\"when moved to 4.2bsd.
X.\"It is obtained with the loader option
X.\".BR \-lndir .
X.PP
X.I Opendir
Xopens the directory named by
X.I filename
Xand associates a
X.I directory stream
Xwith it.
X.I Opendir
Xreturns a pointer to be used to identify the
X.I directory stream
Xin subsequent operations.
XThe pointer
X.SM
X.B NULL
Xis returned if
X.I filename
Xcannot be accessed or is not a directory.
X.PP
X.I Readdir
Xreturns a pointer to the next directory entry.
XIt returns
X.B NULL
Xupon reaching the end of the directory or detecting
Xan invalid
X.I seekdir
Xoperation.
X.PP
X.I Telldir
Xreturns the current location associated with the named
X.I directory stream.
X.PP
X.I Seekdir
Xsets the position of the next
X.I readdir
Xoperation on the
X.I directory stream.
XThe new position reverts to the one associated with the
X.I directory stream
Xwhen the
X.I telldir
Xoperation was performed.
XValues returned by
X.I telldir
Xare good only for the lifetime of the DIR pointer from
Xwhich they are derived.
XIf the directory is closed and then reopened,
Xthe
X.I telldir
Xvalue may be invalidated
Xdue to undetected directory compaction in 4.2BSD.
XIt is safe to use a previous
X.I telldir
Xvalue immediately after a call to
X.I opendir
Xand before any calls to
X.I readdir.
X.PP
X.I Rewinddir
Xresets the position of the named
X.I directory stream
Xto the beginning of the directory.
X.PP
X.I Closedir
Xcauses the named
X.I directory stream
Xto be closed,
Xand the structure associated with the DIR pointer to be freed.
X.PP
XA
X.I direct
Xstructure is as follows:
X.PP
X.RS
X.nf
Xstruct direct {
X /* unsigned */ long d_ino; /* inode number of entry */
X unsigned short d_reclen; /* length of this record */
X unsigned short d_namlen; /* length of string in d_name */
X char d_name[MAXNAMLEN + 1]; /* name must be no longer than this */
X};
X.fi
X.RE
X.PP
XThe
X.I d_reclen
Xfield is meaningless in non-4.2BSD systems and should be ignored.
XThe use of a
X.I long
Xfor
X.I d_ino
Xis also a 4.2BSDism;
X.I ino_t
X(see
X.IR types (5))
Xshould be used elsewhere.
XThe macro
X.I DIRSIZ(dp)
Xgives the minimum memory size needed to hold the
X.I direct
Xvalue pointed to by
X.IR dp ,
Xwith the minimum necessary allocation for
X.IR d_name .
X.PP
XThe preferred way to search the current directory for entry ``name'' is:
X.PP
X.RS
X.nf
X len = strlen(name);
X dirp = opendir(".");
X if (dirp == NULL) {
X fprintf(stderr, "%s: can't read directory .\\n", argv[0]);
X return NOT_FOUND;
X }
X while ((dp = readdir(dirp)) != NULL)
X if (dp->d_namlen == len && strcmp(dp->d_name, name) == 0) {
X closedir(dirp);
X return FOUND;
X }
X closedir(dirp);
X return NOT_FOUND;
X.fi
X.RE
X.\".SH LINKING
X.\"This library is accessed by specifying ``-lndir'' as the
X.\"last argument to the compile line, e.g.:
X.\".PP
X.\" cc -I/usr/include/ndir -o prog prog.c -lndir
X.SH "SEE ALSO"
Xopen(2),
Xclose(2),
Xread(2),
Xlseek(2)
X.SH HISTORY
XWritten by
XKirk McKusick at Berkeley (ucbvax!mckusick).
XMiscellaneous bug fixes from elsewhere.
XThe size of the data structure has been decreased to avoid excessive
Xspace waste under V7 (where filenames are 14 characters at most).
XFor obscure historical reasons, the include file is also available
Xas
X.IR <ndir/sys/dir.h> .
XThe Berkeley version lived in a separate library (\fI\-lndir\fR),
Xwhereas ours is
Xpart of the C library, although the separate library is retained to
Xmaximize compatibility.
X.PP
XThis manual page has been substantially rewritten to be informative in
Xthe absence of a 4.2BSD manual.
X.SH BUGS
XThe
X.I DIRSIZ
Xmacro actually wastes a bit of space due to some padding requirements
Xthat are an artifact of 4.2BSD.
X.PP
XThe returned value of
X.I readdir
Xpoints to a static area that will be overwritten by subsequent calls.
X.PP
XThere are some unfortunate name conflicts with the \fIreal\fR V7
Xdirectory structure definitions.
!
echo 'dir.h':
sed 's/^X//' >'dir.h' <<'!'
X/* dir.h 4.4 82/07/25 */
X
X/*
X * A directory consists of some number of blocks of DIRBLKSIZ
X * bytes, where DIRBLKSIZ is chosen such that it can be transferred
X * to disk in a single atomic operation (e.g. 512 bytes on most machines).
X *
X * Each DIRBLKSIZ byte block contains some number of directory entry
X * structures, which are of variable length. Each directory entry has
X * a struct direct at the front of it, containing its inode number,
X * the length of the entry, and the length of the name contained in
X * the entry. These are followed by the name padded to a 4 byte boundary
X * with null bytes. All names are guaranteed null terminated.
X * The maximum length of a name in a directory is MAXNAMLEN.
X *
X * The macro DIRSIZ(dp) gives the amount of space required to represent
X * a directory entry. Free space in a directory is represented by
X * entries which have dp->d_reclen >= DIRSIZ(dp). All DIRBLKSIZ bytes
X * in a directory block are claimed by the directory entries. This
X * usually results in the last entry in a directory having a large
X * dp->d_reclen. When entries are deleted from a directory, the
X * space is returned to the previous entry in the same directory
X * block by increasing its dp->d_reclen. If the first entry of
X * a directory block is free, then its dp->d_ino is set to 0.
X * Entries other than the first in a directory do not normally have
X * dp->d_ino set to 0.
X */
X#define DIRBLKSIZ 512
X#ifdef VMUNIX
X#define MAXNAMLEN 255
X#else
X#define MAXNAMLEN 14
X#endif
X
Xstruct direct {
X /* unsigned */ long d_ino; /* inode number of entry */
X unsigned short d_reclen; /* length of this record */
X unsigned short d_namlen; /* length of string in d_name */
X char d_name[MAXNAMLEN + 1]; /* name must be no longer than this */
X};
X
X/*
X * The DIRSIZ macro gives the minimum record length which will hold
X * the directory entry. This requires the amount of space in struct direct
X * without the d_name field, plus enough space for the name with a terminating
X * null byte (dp->d_namlen+1), rounded up to a 4 byte boundary.
X */
X#undef DIRSIZ
X#define DIRSIZ(dp) \
X ((sizeof (struct direct) - (MAXNAMLEN+1)) + (((dp)->d_namlen+1 + 3) &~ 3))
X
X#ifndef KERNEL
X/*
X * Definitions for library routines operating on directories.
X */
Xtypedef struct _dirdesc {
X int dd_fd;
X long dd_loc;
X long dd_size;
X char dd_buf[DIRBLKSIZ];
X} DIR;
X#ifndef NULL
X#define NULL 0
X#endif
Xextern DIR *opendir();
Xextern struct direct *readdir();
Xextern long telldir();
X#ifdef void
Xextern void seekdir();
Xextern void closedir();
X#endif
X#define rewinddir(dirp) seekdir((dirp), (long)0)
X#endif KERNEL
!
echo 'makefile':
sed 's/^X//' >'makefile' <<'!'
XDIR = closedir.o opendir.o readdir.o seekdir.o telldir.o
XCFLAGS=-O -I. -Dvoid=int
XDEST=..
X
Xall: $(DIR)
X
Xmv: $(DIR)
X mv $(DIR) $(DEST)
X
Xcpif: dir.h
X cp dir.h /usr/include/ndir.h
X
Xclean:
X rm -f *.o
!
echo 'closedir.c':
sed 's/^X//' >'closedir.c' <<'!'
Xstatic char sccsid[] = "@(#)closedir.c 4.2 3/10/82";
X
X#include <sys/types.h>
X#include <dir.h>
X
X/*
X * close a directory.
X */
Xvoid
Xclosedir(dirp)
X register DIR *dirp;
X{
X close(dirp->dd_fd);
X dirp->dd_fd = -1;
X dirp->dd_loc = 0;
X free((char *)dirp);
X}
!
echo 'opendir.c':
sed 's/^X//' >'opendir.c' <<'!'
X/* Copyright (c) 1982 Regents of the University of California */
X
Xstatic char sccsid[] = "@(#)opendir.c 4.4 11/12/82";
X
X#include <sys/types.h>
X#include <sys/stat.h>
X#include <dir.h>
X
X/*
X * open a directory.
X */
XDIR *
Xopendir(name)
X char *name;
X{
X register DIR *dirp;
X register int fd;
X struct stat statbuf;
X char *malloc();
X
X if ((fd = open(name, 0)) == -1)
X return NULL;
X if (fstat(fd, &statbuf) == -1 || !(statbuf.st_mode & S_IFDIR)) {
X close(fd);
X return NULL;
X }
X if ((dirp = (DIR *)malloc(sizeof(DIR))) == NULL) {
X close (fd);
X return NULL;
X }
X dirp->dd_fd = fd;
X dirp->dd_loc = 0;
X dirp->dd_size = 0; /* so that telldir will work before readdir */
X return dirp;
X}
!
echo 'readdir.c':
sed 's/^X//' >'readdir.c' <<'!'
X/* Copyright (c) 1982 Regents of the University of California */
X
Xstatic char sccsid[] = "@(#)readdir.c 4.3 8/8/82";
X
X#include <sys/types.h>
X#include <dir.h>
X
X/*
X * read an old style directory entry and present it as a new one
X */
X#define ODIRSIZ 14
X
Xstruct olddirect {
X ino_t od_ino;
X char od_name[ODIRSIZ];
X};
X
X/*
X * get next entry in a directory.
X */
Xstruct direct *
Xreaddir(dirp)
X register DIR *dirp;
X{
X register struct olddirect *dp;
X static struct direct dir;
X
X for (;;) {
X if (dirp->dd_loc == 0) {
X dirp->dd_size = read(dirp->dd_fd, dirp->dd_buf,
X DIRBLKSIZ);
X if (dirp->dd_size <= 0) {
X dirp->dd_size = 0;
X return NULL;
X }
X }
X if (dirp->dd_loc >= dirp->dd_size) {
X dirp->dd_loc = 0;
X continue;
X }
X dp = (struct olddirect *)(dirp->dd_buf + dirp->dd_loc);
X dirp->dd_loc += sizeof(struct olddirect);
X if (dp->od_ino == 0)
X continue;
X dir.d_ino = dp->od_ino;
X strncpy(dir.d_name, dp->od_name, ODIRSIZ);
X dir.d_name[ODIRSIZ] = '\0'; /* insure null termination */
X dir.d_namlen = strlen(dir.d_name);
X dir.d_reclen = DIRBLKSIZ;
X return (&dir);
X }
X}
!
echo 'seekdir.c':
sed 's/^X//' >'seekdir.c' <<'!'
Xstatic char sccsid[] = "@(#)seekdir.c 4.9 3/25/83";
X
X#include <sys/param.h>
X#include <dir.h>
X
X/*
X * seek to an entry in a directory.
X * Only values returned by "telldir" should be passed to seekdir.
X */
Xvoid
Xseekdir(dirp, loc)
X register DIR *dirp;
X long loc;
X{
X long curloc, base, offset;
X struct direct *dp;
X extern long lseek();
X
X curloc = telldir(dirp);
X if (loc == curloc)
X return;
X base = loc & ~(DIRBLKSIZ - 1);
X offset = loc & (DIRBLKSIZ - 1);
X (void) lseek(dirp->dd_fd, base, 0);
X dirp->dd_size = 0;
X dirp->dd_loc = 0;
X while (dirp->dd_loc < offset) {
X dp = readdir(dirp);
X if (dp == NULL)
X return;
X }
X}
!
echo 'telldir.c':
sed 's/^X//' >'telldir.c' <<'!'
Xstatic char sccsid[] = "@(#)telldir.c 4.1 2/21/82";
X
X#include <sys/types.h>
X#include <dir.h>
X
X/*
X * return a pointer into a directory
X */
Xlong
Xtelldir(dirp)
X DIR *dirp;
X{
X long lseek();
X
X return (lseek(dirp->dd_fd, 0L, 1) - dirp->dd_size + dirp->dd_loc);
X}
!
echo done

View File

@ -1,112 +0,0 @@
Date: Tue, 16 Jun 1992 17:05:23 +0200
From: Steven.Pemberton@cwi.nl
Message-Id: <9206161505.AA06927.steven@sijs.cwi.nl>
To: berliner@Sun.COM
Subject: cvs
INTRODUCTION TO USING CVS
CVS is a system that lets groups of people work simultaneously on
groups of files (for instance program sources).
It works by holding a central 'repository' of the most recent version
of the files. You may at any time create a personal copy of these
files; if at a later date newer versions of the files are put in the
repository, you can 'update' your copy.
You may edit your copy of the files freely. If new versions of the
files have been put in the repository in the meantime, doing an update
merges the changes in the central copy into your copy.
(It can be that when you do an update, the changes in the
central copy clash with changes you have made in your own
copy. In this case cvs warns you, and you have to resolve the
clash in your copy.)
When you are satisfied with the changes you have made in your copy of
the files, you can 'commit' them into the central repository.
(When you do a commit, if you haven't updated to the most
recent version of the files, cvs tells you this; then you have
to first update, resolve any possible clashes, and then redo
the commit.)
USING CVS
Suppose that a number of repositories have been stored in
/usr/src/cvs. Whenever you use cvs, the environment variable
CVSROOT must be set to this path, so that cvs can find the repository:
CVSROOT=/usr/src/cvs
export CVSROOT
TO CREATE A PERSONAL COPY OF A REPOSITORY
Suppose you want a copy of the files in repository 'views' to be
created in your directory src. Go to the place where you want your
copy of the directory, and do a 'checkout' of the directory you
want:
cd $HOME/src
cvs checkout views
This creates a directory called (in this case) 'views' in the src
directory, containing a copy of the files, which you may now work
on to your heart's content.
TO UPDATE YOUR COPY
Use the command 'cvs update'.
This will update your copy with any changes from the central
repository, telling you which files have been updated (their names
are displayed with a U before them), and which have been modified
by you and not yet committed (preceded by an M). You will be
warned of any files that contain clashes, the clashes will be
marked in the file surrounded by lines of the form <<<< and >>>>.
TO COMMIT YOUR CHANGES
Use the command 'cvs commit'.
You will be put in an editor to make a message that describes the
changes that you have made (for future reference). Your changes
will then be added to the central copy.
ADDING AND REMOVING FILES
It can be that the changes you want to make involve a completely
new file, or removing an existing one. The commands to use here
are:
cvs add <filename>
cvs remove <filename>
You still have to do a commit after these commands. You may make
any number of new files in your copy of the repository, but they
will not be committed to the central copy unless you do a 'cvs add'.
OTHER USEFUL COMMANDS AND HINTS
To see the commit messages for files, and who made them, use:
cvs log [filenames]
To see the differences between your version and the central version:
cvs diff [filenames]
To give a file a new name, rename it and do an add and a remove.
To lose your changes and go back to the version from the
repository, delete the file and do an update.
After an update where there have been clashes, your original
version of the file is saved as .#file.version.
All the cvs commands mentioned accept a flag '-n', that doesn't do
the action, but lets you see what would happen. For instance, you
can use 'cvs -n update' to see which files would be updated.
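A COMPLETE EXAMPLE
Putting the pieces above together, a minimal session might look like
this (the repository path and the module name 'views' are the ones
used in the examples above; substitute your own):
CVSROOT=/usr/src/cvs; export CVSROOT
cd $HOME/src
cvs checkout views
cd views
(edit some files)
cvs update
cvs commit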
MORE INFORMATION
This is necessarily a very brief introduction. See the manual page
(man cvs) for full details.

View File

@ -1,238 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
# XXX: FIXME: handle multiple '-f logfile' arguments
#
# XXX -- I HATE Perl! This *will* be re-written in shell/awk/sed soon!
#
# Usage: log.pl [-u user] [[-m mailto] ...] [-s] [-V] -f logfile 'dirname file ...'
#
# -u user - $USER passed from loginfo
# -m mailto - for each user to receive cvs log reports
# (multiple -m's permitted)
# -s - to prevent "cvs status -v" messages
# -V - without '-s', don't pass '-v' to cvs status
# -f logfile - for the logfile to append to (mandatory,
# but only one logfile can be specified).
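#
# For example, a CVSROOT/loginfo entry wiring this script up for every
# module might look like this (the paths and the mail alias are
# hypothetical; CVS expands %s to 'dirname file ...'):
#
#   DEFAULT /usr/local/libexec/log.pl -u $USER -m cvs-commits -f /var/log/cvs-commitlog %s
#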
# here is what the output looks like:
#
# From: woods@kuma.domain.top
# Subject: CVS update: testmodule
#
# Date: Wednesday November 23, 1994 @ 14:15
# Author: woods
#
# Update of /local/src-CVS/testmodule
# In directory kuma:/home/kuma/woods/work.d/testmodule
#
# Modified Files:
# test3
# Added Files:
# test6
# Removed Files:
# test4
# Log Message:
# - wow, what a test
#
# (and for each file the "cvs status -v" output is appended unless -s is used)
#
# ==================================================================
# File: test3 Status: Up-to-date
#
# Working revision: 1.41 Wed Nov 23 14:15:59 1994
# Repository revision: 1.41 /local/src-CVS/cvs/testmodule/test3,v
# Sticky Options: -ko
#
# Existing Tags:
# local-v2 (revision: 1.7)
# local-v1 (revision: 1.1.1.2)
# CVS-1_4A2 (revision: 1.1.1.2)
# local-v0 (revision: 1.2)
# CVS-1_4A1 (revision: 1.1.1.1)
# CVS (branch: 1.1.1)
use strict;
use IO::File;
my $cvsroot = $ENV{'CVSROOT'};
# turn off setgid
#
$) = $(;
my $dostatus = 1;
my $verbosestatus = 1;
my $users;
my $login;
my $donefiles;
my $logfile;
my @files;
# parse command line arguments
#
while (@ARGV) {
my $arg = shift @ARGV;
if ($arg eq '-m') {
$users = "$users " . shift @ARGV;
} elsif ($arg eq '-u') {
$login = shift @ARGV;
} elsif ($arg eq '-f') {
($logfile) && die "Too many '-f' args";
$logfile = shift @ARGV;
} elsif ($arg eq '-s') {
$dostatus = 0;
} elsif ($arg eq '-V') {
$verbosestatus = 0;
} else {
($donefiles) && die "Too many arguments!\n";
$donefiles = 1;
@files = split(/ /, $arg);
}
}
# the first argument is the module location relative to $CVSROOT
#
my $modulepath = shift @files;
my $mailcmd = "| Mail -s 'CVS update: $modulepath'";
# Initialise some date and time arrays
#
my @mos = ('January','February','March','April','May','June','July',
'August','September','October','November','December');
my @days = ('Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday');
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime;
$year += 1900;
# get a login name for the guy doing the commit....
#
if ($login eq '') {
$login = getlogin || (getpwuid($<))[0] || "nobody";
}
# open log file for appending
#
my $logfh = new IO::File ">>" . $logfile
or die "Could not open(" . $logfile . "): $!\n";
# send mail, if there's anyone to send to!
#
my $mailfh;
if ($users) {
$mailcmd = "$mailcmd $users";
$mailfh = new IO::File $mailcmd
or die "Could not Exec($mailcmd): $!\n";
}
# print out the log Header
#
$logfh->print ("\n");
$logfh->print ("****************************************\n");
$logfh->print ("Date:\t$days[$wday] $mos[$mon] $mday, $year @ $hour:" . sprintf("%02d", $min) . "\n");
$logfh->print ("Author:\t$login\n\n");
if ($mailfh) {
$mailfh->print ("\n");
$mailfh->print ("Date:\t$days[$wday] $mos[$mon] $mday, $year @ $hour:" . sprintf("%02d", $min) . "\n");
$mailfh->print ("Author:\t$login\n\n");
}
# print the stuff from logmsg that comes in on stdin to the logfile
#
my $infh = new IO::File "< -";
foreach ($infh->getlines) {
$logfh->print;
if ($mailfh) {
$mailfh->print ($_);
}
}
undef $infh;
$logfh->print ("\n");
# after log information, do an 'cvs -Qq status -v' on each file in the arguments.
#
if ($dostatus != 0) {
while (@files) {
my $file = shift @files;
if ($file eq "-") {
$logfh->print ("[input file was '-']\n");
if ($mailfh) {
$mailfh->print ("[input file was '-']\n");
}
last;
}
my $rcsfh = new IO::File;
my $pid = $rcsfh->open ("-|");
if ( !defined $pid )
{
die "fork failed: $!";
}
if ($pid == 0)
{
my @command = ('cvs', '-nQq', 'status');
if ($verbosestatus)
{
push @command, '-v';
}
push @command, $file;
exec @command;
die "cvs exec failed: $!";
}
my $line;
while ($line = $rcsfh->getline) {
$logfh->print ($line);
if ($mailfh) {
$mailfh->print ($line);
}
}
undef $rcsfh;
}
}
$logfh->close()
or die "Write to $logfile failed: $!";
if ($mailfh)
{
$mailfh->close;
die "Pipe to $mailcmd failed" if $?;
}
## must exit cleanly
##
exit 0;

View File

@ -1,749 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
# Perl filter to handle the log messages from the checkin of files in
# a directory. This script will group the lists of files by log
# message, and mail a single consolidated log message at the end of
# the commit.
#
# This file assumes a pre-commit checking program that leaves the
# names of the first and last commit directories in a temporary file.
#
# IMPORTANT: what the above means is, this script interacts with
# commit_prep, in that they have to agree on the tmpfile name to use.
# See $LAST_FILE below.
#
# How this works: CVS triggers this script once for each directory
# involved in the commit -- in other words, a single commit can invoke
# this script N times. It knows when it's on the last invocation by
# examining the contents of $LAST_FILE. Between invocations, it
# caches information for its future incarnations in various temporary
# files in /tmp, which are named according to the process group and
# the committer (neither of these is unique by itself, but
# together they almost always are, unless the same user is doing two
# commits simultaneously). The final invocation is the one that
# actually sends the mail -- it gathers up the cached information,
# combines that with what it found out on this pass, and sends a
# commit message to the appropriate mailing list.
#
# (Ask Karl Fogel <kfogel@collab.net> if you have questions.)
#
# Contributed by David Hampton <hampton@cisco.com>
# Roy Fielding removed useless code and added log/mail of new files
# Ken Coar added special processing (i.e., no diffs) for binary files
#
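# This script is normally installed as a CVSROOT/loginfo trigger, paired
# with commit_prep in commitinfo (see the IMPORTANT note above).  A
# hypothetical pair of entries might look like this (the paths are
# examples only, and the exact format string depends on your CVS
# version; %s passes 'dirname file ...'):
#
#   commitinfo:  DEFAULT /usr/local/libexec/commit_prep
#   loginfo:     DEFAULT /usr/local/libexec/log_accum -u $USER %s
#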
############################################################
#
# Configurable options
#
############################################################
#
# Where do you want the RCS ID and delta info?
# 0 = none,
# 1 = in mail only,
# 2 = in both mail and logs.
#
$rcsidinfo = 2;
# If you are using cvsweb, set the variables below appropriately; if not,
# set them to "" (or simply leave the assignments commented out).
#
# When set properly, links to the affected files in cvsweb will be
# included in the commit emails.
#$CVSWEB_SCHEME = "http";
#$CVSWEB_DOMAIN = "nongnu.org";
#$CVSWEB_PORT = "80";
#$CVSWEB_URI = "source/browse/";
#$SEND_URL = "true";
$SEND_DIFF = "true";
# Set this to a domain to have CVS pretend that all users who make
# commits have mail accounts within that domain.
#$EMULATE_LOCAL_MAIL_USER="nongnu.org";
# Set this to '-c' for context diffs; defaults to '-u' for unidiff format.
$difftype = '-uN';
############################################################
#
# Constants
#
############################################################
$STATE_NONE = 0;
$STATE_CHANGED = 1;
$STATE_ADDED = 2;
$STATE_REMOVED = 3;
$STATE_LOG = 4;
$TMPDIR = $ENV{'TMPDIR'} || '/tmp';
$FILE_PREFIX = '#cvs.';
$LAST_FILE = "$TMPDIR/${FILE_PREFIX}lastdir"; # Created by commit_prep!
$ADDED_FILE = "$TMPDIR/${FILE_PREFIX}files.added";
$REMOVED_FILE = "$TMPDIR/${FILE_PREFIX}files.removed";
$LOG_FILE = "$TMPDIR/${FILE_PREFIX}files.log";
$BRANCH_FILE = "$TMPDIR/${FILE_PREFIX}files.branch";
$MLIST_FILE = "$TMPDIR/${FILE_PREFIX}files.mlist";
$SUMMARY_FILE = "$TMPDIR/${FILE_PREFIX}files.summary";
$CVSROOT = $ENV{'CVSROOT'};
$MAIL_CMD = "| /usr/lib/sendmail -i -t";
#$MAIL_CMD = "| /var/qmail/bin/qmail-inject";
$MAIL_FROM = 'commitlogger'; #not needed if EMULATE_LOCAL_MAIL_USER
$SUBJECT_PRE = 'CVS update:';
############################################################
#
# Subroutines
#
############################################################
sub format_names {
local($dir, @files) = @_;
local(@lines);
$lines[0] = sprintf(" %-08s", $dir);
foreach $file (@files) {
if (length($lines[$#lines]) + length($file) > 60) {
$lines[++$#lines] = sprintf(" %8s", " ");
}
$lines[$#lines] .= " ".$file;
}
@lines;
}
sub cleanup_tmpfiles {
local(@files);
opendir(DIR, $TMPDIR);
push(@files, grep(/^${FILE_PREFIX}.*\.${id}\.${cvs_user}$/, readdir(DIR)));
closedir(DIR);
foreach (@files) {
unlink "$TMPDIR/$_";
}
}
sub write_logfile {
local($filename, @lines) = @_;
open(FILE, ">$filename") || die ("Cannot open log file $filename: $!\n");
print(FILE join("\n", @lines), "\n");
close(FILE);
}
sub append_to_file {
local($filename, $dir, @files) = @_;
if (@files) {
local(@lines) = &format_names($dir, @files);
open(FILE, ">>$filename") || die ("Cannot open file $filename: $!\n");
print(FILE join("\n", @lines), "\n");
close(FILE);
}
}
sub write_line {
local($filename, $line) = @_;
open(FILE, ">$filename") || die("Cannot open file $filename: $!\n");
print(FILE $line, "\n");
close(FILE);
}
sub append_line {
local($filename, $line) = @_;
open(FILE, ">>$filename") || die("Cannot open file $filename: $!\n");
print(FILE $line, "\n");
close(FILE);
}
sub read_line {
local($filename) = @_;
local($line);
open(FILE, "<$filename") || die("Cannot open file $filename: $!\n");
$line = <FILE>;
close(FILE);
chomp($line);
$line;
}
sub read_line_nodie {
local($filename) = @_;
local($line);
open(FILE, "<$filename") || return ("");
$line = <FILE>;
close(FILE);
chomp($line);
$line;
}
sub read_file_lines {
local($filename) = @_;
local(@text) = ();
open(FILE, "<$filename") || return ();
while (<FILE>) {
chomp;
push(@text, $_);
}
close(FILE);
@text;
}
sub read_file {
local($filename, $leader) = @_;
local(@text) = ();
open(FILE, "<$filename") || return ();
while (<FILE>) {
chomp;
push(@text, sprintf(" %-10s %s", $leader, $_));
$leader = "";
}
close(FILE);
@text;
}
sub read_logfile {
local($filename, $leader) = @_;
local(@text) = ();
open(FILE, "<$filename") || die ("Cannot open log file $filename: $!\n");
while (<FILE>) {
chomp;
push(@text, $leader.$_);
}
close(FILE);
@text;
}
#
# do an 'cvs -Qn status' on each file in the arguments, and extract info.
#
sub change_summary {
local($out, @filenames) = @_;
local(@revline);
local($file, $rev, $rcsfile, $line, $vhost, $cvsweb_base);
while (@filenames) {
$file = shift @filenames;
if ("$file" eq "") {
next;
}
open(RCS, "-|") || exec "$cvsbin/cvs", '-Qn', 'status', '--', $file;
$rev = "";
$delta = "";
$rcsfile = "";
while (<RCS>) {
if (/^[ \t]*Repository revision/) {
chomp;
@revline = split(' ', $_);
$rev = $revline[2];
$rcsfile = $revline[3];
$rcsfile =~ s,^$CVSROOT/,,;
$rcsfile =~ s/,v$//;
}
}
close(RCS);
if ($rev ne '' && $rcsfile ne '') {
open(RCS, "-|") || exec "$cvsbin/cvs", '-Qn', 'log', "-r$rev",
'--', $file;
while (<RCS>) {
if (/^date:/) {
chomp;
$delta = $_;
$delta =~ s/^.*;//;
$delta =~ s/^[\s]+lines://;
}
}
close(RCS);
}
$diff = "\n\n";
$vhost = $path[0];
if ($CVSWEB_PORT eq "80") {
$cvsweb_base = "$CVSWEB_SCHEME://$vhost.$CVSWEB_DOMAIN/$CVSWEB_URI";
}
else {
$cvsweb_base = "$CVSWEB_SCHEME://$vhost.$CVSWEB_DOMAIN:$CVSWEB_PORT/$CVSWEB_URI";
}
if ($SEND_URL eq "true") {
$diff .= $cvsweb_base . join("/", @path) . "/$file";
}
#
# If this is a binary file, don't try to report a diff; not only is
# it meaningless, but it also screws up some mailers. We rely on
# Perl's 'is this binary' algorithm; it's pretty good. But not
# perfect.
#
if (($file =~ /\.(?:pdf|gif|jpg|mpg)$/i) || (-B $file)) {
if ($SEND_URL eq "true") {
$diff .= "?rev=$rev&content-type=text/x-cvsweb-markup\n\n";
}
if ($SEND_DIFF eq "true") {
$diff .= "\t<<Binary file>>\n\n";
}
}
else {
#
# Get the differences between this and the previous revision,
# being aware that new files always have revision '1.1' and
# new branches always end in '.n.1'.
#
if ($rev =~ /^(.*)\.([0-9]+)$/) {
$prev = $2 - 1;
$prev_rev = $1 . '.' . $prev;
$prev_rev =~ s/\.[0-9]+\.0$//;# Truncate if first rev on branch
if ($rev eq '1.1') {
if ($SEND_URL eq "true") {
$diff .= "?rev=$rev&content-type=text/x-cvsweb-markup\n\n";
}
if ($SEND_DIFF eq "true") {
open(DIFF, "-|")
|| exec "$cvsbin/cvs", '-Qn', 'update', '-p', '-r1.1',
'--', $file;
$diff .= "Index: $file\n=================================="
. "=================================\n";
}
}
else {
if ($SEND_URL eq "true") {
$diff .= ".diff?r1=$prev_rev&r2=$rev\n\n";
}
if ($SEND_DIFF eq "true") {
$diff .= "(In the diff below, changes in quantity "
. "of whitespace are not shown.)\n\n";
open(DIFF, "-|")
|| exec "$cvsbin/cvs", '-Qn', 'diff', "$difftype",
'-b', "-r$prev_rev", "-r$rev", '--', $file;
}
}
if ($SEND_DIFF eq "true") {
while (<DIFF>) {
$diff .= $_;
}
close(DIFF);
}
$diff .= "\n\n";
}
}
&append_line($out, sprintf("%-9s%-12s%s%s", $rev, $delta,
$rcsfile, $diff));
}
}
sub build_header {
local($header);
delete $ENV{'TZ'};
local($sec,$min,$hour,$mday,$mon,$year) = localtime(time);
$header = sprintf(" User: %-8s\n Date: %02d/%02d/%02d %02d:%02d:%02d",
$cvs_user, $year%100, $mon+1, $mday,
$hour, $min, $sec);
# $header = sprintf("%-8s %02d/%02d/%02d %02d:%02d:%02d",
# $login, $year%100, $mon+1, $mday,
# $hour, $min, $sec);
}
# !!! Destination Mailing-list and history file mappings here !!!
#sub mlist_map
#{
# local($path) = @_;
# my $domain = "nongnu.org";
#
# if ($path =~ /^([^\/]+)/) {
# return "cvs\@$1.$domain";
# } else {
# return "cvs\@$domain";
# }
#}
sub derive_subject_from_changes_file ()
{
my $subj = "";
for ($i = 0; ; $i++)
{
open (CH, "<$CHANGED_FILE.$i.$id.$cvs_user") or last;
while (my $change = <CH>)
{
# A changes file looks like this:
#
# src foo.c newfile.html
# www index.html project_nav.html
#
# Each line is " Dir File1 File2 ..."
# We only care about Dir, since the subject line should
# summarize.
$change =~ s/^[ \t]*//;
$change =~ /^([^ \t]+)[ \t]*/;
my $dir = $1;
# Fold to rightmost directory component
$dir =~ /([^\/]+)$/;
$dir = $1;
if ($subj eq "") {
$subj = $dir;
} else {
$subj .= ", $dir";
}
}
close (CH);
}
if ($subj ne "") {
$subj = "MODIFIED: $subj ...";
}
else {
# NPM: See if there are any file-addition notifications.
my $added = &read_line_nodie("$ADDED_FILE.$i.$id.$cvs_user");
if ($added ne "") {
$subj .= "ADDED: $added ";
}
# print "derive_subject_from_changes_file().. added== $added \n";
## NPM: See if there are any file-removal notifications.
my $removed = &read_line_nodie("$REMOVED_FILE.$i.$id.$cvs_user");
if ($removed ne "") {
$subj .= "REMOVED: $removed ";
}
# print "derive_subject_from_changes_file().. removed== $removed \n";
## NPM: See if there are any branch notifications.
my $branched = &read_line_nodie("$BRANCH_FILE.$i.$id.$cvs_user");
if ($branched ne "") {
$subj .= "BRANCHED: $branched";
}
# print "derive_subject_from_changes_file().. branched== $branched \n";
## NPM: DEFAULT: DIRECTORY CREATION (cf. "Check for a new directory first" in the main body)
if ($subj eq "") {
my $subject = join("/", @path);
$subj = "NEW: $subject";
}
}
return $subj;
}
sub mail_notification
{
local($addr_list, @text) = @_;
local($mail_to);
my $subj = &derive_subject_from_changes_file ();
if ($EMULATE_LOCAL_MAIL_USER ne "") {
$MAIL_FROM = "$cvs_user\@$EMULATE_LOCAL_MAIL_USER";
}
$mail_to = join(", ", @{$addr_list});
print "Mailing the commit message to $mail_to (from $MAIL_FROM)\n";
$ENV{'MAILUSER'} = $MAIL_FROM;
# Commented out on hocus, so comment it out here. -kff
# $ENV{'QMAILINJECT'} = 'f';
open(MAIL, "$MAIL_CMD -f$MAIL_FROM");
print MAIL "From: $MAIL_FROM\n";
print MAIL "To: $mail_to\n";
print MAIL "Subject: $SUBJECT_PRE $subj\n\n";
print(MAIL join("\n", @text));
close(MAIL);
# print "Mailing the commit message to $MAIL_TO...\n";
#
# #added by jrobbins@collab.net 1999/12/15
# # attempt to get rid of anonymous
# $ENV{'MAILUSER'} = 'commitlogger';
# $ENV{'QMAILINJECT'} = 'f';
#
# open(MAIL, "| /var/qmail/bin/qmail-inject");
# print(MAIL "To: $MAIL_TO\n");
# print(MAIL "Subject: cvs commit: $ARGV[0]\n");
# print(MAIL join("\n", @text));
# close(MAIL);
}
## process the command line arguments sent to this script;
## it returns the array of files (the %s argument) passed by the loginfo
## command
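## For example (hypothetical values), an invocation such as
##   log_script -u jdoe 'src foo.c bar.c'
## sets $cvs_user to 'jdoe' and makes this routine return
## ('src', 'foo.c', 'bar.c').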
sub process_argv
{
local(@argv) = @_;
local(@files);
local($arg);
print "Processing log script arguments...\n";
while (@argv) {
$arg = shift @argv;
if ($arg eq '-u') {
$cvs_user = shift @argv;
} else {
($donefiles) && die "Too many arguments!\n";
$donefiles = 1;
$ARGV[0] = $arg;
@files = split(' ', $arg);
}
}
return @files;
}
#############################################################
#
# Main Body
#
############################################################
#
# Setup environment
#
umask (002);
# Connect to the database
$cvsbin = "/usr/bin";
#
# Initialize basic variables
#
$id = getpgrp();
$state = $STATE_NONE;
$cvs_user = $ENV{'USER'} || getlogin || (getpwuid($<))[0] || sprintf("uid#%d",$<);
@files = process_argv(@ARGV);
@path = split('/', $files[0]);
if ($#path == 0) {
$dir = ".";
} else {
$dir = join('/', @path[1..$#path]);
}
#print("ARGV - ", join(":", @ARGV), "\n");
#print("files - ", join(":", @files), "\n");
#print("path - ", join(":", @path), "\n");
#print("dir - ", $dir, "\n");
#print("id - ", $id, "\n");
#
# Map the repository directory to an email address for commitlogs to be sent
# to.
#
#$mlist = &mlist_map($files[0]);
##########################
#
# Check for a new directory first. This will always appear as a
# single item in the argument list, and an empty log message.
#
if ($ARGV[0] =~ /New directory/) {
$header = &build_header;
@text = ();
push(@text, $header);
push(@text, "");
push(@text, " ".$ARGV[0]);
&mail_notification([ $mlist ], @text);
exit 0;
}
#
# Iterate over the body of the message collecting information.
#
while (<STDIN>) {
chomp; # Drop the newline
if (/^Revision\/Branch:/) {
s,^Revision/Branch:,,;
push (@branch_lines, split);
next;
}
# next if (/^[ \t]+Tag:/ && $state != $STATE_LOG);
if (/^Modified Files/) { $state = $STATE_CHANGED; next; }
if (/^Added Files/) { $state = $STATE_ADDED; next; }
if (/^Removed Files/) { $state = $STATE_REMOVED; next; }
if (/^Log Message/) { $state = $STATE_LOG; next; }
s/[ \t\n]+$//; # delete trailing space
push (@changed_files, split) if ($state == $STATE_CHANGED);
push (@added_files, split) if ($state == $STATE_ADDED);
push (@removed_files, split) if ($state == $STATE_REMOVED);
if ($state == $STATE_LOG) {
if (/^PR:$/i ||
/^Reviewed by:$/i ||
/^Submitted by:$/i ||
/^Obtained from:$/i) {
next;
}
push (@log_lines, $_);
}
}
#
# Strip leading and trailing blank lines from the log message. Also
# compress multiple blank lines in the body of the message down to a
# single blank line.
# (Note, this only does the mail and changes log, not the rcs log).
#
while ($#log_lines > -1) {
last if ($log_lines[0] ne "");
shift(@log_lines);
}
while ($#log_lines > -1) {
last if ($log_lines[$#log_lines] ne "");
pop(@log_lines);
}
for ($i = $#log_lines; $i > 0; $i--) {
if (($log_lines[$i - 1] eq "") && ($log_lines[$i] eq "")) {
splice(@log_lines, $i, 1);
}
}
#
# Find the log file that matches this log message
#
for ($i = 0; ; $i++) {
last if (! -e "$LOG_FILE.$i.$id.$cvs_user");
@text = &read_logfile("$LOG_FILE.$i.$id.$cvs_user", "");
last if ($#text == -1);
last if (join(" ", @log_lines) eq join(" ", @text));
}
#
# Spit out the information gathered in this pass.
#
&write_logfile("$LOG_FILE.$i.$id.$cvs_user", @log_lines);
&append_to_file("$BRANCH_FILE.$i.$id.$cvs_user", $dir, @branch_lines);
&append_to_file("$ADDED_FILE.$i.$id.$cvs_user", $dir, @added_files);
&append_to_file("$CHANGED_FILE.$i.$id.$cvs_user", $dir, @changed_files);
&append_to_file("$REMOVED_FILE.$i.$id.$cvs_user", $dir, @removed_files);
&append_line("$MLIST_FILE.$i.$id.$cvs_user", $mlist);
if ($rcsidinfo) {
&change_summary("$SUMMARY_FILE.$i.$id.$cvs_user", (@changed_files, @added_files));
}
#
# Check whether this is the last directory. If not, quit.
#
if (-e "$LAST_FILE.$id.$cvs_user") {
$_ = &read_line("$LAST_FILE.$id.$cvs_user");
$tmpfiles = $files[0];
$tmpfiles =~ s,([^a-zA-Z0-9_/]),\\$1,g;
if (! grep(/$tmpfiles$/, $_)) {
print "More commits to come...\n";
exit 0
}
}
#
# This is it. The commits are all finished. Lump everything together
# into a single message, fire a copy off to the mailing list, and drop
# it on the end of the Changes file.
#
$header = &build_header;
#
# Produce the final compilation of the log messages
#
@text = ();
@mlist_list = ();
push(@text, $header);
push(@text, "");
for ($i = 0; ; $i++) {
last if (! -e "$LOG_FILE.$i.$id.$cvs_user");
push(@text, &read_file("$BRANCH_FILE.$i.$id.$cvs_user", "Branch:"));
push(@text, &read_file("$CHANGED_FILE.$i.$id.$cvs_user", "Modified:"));
push(@text, &read_file("$ADDED_FILE.$i.$id.$cvs_user", "Added:"));
push(@text, &read_file("$REMOVED_FILE.$i.$id.$cvs_user", "Removed:"));
push(@text, " Log:");
push(@text, &read_logfile("$LOG_FILE.$i.$id.$cvs_user", " "));
push(@mlist_list, &read_file_lines("$MLIST_FILE.$i.$id.$cvs_user"));
if ($rcsidinfo == 2) {
if (-e "$SUMMARY_FILE.$i.$id.$cvs_user") {
push(@text, " ");
push(@text, " Revision Changes Path");
push(@text, &read_logfile("$SUMMARY_FILE.$i.$id.$cvs_user", " "));
}
}
push(@text, "");
}
#
# Now generate the extra info for the mail message..
#
if ($rcsidinfo == 1) {
$revhdr = 0;
for ($i = 0; ; $i++) {
last if (! -e "$LOG_FILE.$i.$id.$cvs_user");
if (-e "$SUMMARY_FILE.$i.$id.$cvs_user") {
if (!$revhdr++) {
push(@text, "Revision Changes Path");
}
push(@text, &read_logfile("$SUMMARY_FILE.$i.$id.$cvs_user", ""));
}
}
if ($revhdr) {
push(@text, ""); # consistancy...
}
}
%mlist_hash = ();
foreach (@mlist_list) { $mlist_hash{ $_ } = 1; }
#
# Mail out the notification.
#
&mail_notification([ keys(%mlist_hash) ], @text);
&cleanup_tmpfiles;
exit 0;


@ -1,115 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
# From: clyne@niwot.scd.ucar.EDU (John Clyne)
# Date: Fri, 28 Feb 92 09:54:21 MST
#
# BTW, i wrote a perl script that is similar to 'nfpipe' except that in
# addition to logging to a file it provides a command line option for mailing
# change notices to a group of users. Obviously you probably wouldn't want
# to mail every change. But there may be certain directories that are commonly
# accessed by a group of users who would benefit from an email notice.
# Especially if they regularly beat on the same directory. Anyway if you
# think anyone would be interested here it is.
#
# File: mfpipe
#
# Author: John Clyne
# National Center for Atmospheric Research
# PO 3000, Boulder, Colorado
#
# Date: Wed Feb 26 18:34:53 MST 1992
#
# Description: Tee standard input to mail a list of users and to
# a file. Used by CVS logging.
#
# Usage: mfpipe [-f file] [user@host...]
#
# Environment: CVSROOT
# Path to CVS root.
#
# Files:
#
#
# Options: -f file
# Capture output to 'file'
#
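# Example (hypothetical paths and addresses): a CVSROOT/loginfo entry like
#   ^gnu   /usr/local/bin/mfpipe -f gnu.log user@example.org
# would append each commit log to $CVSROOT/LOG/gnu.log and mail it to the
# listed address.
#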
$header = "Log Message:\n";
$mailcmd = "| mail -s 'CVS update notice'";
$whoami = `whoami`;
chop $whoami;
$date = `date`;
chop $date;
$cvsroot = $ENV{'CVSROOT'};
while (@ARGV) {
$arg = shift @ARGV;
if ($arg eq '-f') {
$file = shift @ARGV;
}
else {
$users = "$users $arg";
}
}
if ($users) {
$mailcmd = "$mailcmd $users";
open(MAIL, $mailcmd) || die "Execing $mail: $!\n";
}
if ($file) {
$logfile = "$cvsroot/LOG/$file";
open(FILE, ">> $logfile") || die "Opening $logfile: $!\n";
}
print FILE "$whoami $date--------BEGIN LOG ENTRY-------------\n" if ($logfile);
while (<>) {
print FILE $log if ($log && $logfile);
print FILE $_ if ($logfile);
print MAIL $_ if ($users);
$log = "log: " if ($_ eq $header);
}
close FILE;
die "Write failed" if $?;
close MAIL;
die "Mail failed" if $?;
exit 0;

File diff suppressed because it is too large


@ -1,193 +0,0 @@
#! /bin/sh
#
# Copyright (c) 1989-2005 The Free Software Foundation, Inc.
# Portions Copyright (c) 1989, Brian Berliner
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# Based on the CVS 1.0 checkin csh script.
# Contributed by Per Cederqvist <ceder@signum.se>.
# Rewritten in sh by David MacKenzie <djm@cygnus.com>.
#
#############################################################################
#
# Check in sources that previously were under RCS or no source control system.
#
# The repository is the directory where the sources should be deposited.
#
# Traverses the current directory, ensuring that an
# identical directory structure exists in the repository directory. It
# then checks the files in, in the following manner:
#
# 1) If the file doesn't yet exist, check it in as revision 1.1
#
# The script also is somewhat verbose in letting the user know what is
# going on. It prints a diagnostic when it creates a new file, or updates
# a file that has been modified on the trunk.
#
# Bugs: doesn't put the files in branch 1.1.1
# doesn't put in release and vendor tags
#
#############################################################################
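# Example (hypothetical paths): from the top of a tree of RCS-controlled
# sources, something like
#   cd ~/work/foo && rcs-to-cvs -v -m "Initial import" projects/foo
# deposits the files under $CVSROOT/projects/foo.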
usage="Usage: rcs-to-cvs [-v] [-m message] [-f message_file] repository"
vbose=0
message=""
if [ -d /var/tmp ]; then message_file=/var/tmp/checkin.$$; else message_file=/usr/tmp/checkin.$$; fi
got_one=0
if [ $# -lt 1 ]; then
echo "$usage" >&2
exit 1
fi
while [ $# -ne 0 ]; do
case "$1" in
-v)
vbose=1
;;
-m)
shift
echo $1 > $message_file
got_one=1
;;
-f)
shift
message_file=$1
got_one=2
;;
*)
break
esac
shift
done
if [ $# -lt 1 ]; then
echo "$usage" >&2
exit 1
fi
repository=$1
shift
if [ -z "$CVSROOT" ]; then
echo "Please the environmental variable CVSROOT to the root" >&2
echo " of the tree you wish to update" >&2
exit 1
fi
if [ $got_one -eq 0 ]; then
echo "Please Edit this file to contain the RCS log information" >$message_file
echo "to be associated with this directory (please remove these lines)">>$message_file
${EDITOR-vi} $message_file
got_one=1
fi
# Ya gotta share.
umask 0
update_dir=${CVSROOT}/${repository}
[ ! -d ${update_dir} ] && mkdir $update_dir
if [ -d SCCS ]; then
echo SCCS files detected! >&2
exit 1
fi
if [ -d RCS ]; then
co RCS/*
fi
for name in * .[a-zA-Z0-9]*
do
case "$name" in
RCS | *~ | \* | .\[a-zA-Z0-9\]\* ) continue ;;
esac
echo $name
if [ $vbose -ne 0 ]; then
echo "Updating ${repository}/${name}"
fi
if [ -d "$name" ]; then
if [ ! -d "${update_dir}/${name}" ]; then
echo "WARNING: Creating new directory ${repository}/${name}"
mkdir "${update_dir}/${name}"
if [ $? -ne 0 ]; then
echo "ERROR: mkdir failed - aborting" >&2
exit 1
fi
fi
cd "$name"
if [ $? -ne 0 ]; then
echo "ERROR: Couldn\'t cd to $name - aborting" >&2
exit 1
fi
if [ $vbose -ne 0 ]; then
$0 -v -f $message_file "${repository}/${name}"
else
$0 -f $message_file "${repository}/${name}"
fi
if [ $? -ne 0 ]; then
exit 1
fi
cd ..
else # if not directory
if [ ! -f "$name" ]; then
echo "WARNING: $name is neither a regular file"
echo " nor a directory - ignored"
continue
fi
file="${update_dir}/${name},v"
comment=""
if grep -s '\$Log.*\$' "${name}"; then # If $Log keyword
myext=`echo $name | sed 's,.*\.,,'`
[ "$myext" = "$name" ] && myext=
case "$myext" in
c | csh | e | f | h | l | mac | me | mm | ms | p | r | red | s | sh | sl | cl | ml | el | tex | y | ye | yr | "" )
;;
* )
echo "For file ${file}:"
grep '\$Log.*\$' "${name}"
echo -n "Please insert a comment leader for file ${name} > "
read comment
;;
esac
fi
if [ ! -f "$file" ]; then # If not exists in repository
if [ ! -f "${update_dir}/Attic/${name},v" ]; then
echo "WARNING: Creating new file ${repository}/${name}"
if [ -f RCS/"${name}",v ]; then
echo "MSG: Copying old rcs file."
cp RCS/"${name}",v "$file"
else
if [ -n "${comment}" ]; then
rcs -q -i -c"${comment}" -t${message_file} -m'.' "$file"
fi
ci -q -u1.1 -t${message_file} -m'.' "$file"
if [ $? -ne 0 ]; then
echo "ERROR: Initial check-in of $file failed - aborting" >&2
exit 1
fi
fi
else
file="${update_dir}/Attic/${name},v"
echo "WARNING: IGNORED: ${repository}/Attic/${name}"
continue
fi
else # File existed
echo "ERROR: File exists in repository: Ignored: $file"
continue
fi
fi
done
[ $got_one -eq 1 ] && rm -f $message_file
exit 0


@ -1,742 +0,0 @@
#! /bin/sh
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# RCS to ChangeLog generator
# Generate a change log prefix from RCS files (perhaps in the CVS repository)
# and the ChangeLog (if any).
# Output the new prefix to standard output.
# You can edit this prefix by hand, and then prepend it to ChangeLog.
# Ignore log entries that start with `#'.
# Clump together log entries that start with `{topic} ',
# where `topic' contains neither white space nor `}'.
Help='The default FILEs are the files registered under the working directory.
Options:
-c CHANGELOG Output a change log prefix to CHANGELOG (default ChangeLog).
-h HOSTNAME Use HOSTNAME in change log entries (default current host).
-i INDENT Indent change log lines by INDENT spaces (default 8).
-l LENGTH Try to limit log lines to LENGTH characters (default 79).
-L FILE Use rlog-format FILE for source of logs.
-R If no FILEs are given and RCS is used, recurse through working directory.
-r OPTION Pass OPTION to subsidiary log command.
-t TABWIDTH Tab stops are every TABWIDTH characters (default 8).
-u "LOGIN<tab>FULLNAME<tab>MAILADDR" Assume LOGIN has FULLNAME and MAILADDR.
-v Append RCS revision to file names in log lines.
--help Output help.
--version Output version number.
Report bugs to <bug-gnu-emacs@gnu.org>.'
Id='$Id: rcs2log,v 1.48 2001/09/05 23:07:46 eggert Exp $'
# Copyright 1992, 1993, 1994, 1995, 1996, 1997, 1998, 2001, 2003
# Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING. If not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
Copyright='Copyright 1992-2003 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute copies of this program
under the terms of the GNU General Public License.
For more information about these matters, see the files named COPYING.
Author: Paul Eggert <eggert@twinsun.com>'
# functions
@MKTEMP_SH_FUNCTION@
# Use the traditional C locale.
LANG=C
LANGUAGE=C
LC_ALL=C
LC_COLLATE=C
LC_CTYPE=C
LC_MESSAGES=C
LC_NUMERIC=C
LC_TIME=C
export LANG LANGUAGE LC_ALL LC_COLLATE LC_CTYPE LC_MESSAGES LC_NUMERIC LC_TIME
# These variables each contain a single ASCII character.
# Unfortunately, there's no portable way of writing these characters
# in older Unix implementations, other than putting them directly into
# this text file.
SOH='' # SOH, octal code 001
tab=' '
nl='
'
# Parse options.
# defaults
: ${MKTEMP="@MKTEMP@"}
: ${AWK=awk}
: ${TMPDIR=/tmp}
changelog=ChangeLog # change log file name
datearg= # rlog date option
hostname= # name of local host (if empty, will deduce it later)
indent=8 # indent of log line
length=79 # suggested max width of log line
logins= # login names for people we know fullnames and mailaddrs of
loginFullnameMailaddrs= # login<tab>fullname<tab>mailaddr triplets
logTZ= # time zone for log dates (if empty, use local time)
recursive= # t if we want recursive rlog
revision= # t if we want revision numbers
rlog_options= # options to pass to rlog
rlogfile= # log file to read from
tabwidth=8 # width of horizontal tab
while :
do
case $1 in
-c) changelog=${2?}; shift;;
-i) indent=${2?}; shift;;
-h) hostname=${2?}; shift;;
-l) length=${2?}; shift;;
-L) rlogfile=${2?}; shift;;
-[nu]) # -n is obsolescent; it is replaced by -u.
case $1 in
-n) case ${2?}${3?}${4?} in
*"$tab"* | *"$nl"*)
echo >&2 "$0: -n '$2' '$3' '$4': tabs, newlines not allowed"
exit 1;;
esac
login=$2
lfm=$2$tab$3$tab$4
shift; shift; shift;;
-u)
# If $2 is not tab-separated, use colon for separator.
case ${2?} in
*"$nl"*)
echo >&2 "$0: -u '$2': newlines not allowed"
exit 1;;
*"$tab"*)
t=$tab;;
*)
t=':';;
esac
case $2 in
*"$t"*"$t"*"$t"*)
echo >&2 "$0: -u '$2': too many fields"
exit 1;;
*"$t"*"$t"*)
uf="[^$t]*$t" # An unselected field, followed by a separator.
sf="\\([^$t]*\\)" # The selected field.
login=`expr "X$2" : "X$sf"`
lfm="$login$tab"`
expr "X$2" : "$uf$sf"
`"$tab"`
expr "X$2" : "$uf$uf$sf"
`;;
*)
echo >&2 "$0: -u '$2': not enough fields"
exit 1;;
esac
shift;;
esac
case $logins in
'') logins=$login;;
?*) logins=$logins$nl$login;;
esac
case $loginFullnameMailaddrs in
'') loginFullnameMailaddrs=$lfm;;
?*) loginFullnameMailaddrs=$loginFullnameMailaddrs$nl$lfm;;
esac;;
-r)
case $rlog_options in
'') rlog_options=${2?};;
?*) rlog_options=$rlog_options$nl${2?};;
esac
shift;;
-R) recursive=t;;
-t) tabwidth=${2?}; shift;;
-v) revision=t;;
--version)
set $Id
rcs2logVersion=$3
echo >&2 "rcs2log (GNU Emacs) $rcs2logVersion$nl$Copyright"
exit 0;;
-*) echo >&2 "Usage: $0 [OPTION]... [FILE ...]$nl$Help"
case $1 in
--help) exit 0;;
*) exit 1;;
esac;;
*) break;;
esac
shift
done
month_data='
m[0]="Jan"; m[1]="Feb"; m[2]="Mar"
m[3]="Apr"; m[4]="May"; m[5]="Jun"
m[6]="Jul"; m[7]="Aug"; m[8]="Sep"
m[9]="Oct"; m[10]="Nov"; m[11]="Dec"
'
logdir=`$MKTEMP -d $TMPDIR/rcs2log.XXXXXX`
test -n "$logdir" || exit
llogout=$logdir/l
trap exit 1 2 13 15
trap "rm -fr $logdir 2>/dev/null" 0
# If no rlog-format log file is given, generate one into $rlogfile.
case $rlogfile in
'')
rlogfile=$logdir/r
# If no rlog options are given,
# log the revisions checked in since the first ChangeLog entry.
# Since ChangeLog is only by date, some of these revisions may be duplicates of
# what's already in ChangeLog; it's the user's responsibility to remove them.
case $rlog_options in
'')
if test -s "$changelog"
then
e='
/^[0-9]+-[0-9][0-9]-[0-9][0-9]/{
# ISO 8601 date
print $1
exit
}
/^... ... [ 0-9][0-9] [ 0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9]+ /{
# old-fashioned date and time (Emacs 19.31 and earlier)
'"$month_data"'
year = $5
for (i=0; i<=11; i++) if (m[i] == $2) break
dd = $3
printf "%d-%02d-%02d\n", year, i+1, dd
exit
}
'
d=`$AWK "$e" <"$changelog"` || exit
case $d in
?*) datearg="-d>$d";;
esac
fi;;
esac
# Use TZ specified by ChangeLog local variable, if any.
if test -s "$changelog"
then
extractTZ='
/^.*change-log-time-zone-rule['"$tab"' ]*:['"$tab"' ]*"\([^"]*\)".*/{
s//\1/; p; q
}
/^.*change-log-time-zone-rule['"$tab"' ]*:['"$tab"' ]*t.*/{
s//UTC0/; p; q
}
'
logTZ=`tail "$changelog" | sed -n "$extractTZ"`
case $logTZ in
?*) TZ=$logTZ; export TZ;;
esac
fi
# If CVS is in use, examine its repository, not the normal RCS files.
if test ! -f CVS/Repository
then
rlog=rlog
repository=
else
rlog='cvs -q log'
repository=`sed 1q <CVS/Repository` || exit
test ! -f CVS/Root || CVSROOT=`cat <CVS/Root` || exit
case $CVSROOT in
*:/*:/*)
echo >&2 "$0: $CVSROOT: CVSROOT has multiple ':/'s"
exit 1;;
*:/*)
# remote repository
pository=`expr "X$repository" : '.*:\(/.*\)'`;;
*)
# local repository
case $repository in
/*) ;;
*) repository=${CVSROOT?}/$repository;;
esac
if test ! -d "$repository"
then
echo >&2 "$0: $repository: bad repository (see CVS/Repository)"
exit 1
fi
pository=$repository;;
esac
# Ensure that $pository ends in exactly one slash.
while :
do
case $pository in
*//) pository=`expr "X$pository" : 'X\(.*\)/'`;;
*/) break;;
*) pository=$pository/; break;;
esac
done
fi
# Use $rlog's -zLT option, if $rlog supports it.
case `$rlog -zLT 2>&1` in
*' option'*) ;;
*)
case $rlog_options in
'') rlog_options=-zLT;;
?*) rlog_options=-zLT$nl$rlog_options;;
esac;;
esac
# With no arguments, examine all files under the RCS directory.
case $# in
0)
case $repository in
'')
oldIFS=$IFS
IFS=$nl
case $recursive in
t)
RCSdirs=`find . -name RCS -type d -print`
filesFromRCSfiles='s|,v$||; s|/RCS/|/|; s|^\./||'
files=`
{
case $RCSdirs in
?*) find $RCSdirs \
-type f \
! -name '*_' \
! -name ',*,' \
! -name '.*_' \
! -name .rcsfreeze.log \
! -name .rcsfreeze.ver \
-print;;
esac
find . -name '*,v' -print
} |
sort -u |
sed "$filesFromRCSfiles"
`;;
*)
files=
for file in RCS/.* RCS/* .*,v *,v
do
case $file in
RCS/. | RCS/.. | RCS/,*, | RCS/*_) continue;;
RCS/.rcsfreeze.log | RCS/.rcsfreeze.ver) continue;;
RCS/.\* | RCS/\* | .\*,v | \*,v) test -f "$file" || continue;;
RCS/*,v | RCS/.*,v) ;;
RCS/* | RCS/.*) test -f "$file" || continue;;
esac
case $files in
'') files=$file;;
?*) files=$files$nl$file;;
esac
done
case $files in
'') exit 0;;
esac;;
esac
set x $files
shift
IFS=$oldIFS;;
esac;;
esac
case $datearg in
?*) $rlog $rlog_options "$datearg" ${1+"$@"} >$rlogfile;;
'') $rlog $rlog_options ${1+"$@"} >$rlogfile;;
esac || exit;;
esac
# Get the full name of each author the logs mention, and set initialize_fullname
# to awk code that initializes the `fullname' awk associative array.
# Warning: foreign authors (i.e. not known in the passwd file) are mishandled;
# you have to fix the resulting output by hand.
initialize_fullname=
initialize_mailaddr=
case $loginFullnameMailaddrs in
?*)
case $loginFullnameMailaddrs in
*\"* | *\\*)
sed 's/["\\]/\\&/g' >$llogout <<EOF || exit
$loginFullnameMailaddrs
EOF
loginFullnameMailaddrs=`cat $llogout`;;
esac
oldIFS=$IFS
IFS=$nl
for loginFullnameMailaddr in $loginFullnameMailaddrs
do
IFS=$tab
set x $loginFullnameMailaddr
login=$2
fullname=$3
mailaddr=$4
initialize_fullname="$initialize_fullname
fullname[\"$login\"] = \"$fullname\""
initialize_mailaddr="$initialize_mailaddr
mailaddr[\"$login\"] = \"$mailaddr\""
done
IFS=$oldIFS;;
esac
case $logins in
?*)
sort -u -o $llogout <<EOF
$logins
EOF
;;
'')
: ;;
esac >$llogout || exit
output_authors='/^date: / {
if ($2 ~ /^[0-9]*[-\/][0-9][0-9][-\/][0-9][0-9]$/ && $3 ~ /^[0-9][0-9]:[0-9][0-9]:[0-9][0-9][-+0-9:]*;$/ && $4 == "author:" && $5 ~ /^[^;]*;$/) {
print substr($5, 1, length($5)-1)
}
}'
authors=`
$AWK "$output_authors" <"$rlogfile" | sort -u | comm -23 - $llogout
`
case $authors in
?*)
cat >$llogout <<EOF || exit
$authors
EOF
initialize_author_script='s/["\\]/\\&/g; s/.*/author[\"&\"] = 1/'
initialize_author=`sed -e "$initialize_author_script" <$llogout`
awkscript='
BEGIN {
alphabet = "abcdefghijklmnopqrstuvwxyz"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
'"$initialize_author"'
}
{
if (author[$1]) {
fullname = $5
if (fullname ~ /[0-9]+-[^(]*\([0-9]+\)$/) {
# Remove the junk from fullnames like "0000-Admin(0000)".
fullname = substr(fullname, index(fullname, "-") + 1)
fullname = substr(fullname, 1, index(fullname, "(") - 1)
}
if (fullname ~ /,[^ ]/) {
# Some sites put comma-separated junk after the fullname.
# Remove it, but leave "Bill Gates, Jr" alone.
fullname = substr(fullname, 1, index(fullname, ",") - 1)
}
abbr = index(fullname, "&")
if (abbr) {
a = substr($1, 1, 1)
A = a
i = index(alphabet, a)
if (i) A = substr(ALPHABET, i, 1)
fullname = substr(fullname, 1, abbr-1) A substr($1, 2) substr(fullname, abbr+1)
}
# Quote quotes and backslashes properly in full names.
# Do not use gsub; traditional awk lacks it.
quoted = ""
rest = fullname
for (;;) {
p = index(rest, "\\")
q = index(rest, "\"")
if (p) {
if (q && q<p) p = q
} else {
if (!q) break
p = q
}
quoted = quoted substr(rest, 1, p-1) "\\" substr(rest, p, 1)
rest = substr(rest, p+1)
}
printf "fullname[\"%s\"] = \"%s%s\"\n", $1, quoted, rest
author[$1] = 0
}
}
'
initialize_fullname=`
{
(getent passwd $authors) ||
(
cat /etc/passwd
for author in $authors
do NIS_PATH= nismatch $author passwd.org_dir
done
ypmatch $authors passwd
)
} 2>/dev/null |
$AWK -F: "$awkscript"
`$initialize_fullname;;
esac
# Function to print a single log line.
# We don't use awk functions, to stay compatible with old awk versions.
# `Log' is the log message.
# `files' contains the affected files.
printlogline='{
# Following the GNU coding standards, rewrite
# * file: (function): comment
# to
# * file (function): comment
if (Log ~ /^\([^)]*\): /) {
i = index(Log, ")")
filefunc = substr(Log, 1, i)
while ((j = index(filefunc, "\n"))) {
files = files " " substr(filefunc, 1, j-1)
filefunc = substr(filefunc, j+1)
}
files = files " " filefunc
Log = substr(Log, i+3)
}
# If "label: comment" is too long, break the line after the ":".
sep = " "
i = index(Log, "\n")
if ('"$length"' <= '"$indent"' + 1 + length(files) + i) sep = "\n" indent_string
# Print the label.
printf "%s*%s:", indent_string, files
# Print each line of the log.
while (i) {
logline = substr(Log, 1, i-1)
if (logline ~ /[^'"$tab"' ]/) {
printf "%s%s\n", sep, logline
} else {
print ""
}
sep = indent_string
Log = substr(Log, i+1)
i = index(Log, "\n")
}
}'
# Pattern to match the `revision' line of rlog output.
rlog_revision_pattern='^revision [0-9]+\.[0-9]+(\.[0-9]+\.[0-9]+)*(['"$tab"' ]+locked by: [^'"$tab"' $,.0-9:;@]*[^'"$tab"' $,:;@][^'"$tab"' $,.0-9:;@]*;)?['"$tab"' ]*$'
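# For reference, the lines this pattern is intended to match look like
#   revision 1.5
#   revision 1.2.2.1     locked by: jdoe;
# (revision numbers and the user name here are illustrative only).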
case $hostname in
'')
hostname=`(
hostname || uname -n || uuname -l || cat /etc/whoami
) 2>/dev/null` || {
echo >&2 "$0: cannot deduce hostname"
exit 1
}
case $hostname in
*.*) ;;
*)
domainname=`(domainname) 2>/dev/null` &&
case $domainname in
*.*) hostname=$hostname.$domainname;;
esac;;
esac;;
esac
# Process the rlog output, generating ChangeLog style entries.
# First, reformat the rlog output so that each line contains one log entry.
# Transliterate \n to SOH so that multiline entries fit on a single line.
# Discard irrelevant rlog output.
$AWK '
BEGIN {
pository = "'"$pository"'"
SOH="'"$SOH"'"
}
/^RCS file: / {
if (pository != "") {
filename = substr($0, 11)
if (substr(filename, 1, length(pository)) == pository) {
filename = substr(filename, length(pository) + 1)
}
if (filename ~ /,v$/) {
filename = substr(filename, 1, length(filename) - 2)
}
if (filename ~ /(^|\/)Attic\/[^\/]*$/) {
i = length(filename)
while (substr(filename, i, 1) != "/") i--
filename = substr(filename, 1, i - 6) substr(filename, i + 1)
}
}
rev = "?"
}
/^Working file: / { if (repository == "") filename = substr($0, 15) }
/'"$rlog_revision_pattern"'/, /^(-----------*|===========*)$/ {
line = $0
if (line ~ /'"$rlog_revision_pattern"'/) {
rev = $2
next
}
if (line ~ /^date: [0-9][- +\/0-9:]*;/) {
date = $2
if (date ~ /\//) {
# This is a traditional RCS format date YYYY/MM/DD.
# Replace "/"s with "-"s to get ISO format.
newdate = ""
while ((i = index(date, "/")) != 0) {
newdate = newdate substr(date, 1, i-1) "-"
date = substr(date, i+1)
}
date = newdate date
}
time = substr($3, 1, length($3) - 1)
author = substr($5, 1, length($5)-1)
printf "%s%s%s%s%s%s%s%s%s%s", filename, SOH, rev, SOH, date, SOH, time, SOH, author, SOH
rev = "?"
next
}
if (line ~ /^branches: /) { next }
if (line ~ /^(-----------*|===========*)$/) { print ""; next }
if (line == "Initial revision" || line ~ /^file .+ was initially added on branch .+\.$/) {
line = "New file."
}
printf "%s%s", line, SOH
}
' <"$rlogfile" |
# Now each line is of the form
# FILENAME@REVISION@YYYY-MM-DD@HH:MM:SS[+-TIMEZONE]@AUTHOR@LOG
# where @ stands for an SOH (octal code 001),
# and each line of LOG is terminated by SOH instead of \n.
# Sort the log entries, first by date+time (in reverse order),
# then by author, then by log entry, and finally by file name and revision
# (just in case).
sort -t"$SOH" +2 -4r +4 +0 |
# Finally, reformat the sorted log entries.
$AWK -F"$SOH" '
BEGIN {
logTZ = "'"$logTZ"'"
revision = "'"$revision"'"
# Initialize the fullname and mailaddr associative arrays.
'"$initialize_fullname"'
'"$initialize_mailaddr"'
# Initialize indent string.
indent_string = ""
i = '"$indent"'
if (0 < '"$tabwidth"')
for (; '"$tabwidth"' <= i; i -= '"$tabwidth"')
indent_string = indent_string "\t"
while (1 <= i--)
indent_string = indent_string " "
}
{
newlog = ""
for (i = 6; i < NF; i++) newlog = newlog $i "\n"
# Ignore log entries prefixed by "#".
if (newlog ~ /^#/) { next }
if (Log != newlog || date != $3 || author != $5) {
# The previous log and this log differ.
# Print the old log.
if (date != "") '"$printlogline"'
# Logs that begin with "{clumpname} " should be grouped together,
# and the clumpname should be removed.
# Extract the new clumpname from the log header,
# and use it to decide whether to output a blank line.
newclumpname = ""
sep = "\n"
if (date == "") sep = ""
if (newlog ~ /^\{[^'"$tab"' }]*}['"$tab"' ]/) {
i = index(newlog, "}")
newclumpname = substr(newlog, 1, i)
while (substr(newlog, i+1) ~ /^['"$tab"' ]/) i++
newlog = substr(newlog, i+1)
if (clumpname == newclumpname) sep = ""
}
printf sep
clumpname = newclumpname
# Get ready for the next log.
Log = newlog
if (files != "")
for (i in filesknown)
filesknown[i] = 0
files = ""
}
if (date != $3 || author != $5) {
# The previous date+author and this date+author differ.
# Print the new one.
date = $3
time = $4
author = $5
zone = ""
if (logTZ && ((i = index(time, "-")) || (i = index(time, "+"))))
zone = " " substr(time, i)
# Print "date[ timezone] fullname <email address>".
# Get fullname and email address from associative arrays;
# default to author and author@hostname if not in arrays.
if (fullname[author])
auth = fullname[author]
else
auth = author
printf "%s%s %s ", date, zone, auth
if (mailaddr[author])
printf "<%s>\n\n", mailaddr[author]
else
printf "<%s@%s>\n\n", author, "'"$hostname"'"
}
if (! filesknown[$1]) {
filesknown[$1] = 1
if (files == "") files = " " $1
else files = files ", " $1
if (revision && $2 != "?") files = files " " $2
}
}
END {
# Print the last log.
if (date != "") {
'"$printlogline"'
printf "\n"
}
}
' &&
# Exit successfully.
exec rm -fr $logdir
# Local Variables:
# tab-width:4
# End:


@ -1,156 +0,0 @@
#! /bin/sh
#
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
############################################################
# Error checking
#
if [ ! -d SCCS ] ; then
mkdir SCCS
fi
logfile=/tmp/rcs2sccs_$$_log
rm -f $logfile
tmpfile=/tmp/rcs2sccs_$$_tmp
rm -f $tmpfile
emptyfile=/tmp/rcs2sccs_$$_empty
echo -n "" > $emptyfile
initialfile=/tmp/rcs2sccs_$$_init
echo "Initial revision" > $initialfile
sedfile=/tmp/rcs2sccs_$$_sed
rm -f $sedfile
revfile=/tmp/rcs2sccs_$$_rev
rm -f $revfile
commentfile=/tmp/rcs2sccs_$$_comment
rm -f $commentfile
# create the sed script
cat > $sedfile << EOF
s,;Id;,%Z%%M% %I% %E%,g
s,;SunId;,%Z%%M% %I% %E%,g
s,;RCSfile;,%M%,g
s,;Revision;,%I%,g
s,;Date;,%E%,g
s,;Id:.*;,%Z%%M% %I% %E%,g
s,;SunId:.*;,%Z%%M% %I% %E%,g
s,;RCSfile:.*;,%M%,g
s,;Revision:.*;,%I%,g
s,;Date:.*;,%E%,g
EOF
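# In the table above, ';' stands in for '$' so that this script's own
# copy is not mangled by keyword expansion; the next command rewrites the
# placeholders into escaped '$' signs before the map is used.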
sed -e 's/;/\\$/g' $sedfile > $tmpfile
cp $tmpfile $sedfile
############################################################
# Loop over every RCS file in RCS dir
#
if sort -k 1,1 /dev/null 2>/dev/null
then sort_each_field='-k 1 -k 2 -k 3 -k 4 -k 5 -k 6 -k 7 -k 8 -k 9'
else sort_each_field='+0 +1 +2 +3 +4 +5 +6 +7 +8'
fi
for vfile in *,v; do
# get rid of the ",v" at the end of the name
file=`echo $vfile | sed -e 's/,v$//'`
# work on each rev of that file in ascending order
firsttime=1
rlog $file | grep "^revision [0-9][0-9]*\." | awk '{print $2}' | sed -e 's/\./ /g' | sort -n -u $sort_each_field | sed -e 's/ /./g' > $revfile
for rev in `cat $revfile`; do
if [ $? != 0 ]; then
echo ERROR - revision
exit
fi
# get file into current dir and get stats
date=`rlog -r$rev $file | grep "^date: " | awk '{print $2; exit}' | sed -e 's/^19\|^20//'`
time=`rlog -r$rev $file | grep "^date: " | awk '{print $3; exit}' | sed -e 's/;//'`
author=`rlog -r$rev $file | grep "^date: " | awk '{print $5; exit}' | sed -e 's/;//'`
date="$date $time"
echo ""
rlog -r$rev $file | sed -e '/^branches: /d' -e '1,/^date: /d' -e '/^===========/d' -e 's/$/\\/' | awk '{if ((total += length($0) + 1) < 510) print $0}' > $commentfile
echo "==> file $file, rev=$rev, date=$date, author=$author"
rm -f $file
co -r$rev $file >> $logfile 2>&1
if [ $? != 0 ]; then
echo ERROR - co
exit
fi
echo checked out of RCS
# add SCCS keywords in place of RCS keywords
sed -f $sedfile $file > $tmpfile
if [ $? != 0 ]; then
echo ERROR - sed
exit
fi
echo performed keyword substitutions
rm -f $file
cp $tmpfile $file
# check file into SCCS
if [ "$firsttime" = "1" ]; then
firsttime=0
echo about to do sccs admin
echo sccs admin -n -i$file $file < $commentfile
sccs admin -n -i$file $file < $commentfile >> $logfile 2>&1
if [ $? != 0 ]; then
echo ERROR - sccs admin
exit
fi
echo initial rev checked into SCCS
else
case $rev in
*.*.*.*)
brev=`echo $rev | sed -e 's/\.[0-9]*$//'`
sccs admin -fb $file 2>>$logfile
echo sccs get -e -p -r$brev $file
sccs get -e -p -r$brev $file >/dev/null 2>>$logfile
;;
*)
echo sccs get -e -p $file
sccs get -e -p $file >/dev/null 2>> $logfile
;;
esac
if [ $? != 0 ]; then
echo ERROR - sccs get
exit
fi
sccs delta $file < $commentfile >> $logfile 2>&1
if [ $? != 0 ]; then
echo ERROR - sccs delta -r$rev $file
exit
fi
echo checked into SCCS
fi
sed -e "s;^d D $rev ../../.. ..:..:.. [^ ][^ ]*;d D $rev $date $author;" SCCS/s.$file > $tmpfile
rm -f SCCS/s.$file
cp $tmpfile SCCS/s.$file
chmod 444 SCCS/s.$file
sccs admin -z $file
if [ $? != 0 ]; then
echo ERROR - sccs admin -z
exit
fi
done
rm -f $file
done
############################################################
# Clean up
#
echo cleaning up...
rm -f $tmpfile $emptyfile $initialfile $sedfile $commentfile
echo ===================================================
echo " Conversion Completed Successfully"
echo ===================================================
rm -f *,v


@ -1,265 +0,0 @@
#! @PERL@ -T
# -*-Perl-*-
# Copyright (C) 1994-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
###############################################################################
###############################################################################
###############################################################################
#
# THIS SCRIPT IS PROBABLY BROKEN. REMOVING THE -T SWITCH ON THE #! LINE ABOVE
# WOULD FIX IT, BUT THIS IS INSECURE. WE RECOMMEND FIXING THE ERRORS WHICH THE
# -T SWITCH WILL CAUSE PERL TO REPORT BEFORE RUNNING THIS SCRIPT FROM A CVS
# SERVER TRIGGER. PLEASE SEND PATCHES CONTAINING THE CHANGES YOU FIND
# NECESSARY TO RUN THIS SCRIPT WITH THE TAINT-CHECKING ENABLED BACK TO THE
# <@PACKAGE_BUGREPORT@> MAILING LIST.
#
# For more on general Perl security and taint-checking, please try running the
# `perldoc perlsec' command.
#
###############################################################################
###############################################################################
###############################################################################
# Author: John Rouillard (rouilj@cs.umb.edu)
# Supported: Yeah right. (Well what do you expect for 2 hours work?)
# Blame-to: rouilj@cs.umb.edu
# Complaints to: Anybody except Brian Berliner, he's blameless for
# this script.
# Acknowledgements: The base code for this script has been acquired
# from the log.pl script.
# rcslock.pl - A program to prevent commits when a file to be checked
# in is locked in the repository.
# There are times when you need exclusive access to a file. This
# often occurs when binaries are checked into the repository, since
# cvs's (actually rcs's) text based merging mechanism won't work. This
# script allows you to use the rcs lock mechanism (rcs -l) to make
# sure that no changes to a repository are able to be committed if
# those changes would result in a locked file being changed.
# WARNING:
# This script will work only if locking is set to strict.
#
# Setup:
# Add the following line to the commitinfo file:
# ALL /local/location/for/script/lockcheck [options]
# Where ALL is replaced by any suitable regular expression.
# Options are -v for verbose info, or -d for debugging info.
# The %s will provide the repository directory name and the names of
# all changed files.
# Use:
# When a developer needs exclusive access to a version of a file, s/he
# should use "rcs -l" in the repository tree to lock the version they
# are working on. CVS will automagically release the lock when the
# commit is performed.
# Method:
# An "rlog -h" is exec'ed to give info on all about to be
# committed files. This (header) information is parsed to determine
# if any locks are outstanding and what versions of the file are
# locked. This filename, version number info is used to index an
# associative array. All of the files to be committed are checked to
# see if any locks are outstanding. If locks are outstanding, the
# version number of the current file (taken from the CVS/Entries
# subdirectory) is used in the key to determine if that version is
# locked. If the file being checked in is locked by the person doing
# the checkin, the commit is allowed, but if the lock is held on that
# version of a file by another person, the commit is not allowed.
$ext = ",v"; # The extension on your rcs files.
$\="\n"; # I hate having to put \n's at the end of my print statements
$,=' '; # Spaces should occur between arguments to print when printed
# turn off setgid
#
$) = $(;
#
# parse command line arguments
#
require 'getopts.pl';
&Getopts("vd"); # verbose or debugging
# Verbose is useful when debugging
$opt_v = $opt_d if defined $opt_d;
# $files[0] is really the name of the subdirectory.
# @files = split(/ /,$ARGV[0]);
@files = @ARGV[0..$#ARGV];
$cvsroot = $ENV{'CVSROOT'};
#
# get login name
#
$login = getlogin || (getpwuid($<))[0] || "nobody";
#
# save the current directory since we have to return here to parse the
# CVS/Entries file if a lock is found.
#
$pwd = `/bin/pwd`;
chop $pwd;
print "Starting directory is $pwd" if defined $opt_d ;
#
# cd to the repository directory and check on the files.
#
print "Checking directory ", $files[0] if defined $opt_v ;
if ( $files[0] =~ /^\// )
{
print "Directory path is $files[0]" if defined $opt_d ;
chdir $files[0] || die "Can't change to repository directory $files[0]" ;
}
else
{
print "Directory path is $cvsroot/$files[0]" if defined $opt_d ;
chdir ($cvsroot . "/" . $files[0]) ||
die "Can't change to repository directory $files[0] in $cvsroot" ;
}
# Open the rlog process and pass all of the file names to that one
# process to cut down on exec overhead. This may backfire if there
# are too many files for the system buffer to handle, but if there are
# that many files, chances are that the cvs repository is not set up
# cleanly.
print "opening rlog -h @files[1..$#files] |" if defined $opt_d;
open( RLOG, "rlog -h @files[1..$#files] |") || die "Can't run rlog command" ;
# Create the locks associative array. The elements in the array are
# of two types:
#
# The name of the RCS file with a value of the total number of locks found
# for that file,
# or
#
# The name of the rcs file concatenated with the version number of the lock.
# The value of this element is the name of the locker.
# The regular expressions used to split the rcs info may have to be changed.
# The current ones work for rcs 5.6.
$lock = 0;
while (<RLOG>)
{
chop;
next if /^$/; # ditch blank lines
if ( $_ =~ /^RCS file: (.*)$/ )
{
$curfile = $1;
next;
}
if ( $_ =~ /^locks: strict$/ )
{
$lock = 1 ;
next;
}
if ( $lock )
{
# access list: is the line immediately following the list of locks.
if ( /^access list:/ )
{ # we are done getting lock info for this file.
$lock = 0;
}
else
{ # We are accumulating lock info.
# increment the lock count
$locks{$curfile}++;
# save the info on the version that is locked. $2 is the
# version number $1 is the name of the locker.
$locks{"$curfile" . "$2"} = $1
if /[ ]*([a-zA-Z._]*): ([0-9.]*)$/;
print "lock by $1 found on $curfile version $2" if defined $opt_d;
}
}
}
# Let's go back to the starting directory and see if any locked files
# are ones we are interested in.
chdir $pwd;
# for all of the file names (remember $files[0] is the directory name)
foreach $i (@files[1..$#files])
{
if ( defined $locks{$i . $ext} )
{ # well the file has at least one lock outstanding
# find the base version number of our file
&parse_cvs_entry($i,*entry);
# is our version of this file locked?
if ( defined $locks{$i . $ext . $entry{"version"}} )
{ # if so, it is by us?
if ( $login ne ($by = $locks{$i . $ext . $entry{"version"}}) )
{# crud somebody else has it locked.
$outstanding_lock++ ;
print "$by has file $i locked for version " , $entry{"version"};
}
else
{ # yeah I have it locked.
print "You have a lock on file $i for version " , $entry{"version"}
if defined $opt_v;
}
}
}
}
exit $outstanding_lock;
### End of main program
sub parse_cvs_entry
{ # a very simple minded hack at parsing an entries file.
local ( $file, *entry ) = @_;
local ( @pp );
open(ENTRIES, "< CVS/Entries") || die "Can't open entries file";
while (<ENTRIES>)
{
if ( $_ =~ /^\/$file\// )
{
@pp = split('/');
$entry{"name"} = $pp[1];
$entry{"version"} = $pp[2];
$entry{"dates"} = $pp[3];
$entry{"name"} = $pp[4];
$entry{"name"} = $pp[5];
$entry{"sticky"} = $pp[6];
return;
}
}
}


@ -1,327 +0,0 @@
#! @CSH@ -f
# Copyright (C) 1995-2005 The Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# Sccs2rcs is a script to convert an existing SCCS
# history into an RCS history without losing any of
# the information contained therein.
# It has been tested under the following OS's:
# SunOS 3.5, 4.0.3, 4.1
# Ultrix-32 2.0, 3.1
#
# Things to note:
# + It will NOT delete or alter your ./SCCS history under any circumstances.
#
# + Run in a directory where ./SCCS exists and where you can
# create ./RCS
#
# + /usr/local/bin is put in front of the default path.
# (SCCS under Ultrix is set-uid sccs, bad bad bad, so
# /usr/local/bin/sccs here fixes that)
#
# + Date, time, author, comments, branches, are all preserved.
#
# + If a command fails somewhere in the middle, it bombs with
# a message -- remove what it's done so far and try again.
# "rm -rf RCS; sccs unedit `sccs tell`; sccs clean"
# There is no recovery and exit is far from graceful.
# If a particular module is hanging you up, consider
# doing it separately; move it from the current area so that
# the next run will have a better chance of working.
# Also (for the brave only) you might consider hacking
# the s-file for simpler problems: I've successfully changed
# the date of a delta to be in sync, then run "sccs admin -z"
# on the thing.
#
# + After everything finishes, ./SCCS will be moved to ./old-SCCS.
#
# This file may be copied, processed, hacked, mutilated, and
# even destroyed as long as you don't tell anyone you wrote it.
#
# Ken Cox
# Viewlogic Systems, Inc.
# kenstir@viewlogic.com
# ...!harvard!cg-atla!viewlog!kenstir
#
# Various hacks made by Brian Berliner before inclusion in CVS contrib area.
#
# Modified to detect SCCS binary files. If binary, skip the keyword
# substitution and flag the RCS file as binary (using rcs -i -kb).
# -Allan G. Schrum schrum@ofsoptics.com agschrum@mindspring.com
# Fri Sep 26 10:40:40 EDT 2003
#
# $FreeBSD$
#we'll assume the user set up the path correctly
# for the Pmax, /usr/ucb/sccs is suid sccs, what a pain
# /usr/local/bin/sccs should override /usr/ucb/sccs there
set path = (/usr/local/bin $path)
############################################################
# Error checking
#
if (! -w .) then
echo "Error: ./ not writeable by you."
exit 1
endif
if (! -d SCCS) then
echo "Error: ./SCCS directory not found."
exit 1
endif
set edits = (`sccs tell`)
if ($#edits) then
echo "Error: $#edits file(s) out for edit...clean up before converting."
exit 1
endif
if (-d RCS) then
echo "Warning: RCS directory exists"
if (`ls -a RCS | wc -l` > 2) then
echo "Error: RCS directory not empty"
exit 1
endif
else
mkdir RCS
endif
sccs clean
set logfile = /tmp/sccs2rcs_$$_log
rm -f $logfile
set tmpfile = /tmp/sccs2rcs_$$_tmp
rm -f $tmpfile
set emptyfile = /tmp/sccs2rcs_$$_empty
echo -n "" > $emptyfile
set initialfile = /tmp/sccs2rcs_$$_init
echo "Initial revision" > $initialfile
set sedfile = /tmp/sccs2rcs_$$_sed
rm -f $sedfile
set revfile = /tmp/sccs2rcs_$$_rev
rm -f $revfile
# the quotes surround the dollar signs to fool RCS when I check in this script
set sccs_keywords = (\
'%W%[ ]*%G%'\
'%W%[ ]*%E%'\
'%W%'\
'%Z%%M%[ ]*%I%[ ]*%G%'\
'%Z%%M%[ ]*%I%[ ]*%E%'\
'%M%[ ]*%I%[ ]*%G%'\
'%M%[ ]*%I%[ ]*%E%'\
'%M%'\
'%I%'\
'%G%'\
'%E%'\
'%U%')
set rcs_keywords = (\
'$'Id'$'\
'$'Id'$'\
'$'Id'$'\
'$'SunId'$'\
'$'SunId'$'\
'$'Id'$'\
'$'Id'$'\
'$'RCSfile'$'\
'$'Revision'$'\
'$'Date'$'\
'$'Date'$'\
'')
############################################################
# Get some answers from user
#
echo ""
echo "Do you want to be prompted for a description of each"
echo "file as it is checked in to RCS initially?"
echo -n "(y=prompt for description, n=null description) [y] ?"
set ans = $<
if ((_$ans == _) || (_$ans == _y) || (_$ans == _Y)) then
set nodesc = 0
else
set nodesc = 1
endif
echo ""
echo "The default keyword substitutions are as follows and are"
echo "applied in the order specified:"
set i = 1
while ($i <= $#sccs_keywords)
# echo ' '\"$sccs_keywords[$i]\"' ==> '\"$rcs_keywords[$i]\"
echo " $sccs_keywords[$i] ==> $rcs_keywords[$i]"
@ i = $i + 1
end
echo ""
echo -n "Do you want to change them [n] ?"
set ans = $<
if ((_$ans != _) && (_$ans != _n) && (_$ans != _N)) then
echo "You can't always get what you want."
echo "Edit this script file and change the variables:"
echo ' $sccs_keywords'
echo ' $rcs_keywords'
else
echo "good idea."
endif
# create the sed script
set i = 1
while ($i <= $#sccs_keywords)
echo "s,$sccs_keywords[$i],$rcs_keywords[$i],g" >> $sedfile
@ i = $i + 1
end
onintr ERROR
sort -k 1,1 /dev/null >& /dev/null
if ($status == 0) then
set sort_each_field = '-k 1 -k 2 -k 3 -k 4 -k 5 -k 6 -k 7 -k 8 -k 9'
else
set sort_each_field = '+0 +1 +2 +3 +4 +5 +6 +7 +8'
endif
############################################################
# Loop over every s-file in SCCS dir
#
foreach sfile (SCCS/s.*)
# get rid of the "s." at the beginning of the name
set file = `echo $sfile:t | sed -e "s/^..//"`
# work on each rev of that file in ascending order
set firsttime = 1
# Only scan the file up to the "I" keyword, then see if
# the "f" keyword is set to binary. The SCCS file has
# <ctrl>-aI denoting the start of the file (or end of header).
set binary = (`sed -e '/^.I/,$d' < $sfile | grep '^.f e 1$'`)
#if ($#binary) then
# echo This is a binary file
#else
# echo This is not a binary file
#endif
sccs prs $file | grep "^D " | @AWK@ '{print $2}' | sed -e 's/\./ /g' | sort -n -u $sort_each_field | sed -e 's/ /./g' > $revfile
foreach rev (`cat $revfile`)
if ($status != 0) goto ERROR
# get file into current dir and get stats
# Is the substr stuff and the +0 in the following awk script really
# necessary? It seems to me that if we didn't find the date format
# we expected in the output we have other problems.
# Note: Solaris awk does not like the following line. Use gawk,
# mawk, or nawk instead.
set date = `sccs prs -r$rev $file | @AWK@ '/^D / {print (substr($3,0,2)+0<70?20:19) $3, $4; exit}'`
set author = `sccs prs -r$rev $file | @AWK@ '/^D / {print $5; exit}'`
echo ""
echo "==> file $file, rev=$rev, date=$date, author=$author"
sccs edit -r$rev $file >>& $logfile
if ($status != 0) goto ERROR
echo checked out of SCCS
# add RCS keywords in place of SCCS keywords (only if not binary)
if ($#binary == 0) then
sed -f $sedfile $file > $tmpfile
if ($status != 0) goto ERROR
echo performed keyword substitutions
cp $tmpfile $file
endif
# check file into RCS
if ($firsttime) then
set firsttime = 0
if ($#binary) then
echo this is a binary file
# Mark initial, empty file as binary
rcs -i -kb -t$emptyfile $file
endif
if ($nodesc) then
echo about to do ci
echo ci -f -r$rev -d"$date" -w$author -t$emptyfile $file
ci -f -r$rev -d"$date" -w$author -t$emptyfile $file < $initialfile >>& $logfile
if ($status != 0) goto ERROR
echo initial rev checked into RCS without description
else
echo ""
echo Enter a brief description of the file $file \(end w/ Ctrl-D\):
cat > $tmpfile
ci -f -r$rev -d"$date" -w$author -t$tmpfile $file < $initialfile >>& $logfile
if ($status != 0) goto ERROR
echo initial rev checked into RCS
endif
else
# get RCS lock
set lckrev = `echo $rev | sed -e 's/\.[0-9]*$//'`
if ("$lckrev" =~ [0-9]*.*) then
# need to lock the branch -- it is OK if the lock fails
rcs -l$lckrev $file >>& $logfile
else
# need to lock the trunk -- must succeed
rcs -l $file >>& $logfile
if ($status != 0) goto ERROR
endif
echo got lock
sccs prs -r$rev $file | grep "." > $tmpfile
# it's OK if grep fails here and gives status == 1
# put the delta message in $tmpfile
ed $tmpfile >>& $logfile <<EOF
/COMMENTS
1,.d
w
q
EOF
ci -f -r$rev -d"$date" -w$author $file < $tmpfile >>& $logfile
if ($status != 0) goto ERROR
echo checked into RCS
endif
sccs unedit $file >>& $logfile
if ($status != 0) goto ERROR
end
rm -f $file
end
############################################################
# Clean up
#
echo cleaning up...
mv SCCS old-SCCS
rm -f $tmpfile $emptyfile $initialfile $sedfile
echo ===================================================
echo " Conversion Completed Successfully"
echo ""
echo " SCCS history now in old-SCCS/"
echo ===================================================
set exitval = 0
goto cleanup
ERROR:
foreach f (`sccs tell`)
sccs unedit $f
end
echo ""
echo ""
echo Danger\! Danger\!
echo Some command exited with a non-zero exit status.
echo Log file exists in $logfile.
echo ""
echo Incomplete history in ./RCS -- remove it
echo Original unchanged history in ./SCCS
set exitval = 1
cleanup:
# leave log file
rm -f $tmpfile $emptyfile $initialfile $sedfile $revfile
exit $exitval


@ -1,93 +0,0 @@
;; -*- lisp-interaction -*-
;; -*- emacs-lisp -*-
;;
;; Set emacs up for editing code using CVS indentation conventions.
;; See HACKING for more on what those conventions are.
;; To use, put in your .emacs:
;; (load "c-mode")
;; (load "cvs-format.el")
;; You need to load c-mode first or else when c-mode autoloads it will
;; clobber the settings from cvs-format.el. Using c-mode-hook perhaps would
;; be a cleaner way to handle that. Or see below about (set-c-style "BSD").
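;; A hook-based variant might look roughly like this (an untested sketch;
;; the file name and load path are assumptions):
;;   (add-hook 'c-mode-hook (lambda () (load "cvs-format")))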
;;
;; Credits: Originally from the personal .emacs file of Rich Pixley,
;; then rich@cygnus.com, circa 1992. He sez "feel free to copy."
;;
;;
;;
## This section sets constants used by c-mode for formatting
;;
;;
;; If `c-auto-newline' is non-`nil', newlines are inserted both
;;before and after braces that you insert, and after colons and semicolons.
;;Correct C indentation is done on all the lines that are made this way.
(setq c-auto-newline nil)
;;*Non-nil means TAB in C mode should always reindent the current line,
;;regardless of where in the line point is when the TAB command is used.
;;It might be desirable to set this to nil for CVS, since unlike GNU
;; CVS often uses comments over to the right separated by TABs.
;; Depends some on whether you're in the habit of using TAB to
;; reindent.
;(setq c-tab-always-indent nil)
;;; It seems to me that
;;; `M-x set-c-style BSD RET'
;;; or
;;; (set-c-style "BSD")
;;; takes care of the indentation parameters correctly.
;; C does not have anything analogous to particular function names for which
;;special forms of indentation are desirable. However, it has a different
;;need for customization facilities: many different styles of C indentation
;;are in common use.
;;
;; There are six variables you can set to control the style that Emacs C
;;mode will use.
;;
;;`c-indent-level'
;; Indentation of C statements within surrounding block. The surrounding
;; block's indentation is the indentation of the line on which the
;; open-brace appears.
(setq c-indent-level 4)
;;`c-continued-statement-offset'
;; Extra indentation given to a substatement, such as the then-clause of
;; an if or body of a while.
(setq c-continued-statement-offset 4)
;;`c-brace-offset'
;; Extra indentation for line if it starts with an open brace.
(setq c-brace-offset -4)
;;`c-brace-imaginary-offset'
;; An open brace following other text is treated as if it were this far
;; to the right of the start of its line.
(setq c-brace-imaginary-offset 0)
;;`c-argdecl-indent'
;; Indentation level of declarations of C function arguments.
(setq c-argdecl-indent 4)
;;`c-label-offset'
;; Extra indentation for line that is a label, or case or default.
;; This doesn't quite do the right thing for CVS switches, which use the
;; switch (foo)
;; {
;; case 0:
;; break;
;; style. But if one manually aligns the first case, then the rest
;; should work OK.
(setq c-label-offset -4)
;;;; eof


@ -1,584 +0,0 @@
#! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2006-10-15.18
# Copyright (C) 1999, 2000, 2003, 2004, 2005, 2006 Free Software
# Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
case $1 in
'')
echo "$0: No command. Try \`$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAM ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by `PROGRAM ARGS'.
object Object file output by `PROGRAM ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
rm -f "$tmpdepfile"
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say).
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
alpha=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
## The second -e expression handles DOS-style file names with drive letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the `deleted header file' problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
tr ' ' '
' < "$tmpdepfile" |
## Some versions of gcc put a space before the `:'. On the theory
## that the space means something, we add a space to the output as
## well.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like `#:fec' to the end of the
# dependency line.
tr ' ' '
' < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' | \
tr '
' ' ' >> $depfile
echo >> $depfile
# The second pass generates a dummy entry for each header file.
tr ' ' '
' < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> $depfile
else
# The sourcefile does not contain any dependencies, so just
# store a dummy comment line, to avoid errors with the Makefile
# "include basename.Plo" scheme.
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts `$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
stripped=`echo "$object" | sed 's/\(.*\)\..*$/\1/'`
tmpdepfile="$stripped.u"
if test "$libtool" = yes; then
"$@" -Wc,-M
else
"$@" -M
fi
stat=$?
if test -f "$tmpdepfile"; then :
else
stripped=`echo "$stripped" | sed 's,^.*/,,'`
tmpdepfile="$stripped.u"
fi
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
if test -f "$tmpdepfile"; then
outname="$stripped.o"
# Each line is of the form `foo.o: dependent.h'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed -e "s,^$outname:,$object :," < "$tmpdepfile" > "$depfile"
sed -e "s,^$outname: \(.*\)$,\1:," < "$tmpdepfile" >> "$depfile"
else
# The sourcefile does not contain any dependencies, so just
# store a dummy comment line, to avoid errors with the Makefile
# "include basename.Plo" scheme.
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
icc)
# Intel's C compiler understands `-MD -MF file'. However on
# icc -MD -MF foo.d -c -o sub/foo.o sub/foo.c
# ICC 7.0 will fill foo.d with something like
# foo.o: sub/foo.c
# foo.o: sub/foo.h
# which is wrong. We want:
# sub/foo.o: sub/foo.c
# sub/foo.o: sub/foo.h
# sub/foo.c:
# sub/foo.h:
# ICC 7.1 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using \ :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" |
sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
dir=`echo "$object" | sed -e 's|/[^/]*$|/|'`
test "x$dir" = "x$object" && dir=
base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'`
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[a-z]*:,$object:," "$tmpdepfile" > "$depfile"
# Add `dependent.h:' lines.
sed -ne '2,${; s/^ *//; s/ \\*$//; s/$/:/; p;}' "$tmpdepfile" >> "$depfile"
else
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. `cc -MD -o foo.o ...' puts the dependencies into `foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in `foo.d' instead, so we check for that too.
# Subdirectories are respected.
dir=`echo "$object" | sed -e 's|/[^/]*$|/|'`
test "x$dir" = "x$object" && dir=
base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'`
if test "$libtool" = yes; then
# With Tru64 cc, shared objects can also be used to make a
# static library. This mechanism is used in libtool 1.4 series to
# handle both shared and static libraries in a single compilation.
# With libtool 1.4, dependencies were output in $dir.libs/$base.lo.d.
#
# With libtool 1.5 this exception was removed, and libtool now
# generates 2 separate objects for the 2 libraries. These two
# compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir.libs/$base.lo.d # libtool 1.4
tmpdepfile2=$dir$base.o.d # libtool 1.5
tmpdepfile3=$dir.libs/$base.o.d # libtool 1.5
tmpdepfile4=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.o.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
tmpdepfile4=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile"
# That's a tab and a space in the [].
sed -e 's,^.*\.[a-z]*:[ ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile"
else
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# Remove `-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for `:'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as `c:/foo/bar' could be seen as target `c' otherwise.
"$@" $dashmflag |
sed 's:^[ ]*[^: ][^:][^:]*\:[ ]*:'"$object"'\: :' > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
tr ' ' '
' < "$tmpdepfile" | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no
for arg in "$@"; do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix="`echo $object | sed 's/^.*\././'`"
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
sed '1,2d' "$tmpdepfile" | tr ' ' '
' | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# Remove `-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E |
sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' |
sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o,
# because we must use -o when running libtool.
"$@" || exit $?
IFS=" "
for arg
do
case "$arg" in
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::echo "`cygpath -u \\"\1\\"`":p' | sort | uniq > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
. "$tmpdepfile" | sed 's% %\\ %g' | sed -n '/^\(.*\)$/ s:: \1 \\:p' >> "$depfile"
echo " " >> "$depfile"
. "$tmpdepfile" | sed 's% %\\ %g' | sed -n '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

View File

@ -1,534 +0,0 @@
2005-09-04 Derek Price <derek@ximbiot.com>
* Makefile.am (EXTRA_DIST): Add .cvsignore.
2004-11-05 Conrad T. Pino <Conrad@Pino.com>
* libdiff.dep: Regenerated after complete rebuild.
2004-05-15 Derek Price <derek@ximbiot.com>
* libdiff.dsp: Header file list updated.
* libdiff.dep: Regenerated for "libdiff.dsp" changes.
* libdiff.mak: Regenerated for "libdiff.dsp" changes.
(Patch from Conrad Pino <conrad@pino.com>.)
2004-05-13 Derek Price <derek@ximbiot.com>
* .cvsignore: Changed for "libdiff.dsp" changes.
* libdiff.dep: Added for "../cvsnt.dsw" changes.
* libdiff.dsp: Changed for "../cvsnt.dsw" changes.
* libdiff.mak: Regenerated for "../cvsnt.dsw" changes.
(Patch from Conrad Pino <conrad@pino.com>.)
2004-03-20 Derek Price <derek@ximbiot.com>
* diff.c (diff_run): Update string arg to const.
* diffrun.h: Update prototype to match.
2003-07-12 Larry Jones <lawrence.jones@eds.com>
* io.c (find_identical_ends): Update to match current diffutils
code and improve handling of files with no newline at end.
(Patch from Andrew Moise <chops@demiurgestudios.com>.)
2003-06-13 Derek Price <derek@ximbiot.com>
* diff3.c (read_diff): Fix memory leak.
(Patch from Kenneth Lorber <keni@his.com>.)
2003-05-21 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerate with Automake version 1.7.5.
2003-05-09 Derek Price <derek@ximbiot.com>
* system.h: Define S_ISSOCK on SCO OpenServer.
2003-04-10 Larry Jones <lawrence.jones@eds.com>
* Makefile.in: Regenerated.
2003-02-25 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerated.
2003-02-01 Larry Jones <lawrence.jones@eds.com>
* util.c (finish_output): Handle EINTR from waitpid.
2002-09-24 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerated using Automake 1.6.3.
2002-09-24 Larry Jones <lawrence.jones@eds.com>
* system.h: Use HAVE_STRUCT_STAT_ST_BLKSIZE instead of the
obsolete HAVE_ST_BLKSIZE.
2002-09-24 Derek Price <derek@ximbiot.com>
* Makefile.in: Regenerated.
2002-04-30 Derek Price <oberon@umich.edu>
* Makefile.in: Regenerated with automake 1.6.
2002-04-28 Derek Price <oberon@umich.edu>
* diff.c: Use the system fnmatch.h when present.
2001-09-04 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with automake 1.5.
2001-08-09 Derek Price <dprice@collab.net>
* system.h: Source some header files when present to eliminate warning
messages under Windows.
(Patch from "Manfred Klug" <manklu@web.de>.)
2001-08-07 Derek Price <dprice@collab.net>
* build_diff.com: Turn on verify to get a better trace of the DCL.
* diff3.c: Eliminate compiler warning. The VMS read rval is ssize_t
(signed). The VMS size_t appears to be unsigned.
* io.c: Eliminate compiler warning (ssize_t).
(Patch from Mike Marciniszyn <Mike.Marciniszyn@sanchez.com>.)
2001-08-06 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated.
2001-07-04 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with new Automake release candidate 1.4h.
2001-06-28 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated with new version of Automake.
2001-05-07 Larry Jones <larry.jones@sdrc.com>
* diff3.c (diff3_run): Put the name of the output file in the error
message instead of "could not open output file" to aid in debugging.
2001-04-25 Derek Price <dprice@collab.net>
* Makefile.in: Regenerated using AM 1.4e as of today at 18:10 -0400.
2001-03-24 Noel Cragg <noel@shave.red-bean.com>
* diff.c: fix typo in usage string.
2001-03-20 Derek Price <derek.price@openavenue.com>
for Karl Tomlinson <k.tomlinson@auckland.ac.nz>
* diff3.c (main): changed the common file of the two diffs to
OLDFILE for merges and edscripts so that the diffs are more likely
to contain the intended changes. Not changing the horizon-lines
arg for the second diff. If the two diffs have the same parameters
equal changes in each diff are more likely to appear the same.
* analyze.c (shift_boundaries): undid Paul Eggert's patch to fix
the diff3 merge bug described in ccvs/doc/DIFFUTILS-2.7-BUG. The
patch is no longer necessary now that diff3 does its differences
differently. I think the hunk merges provide a better indication
of the area modified by the user now that the diffs are actually
done between the appropriate revisions.
2001-03-15 Derek Price <derek.price@openavenue.com>
* Makefile.am (INCLUDES): Add -I$(top_srcdir)/lib for platforms which
need the regex library there.
* Makefile.in: Regenerated.
2001-03-14 Derek Price <derek.price@openavenue.com>
* .cvsignore: Added '.deps'.
Pavel Roskin <proski@gnu.org>
* Makefile.am: New file.
* Makefile.in: Regenerated.
2001-02-22 Derek Price <derek.price@openavenue.com>
Pavel Roskin <proski@gnu.org>
* Makefile.in: Don't define PR_PROGRAM - it's defined by configure.
Remove separate rule for util.c.
2001-02-06 Derek Price <derek.price@openavenue.com>
Rex Jolliff <Rex_Jolliff@notes.ymp.gov>
Shawn Smith <Shawn_Smith@notes.ymp.gov>
* dir.c: Replace opendir, closedir, & readdir calls with CVS_OPENDIR,
CVS_CLOSEDIR, & CVS_READDIR in support of changes to handle VMS DEC C
5.7 {open,read,close}dir problems. Check today's entry in the vms
subdir for more.
* system.h: definitions of CVS_*DIR provided here.
2000-12-21 Derek Price <derek.price@openavenue.com>
* Makefile.in: Some changes to support Automake targets
2000-10-26 Larry Jones <larry.jones@sdrc.com>
* Makefile.in: Get PR_PROGRAM from autoconf instead of hard coding.
(Patch submitted by Urs Thuermann <urs@isnogud.escape.de>.)
Also add a dependency for util.o on Makefile since PR_PROGRAM gets
compiled in.
2000-08-03 Larry Jones <larry.jones@sdrc.com>
* diff3.c (read_diff): Use cvs_temp_name () instead of tmpnam () so
there's at least a chance of getting the file in the correct tmp dir.
2000-07-10 Larry Jones <larry.jones@sdrc.com>
* util.c (printf_output): Fix type clashes.
2000-06-15 Larry Jones <larry.jones@sdrc.com>
* diff3.c (diff3_run, make_3way_diff): Plug memory leaks.
1999-12-29 Jim Kingdon <http://developer.redhat.com/>
* diff.c (compare_files): Use explicit braces with if-if-else, per
GNU coding standards and gcc -Wall.
1999-11-23 Larry Jones <larry.jones@sdrc.com>
* diff3.c: Explicitly initialize zero_diff3 to placate neurotic
compilers that gripe about implicitly initialized const variables.
Reported by Eric Veum <sysv@yahoo.com>.
1999-09-15 Larry Jones <larry.jones@sdrc.com>
* diff.c (diff_run): Move the setjmp call before the options
processing since option errors can call fatal which in turn
calls longjmp.
1999-05-06 Jim Kingdon <http://www.cyclic.com>
* Makefile.in (DISTFILES): Remove libdiff.mak.
* libdiff.mak: Removed; we are back to a single makefile for
Visual C++ version 4.
1999-04-29 Jim Kingdon <http://www.cyclic.com>
* diff.c (diff_run): Use separate statement for setjmp call and if
statement. This is better style in general (IMHO) but in the case
of setjmp the UNICOS compiler apparently cares (I don't have the
standard handy, but there are lots of legitimate restrictions on
how you can call setjmp).
1999-04-26 Jim Kingdon <http://www.cyclic.com>
* Makefile.in (DISTFILES): Add libdiff.dsp libdiff.mak .cvsignore.
1999-04-26 (submitted 1999-03-24) John O'Connor <john@shore.net>
* libdiff.dsp: new file. MSVC project file used to build the library.
* libdiff.mak: new file. Makefile for building from the command-line.
* .cvsignore: Removed un-used entries related to MSVC. Added
entries to ignore directories generated by the NT build, Debug
and Release.
1999-03-24 Larry Jones <larry.jones@sdrc.com>
and Olaf Brandes
* diff3.c (diff3_run): Use a separate stream for the input to
output_diff3_merge instead of reopening stdin to avoid problems
with leaving it open.
1999-02-17 Jim Kingdon <http://www.cyclic.com>
and Hallvard B Furuseth.
* util.c: Use __STDC__ consistently with ./system.h.
* system.h: Add comment about PARAMS.
1999-01-12 Jim Kingdon <http://www.cyclic.com>
* Makefile.in, analyze.c, cmpbuf.c, cmpbuf.h, context.c, diff.c,
diff.h, diff3.c, diffrun.h, dir.c, ed.c, io.c, normal.c, system.h,
util.c: Remove paragraph containing the old snail mail address of
the Free Software Foundation.
1998-09-21 Jim Kingdon <kingdon@harvey.cyclic.com>
* util.c (printf_output): Make msg static; avoids auto
initializer, which is not portable to SunOS4 /bin/cc.
Reported by Mike Sutton@SAIC.
1998-09-14 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in (DISTFILES): Add diagmeet.note.
1998-08-15 Jim Kingdon <kingdon@harvey.cyclic.com>
* diffrun.h (struct diff_callbacks): Change calling convention of
write_output so that a zero length means to output zero bytes.
The cvs_output convention is just too ugly/error-prone.
* util.c (printf_output): Rewrite to parse format string
overselves rather than calling vasprintf, which cannot be
implemented in portable C.
1998-08-06 David Masterson of kla-tencor.com
* util.c (flush_output): Don't prototype.
Thu Jul 2 16:34:38 1998 Ian Lance Taylor <ian@cygnus.com>
Simplify the callback interface:
* diffrun.h: Don't include <stdarg.h> or <varargs.h>.
(struct diff_callbacks): Remove printf_output field.
* util.c: Include <stdarg.h> or <varargs.h>.
(printf_output): Use vasprintf and write_output callback rather
than printf_output callback.
* diff3.c (read_diff): Don't set my_callbacks.printf_output.
Thu Jun 18 12:43:53 1998 Ian Lance Taylor <ian@cygnus.com>
* diffrun.h: New file.
* diff.h: Include diffrun.h.
(callbacks): New EXTERN variable.
(write_output, printf_output, flush_output): Declare.
* diff.c (diff_run): Add parameter callbacks_arg. Use callback
functions rather than writing to stdout. Don't open a file if
there is a write_output callback. Call perror_with_name rather
than perror.
(usage): Use callbacks if defined rather than writing to stdout.
(compare_files): Call flush_output rather than fflush (outfile).
* diff3.c: Include diffrun.h. Change several functions to use
output functions from util.c rather than direct printing. Use
diff_error and friends rather than printing to stderr. Set global
variable outfile.
(outfile, callbacks): Declare.
(write_output, printf_output, flush_output): Declare.
(diff3_run): Add parameter callbacks_arg. Use callback functions
rather than writing to stdout.
(usage): Use callbacks if defined rather than writing to stdout.
(read_diff): Preserve callbacks and outfile around call to
diff_run.
* util.c (perror_with_name): Use error callback if defined.
(pfatal_with_name, diff_error): Likewise.
(message5): Use printf_output and write_output.
(print_message_queue, print_1_line, output_1_line): Likewise.
(begin_output): Reject paginate_flag if there are output
callbacks.
(write_output, printf_output, flush_output): New functions.
* context.c: Change all output to outfile to use printf_output and
write_output.
* ed.c: Likewise.
* ifdef.c: Likewise.
* normal.c: Likewise.
* side.c: Likewise.
* Makefile.in (SOURCES): Add diffrun.h.
($(OBJECTS)): Depend upon diffrun.h.
Fri Jan 16 14:58:19 1998 Larry Jones <larry.jones@sdrc.com>
* diff.c, diff3.c: Plug memory leaks.
Thu Jan 15 13:36:46 1998 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in (installdirs): New rule, for when ../Makefile
recurses into this directory (bug reported by W. L. Estes).
Tue Nov 11 10:48:19 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* diff.c (diff_run): Change #ifdef on HAVE_SETMODE to #if to match
the other uses (fixes compilation error on unix).
* diff.c (diff_run): Don't set stdout to binary mode.
Mon, 10 Nov 1997 Jim Kingdon
* diff.c (run_diff): Open outfile in binary mode if --binary.
Thu Nov 6 12:42:12 1997 Karl Fogel <kfogel@floss.red-bean.com>
and Paul Eggert <eggert@twinsun.com>
* analyze.c: applied Paul Eggert's patch to fix the diff3 merge
bug described in ccvs/doc/DIFFUTILS-2.7-BUG:
(shift_boundaries): new var `inhibit_hunk_merge'; use it to
control something important that I don't quite understand, but
Paul apparently does, so that's okay.
Sat Nov 1 14:17:57 1997 Michael L.H. Brouwer <michael@thi.nl>
* Makefile.in: Add call to ranlib to build a table of contents for
the library since some systems seem to require this.
1997-10-28 Jim Kingdon
* .cvsignore: Add files du jour for Visual C++, vc50.pdb and vc50.idb.
* system.h: Define HAVE_TIME_H.
* dir.c [_WIN32]: Define CLOSEDIR_VOID.
1997-10-18 Jim Kingdon
* build_diff.com: Add diff3.c
Fri Sep 26 14:24:42 1997 Tim Pierce <twp@twp.tezcat.com>
* diff.c (diff_run): Save old value of optind before calling
getopt_long, then restore before returning. Eventually it would
be nice if diff_run were fully reentrant.
New diff3 library for CVS.
* Makefile.in (SOURCES): Add diff3.c.
(OBJECTS): Add diff3.o.
* diff3.c: New file, copied from diffutils-2.7. See diffutils for
earlier ChangeLogs. Undefine initialize_main macro. Remove <signal.h>.
(diff3_run): Renamed from main(). Add `outfile' argument. Remove
SIGCLD handling; we do not fork. Save optind and reset to 0
before calling getopt_long; restore after option processing done.
(read_diff): Use diff_run with a temporary output file,
instead of forking a diff subprocess and reading from a pipe.
Change DIFF_PROGRAM to "diff"; this argument is now used only for
diagnostic reporting.
(xmalloc, xrealloc): Removed.
(diff_program): Removed.
(diff_program_name): Made extern, so it may be used in other
library calls like `error'.
(initialize_main): New function.
Namespace munging. util.c defines both fatal() and
perror_with_exit(), but these cannot be used to abort diff3: both
attempt to longjmp() to a buffer set in diff.c, used only by
diff_run. This is an awful solution, but necessary until the code
can be cleaned up. (These functions do not *have* to be renamed,
since both are declared static to diff3.c and should not clash
with libdiff.a, but it reduces potential confusion.)
* diff3.c (diff3_fatal): Renamed from fatal.
(diff3_perror_with_exit): Renamed from perror_with_exit.
Eliminate exit calls.
(try_help): Change from `void' to `int'. Return, do not exit.
(diff3_fatal, diff3_perror_with_exit, process_diff): Change `exit'
to DIFF3_ABORT.
(diff3_run): Initialize jump buffer for nonlocal exits. Change
exit calls to returns. Change `perror_with_exit' to
`perror_with_name' and add a return. Change `fatal' to
`diff_error' and add a return. The reasoning is that we shouldn't
rely on setjmp/longjmp any more than necessary.
Redirect stdout.
(check_output): Renamed from check_stdout. Take stream argument
instead of blindly checking stdout. Do not close stream, but
merely fflush it.
(diff3_run): Initialize outstream, and close when done. Pass this
stream (instead of stdout) to output_diff3_edscript,
output_diff3_merge, and output_diff3.
Thu Sep 25 14:34:22 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* util.c (begin_output, finish_output): If PR_PROGRAM is not
defined (VMS), just give a fatal error if --paginate specified.
* Makefile.in (DISTFILES): Add ChangeLog build_diff.com
Makefile.in.
* build_diff.com: New file.
Wed Sep 24 10:27:00 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in: Also set top_srcdir. Needed to make today's other
Makefile.in change work.
* .cvsignore: New file.
* Makefile.in (COMPILE): Add -I options for srcdir (perhaps
unneeded) and change -I option for lib to use top_srcdir (needed
to avoid mixups with CVS's regex.h vs. the system one).
Sun Sep 21 19:44:42 1997 Jim Kingdon <kingdon@harvey.cyclic.com>
* Makefile.in (util.o): Change util.c to $<, needed for srcdir.
Sat Sep 20 12:06:41 1997 Tim Pierce <twp@twp.tezcat.com>
New diff library for CVS, based on diffutils-2.7. See diffutils
for earlier ChangeLogs.
* Makefile.in, analyze.c, cmpbuf.c, cmpbuf.h, config.hin,
context.c, diagmeet.note, diff.c, diff.h, dir.c, ed.c, ifdef.c,
io.c, normal.c, side.c, stamp-h.in, system.h, util.c, version.c:
New files.
(COMPILE): Add -I../lib, so we can get getopt.h.
* Makefile.in: Removed anything not related to libdiff.a.
(dist-dir): New target, copied from ../lib/Makefile.in.
(DISTFILES): New variable.
(SOURCES): Renamed from `srcs'.
(OBJECTS): Renamed from `libdiff_o'.
(Makefile): Changed dependencies to reflect
new, shallow config directory structure.
(stamp-h.in, config.h.in, config.h, stamp-h): Removed.
* stamp-h.in, config.h.in: Removed.
* system.h: Remove dup2 macro (provided by ../lib/dup2.c).
Include stdlib.h if STDC_HEADERS is defined (not just
HAVE_STDLIB_H).
Sat Sep 20 05:32:18 1997 Tim Pierce <twp@twp.tezcat.com>
Diff librarification.
* diff.c (diff_run): New function, renamed from `main'.
Initialize `outfile' based on the value of the new `out' filename
argument.
(initialize_main): New function.
* system.h: Removed initialize_main macro.
* diffmain.c: New file.
* Makefile.in (diff): Added diffmain.o.
(libdiff): New target.
(AR, libdiff_o): New variables. libdiff_o does not include
xmalloc.o, fnmatch.o, getopt.o, getopt1.o, regex.o or error.o,
because these functions are already present in CVS. It will take
some work to make this more general-purpose.
Redirect standard output.
* util.c: Redirect stdout to outfile: change all naked `printf'
and `putchar' statements to `fprintf (outfile)' and `putc (...,
outfile)' throughout. This should permit redirecting diff output
by changing `outfile' just once in `diff_run'.
(output_in_progress): New variable.
(begin_output, finish_output): Use `output_in_progress', rather than
`outfile', as a semaphore to avoid reentrancy problems.
(finish_output): Close `outfile' only if paginate_flag is set.
* diff.c (check_output): New function, was check_stdout. Take a
`file' argument, and flush it instead of closing it.
(diff_run): Change check_stdout to check_output.
(compare_files): Fflush outfile, not stdout.
Eliminate exit statements.
* diff.h: Include setjmp.h.
(diff_abort_buf): New variable.
(DIFF_ABORT): New macro.
* diff.c (diff_run): Change all `exit' statements to `return'.
Set up diff_abort_buf, so we can abort diff without
terminating (for libdiff.a).
(try_help): Return int instead of void; do not exit.
* util.c (fatal): Use DIFF_ABORT instead of exit.
(pfatal_with_name): Use DIFF_ABORT instead of exit.
Namespace cleanup (rudimentary). Strictly speaking, this is not
necessary to make diff into a library. However, namespace
clashes between diff and CVS must be resolved immediately, since
CVS is the first application targeted for use with difflib.
* analyze.c, diff.c, diff.h, util.c (diff_error): Renamed from `error'.
* version.c, diff.c, diff.h, cmp.c, diff3.c, sdiff.c
(diff_version_string): Renamed from version_string.
* diff.c, util.c, diff.h, diff3.c, error.c (diff_program_name):
Renamed from program_name.
* util.c (xmalloc, xrealloc): Removed.
* Makefile.in (diff_o): Added error.o and xmalloc.o.

View File

@ -1,25 +0,0 @@
## Makefile.am for GNU DIFF
## Copyright (C) 2001 Free Software Foundation, Inc.
##
## This file is part of GNU DIFF.
##
## GNU DIFF is free software; you can redistribute it and/or modify
## it under the terms of the GNU General Public License as published by
## the Free Software Foundation; either version 2, or (at your option)
## any later version.
##
## GNU DIFF is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
INCLUDES = -I$(top_srcdir)/lib
noinst_LIBRARIES = libdiff.a
libdiff_a_SOURCES = diff.c diff3.c analyze.c cmpbuf.c cmpbuf.h io.c \
context.c ed.c normal.c ifdef.c util.c dir.c version.c diff.h \
side.c system.h diffrun.h
EXTRA_DIST = ChangeLog build_diff.com diagmeet.note \
libdiff.dep libdiff.dsp libdiff.mak .cvsignore

View File

@ -1,429 +0,0 @@
# Makefile.in generated by automake 1.10 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = diff
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in ChangeLog
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.in
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
LIBRARIES = $(noinst_LIBRARIES)
AR = @AR@
ARFLAGS = @ARFLAGS@
libdiff_a_AR = $(AR) $(ARFLAGS)
libdiff_a_LIBADD =
am_libdiff_a_OBJECTS = diff.$(OBJEXT) diff3.$(OBJEXT) \
analyze.$(OBJEXT) cmpbuf.$(OBJEXT) io.$(OBJEXT) \
context.$(OBJEXT) ed.$(OBJEXT) normal.$(OBJEXT) \
ifdef.$(OBJEXT) util.$(OBJEXT) dir.$(OBJEXT) version.$(OBJEXT) \
side.$(OBJEXT)
libdiff_a_OBJECTS = $(am_libdiff_a_OBJECTS)
DEFAULT_INCLUDES = -I. -I$(top_builddir)@am__isrc@
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
SOURCES = $(libdiff_a_SOURCES)
DIST_SOURCES = $(libdiff_a_SOURCES)
ETAGS = etags
CTAGS = ctags
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CSH = @CSH@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EDITOR = @EDITOR@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
KRB4 = @KRB4@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAINT = @MAINT@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
MKTEMP = @MKTEMP@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PERL = @PERL@
PR = @PR@
PS2PDF = @PS2PDF@
RANLIB = @RANLIB@
ROFF = @ROFF@
SENDMAIL = @SENDMAIL@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TEXI2DVI = @TEXI2DVI@
VERSION = @VERSION@
YACC = @YACC@
YFLAGS = @YFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
ac_prefix_program = @ac_prefix_program@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
includeopt = @includeopt@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
with_default_rsh = @with_default_rsh@
with_default_ssh = @with_default_ssh@
INCLUDES = -I$(top_srcdir)/lib
noinst_LIBRARIES = libdiff.a
libdiff_a_SOURCES = diff.c diff3.c analyze.c cmpbuf.c cmpbuf.h io.c \
context.c ed.c normal.c ifdef.c util.c dir.c version.c diff.h \
side.c system.h diffrun.h
EXTRA_DIST = ChangeLog build_diff.com diagmeet.note \
libdiff.dep libdiff.dsp libdiff.mak .cvsignore
all: all-am
.SUFFIXES:
.SUFFIXES: .c .o .obj
$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu diff/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu diff/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
clean-noinstLIBRARIES:
-test -z "$(noinst_LIBRARIES)" || rm -f $(noinst_LIBRARIES)
libdiff.a: $(libdiff_a_OBJECTS) $(libdiff_a_DEPENDENCIES)
-rm -f libdiff.a
$(libdiff_a_AR) libdiff.a $(libdiff_a_OBJECTS) $(libdiff_a_LIBADD)
$(RANLIB) libdiff.a
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/analyze.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/cmpbuf.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/context.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/diff.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/diff3.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/dir.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ed.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ifdef.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/io.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/normal.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/side.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/util.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/version.Po@am__quote@
.c.o:
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ mv -f $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c $<
.c.obj:
@am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
@am__fastdepCC_TRUE@ mv -f $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c `$(CYGPATH_W) '$<'`
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(LIBRARIES)
installdirs:
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic clean-noinstLIBRARIES mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am:
install-dvi: install-dvi-am
install-exec-am:
install-html: install-html-am
install-info: install-info-am
install-man:
install-pdf: install-pdf-am
install-ps: install-ps-am
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am:
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \
clean-noinstLIBRARIES ctags distclean distclean-compile \
distclean-generic distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-pdf install-pdf-am \
install-ps install-ps-am install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic pdf pdf-am ps ps-am tags uninstall \
uninstall-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

File diff suppressed because it is too large

View File

@ -1,38 +0,0 @@
/* Buffer primitives for comparison operations.
Copyright (C) 1993 Free Software Foundation, Inc.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "system.h"
#include "cmpbuf.h"
/* Least common multiple of two buffer sizes A and B. */
size_t
buffer_lcm (a, b)
size_t a, b;
{
size_t m, n, r;
/* Yield reasonable values if buffer sizes are zero. */
if (!a)
return b ? b : 8 * 1024;
if (!b)
return a;
/* n = gcd (a, b) */
for (m = a, n = b; (r = m % n) != 0; m = n, n = r)
continue;
return a/n * b;
}
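/* A minimal standalone sketch of the gcd-based computation above, for
   reference only: demo_buffer_lcm, the main driver and the sample sizes
   are illustrative additions, not part of the original cmpbuf.c. */
#include <stdio.h>
#include <stddef.h>
static size_t
demo_buffer_lcm (size_t a, size_t b)
{
  size_t m, n, r;
  /* Fall back to a sane size when either buffer size is zero. */
  if (!a)
    return b ? b : 8 * 1024;
  if (!b)
    return a;
  /* Euclid's algorithm: n ends up holding gcd (a, b). */
  for (m = a, n = b; (r = m % n) != 0; m = n, n = r)
    continue;
  return a / n * b;
}
int
main (void)
{
  /* gcd (4096, 6144) is 2048, so the least common multiple is 12288. */
  printf ("%zu\n", demo_buffer_lcm (4096, 6144));
  return 0;
}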

View File

@ -1,18 +0,0 @@
/* Buffer primitives for comparison operations.
Copyright (C) 1993 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
size_t buffer_lcm PARAMS((size_t, size_t));

View File

@ -1,462 +0,0 @@
/* Context-format output routines for GNU DIFF.
Copyright (C) 1988,1989,1991,1992,1993,1994,1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
static struct change *find_hunk PARAMS((struct change *));
static void find_function PARAMS((struct file_data const *, int, char const **, size_t *));
static void mark_ignorable PARAMS((struct change *));
static void pr_context_hunk PARAMS((struct change *));
static void pr_unidiff_hunk PARAMS((struct change *));
static void print_context_label PARAMS ((char const *, struct file_data *, char const *));
static void print_context_number_range PARAMS((struct file_data const *, int, int));
static void print_unidiff_number_range PARAMS((struct file_data const *, int, int));
/* Last place find_function started searching from. */
static int find_function_last_search;
/* The value find_function returned when it started searching there. */
static int find_function_last_match;
/* Print a label for a context diff, with a file name and date or a label. */
static void
print_context_label (mark, inf, label)
char const *mark;
struct file_data *inf;
char const *label;
{
if (label)
printf_output ("%s %s\n", mark, label);
else
{
char const *ct = ctime (&inf->stat.st_mtime);
if (!ct)
ct = "?\n";
/* See Posix.2 section 4.17.6.1.4 for this format. */
printf_output ("%s %s\t%s", mark, inf->name, ct);
}
}
/* Print a header for a context diff, with the file names and dates. */
void
print_context_header (inf, unidiff_flag)
struct file_data inf[];
int unidiff_flag;
{
if (unidiff_flag)
{
print_context_label ("---", &inf[0], file_label[0]);
print_context_label ("+++", &inf[1], file_label[1]);
}
else
{
print_context_label ("***", &inf[0], file_label[0]);
print_context_label ("---", &inf[1], file_label[1]);
}
}
/* Print an edit script in context format. */
void
print_context_script (script, unidiff_flag)
struct change *script;
int unidiff_flag;
{
if (ignore_blank_lines_flag || ignore_regexp_list)
mark_ignorable (script);
else
{
struct change *e;
for (e = script; e; e = e->link)
e->ignore = 0;
}
find_function_last_search = - files[0].prefix_lines;
find_function_last_match = find_function_last_search - 1;
if (unidiff_flag)
print_script (script, find_hunk, pr_unidiff_hunk);
else
print_script (script, find_hunk, pr_context_hunk);
}
/* Print a pair of line numbers with a comma, translated for file FILE.
If the second number is not greater, use the first in place of it.
Args A and B are internal line numbers.
We print the translated (real) line numbers. */
static void
print_context_number_range (file, a, b)
struct file_data const *file;
int a, b;
{
int trans_a, trans_b;
translate_range (file, a, b, &trans_a, &trans_b);
/* Note: we can have B < A in the case of a range of no lines.
In this case, we should print the line number before the range,
which is B. */
if (trans_b > trans_a)
printf_output ("%d,%d", trans_a, trans_b);
else
printf_output ("%d", trans_b);
}
/* Print a portion of an edit script in context format.
HUNK is the beginning of the portion to be printed.
The end is marked by a `link' that has been nulled out.
Prints out lines from both files, and precedes each
line with the appropriate flag-character. */
static void
pr_context_hunk (hunk)
struct change *hunk;
{
int first0, last0, first1, last1, show_from, show_to, i;
struct change *next;
char const *prefix;
char const *function;
size_t function_length;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &first0, &last0, &first1, &last1, &show_from, &show_to);
if (!show_from && !show_to)
return;
/* Include a context's width before and after. */
i = - files[0].prefix_lines;
first0 = max (first0 - context, i);
first1 = max (first1 - context, i);
last0 = min (last0 + context, files[0].valid_lines - 1);
last1 = min (last1 + context, files[1].valid_lines - 1);
/* If desired, find the preceding function definition line in file 0. */
function = 0;
if (function_regexp_list)
find_function (&files[0], first0, &function, &function_length);
begin_output ();
/* If we looked for and found a function this is part of,
include its name in the header of the diff section. */
printf_output ("***************");
if (function)
{
printf_output (" ");
write_output (function, min (function_length - 1, 40));
}
printf_output ("\n*** ");
print_context_number_range (&files[0], first0, last0);
printf_output (" ****\n");
if (show_from)
{
next = hunk;
for (i = first0; i <= last0; i++)
{
/* Skip past changes that apply (in file 0)
only to lines before line I. */
while (next && next->line0 + next->deleted <= i)
next = next->link;
/* Compute the marking for line I. */
prefix = " ";
if (next && next->line0 <= i)
/* The change NEXT covers this line.
If lines were inserted here in file 1, this is "changed".
Otherwise it is "deleted". */
prefix = (next->inserted > 0 ? "!" : "-");
print_1_line (prefix, &files[0].linbuf[i]);
}
}
printf_output ("--- ");
print_context_number_range (&files[1], first1, last1);
printf_output (" ----\n");
if (show_to)
{
next = hunk;
for (i = first1; i <= last1; i++)
{
/* Skip past changes that apply (in file 1)
only to lines before line I. */
while (next && next->line1 + next->inserted <= i)
next = next->link;
/* Compute the marking for line I. */
prefix = " ";
if (next && next->line1 <= i)
/* The change NEXT covers this line.
If lines were deleted here in file 0, this is "changed".
Otherwise it is "inserted". */
prefix = (next->deleted > 0 ? "!" : "+");
print_1_line (prefix, &files[1].linbuf[i]);
}
}
}
/* Print a pair of line numbers with a comma, translated for file FILE.
If the second number is smaller, use the first in place of it.
If the numbers are equal, print just one number.
Args A and B are internal line numbers.
We print the translated (real) line numbers. */
static void
print_unidiff_number_range (file, a, b)
struct file_data const *file;
int a, b;
{
int trans_a, trans_b;
translate_range (file, a, b, &trans_a, &trans_b);
/* Note: we can have B < A in the case of a range of no lines.
In this case, we should print the line number before the range,
which is B. */
if (trans_b <= trans_a)
printf_output (trans_b == trans_a ? "%d" : "%d,0", trans_b);
else
printf_output ("%d,%d", trans_a, trans_b - trans_a + 1);
}
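/* A worked illustration of the three cases handled above; the helper and
   the sample values are hypothetical and only mirror the logic, they are
   not part of the original context.c. */
#include <stdio.h>
static void
demo_unidiff_range (int trans_a, int trans_b)
{
  if (trans_b <= trans_a)
    printf (trans_b == trans_a ? "%d" : "%d,0", trans_b);
  else
    printf ("%d,%d", trans_a, trans_b - trans_a + 1);
}
int
main (void)
{
  demo_unidiff_range (12, 15);  /* prints "12,4": four lines starting at line 12 */
  putchar ('\n');
  demo_unidiff_range (7, 7);    /* prints "7": a single line */
  putchar ('\n');
  demo_unidiff_range (21, 20);  /* prints "20,0": an empty range after line 20 */
  putchar ('\n');
  return 0;
}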
/* Print a portion of an edit script in unidiff format.
HUNK is the beginning of the portion to be printed.
The end is marked by a `link' that has been nulled out.
Prints out lines from both files, and precedes each
line with the appropriate flag-character. */
static void
pr_unidiff_hunk (hunk)
struct change *hunk;
{
int first0, last0, first1, last1, show_from, show_to, i, j, k;
struct change *next;
char const *function;
size_t function_length;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &first0, &last0, &first1, &last1, &show_from, &show_to);
if (!show_from && !show_to)
return;
/* Include a context's width before and after. */
i = - files[0].prefix_lines;
first0 = max (first0 - context, i);
first1 = max (first1 - context, i);
last0 = min (last0 + context, files[0].valid_lines - 1);
last1 = min (last1 + context, files[1].valid_lines - 1);
/* If desired, find the preceding function definition line in file 0. */
function = 0;
if (function_regexp_list)
find_function (&files[0], first0, &function, &function_length);
begin_output ();
printf_output ("@@ -");
print_unidiff_number_range (&files[0], first0, last0);
printf_output (" +");
print_unidiff_number_range (&files[1], first1, last1);
printf_output (" @@");
/* If we looked for and found a function this is part of,
include its name in the header of the diff section. */
if (function)
{
write_output (" ", 1);
write_output (function, min (function_length - 1, 40));
}
write_output ("\n", 1);
next = hunk;
i = first0;
j = first1;
while (i <= last0 || j <= last1)
{
/* If the line isn't a difference, output the context from file 0. */
if (!next || i < next->line0)
{
write_output (tab_align_flag ? "\t" : " ", 1);
print_1_line (0, &files[0].linbuf[i++]);
j++;
}
else
{
/* For each difference, first output the deleted part. */
k = next->deleted;
while (k--)
{
write_output ("-", 1);
if (tab_align_flag)
write_output ("\t", 1);
print_1_line (0, &files[0].linbuf[i++]);
}
/* Then output the inserted part. */
k = next->inserted;
while (k--)
{
write_output ("+", 1);
if (tab_align_flag)
write_output ("\t", 1);
print_1_line (0, &files[1].linbuf[j++]);
}
/* We're done with this hunk, so on to the next! */
next = next->link;
}
}
}
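/* For illustration, the same one-line change rendered by pr_unidiff_hunk
   with one line of context on each side:

       @@ -3,3 +3,3 @@ some_function
        unchanged line
       -old version of the changed line
       +new version of the changed line
        unchanged line

   The name after the second "@@" appears only when a function-matching
   regexp (-F) found a preceding header line, as handled above. */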
/* Scan a (forward-ordered) edit script for the first place that more than
2*CONTEXT unchanged lines appear, and return a pointer
to the `struct change' for the last change before those lines. */
static struct change *
find_hunk (start)
struct change *start;
{
struct change *prev;
int top0, top1;
int thresh;
do
{
/* Compute number of first line in each file beyond this change. */

top0 = start->line0 + start->deleted;
top1 = start->line1 + start->inserted;
prev = start;
start = start->link;
/* Threshold distance is 2*CONTEXT between two non-ignorable changes,
but only CONTEXT if one is ignorable. */
thresh = ((prev->ignore || (start && start->ignore))
? context
: 2 * context + 1);
/* It is not supposed to matter which file we check in the end-test.
If it would matter, crash. */
if (start && start->line0 - top0 != start->line1 - top1)
abort ();
} while (start
/* Keep going if less than THRESH lines
elapse before the affected line. */
&& start->line0 < top0 + thresh);
return prev;
}
/* Set the `ignore' flag properly in each change in SCRIPT.
It should be 1 if all the lines inserted or deleted in that change
are ignorable lines. */
static void
mark_ignorable (script)
struct change *script;
{
while (script)
{
struct change *next = script->link;
int first0, last0, first1, last1, deletes, inserts;
/* Turn this change into a hunk: detach it from the others. */
script->link = 0;
/* Determine whether this change is ignorable. */
analyze_hunk (script, &first0, &last0, &first1, &last1, &deletes, &inserts);
/* Reconnect the chain as before. */
script->link = next;
/* If the change is ignorable, mark it. */
script->ignore = (!deletes && !inserts);
/* Advance to the following change. */
script = next;
}
}
/* Find the last function-header line in FILE prior to line number LINENUM.
This is a line containing a match for the regexp in `function_regexp'.
Store the address of the line text into LINEP and the length of the
line into LENP.
Do not store anything if no function-header is found. */
static void
find_function (file, linenum, linep, lenp)
struct file_data const *file;
int linenum;
char const **linep;
size_t *lenp;
{
int i = linenum;
int last = find_function_last_search;
find_function_last_search = i;
while (--i >= last)
{
/* See if this line is what we want. */
struct regexp_list *r;
char const *line = file->linbuf[i];
size_t len = file->linbuf[i + 1] - line;
for (r = function_regexp_list; r; r = r->next)
if (0 <= re_search (&r->buf, line, len, 0, len, 0))
{
*linep = line;
*lenp = len;
find_function_last_match = i;
return;
}
}
/* If we search back to where we started searching the previous time,
find the line we found last time. */
if (find_function_last_match >= - file->prefix_lines)
{
i = find_function_last_match;
*linep = file->linbuf[i];
*lenp = file->linbuf[i + 1] - *linep;
return;
}
return;
}

File diff suppressed because it is too large

View File

@ -1,354 +0,0 @@
/* Shared definitions for GNU DIFF
Copyright (C) 1988, 89, 91, 92, 93, 97, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "system.h"
#include <stdio.h>
#include <setjmp.h>
#include "regex.h"
#include "diffrun.h"
#define TAB_WIDTH 8
/* Variables for command line options */
#ifndef GDIFF_MAIN
#define EXTERN extern
#else
#define EXTERN
#endif
/* The callbacks to use for output. */
EXTERN const struct diff_callbacks *callbacks;
enum output_style {
/* Default output style. */
OUTPUT_NORMAL,
/* Output the differences with lines of context before and after (-c). */
OUTPUT_CONTEXT,
/* Output the differences in a unified context diff format (-u). */
OUTPUT_UNIFIED,
/* Output the differences as commands suitable for `ed' (-e). */
OUTPUT_ED,
/* Output the diff as a forward ed script (-f). */
OUTPUT_FORWARD_ED,
/* Like -f, but output a count of changed lines in each "command" (-n). */
OUTPUT_RCS,
/* Output merged #ifdef'd file (-D). */
OUTPUT_IFDEF,
/* Output sdiff style (-y). */
OUTPUT_SDIFF
};
/* True for output styles that are robust,
i.e. can handle a file that ends in a non-newline. */
#define ROBUST_OUTPUT_STYLE(S) ((S) != OUTPUT_ED && (S) != OUTPUT_FORWARD_ED)
EXTERN enum output_style output_style;
/* Nonzero if output cannot be generated for identical files. */
EXTERN int no_diff_means_no_output;
/* Number of lines of context to show in each set of diffs.
This is zero when context is not to be shown. */
EXTERN int context;
/* Consider all files as text files (-a).
Don't interpret codes over 0177 as implying a "binary file". */
EXTERN int always_text_flag;
/* Number of lines to keep in identical prefix and suffix. */
EXTERN int horizon_lines;
/* Ignore changes in horizontal white space (-b). */
EXTERN int ignore_space_change_flag;
/* Ignore all horizontal white space (-w). */
EXTERN int ignore_all_space_flag;
/* Ignore changes that affect only blank lines (-B). */
EXTERN int ignore_blank_lines_flag;
/* 1 if lines may match even if their contents do not match exactly.
This depends on various options. */
EXTERN int ignore_some_line_changes;
/* 1 if files may match even if their contents are not byte-for-byte identical.
This depends on various options. */
EXTERN int ignore_some_changes;
/* Ignore differences in case of letters (-i). */
EXTERN int ignore_case_flag;
/* File labels for `-c' output headers (-L). */
EXTERN char *file_label[2];
struct regexp_list
{
struct re_pattern_buffer buf;
struct regexp_list *next;
};
/* Regexp to identify function-header lines (-F). */
EXTERN struct regexp_list *function_regexp_list;
/* Ignore changes that affect only lines matching this regexp (-I). */
EXTERN struct regexp_list *ignore_regexp_list;
/* Say only whether files differ, not how (-q). */
EXTERN int no_details_flag;
/* Report files compared that match (-s).
Normally nothing is output when that happens. */
EXTERN int print_file_same_flag;
/* Output the differences with exactly 8 columns added to each line
so that any tabs in the text line up properly (-T). */
EXTERN int tab_align_flag;
/* Expand tabs in the output so the text lines up properly
despite the characters added to the front of each line (-t). */
EXTERN int tab_expand_flag;
/* In directory comparison, specify file to start with (-S).
All file names less than this name are ignored. */
EXTERN char *dir_start_file;
/* If a file is new (appears in only one dir)
include its entire contents (-N).
Then `patch' would create the file with appropriate contents. */
EXTERN int entire_new_file_flag;
/* If a file is new (appears in only the second dir)
include its entire contents (-P).
Then `patch' would create the file with appropriate contents. */
EXTERN int unidirectional_new_file_flag;
/* Pipe each file's output through pr (-l). */
EXTERN int paginate_flag;
enum line_class {
/* Lines taken from just the first file. */
OLD,
/* Lines taken from just the second file. */
NEW,
/* Lines common to both files. */
UNCHANGED,
/* A hunk containing both old and new lines (line groups only). */
CHANGED
};
/* Line group formats for old, new, unchanged, and changed groups. */
EXTERN char *group_format[CHANGED + 1];
/* Line formats for old, new, and unchanged lines. */
EXTERN char *line_format[UNCHANGED + 1];
/* If using OUTPUT_SDIFF print extra information to help the sdiff filter. */
EXTERN int sdiff_help_sdiff;
/* Tell OUTPUT_SDIFF to show only the left version of common lines. */
EXTERN int sdiff_left_only;
/* Tell OUTPUT_SDIFF to not show common lines. */
EXTERN int sdiff_skip_common_lines;
/* The half line width and column 2 offset for OUTPUT_SDIFF. */
EXTERN unsigned sdiff_half_width;
EXTERN unsigned sdiff_column2_offset;
/* String containing all the command options diff received,
with spaces between and at the beginning but none at the end.
If there were no options given, this string is empty. */
EXTERN char * switch_string;
/* Nonzero means use heuristics for better speed. */
EXTERN int heuristic;
/* Name of program the user invoked (for error messages). */
EXTERN char *diff_program_name;
/* Jump buffer for nonlocal exits. */
EXTERN jmp_buf diff_abort_buf;
#define DIFF_ABORT(retval) longjmp(diff_abort_buf, retval)
/* The result of comparison is an "edit script": a chain of `struct change'.
Each `struct change' represents one place where some lines are deleted
and some are inserted.
LINE0 and LINE1 are the first affected lines in the two files (origin 0).
DELETED is the number of lines deleted here from file 0.
INSERTED is the number of lines inserted here in file 1.
If DELETED is 0 then LINE0 is the number of the line before
which the insertion was done; vice versa for INSERTED and LINE1. */
struct change
{
struct change *link; /* Previous or next edit command */
int inserted; /* # lines of file 1 changed here. */
int deleted; /* # lines of file 0 changed here. */
int line0; /* Line number of 1st deleted line. */
int line1; /* Line number of 1st inserted line. */
char ignore; /* Flag used in context.c */
};
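/* Example (origin-0 line numbers): comparing a file containing the lines
   {a, b, c} with one containing {a, x, c, d} yields a script of two
   changes: the first with line0 = 1, deleted = 1, line1 = 1, inserted = 1
   (b replaced by x), and the second with line0 = 3, deleted = 0, line1 = 3,
   inserted = 1 (d appended at the end). */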
/* Structures that describe the input files. */
/* Data on one input file being compared. */
struct file_data {
int desc; /* File descriptor */
char const *name; /* File name */
struct stat stat; /* File status from fstat() */
int dir_p; /* nonzero if file is a directory */
/* Buffer in which text of file is read. */
char * buffer;
/* Allocated size of buffer. */
size_t bufsize;
/* Number of valid characters now in the buffer. */
size_t buffered_chars;
/* Array of pointers to lines in the file. */
char const **linbuf;
/* linbuf_base <= buffered_lines <= valid_lines <= alloc_lines.
linbuf[linbuf_base ... buffered_lines - 1] are possibly differing.
linbuf[linbuf_base ... valid_lines - 1] contain valid data.
linbuf[linbuf_base ... alloc_lines - 1] are allocated. */
int linbuf_base, buffered_lines, valid_lines, alloc_lines;
/* Pointer to end of prefix of this file to ignore when hashing. */
char const *prefix_end;
/* Count of lines in the prefix.
There are this many lines in the file before linbuf[0]. */
int prefix_lines;
/* Pointer to start of suffix of this file to ignore when hashing. */
char const *suffix_begin;
/* Vector, indexed by line number, containing an equivalence code for
each line. It is this vector that is actually compared with that
of another file to generate differences. */
int *equivs;
/* Vector, like the previous one except that
the elements for discarded lines have been squeezed out. */
int *undiscarded;
/* Vector mapping virtual line numbers (not counting discarded lines)
to real ones (counting those lines). Both are origin-0. */
int *realindexes;
/* Total number of nondiscarded lines. */
int nondiscarded_lines;
/* Vector, indexed by real origin-0 line number,
containing 1 for a line that is an insertion or a deletion.
The results of comparison are stored here. */
char *changed_flag;
/* 1 if file ends in a line with no final newline. */
int missing_newline;
/* 1 more than the maximum equivalence value used for this or its
sibling file. */
int equiv_max;
};
/* Describe the two files currently being compared. */
EXTERN struct file_data files[2];
/* Stdio stream to output diffs to. */
EXTERN FILE *outfile;
/* Declare various functions. */
/* analyze.c */
int diff_2_files PARAMS((struct file_data[], int));
/* context.c */
void print_context_header PARAMS((struct file_data[], int));
void print_context_script PARAMS((struct change *, int));
/* diff.c */
int excluded_filename PARAMS((char const *));
/* dir.c */
int diff_dirs PARAMS((struct file_data const[], int (*) PARAMS((char const *, char const *, char const *, char const *, int)), int));
/* ed.c */
void print_ed_script PARAMS((struct change *));
void pr_forward_ed_script PARAMS((struct change *));
/* ifdef.c */
void print_ifdef_script PARAMS((struct change *));
/* io.c */
int read_files PARAMS((struct file_data[], int));
int sip PARAMS((struct file_data *, int));
void slurp PARAMS((struct file_data *));
/* normal.c */
void print_normal_script PARAMS((struct change *));
/* rcs.c */
void print_rcs_script PARAMS((struct change *));
/* side.c */
void print_sdiff_script PARAMS((struct change *));
/* util.c */
VOID *xmalloc PARAMS((size_t));
VOID *xrealloc PARAMS((VOID *, size_t));
char *concat PARAMS((char const *, char const *, char const *));
char *dir_file_pathname PARAMS((char const *, char const *));
int change_letter PARAMS((int, int));
int line_cmp PARAMS((char const *, char const *));
int translate_line_number PARAMS((struct file_data const *, int));
struct change *find_change PARAMS((struct change *));
struct change *find_reverse_change PARAMS((struct change *));
void analyze_hunk PARAMS((struct change *, int *, int *, int *, int *, int *, int *));
void begin_output PARAMS((void));
void debug_script PARAMS((struct change *));
void diff_error PARAMS((char const *, char const *, char const *));
void fatal PARAMS((char const *));
void finish_output PARAMS((void));
void write_output PARAMS((char const *, size_t));
void printf_output PARAMS((char const *, ...))
#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ > 6)
__attribute__ ((__format__ (__printf__, 1, 2)))
#endif
;
void flush_output PARAMS((void));
void message PARAMS((char const *, char const *, char const *));
void message5 PARAMS((char const *, char const *, char const *, char const *, char const *));
void output_1_line PARAMS((char const *, char const *, char const *, char const *));
void perror_with_name PARAMS((char const *));
void pfatal_with_name PARAMS((char const *));
void print_1_line PARAMS((char const *, char const * const *));
void print_message_queue PARAMS((void));
void print_number_range PARAMS((int, struct file_data *, int, int));
void print_script PARAMS((struct change *, struct change * (*) PARAMS((struct change *)), void (*) PARAMS((struct change *))));
void setup_output PARAMS((char const *, char const *, int));
void translate_range PARAMS((struct file_data const *, int, int, int *, int *));
/* version.c */
extern char const diff_version_string[];

File diff suppressed because it is too large

View File

@ -1,69 +0,0 @@
/* Interface header file for GNU DIFF library.
Copyright (C) 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#ifndef DIFFRUN_H
#define DIFFRUN_H
/* This header file defines the interfaces used by the diff library.
It should be included by programs which use the diff library. */
#include <sys/types.h>
#if defined __STDC__ && __STDC__
#define DIFFPARAMS(args) args
#else
#define DIFFPARAMS(args) ()
#endif
/* The diff_callbacks structure is used to handle callbacks from the
diff library. All output goes through these callbacks. When a
pointer to this structure is passed in, it may be NULL. Also, any
of the individual callbacks may be NULL. This means that the
default action should be taken. */
struct diff_callbacks
{
/* Write output. This function just writes a string of a given
length to the output file. The default is to fwrite to OUTFILE.
If this callback is defined, flush_output must also be defined.
If the length is zero, output zero bytes. */
void (*write_output) DIFFPARAMS((char const *, size_t));
/* Flush output. The default is to fflush OUTFILE. If this
callback is defined, write_output must also be defined. */
void (*flush_output) DIFFPARAMS((void));
/* Write a '\0'-terminated string to stdout.
This is called for version and help messages. */
void (*write_stdout) DIFFPARAMS((char const *));
/* Print an error message. The first argument is a printf format,
and the next two are parameters. The default is to print a
message on stderr. */
void (*error) DIFFPARAMS((char const *, char const *, char const *));
};
/* Run a diff. */
extern int diff_run DIFFPARAMS((int, char **, const char *,
const struct diff_callbacks *));
/* Run a diff3. */
extern int diff3_run DIFFPARAMS((int, char **, char *,
const struct diff_callbacks *));
#undef DIFFPARAMS
#endif /* DIFFRUN_H */
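/* A minimal, hypothetical caller of the library interface above (not part
   of the original sources): it fills an argument vector the way a diff
   command line would look and relies on the default behaviour for output.
   The helper name, the "-u" option and the meaning of a null output name
   are illustrative assumptions, not guarantees made by this header. */

#include "diffrun.h"

static int
run_unified_diff (const char *old_file, const char *new_file)
{
    char *argv[4];

    argv[0] = "diff";                 /* conventional argv[0] slot */
    argv[1] = "-u";                   /* ordinary diff options go here */
    argv[2] = (char *) old_file;
    argv[3] = (char *) new_file;

    /* A null callback table means "take the default action" as described
       in the comments above; the null third argument is assumed to select
       the default output stream. */
    return diff_run (4, argv, (const char *) 0,
                     (const struct diff_callbacks *) 0);
}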

View File

@ -1,218 +0,0 @@
/* Read, sort and compare two directories. Used for GNU DIFF.
Copyright (C) 1988, 1989, 1992, 1993, 1994 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
/* Read the directory named by DIR and store into DIRDATA a sorted vector
of filenames for its contents. DIR->desc == -1 means this directory is
known to be nonexistent, so set DIRDATA to an empty vector.
Return -1 (setting errno) if error, 0 otherwise. */
struct dirdata
{
char const **names; /* Sorted names of files in dir, 0-terminated. */
char *data; /* Allocated storage for file names. */
};
static int compare_names PARAMS((void const *, void const *));
static int dir_sort PARAMS((struct file_data const *, struct dirdata *));
#ifdef _WIN32
#define CLOSEDIR_VOID 1
#endif
static int
dir_sort (dir, dirdata)
struct file_data const *dir;
struct dirdata *dirdata;
{
register struct dirent *next;
register int i;
/* Address of block containing the files that are described. */
char const **names;
/* Number of files in directory. */
size_t nnames;
/* Allocated and used storage for file name data. */
char *data;
size_t data_alloc, data_used;
dirdata->names = 0;
dirdata->data = 0;
nnames = 0;
data = 0;
if (dir->desc != -1)
{
/* Open the directory and check for errors. */
register DIR *reading = CVS_OPENDIR (dir->name);
if (!reading)
return -1;
/* Initialize the table of filenames. */
data_alloc = max (1, (size_t) dir->stat.st_size);
data_used = 0;
dirdata->data = data = xmalloc (data_alloc);
/* Read the directory entries, and insert the subfiles
into the `data' table. */
while ((errno = 0, (next = CVS_READDIR (reading)) != 0))
{
char *d_name = next->d_name;
size_t d_size = NAMLEN (next) + 1;
/* Ignore the files `.' and `..' */
if (d_name[0] == '.'
&& (d_name[1] == 0 || (d_name[1] == '.' && d_name[2] == 0)))
continue;
if (excluded_filename (d_name))
continue;
while (data_alloc < data_used + d_size)
dirdata->data = data = xrealloc (data, data_alloc *= 2);
memcpy (data + data_used, d_name, d_size);
data_used += d_size;
nnames++;
}
if (errno)
{
int e = errno;
CVS_CLOSEDIR (reading);
errno = e;
return -1;
}
#if CLOSEDIR_VOID
CVS_CLOSEDIR (reading);
#else
if (CVS_CLOSEDIR (reading) != 0)
return -1;
#endif
}
/* Create the `names' table from the `data' table. */
dirdata->names = names = (char const **) xmalloc (sizeof (char *)
* (nnames + 1));
for (i = 0; i < nnames; i++)
{
names[i] = data;
data += strlen (data) + 1;
}
names[nnames] = 0;
/* Sort the table. */
qsort (names, nnames, sizeof (char *), compare_names);
return 0;
}
/* Sort the files now in the table. */
static int
compare_names (file1, file2)
void const *file1, *file2;
{
return filename_cmp (* (char const *const *) file1,
* (char const *const *) file2);
}
/* Compare the contents of two directories named in FILEVEC[0] and FILEVEC[1].
This is a top-level routine; it does everything necessary for diff
on two directories.
FILEVEC[0].desc == -1 says directory FILEVEC[0] doesn't exist,
but pretend it is empty. Likewise for FILEVEC[1].
HANDLE_FILE is a caller-provided subroutine called to handle each file.
It gets five operands: dir and name (rel to original working dir) of file
in dir 0, dir and name pathname of file in dir 1, and the recursion depth.
For a file that appears in only one of the dirs, one of the name-args
to HANDLE_FILE is zero.
DEPTH is the current depth in recursion, used for skipping top-level
files by the -S option.
Returns the maximum of all the values returned by HANDLE_FILE,
or 2 if trouble is encountered in opening files. */
int
diff_dirs (filevec, handle_file, depth)
struct file_data const filevec[];
int (*handle_file) PARAMS((char const *, char const *, char const *, char const *, int));
int depth;
{
struct dirdata dirdata[2];
int val = 0; /* Return value. */
int i;
/* Get sorted contents of both dirs. */
for (i = 0; i < 2; i++)
if (dir_sort (&filevec[i], &dirdata[i]) != 0)
{
perror_with_name (filevec[i].name);
val = 2;
}
if (val == 0)
{
register char const * const *names0 = dirdata[0].names;
register char const * const *names1 = dirdata[1].names;
char const *name0 = filevec[0].name;
char const *name1 = filevec[1].name;
/* If `-S name' was given, and this is the topmost level of comparison,
ignore all file names less than the specified starting name. */
if (dir_start_file && depth == 0)
{
while (*names0 && filename_cmp (*names0, dir_start_file) < 0)
names0++;
while (*names1 && filename_cmp (*names1, dir_start_file) < 0)
names1++;
}
/* Loop while files remain in one or both dirs. */
while (*names0 || *names1)
{
/* Compare next name in dir 0 with next name in dir 1.
At the end of a dir,
pretend the "next name" in that dir is very large. */
int nameorder = (!*names0 ? 1 : !*names1 ? -1
: filename_cmp (*names0, *names1));
int v1 = (*handle_file) (name0, 0 < nameorder ? 0 : *names0++,
name1, nameorder < 0 ? 0 : *names1++,
depth + 1);
if (v1 > val)
val = v1;
}
}
for (i = 0; i < 2; i++)
{
if (dirdata[i].names)
free (dirdata[i].names);
if (dirdata[i].data)
free (dirdata[i].data);
}
return val;
}

View File

@ -1,198 +0,0 @@
/* Output routines for ed-script format.
Copyright (C) 1988, 89, 91, 92, 93, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
static void print_ed_hunk PARAMS((struct change *));
static void print_rcs_hunk PARAMS((struct change *));
static void pr_forward_ed_hunk PARAMS((struct change *));
/* Print our script as ed commands. */
void
print_ed_script (script)
struct change *script;
{
print_script (script, find_reverse_change, print_ed_hunk);
}
/* Print a hunk of an ed diff */
static void
print_ed_hunk (hunk)
struct change *hunk;
{
int f0, l0, f1, l1;
int deletes, inserts;
#if 0
hunk = flip_script (hunk);
#endif
#ifdef DEBUG
debug_script (hunk);
#endif
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &f0, &l0, &f1, &l1, &deletes, &inserts);
if (!deletes && !inserts)
return;
begin_output ();
/* Print out the line number header for this hunk */
print_number_range (',', &files[0], f0, l0);
printf_output ("%c\n", change_letter (inserts, deletes));
/* Print new/changed lines from second file, if needed */
if (inserts)
{
int i;
int inserting = 1;
for (i = f1; i <= l1; i++)
{
/* Resume the insert, if we stopped. */
if (! inserting)
printf_output ("%da\n",
i - f1 + translate_line_number (&files[0], f0) - 1);
inserting = 1;
/* If the file's line is just a dot, it would confuse `ed'.
So output it with a double dot, and set the flag LEADING_DOT
so that we will output another ed-command later
to change the double dot into a single dot. */
if (files[1].linbuf[i][0] == '.'
&& files[1].linbuf[i][1] == '\n')
{
printf_output ("..\n");
printf_output (".\n");
/* Now change that double dot to the desired single dot. */
printf_output ("%ds/^\\.\\././\n",
i - f1 + translate_line_number (&files[0], f0));
inserting = 0;
}
else
/* Line is not `.', so output it unmodified. */
print_1_line ("", &files[1].linbuf[i]);
}
/* End insert mode, if we are still in it. */
if (inserting)
printf_output (".\n");
}
}
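/* Worked example of the dot-quoting above: replacing line 3 of the old
   file with a line consisting only of "." produces the ed commands

       3c
       ..
       .
       3s/^\.\././

   i.e. a doubled dot inside the insert, followed by a substitute command
   that turns it back into a single dot. */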
/* Print change script in the style of ed commands,
but print the changes in the order they appear in the input files,
which means that the commands are not truly useful with ed. */
void
pr_forward_ed_script (script)
struct change *script;
{
print_script (script, find_change, pr_forward_ed_hunk);
}
static void
pr_forward_ed_hunk (hunk)
struct change *hunk;
{
int i;
int f0, l0, f1, l1;
int deletes, inserts;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &f0, &l0, &f1, &l1, &deletes, &inserts);
if (!deletes && !inserts)
return;
begin_output ();
printf_output ("%c", change_letter (inserts, deletes));
print_number_range (' ', files, f0, l0);
printf_output ("\n");
/* If deletion only, print just the number range. */
if (!inserts)
return;
/* For insertion (with or without deletion), print the number range
and the lines from file 2. */
for (i = f1; i <= l1; i++)
print_1_line ("", &files[1].linbuf[i]);
printf_output (".\n");
}
/* Print in a format somewhat like ed commands
except that each insert command states the number of lines it inserts.
This format is used for RCS. */
void
print_rcs_script (script)
struct change *script;
{
print_script (script, find_change, print_rcs_hunk);
}
/* Print a hunk of an RCS diff */
static void
print_rcs_hunk (hunk)
struct change *hunk;
{
int i;
int f0, l0, f1, l1;
int deletes, inserts;
int tf0, tl0, tf1, tl1;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &f0, &l0, &f1, &l1, &deletes, &inserts);
if (!deletes && !inserts)
return;
begin_output ();
translate_range (&files[0], f0, l0, &tf0, &tl0);
if (deletes)
{
printf_output ("d");
/* For deletion, print just the starting line number from file 0
and the number of lines deleted. */
printf_output ("%d %d\n",
tf0,
(tl0 >= tf0 ? tl0 - tf0 + 1 : 1));
}
if (inserts)
{
printf_output ("a");
/* Take last-line-number from file 0 and # lines from file 1. */
translate_range (&files[1], f1, l1, &tf1, &tl1);
printf_output ("%d %d\n",
tl0,
(tl1 >= tf1 ? tl1 - tf1 + 1 : 1));
/* Print the inserted lines. */
for (i = f1; i <= l1; i++)
print_1_line ("", &files[1].linbuf[i]);
}
}
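/* Example of the RCS-style output above: replacing one line at old line 3
   with one new line is emitted as

       d3 1
       a3 1
       new text of the line

   i.e. "delete 1 line starting at 3", then "append 1 line after 3",
   followed by the inserted text. */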

View File

@ -1,436 +0,0 @@
/* #ifdef-format output routines for GNU DIFF.
Copyright (C) 1989, 1991, 1992, 1993, 1994, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY. No author or distributor
accepts responsibility to anyone for the consequences of using it
or for whether it serves any particular purpose or works at all,
unless he says so in writing. Refer to the GNU DIFF General Public
License for full details.
Everyone is granted permission to copy, modify and redistribute
GNU DIFF, but only under the conditions described in the
GNU DIFF General Public License. A copy of this license is
supposed to have been given to you along with GNU DIFF so you
can know your rights and responsibilities. It should be in a
file named COPYING. Among other things, the copyright notice
and this notice must be preserved on all copies. */
#include "diff.h"
struct group
{
struct file_data const *file;
int from, upto; /* start and limit lines for this group of lines */
};
static char *format_group PARAMS((int, char *, int, struct group const *));
static char *scan_char_literal PARAMS((char *, int *));
static char *scan_printf_spec PARAMS((char *));
static int groups_letter_value PARAMS((struct group const *, int));
static void format_ifdef PARAMS((char *, int, int, int, int));
static void print_ifdef_hunk PARAMS((struct change *));
static void print_ifdef_lines PARAMS((int, char *, struct group const *));
static int next_line;
/* Print the edit-script SCRIPT as a merged #ifdef file. */
void
print_ifdef_script (script)
struct change *script;
{
next_line = - files[0].prefix_lines;
print_script (script, find_change, print_ifdef_hunk);
if (next_line < files[0].valid_lines)
{
begin_output ();
format_ifdef (group_format[UNCHANGED], next_line, files[0].valid_lines,
next_line - files[0].valid_lines + files[1].valid_lines,
files[1].valid_lines);
}
}
/* Print a hunk of an ifdef diff.
This is a contiguous portion of a complete edit script,
describing changes in consecutive lines. */
static void
print_ifdef_hunk (hunk)
struct change *hunk;
{
int first0, last0, first1, last1, deletes, inserts;
char *format;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &first0, &last0, &first1, &last1, &deletes, &inserts);
if (inserts)
format = deletes ? group_format[CHANGED] : group_format[NEW];
else if (deletes)
format = group_format[OLD];
else
return;
begin_output ();
/* Print lines up to this change. */
if (next_line < first0)
format_ifdef (group_format[UNCHANGED], next_line, first0,
next_line - first0 + first1, first1);
/* Print this change. */
next_line = last0 + 1;
format_ifdef (format, first0, next_line, first1, last1 + 1);
}
/* Print a set of lines according to FORMAT.
Lines BEG0 up to END0 are from the first file;
lines BEG1 up to END1 are from the second file. */
static void
format_ifdef (format, beg0, end0, beg1, end1)
char *format;
int beg0, end0, beg1, end1;
{
struct group groups[2];
groups[0].file = &files[0];
groups[0].from = beg0;
groups[0].upto = end0;
groups[1].file = &files[1];
groups[1].from = beg1;
groups[1].upto = end1;
format_group (1, format, '\0', groups);
}
/* If DOIT is non-zero, output a set of lines according to FORMAT.
The format ends at the first free instance of ENDCHAR.
Yield the address of the terminating character.
GROUPS specifies which lines to print.
If DOIT is zero, do not actually print anything; just scan the format. */
static char *
format_group (doit, format, endchar, groups)
int doit;
char *format;
int endchar;
struct group const *groups;
{
register char c;
register char *f = format;
while ((c = *f) != endchar && c != 0)
{
f++;
if (c == '%')
{
char *spec = f;
switch ((c = *f++))
{
case '%':
break;
case '(':
/* Print if-then-else format e.g. `%(n=1?thenpart:elsepart)'. */
{
int i, value[2];
int thendoit, elsedoit;
for (i = 0; i < 2; i++)
{
unsigned char f0 = f[0];
if (ISDIGIT (f0))
{
value[i] = atoi (f);
while (ISDIGIT ((unsigned char) *++f))
continue;
}
else
{
value[i] = groups_letter_value (groups, f0);
if (value[i] < 0)
goto bad_format;
f++;
}
if (*f++ != "=?"[i])
goto bad_format;
}
if (value[0] == value[1])
thendoit = doit, elsedoit = 0;
else
thendoit = 0, elsedoit = doit;
f = format_group (thendoit, f, ':', groups);
if (*f)
{
f = format_group (elsedoit, f + 1, ')', groups);
if (*f)
f++;
}
}
continue;
case '<':
/* Print lines deleted from first file. */
print_ifdef_lines (doit, line_format[OLD], &groups[0]);
continue;
case '=':
/* Print common lines. */
print_ifdef_lines (doit, line_format[UNCHANGED], &groups[0]);
continue;
case '>':
/* Print lines inserted from second file. */
print_ifdef_lines (doit, line_format[NEW], &groups[1]);
continue;
default:
{
int value;
char *speclim;
f = scan_printf_spec (spec);
if (!f)
goto bad_format;
speclim = f;
c = *f++;
switch (c)
{
case '\'':
f = scan_char_literal (f, &value);
if (!f)
goto bad_format;
break;
default:
value = groups_letter_value (groups, c);
if (value < 0)
goto bad_format;
break;
}
if (doit)
{
/* Temporarily replace e.g. "%3dnx" with "%3d\0x". */
*speclim = 0;
printf_output (spec - 1, value);
/* Undo the temporary replacement. */
*speclim = c;
}
}
continue;
bad_format:
c = '%';
f = spec;
break;
}
}
if (doit)
{
/* Don't take the address of a register variable. */
char cc = c;
write_output (&cc, 1);
}
}
return f;
}
/* For the line group pair G, return the number corresponding to LETTER.
Return -1 if LETTER is not a group format letter. */
static int
groups_letter_value (g, letter)
struct group const *g;
int letter;
{
if (ISUPPER (letter))
{
g++;
letter = tolower (letter);
}
switch (letter)
{
case 'e': return translate_line_number (g->file, g->from) - 1;
case 'f': return translate_line_number (g->file, g->from);
case 'l': return translate_line_number (g->file, g->upto) - 1;
case 'm': return translate_line_number (g->file, g->upto);
case 'n': return g->upto - g->from;
default: return -1;
}
}
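/* Example: for a group whose lines translate to real line numbers 5
   through 7, the letters above give %f = 5, %l = 7, %n = 3, %e = 4 and
   %m = 8; the uppercase forms yield the same values for the second
   file's group. */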
/* Output using FORMAT to print the line group GROUP.
But do nothing if DOIT is zero. */
static void
print_ifdef_lines (doit, format, group)
int doit;
char *format;
struct group const *group;
{
struct file_data const *file = group->file;
char const * const *linbuf = file->linbuf;
int from = group->from, upto = group->upto;
if (!doit)
return;
/* If possible, use a single fwrite; it's faster. */
if (!tab_expand_flag && format[0] == '%')
{
if (format[1] == 'l' && format[2] == '\n' && !format[3])
{
write_output (linbuf[from],
(linbuf[upto] + (linbuf[upto][-1] != '\n')
- linbuf[from]));
return;
}
if (format[1] == 'L' && !format[2])
{
write_output (linbuf[from],
linbuf[upto] - linbuf[from]);
return;
}
}
for (; from < upto; from++)
{
register char c;
register char *f = format;
char cc;
while ((c = *f++) != 0)
{
if (c == '%')
{
char *spec = f;
switch ((c = *f++))
{
case '%':
break;
case 'l':
output_1_line (linbuf[from],
linbuf[from + 1]
- (linbuf[from + 1][-1] == '\n'), 0, 0);
continue;
case 'L':
output_1_line (linbuf[from], linbuf[from + 1], 0, 0);
continue;
default:
{
int value;
char *speclim;
f = scan_printf_spec (spec);
if (!f)
goto bad_format;
speclim = f;
c = *f++;
switch (c)
{
case '\'':
f = scan_char_literal (f, &value);
if (!f)
goto bad_format;
break;
case 'n':
value = translate_line_number (file, from);
break;
default:
goto bad_format;
}
/* Temporarily replace e.g. "%3dnx" with "%3d\0x". */
*speclim = 0;
printf_output (spec - 1, value);
/* Undo the temporary replacement. */
*speclim = c;
}
continue;
bad_format:
c = '%';
f = spec;
break;
}
}
/* Don't take the address of a register variable. */
cc = c;
write_output (&cc, 1);
}
}
}
/* Scan the character literal represented in the string LIT; LIT points just
after the initial apostrophe. Put the literal's value into *INTPTR.
Yield the address of the first character after the closing apostrophe,
or zero if the literal is ill-formed. */
static char *
scan_char_literal (lit, intptr)
char *lit;
int *intptr;
{
register char *p = lit;
int value, digits;
char c = *p++;
switch (c)
{
case 0:
case '\'':
return 0;
case '\\':
value = 0;
while ((c = *p++) != '\'')
{
unsigned digit = c - '0';
if (8 <= digit)
return 0;
value = 8 * value + digit;
}
digits = p - lit - 2;
if (! (1 <= digits && digits <= 3))
return 0;
break;
default:
value = c;
if (*p++ != '\'')
return 0;
break;
}
*intptr = value;
return p;
}
/* Scan optional printf-style SPEC of the form `-*[0-9]*(.[0-9]*)?[cdoxX]'.
Return the address of the character following SPEC, or zero if failure. */
static char *
scan_printf_spec (spec)
register char *spec;
{
register unsigned char c;
while ((c = *spec++) == '-')
continue;
while (ISDIGIT (c))
c = *spec++;
if (c == '.')
while (ISDIGIT (c = *spec++))
continue;
switch (c)
{
case 'c': case 'd': case 'o': case 'x': case 'X':
return spec;
default:
return 0;
}
}

View File

@ -1,711 +0,0 @@
/* File I/O for GNU DIFF.
Copyright (C) 1988, 1989, 1992, 1993, 1994 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
/* Rotate a value n bits to the left. */
#define UINT_BIT (sizeof (unsigned) * CHAR_BIT)
#define ROL(v, n) ((v) << (n) | (v) >> (UINT_BIT - (n)))
/* Given a hash value and a new character, return a new hash value. */
#define HASH(h, c) ((c) + ROL (h, 7))
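/* Example: hashing the characters 'a' then 'b' gives
   HASH (HASH (0, 'a'), 'b') == 'b' + ROL ('a', 7), i.e. each new
   character is added to the previous hash rotated left by 7 bits. */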
/* Guess remaining number of lines from number N of lines so far,
size S so far, and total size T. */
#define GUESS_LINES(n,s,t) (((t) - (s)) / ((n) < 10 ? 32 : (s) / ((n)-1)) + 5)
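/* Example: with n = 100 lines read so far occupying s = 5000 bytes of a
   t = 20000 byte file, the average line is 5000 / 99 = 50 bytes, so
   GUESS_LINES estimates (20000 - 5000) / 50 + 5 = 305 remaining lines. */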
/* Type used for fast prefix comparison in find_identical_ends. */
#ifndef word
#define word int
#endif
/* Lines are put into equivalence classes (of lines that match in line_cmp).
Each equivalence class is represented by one of these structures,
but only while the classes are being computed.
Afterward, each class is represented by a number. */
struct equivclass
{
int next; /* Next item in this bucket. */
unsigned hash; /* Hash of lines in this class. */
char const *line; /* A line that fits this class. */
size_t length; /* That line's length, not counting its newline. */
};
/* Hash-table: array of buckets, each being a chain of equivalence classes.
buckets[-1] is reserved for incomplete lines. */
static int *buckets;
/* Number of buckets in the hash table array, not counting buckets[-1]. */
static int nbuckets;
/* Array in which the equivalence classes are allocated.
The bucket-chains go through the elements in this array.
The number of an equivalence class is its index in this array. */
static struct equivclass *equivs;
/* Index of first free element in the array `equivs'. */
static int equivs_index;
/* Number of elements allocated in the array `equivs'. */
static int equivs_alloc;
static void find_and_hash_each_line PARAMS((struct file_data *));
static void find_identical_ends PARAMS((struct file_data[]));
static void prepare_text_end PARAMS((struct file_data *));
/* Check for binary files and compare them for exact identity. */
/* Return 1 if BUF contains a non text character.
SIZE is the number of characters in BUF. */
#define binary_file_p(buf, size) (memchr (buf, '\0', size) != 0)
/* Get ready to read the current file.
Return nonzero if SKIP_TEST is zero,
and if it appears to be a binary file. */
int
sip (current, skip_test)
struct file_data *current;
int skip_test;
{
/* If we have a nonexistent file at this stage, treat it as empty. */
if (current->desc < 0)
{
/* Leave room for a sentinel. */
current->bufsize = sizeof (word);
current->buffer = xmalloc (current->bufsize);
}
else
{
current->bufsize = STAT_BLOCKSIZE (current->stat);
current->buffer = xmalloc (current->bufsize);
if (! skip_test)
{
/* Check first part of file to see if it's a binary file. */
#if HAVE_SETMODE
int oldmode = setmode (current->desc, O_BINARY);
#endif
ssize_t n = read (current->desc, current->buffer, current->bufsize);
if (n == -1)
pfatal_with_name (current->name);
current->buffered_chars = n;
#if HAVE_SETMODE
if (oldmode != O_BINARY)
{
if (lseek (current->desc, - (off_t) n, SEEK_CUR) == -1)
pfatal_with_name (current->name);
setmode (current->desc, oldmode);
current->buffered_chars = 0;
}
#endif
return binary_file_p (current->buffer, n);
}
}
current->buffered_chars = 0;
return 0;
}
/* Slurp the rest of the current file completely into memory. */
void
slurp (current)
struct file_data *current;
{
ssize_t cc;
if (current->desc < 0)
/* The file is nonexistent. */
;
else if (S_ISREG (current->stat.st_mode))
{
/* It's a regular file; slurp in the rest all at once. */
/* Get the size out of the stat block.
Allocate enough room for appended newline and sentinel. */
cc = current->stat.st_size + 1 + sizeof (word);
if (current->bufsize < cc)
{
current->bufsize = cc;
current->buffer = xrealloc (current->buffer, cc);
}
if (current->buffered_chars < current->stat.st_size)
{
cc = read (current->desc,
current->buffer + current->buffered_chars,
current->stat.st_size - current->buffered_chars);
if (cc == -1)
pfatal_with_name (current->name);
current->buffered_chars += cc;
}
}
/* It's not a regular file; read it, growing the buffer as needed. */
else if (always_text_flag || current->buffered_chars != 0)
{
for (;;)
{
if (current->buffered_chars == current->bufsize)
{
current->bufsize = current->bufsize * 2;
current->buffer = xrealloc (current->buffer, current->bufsize);
}
cc = read (current->desc,
current->buffer + current->buffered_chars,
current->bufsize - current->buffered_chars);
if (cc == 0)
break;
if (cc == -1)
pfatal_with_name (current->name);
current->buffered_chars += cc;
}
/* Allocate just enough room for appended newline and sentinel. */
current->bufsize = current->buffered_chars + 1 + sizeof (word);
current->buffer = xrealloc (current->buffer, current->bufsize);
}
}
/* Split the file into lines, simultaneously computing the equivalence class for
each line. */
static void
find_and_hash_each_line (current)
struct file_data *current;
{
unsigned h;
unsigned char const *p = (unsigned char const *) current->prefix_end;
unsigned char c;
int i, *bucket;
size_t length;
/* Cache often-used quantities in local variables to help the compiler. */
char const **linbuf = current->linbuf;
int alloc_lines = current->alloc_lines;
int line = 0;
int linbuf_base = current->linbuf_base;
int *cureqs = (int *) xmalloc (alloc_lines * sizeof (int));
struct equivclass *eqs = equivs;
int eqs_index = equivs_index;
int eqs_alloc = equivs_alloc;
char const *suffix_begin = current->suffix_begin;
char const *bufend = current->buffer + current->buffered_chars;
int use_line_cmp = ignore_some_line_changes;
while ((char const *) p < suffix_begin)
{
char const *ip = (char const *) p;
/* Compute the equivalence class for this line. */
h = 0;
/* Hash this line until we find a newline. */
if (ignore_case_flag)
{
if (ignore_all_space_flag)
while ((c = *p++) != '\n')
{
if (! ISSPACE (c))
h = HASH (h, ISUPPER (c) ? tolower (c) : c);
}
else if (ignore_space_change_flag)
while ((c = *p++) != '\n')
{
if (ISSPACE (c))
{
for (;;)
{
c = *p++;
if (!ISSPACE (c))
break;
if (c == '\n')
goto hashing_done;
}
h = HASH (h, ' ');
}
/* C is now the first non-space. */
h = HASH (h, ISUPPER (c) ? tolower (c) : c);
}
else
while ((c = *p++) != '\n')
h = HASH (h, ISUPPER (c) ? tolower (c) : c);
}
else
{
if (ignore_all_space_flag)
while ((c = *p++) != '\n')
{
if (! ISSPACE (c))
h = HASH (h, c);
}
else if (ignore_space_change_flag)
while ((c = *p++) != '\n')
{
if (ISSPACE (c))
{
for (;;)
{
c = *p++;
if (!ISSPACE (c))
break;
if (c == '\n')
goto hashing_done;
}
h = HASH (h, ' ');
}
/* C is now the first non-space. */
h = HASH (h, c);
}
else
while ((c = *p++) != '\n')
h = HASH (h, c);
}
hashing_done:;
bucket = &buckets[h % nbuckets];
length = (char const *) p - ip - 1;
if ((char const *) p == bufend
&& current->missing_newline
&& ROBUST_OUTPUT_STYLE (output_style))
{
/* This line is incomplete. If this is significant,
put the line into bucket[-1]. */
if (! (ignore_space_change_flag | ignore_all_space_flag))
bucket = &buckets[-1];
/* Omit the inserted newline when computing linbuf later. */
p--;
bufend = suffix_begin = (char const *) p;
}
for (i = *bucket; ; i = eqs[i].next)
if (!i)
{
/* Create a new equivalence class in this bucket. */
i = eqs_index++;
if (i == eqs_alloc)
eqs = (struct equivclass *)
xrealloc (eqs, (eqs_alloc*=2) * sizeof(*eqs));
eqs[i].next = *bucket;
eqs[i].hash = h;
eqs[i].line = ip;
eqs[i].length = length;
*bucket = i;
break;
}
else if (eqs[i].hash == h)
{
char const *eqline = eqs[i].line;
/* Reuse existing equivalence class if the lines are identical.
This detects the common case of exact identity
faster than complete comparison would. */
if (eqs[i].length == length && memcmp (eqline, ip, length) == 0)
break;
/* Reuse existing class if line_cmp reports the lines equal. */
if (use_line_cmp && line_cmp (eqline, ip) == 0)
break;
}
/* Maybe increase the size of the line table. */
if (line == alloc_lines)
{
/* Double (alloc_lines - linbuf_base) by adding to alloc_lines. */
alloc_lines = 2 * alloc_lines - linbuf_base;
cureqs = (int *) xrealloc (cureqs, alloc_lines * sizeof (*cureqs));
linbuf = (char const **) xrealloc (linbuf + linbuf_base,
(alloc_lines - linbuf_base)
* sizeof (*linbuf))
- linbuf_base;
}
linbuf[line] = ip;
cureqs[line] = i;
++line;
}
current->buffered_lines = line;
for (i = 0; ; i++)
{
/* Record the line start for lines in the suffix that we care about.
Record one more line start than lines,
so that we can compute the length of any buffered line. */
if (line == alloc_lines)
{
/* Double (alloc_lines - linbuf_base) by adding to alloc_lines. */
alloc_lines = 2 * alloc_lines - linbuf_base;
linbuf = (char const **) xrealloc (linbuf + linbuf_base,
(alloc_lines - linbuf_base)
* sizeof (*linbuf))
- linbuf_base;
}
linbuf[line] = (char const *) p;
if ((char const *) p == bufend)
break;
if (context <= i && no_diff_means_no_output)
break;
line++;
while (*p++ != '\n')
;
}
/* Done with cache in local variables. */
current->linbuf = linbuf;
current->valid_lines = line;
current->alloc_lines = alloc_lines;
current->equivs = cureqs;
equivs = eqs;
equivs_alloc = eqs_alloc;
equivs_index = eqs_index;
}
/* Prepare the end of the text. Make sure it's initialized.
Make sure text ends in a newline,
but remember that we had to add one. */
static void
prepare_text_end (current)
struct file_data *current;
{
size_t buffered_chars = current->buffered_chars;
char *p = current->buffer;
if (buffered_chars == 0 || p[buffered_chars - 1] == '\n')
current->missing_newline = 0;
else
{
p[buffered_chars++] = '\n';
current->buffered_chars = buffered_chars;
current->missing_newline = 1;
}
/* Don't use uninitialized storage when planting or using sentinels. */
if (p)
bzero (p + buffered_chars, sizeof (word));
}
/* Given a vector of two file_data objects, find the identical
prefixes and suffixes of each object. */
static void
find_identical_ends (filevec)
struct file_data filevec[];
{
word *w0, *w1;
char *p0, *p1, *buffer0, *buffer1;
char const *end0, *beg0;
char const **linbuf0, **linbuf1;
int i, lines;
size_t n0, n1, tem;
int alloc_lines0, alloc_lines1;
int buffered_prefix, prefix_count, prefix_mask;
slurp (&filevec[0]);
if (filevec[0].desc != filevec[1].desc)
slurp (&filevec[1]);
else
{
filevec[1].buffer = filevec[0].buffer;
filevec[1].bufsize = filevec[0].bufsize;
filevec[1].buffered_chars = filevec[0].buffered_chars;
}
for (i = 0; i < 2; i++)
prepare_text_end (&filevec[i]);
/* Find identical prefix. */
p0 = buffer0 = filevec[0].buffer;
p1 = buffer1 = filevec[1].buffer;
n0 = filevec[0].buffered_chars;
n1 = filevec[1].buffered_chars;
if (p0 == p1)
/* The buffers are the same; sentinels won't work. */
p0 = p1 += n1;
else
{
/* Insert end sentinels, in this case characters that are guaranteed
to make the equality test false, and thus terminate the loop. */
if (n0 < n1)
p0[n0] = ~p1[n0];
else
p1[n1] = ~p0[n1];
/* Loop until first mismatch, or to the sentinel characters. */
/* Compare a word at a time for speed. */
w0 = (word *) p0;
w1 = (word *) p1;
while (*w0++ == *w1++)
;
--w0, --w1;
/* Do the last few bytes of comparison a byte at a time. */
p0 = (char *) w0;
p1 = (char *) w1;
while (*p0++ == *p1++)
;
--p0, --p1;
/* Don't mistakenly count missing newline as part of prefix. */
if (ROBUST_OUTPUT_STYLE (output_style)
&& (buffer0 + n0 - filevec[0].missing_newline < p0)
!=
(buffer1 + n1 - filevec[1].missing_newline < p1))
--p0, --p1;
}
/* Now P0 and P1 point at the first nonmatching characters. */
/* Skip back to last line-beginning in the prefix,
and then discard up to HORIZON_LINES lines from the prefix. */
i = horizon_lines;
while (p0 != buffer0 && (p0[-1] != '\n' || i--))
--p0, --p1;
/* Record the prefix. */
filevec[0].prefix_end = p0;
filevec[1].prefix_end = p1;
/* Find identical suffix. */
/* P0 and P1 point beyond the last chars not yet compared. */
p0 = buffer0 + n0;
p1 = buffer1 + n1;
if (! ROBUST_OUTPUT_STYLE (output_style)
|| filevec[0].missing_newline == filevec[1].missing_newline)
{
end0 = p0; /* Addr of last char in file 0. */
/* Get value of P0 at which we should stop scanning backward:
this is when either P0 or P1 points just past the last char
of the identical prefix. */
beg0 = filevec[0].prefix_end + (n0 < n1 ? 0 : n0 - n1);
/* Scan back until chars don't match or we reach that point. */
for (; p0 != beg0; p0--, p1--)
if (*p0 != *p1)
{
/* Point at the first char of the matching suffix. */
beg0 = p0;
break;
}
/* Are we at a line-beginning in both files? If not, add the rest of
this line to the main body. Discard up to HORIZON_LINES lines from
the identical suffix. Also, discard one extra line,
because shift_boundaries may need it. */
i = horizon_lines + !((buffer0 == p0 || p0[-1] == '\n')
&&
(buffer1 == p1 || p1[-1] == '\n'));
while (i-- && p0 != end0)
while (*p0++ != '\n')
;
p1 += p0 - beg0;
}
/* Record the suffix. */
filevec[0].suffix_begin = p0;
filevec[1].suffix_begin = p1;
/* Calculate number of lines of prefix to save.
prefix_count == 0 means save the whole prefix;
we need this for options like -D that output the whole file.
We also need it for options like -F that output some preceding line;
at least we will need to find the last few lines,
but since we don't know how many, it's easiest to find them all.
Otherwise, prefix_count != 0. Save just prefix_count lines at start
of the line buffer; they'll be moved to the proper location later.
Handle 1 more line than the context says (because we count 1 too many),
rounded up to the next power of 2 to speed index computation. */
if (no_diff_means_no_output && ! function_regexp_list)
{
for (prefix_count = 1; prefix_count < context + 1; prefix_count *= 2)
;
prefix_mask = prefix_count - 1;
alloc_lines0
= prefix_count
+ GUESS_LINES (0, 0, p0 - filevec[0].prefix_end)
+ context;
}
else
{
prefix_count = 0;
prefix_mask = ~0;
alloc_lines0 = GUESS_LINES (0, 0, n0);
}
lines = 0;
linbuf0 = (char const **) xmalloc (alloc_lines0 * sizeof (*linbuf0));
/* If the prefix is needed, find the prefix lines. */
if (! (no_diff_means_no_output
&& filevec[0].prefix_end == p0
&& filevec[1].prefix_end == p1))
{
p0 = buffer0;
end0 = filevec[0].prefix_end;
while (p0 != end0)
{
int l = lines++ & prefix_mask;
if (l == alloc_lines0)
linbuf0 = (char const **) xrealloc (linbuf0, (alloc_lines0 *= 2)
* sizeof(*linbuf0));
linbuf0[l] = p0;
while (*p0++ != '\n')
;
}
}
buffered_prefix = prefix_count && context < lines ? context : lines;
/* Allocate line buffer 1. */
tem = prefix_count ? filevec[1].suffix_begin - buffer1 : n1;
alloc_lines1
= (buffered_prefix
+ GUESS_LINES (lines, filevec[1].prefix_end - buffer1, tem)
+ context);
linbuf1 = (char const **) xmalloc (alloc_lines1 * sizeof (*linbuf1));
if (buffered_prefix != lines)
{
/* Rotate prefix lines to proper location. */
for (i = 0; i < buffered_prefix; i++)
linbuf1[i] = linbuf0[(lines - context + i) & prefix_mask];
for (i = 0; i < buffered_prefix; i++)
linbuf0[i] = linbuf1[i];
}
/* Initialize line buffer 1 from line buffer 0. */
for (i = 0; i < buffered_prefix; i++)
linbuf1[i] = linbuf0[i] - buffer0 + buffer1;
/* Record the line buffer, adjusted so that
linbuf*[0] points at the first differing line. */
filevec[0].linbuf = linbuf0 + buffered_prefix;
filevec[1].linbuf = linbuf1 + buffered_prefix;
filevec[0].linbuf_base = filevec[1].linbuf_base = - buffered_prefix;
filevec[0].alloc_lines = alloc_lines0 - buffered_prefix;
filevec[1].alloc_lines = alloc_lines1 - buffered_prefix;
filevec[0].prefix_lines = filevec[1].prefix_lines = lines;
}
/* Largest primes less than some power of two, for nbuckets. Values range
from useful to preposterous. If one of these numbers isn't prime
after all, don't blame it on me, blame it on primes (6) . . . */
static int const primes[] =
{
509,
1021,
2039,
4093,
8191,
16381,
32749,
#if 32767 < INT_MAX
65521,
131071,
262139,
524287,
1048573,
2097143,
4194301,
8388593,
16777213,
33554393,
67108859, /* Preposterously large . . . */
134217689,
268435399,
536870909,
1073741789,
2147483647,
#endif
0
};
/* Given a vector of two file_data objects, read the file associated
with each one, and build the table of equivalence classes.
Return 1 if either file appears to be a binary file.
If PRETEND_BINARY is nonzero, pretend they are binary regardless. */
int
read_files (filevec, pretend_binary)
struct file_data filevec[];
int pretend_binary;
{
int i;
int skip_test = always_text_flag | pretend_binary;
int appears_binary = pretend_binary | sip (&filevec[0], skip_test);
if (filevec[0].desc != filevec[1].desc)
appears_binary |= sip (&filevec[1], skip_test | appears_binary);
else
{
filevec[1].buffer = filevec[0].buffer;
filevec[1].bufsize = filevec[0].bufsize;
filevec[1].buffered_chars = filevec[0].buffered_chars;
}
if (appears_binary)
{
#if HAVE_SETMODE
setmode (filevec[0].desc, O_BINARY);
setmode (filevec[1].desc, O_BINARY);
#endif
return 1;
}
find_identical_ends (filevec);
equivs_alloc = filevec[0].alloc_lines + filevec[1].alloc_lines + 1;
equivs = (struct equivclass *) xmalloc (equivs_alloc * sizeof (struct equivclass));
/* Equivalence class 0 is permanently safe for lines that were not
hashed. Real equivalence classes start at 1. */
equivs_index = 1;
for (i = 0; primes[i] < equivs_alloc / 3; i++)
if (! primes[i])
abort ();
nbuckets = primes[i];
buckets = (int *) xmalloc ((nbuckets + 1) * sizeof (*buckets));
bzero (buckets++, (nbuckets + 1) * sizeof (*buckets));
for (i = 0; i < 2; i++)
find_and_hash_each_line (&filevec[i]);
filevec[0].equiv_max = filevec[1].equiv_max = equivs_index;
free (equivs);
free (buckets - 1);
return 0;
}

View File

@ -1,69 +0,0 @@
/* Normal-format output routines for GNU DIFF.
Copyright (C) 1988, 1989, 1993, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
static void print_normal_hunk PARAMS((struct change *));
/* Print the edit-script SCRIPT as a normal diff.
INF points to an array of descriptions of the two files. */
void
print_normal_script (script)
struct change *script;
{
print_script (script, find_change, print_normal_hunk);
}
/* Print a hunk of a normal diff.
This is a contiguous portion of a complete edit script,
describing changes in consecutive lines. */
static void
print_normal_hunk (hunk)
struct change *hunk;
{
int first0, last0, first1, last1, deletes, inserts;
register int i;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &first0, &last0, &first1, &last1, &deletes, &inserts);
if (!deletes && !inserts)
return;
begin_output ();
/* Print out the line number header for this hunk */
print_number_range (',', &files[0], first0, last0);
printf_output ("%c", change_letter (inserts, deletes));
print_number_range (',', &files[1], first1, last1);
printf_output ("\n");
/* Print the lines that the first file has. */
if (deletes)
for (i = first0; i <= last0; i++)
print_1_line ("<", &files[0].linbuf[i]);
if (inserts && deletes)
printf_output ("---\n");
/* Print the lines that the second file has. */
if (inserts)
for (i = first1; i <= last1; i++)
print_1_line (">", &files[1].linbuf[i]);
}


@ -1,294 +0,0 @@
/* sdiff-format output routines for GNU DIFF.
Copyright (C) 1991, 1992, 1993, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY. No author or distributor
accepts responsibility to anyone for the consequences of using it
or for whether it serves any particular purpose or works at all,
unless he says so in writing. Refer to the GNU DIFF General Public
License for full details.
Everyone is granted permission to copy, modify and redistribute
GNU DIFF, but only under the conditions described in the
GNU DIFF General Public License. A copy of this license is
supposed to have been given to you along with GNU DIFF so you
can know your rights and responsibilities. It should be in a
file named COPYING. Among other things, the copyright notice
and this notice must be preserved on all copies. */
#include "diff.h"
static unsigned print_half_line PARAMS((char const * const *, unsigned, unsigned));
static unsigned tab_from_to PARAMS((unsigned, unsigned));
static void print_1sdiff_line PARAMS((char const * const *, int, char const * const *));
static void print_sdiff_common_lines PARAMS((int, int));
static void print_sdiff_hunk PARAMS((struct change *));
/* Next line number to be printed in the two input files. */
static int next0, next1;
/* Print the edit-script SCRIPT as a sdiff style output. */
void
print_sdiff_script (script)
struct change *script;
{
begin_output ();
next0 = next1 = - files[0].prefix_lines;
print_script (script, find_change, print_sdiff_hunk);
print_sdiff_common_lines (files[0].valid_lines, files[1].valid_lines);
}
/* Tab from column FROM to column TO, where FROM <= TO. Yield TO. */
static unsigned
tab_from_to (from, to)
unsigned from, to;
{
unsigned tab;
if (! tab_expand_flag)
for (tab = from + TAB_WIDTH - from % TAB_WIDTH; tab <= to; tab += TAB_WIDTH)
{
write_output ("\t", 1);
from = tab;
}
while (from++ < to)
write_output (" ", 1);
return to;
}
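/* A quick standalone check of the tab arithmetic above (TAB_WIDTH is
   assumed to be 8, its usual value; diff.h is not shown here): a hard tab
   emitted at column FROM advances to the next multiple of TAB_WIDTH
   strictly greater than FROM. */
#include <stdio.h>
#define SKETCH_TAB_WIDTH 8
static unsigned next_tab_stop (unsigned from)
{
  return from + SKETCH_TAB_WIDTH - from % SKETCH_TAB_WIDTH;
}
int main (void)
{
  printf ("%u %u %u\n", next_tab_stop (0), next_tab_stop (7), next_tab_stop (8));
  /* prints 8 8 16 */
  return 0;
}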
/*
* Print the text for half an sdiff line. This means truncate to width
* observing tabs, and trim a trailing newline. Returns the last column
* written (not the number of chars).
*/
static unsigned
print_half_line (line, indent, out_bound)
char const * const *line;
unsigned indent, out_bound;
{
register unsigned in_position = 0, out_position = 0;
register char const
*text_pointer = line[0],
*text_limit = line[1];
while (text_pointer < text_limit)
{
register unsigned char c = *text_pointer++;
/* We use CC to avoid taking the address of the register
variable C. */
char cc;
switch (c)
{
case '\t':
{
unsigned spaces = TAB_WIDTH - in_position % TAB_WIDTH;
if (in_position == out_position)
{
unsigned tabstop = out_position + spaces;
if (tab_expand_flag)
{
if (out_bound < tabstop)
tabstop = out_bound;
for (; out_position < tabstop; out_position++)
write_output (" ", 1);
}
else
if (tabstop < out_bound)
{
out_position = tabstop;
cc = c;
write_output (&cc, 1);
}
}
in_position += spaces;
}
break;
case '\r':
{
cc = c;
write_output (&cc, 1);
tab_from_to (0, indent);
in_position = out_position = 0;
}
break;
case '\b':
if (in_position != 0 && --in_position < out_bound)
if (out_position <= in_position)
/* Add spaces to make up for suppressed tab past out_bound. */
for (; out_position < in_position; out_position++)
write_output (" ", 1);
else
{
out_position = in_position;
cc = c;
write_output (&cc, 1);
}
break;
case '\f':
case '\v':
control_char:
if (in_position < out_bound)
{
cc = c;
write_output (&cc, 1);
}
break;
default:
if (! ISPRINT (c))
goto control_char;
/* falls through */
case ' ':
if (in_position++ < out_bound)
{
out_position = in_position;
cc = c;
write_output (&cc, 1);
}
break;
case '\n':
return out_position;
}
}
return out_position;
}
/*
* Print side by side lines with a separator in the middle.
* 0 parameters are taken to indicate white space text.
* Blank lines that can easily be caught are reduced to a single newline.
*/
static void
print_1sdiff_line (left, sep, right)
char const * const *left;
int sep;
char const * const *right;
{
unsigned hw = sdiff_half_width, c2o = sdiff_column2_offset;
unsigned col = 0;
int put_newline = 0;
if (left)
{
if (left[1][-1] == '\n')
put_newline = 1;
col = print_half_line (left, 0, hw);
}
if (sep != ' ')
{
char cc;
col = tab_from_to (col, (hw + c2o - 1) / 2) + 1;
if (sep == '|' && put_newline != (right[1][-1] == '\n'))
sep = put_newline ? '/' : '\\';
cc = sep;
write_output (&cc, 1);
}
if (right)
{
if (right[1][-1] == '\n')
put_newline = 1;
if (**right != '\n')
{
col = tab_from_to (col, c2o);
print_half_line (right, col, hw);
}
}
if (put_newline)
write_output ("\n", 1);
}
/* Print lines common to both files in side-by-side format. */
static void
print_sdiff_common_lines (limit0, limit1)
int limit0, limit1;
{
int i0 = next0, i1 = next1;
if (! sdiff_skip_common_lines && (i0 != limit0 || i1 != limit1))
{
if (sdiff_help_sdiff)
printf_output ("i%d,%d\n", limit0 - i0, limit1 - i1);
if (! sdiff_left_only)
{
while (i0 != limit0 && i1 != limit1)
print_1sdiff_line (&files[0].linbuf[i0++], ' ', &files[1].linbuf[i1++]);
while (i1 != limit1)
print_1sdiff_line (0, ')', &files[1].linbuf[i1++]);
}
while (i0 != limit0)
print_1sdiff_line (&files[0].linbuf[i0++], '(', 0);
}
next0 = limit0;
next1 = limit1;
}
/* Print a hunk of an sdiff diff.
This is a contiguous portion of a complete edit script,
describing changes in consecutive lines. */
static void
print_sdiff_hunk (hunk)
struct change *hunk;
{
int first0, last0, first1, last1, deletes, inserts;
register int i, j;
/* Determine range of line numbers involved in each file. */
analyze_hunk (hunk, &first0, &last0, &first1, &last1, &deletes, &inserts);
if (!deletes && !inserts)
return;
/* Print out lines up to this change. */
print_sdiff_common_lines (first0, first1);
if (sdiff_help_sdiff)
printf_output ("c%d,%d\n", last0 - first0 + 1, last1 - first1 + 1);
/* Print ``xxx | xxx '' lines */
if (inserts && deletes)
{
for (i = first0, j = first1; i <= last0 && j <= last1; ++i, ++j)
print_1sdiff_line (&files[0].linbuf[i], '|', &files[1].linbuf[j]);
deletes = i <= last0;
inserts = j <= last1;
next0 = first0 = i;
next1 = first1 = j;
}
/* Print `` > xxx '' lines */
if (inserts)
{
for (j = first1; j <= last1; ++j)
print_1sdiff_line (0, '>', &files[1].linbuf[j]);
next1 = j;
}
/* Print ``xxx < '' lines */
if (deletes)
{
for (i = first0; i <= last0; ++i)
print_1sdiff_line (&files[0].linbuf[i], '<', 0);
next0 = i;
}
}


@ -1,304 +0,0 @@
/* System dependent declarations.
Copyright (C) 1988, 1989, 1992, 1993, 1994 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
/* We must define `volatile' and `const' first (the latter inside config.h),
so that they're used consistently in all system includes. */
#if !__STDC__
#ifndef volatile
#define volatile
#endif
#endif
#include <config.h>
#include <sys/types.h>
#include <sys/stat.h>
/* Note that PARAMS is just internal to the diff library; diffrun.h
has its own mechanism, which will hopefully be less likely to
conflict with the library's caller's namespace. */
#if __STDC__
#define PARAMS(args) args
#define VOID void
#else
#define PARAMS(args) ()
#define VOID char
#endif
#if STAT_MACROS_BROKEN
#undef S_ISBLK
#undef S_ISCHR
#undef S_ISDIR
#undef S_ISFIFO
#undef S_ISREG
#undef S_ISSOCK
#endif
#ifndef S_ISDIR
#define S_ISDIR(mode) (((mode) & S_IFMT) == S_IFDIR)
#endif
#ifndef S_ISREG
#define S_ISREG(mode) (((mode) & S_IFMT) == S_IFREG)
#endif
#if !defined(S_ISBLK) && defined(S_IFBLK)
#define S_ISBLK(mode) (((mode) & S_IFMT) == S_IFBLK)
#endif
#if !defined(S_ISCHR) && defined(S_IFCHR)
#define S_ISCHR(mode) (((mode) & S_IFMT) == S_IFCHR)
#endif
#if !defined(S_ISFIFO) && defined(S_IFFIFO)
#define S_ISFIFO(mode) (((mode) & S_IFMT) == S_IFFIFO)
#endif
#ifndef S_ISSOCK
# if defined( S_IFSOCK )
# ifdef S_IFMT
# define S_ISSOCK(mode) (((mode) & S_IFMT) == S_IFSOCK)
# else
# define S_ISSOCK(mode) ((mode) & S_IFSOCK)
# endif /* S_IFMT */
# elif defined( S_ISNAM )
/* SCO OpenServer 5.0.6a */
# define S_ISSOCK S_ISNAM
# endif /* !S_IFSOCK && S_ISNAM */
#endif /* !S_ISSOCK */
#if HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_IO_H
# include <io.h>
#endif
#ifdef HAVE_FCNTL_H
# include <fcntl.h>
#else
# include <sys/file.h>
#endif
#ifndef SEEK_SET
#define SEEK_SET 0
#endif
#ifndef SEEK_CUR
#define SEEK_CUR 1
#endif
#ifndef STDIN_FILENO
#define STDIN_FILENO 0
#endif
#ifndef STDOUT_FILENO
#define STDOUT_FILENO 1
#endif
#ifndef STDERR_FILENO
#define STDERR_FILENO 2
#endif
/* I believe that all relevant systems have
time.h. It is in ANSI, for example. The
code below looks quite bogus as I don't think
sys/time.h is ever a substitute for time.h;
it is something different. */
#define HAVE_TIME_H 1
#if HAVE_TIME_H
#include <time.h>
#else
#include <sys/time.h>
#endif
#if HAVE_FCNTL_H
#include <fcntl.h>
#else
#if HAVE_SYS_FILE_H
#include <sys/file.h>
#endif
#endif
#ifndef O_RDONLY
#define O_RDONLY 0
#endif
#if HAVE_SYS_WAIT_H
#include <sys/wait.h>
#endif
#ifndef WEXITSTATUS
#define WEXITSTATUS(stat_val) ((unsigned) (stat_val) >> 8)
#endif
#ifndef WIFEXITED
#define WIFEXITED(stat_val) (((stat_val) & 255) == 0)
#endif
#ifndef STAT_BLOCKSIZE
#if HAVE_STRUCT_STAT_ST_BLKSIZE
#define STAT_BLOCKSIZE(s) (s).st_blksize
#else
#define STAT_BLOCKSIZE(s) (8 * 1024)
#endif
#endif
#if HAVE_DIRENT_H
# include <dirent.h>
# define NAMLEN(dirent) strlen((dirent)->d_name)
#else
# define dirent direct
# define NAMLEN(dirent) ((dirent)->d_namlen)
# if HAVE_SYS_NDIR_H
# include <sys/ndir.h>
# endif
# if HAVE_SYS_DIR_H
# include <sys/dir.h>
# endif
# if HAVE_NDIR_H
# include <ndir.h>
# endif
#endif
#if HAVE_VFORK_H
#include <vfork.h>
#endif
#if HAVE_STDLIB_H || defined(STDC_HEADERS)
#include <stdlib.h>
#else
VOID *malloc ();
VOID *realloc ();
#endif
#ifndef getenv
char *getenv ();
#endif
#if HAVE_LIMITS_H
#include <limits.h>
#endif
#ifndef INT_MAX
#define INT_MAX 2147483647
#endif
#ifndef CHAR_BIT
#define CHAR_BIT 8
#endif
#if STDC_HEADERS || HAVE_STRING_H
# include <string.h>
# ifndef bzero
# define bzero(s, n) memset (s, 0, n)
# endif
#else
# if !HAVE_STRCHR
# define strchr index
# define strrchr rindex
# endif
char *strchr (), *strrchr ();
# if !HAVE_MEMCHR
# define memcmp(s1, s2, n) bcmp (s1, s2, n)
# define memcpy(d, s, n) bcopy (s, d, n)
void *memchr ();
# endif
#endif
#include <ctype.h>
/* CTYPE_DOMAIN (C) is nonzero if the unsigned char C can safely be given
as an argument to <ctype.h> macros like `isspace'. */
#if STDC_HEADERS
#define CTYPE_DOMAIN(c) 1
#else
#define CTYPE_DOMAIN(c) ((unsigned) (c) <= 0177)
#endif
#ifndef ISPRINT
#define ISPRINT(c) (CTYPE_DOMAIN (c) && isprint (c))
#endif
#ifndef ISSPACE
#define ISSPACE(c) (CTYPE_DOMAIN (c) && isspace (c))
#endif
#ifndef ISUPPER
#define ISUPPER(c) (CTYPE_DOMAIN (c) && isupper (c))
#endif
#ifndef ISDIGIT
#define ISDIGIT(c) ((unsigned) (c) - '0' <= 9)
#endif
#include <errno.h>
#if !STDC_HEADERS
extern int errno;
#endif
#ifdef min
#undef min
#endif
#ifdef max
#undef max
#endif
#define min(a,b) ((a) <= (b) ? (a) : (b))
#define max(a,b) ((a) >= (b) ? (a) : (b))
/* This section contains Posix-compliant defaults for macros
that are meant to be overridden by hand in config.h as needed. */
#ifndef filename_cmp
#define filename_cmp(a, b) strcmp (a, b)
#endif
#ifndef filename_lastdirchar
#define filename_lastdirchar(filename) strrchr (filename, '/')
#endif
#ifndef HAVE_FORK
#define HAVE_FORK 1
#endif
#ifndef HAVE_SETMODE
#define HAVE_SETMODE 0
#endif
#ifndef initialize_main
#define initialize_main(argcp, argvp)
#endif
/* Do struct stat *S, *T describe the same file? Answer -1 if unknown. */
#ifndef same_file
#define same_file(s,t) ((s)->st_ino==(t)->st_ino && (s)->st_dev==(t)->st_dev)
#endif
/* Place into Q a quoted version of A suitable for `popen' or `system',
incrementing Q and junking A.
Do not increment Q by more than 4 * strlen (A) + 2. */
#ifndef SYSTEM_QUOTE_ARG
#define SYSTEM_QUOTE_ARG(q, a) \
{ \
*(q)++ = '\''; \
for (; *(a); *(q)++ = *(a)++) \
if (*(a) == '\'') \
{ \
*(q)++ = '\''; \
*(q)++ = '\\'; \
*(q)++ = '\''; \
} \
*(q)++ = '\''; \
}
#endif
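/* The quoting rule above is the usual Bourne-shell idiom: wrap the whole
   argument in single quotes and rewrite each embedded single quote as the
   four-character sequence '\''.  A standalone restatement (hypothetical
   helper, not part of this header), sized to the documented worst case of
   4 * strlen (A) + 2 output bytes: */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static char *shell_quote (const char *arg)
{
  char *out = malloc (4 * strlen (arg) + 3);   /* worst case plus NUL */
  char *q;
  if (out == NULL)
    return NULL;
  q = out;
  *q++ = '\'';
  for (; *arg; arg++)
    if (*arg == '\'')
      { *q++ = '\''; *q++ = '\\'; *q++ = '\''; *q++ = '\''; }
    else
      *q++ = *arg;
  *q++ = '\'';
  *q = '\0';
  return out;
}
int main (void)
{
  char *q = shell_quote ("it's a file.c");
  printf ("%s\n", q);                          /* prints 'it'\''s a file.c' */
  free (q);
  return 0;
}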
/* these come from CVS's lib/system.h, but I wasn't sure how to include that
* properly or even if I really should
*/
#ifndef CVS_OPENDIR
#define CVS_OPENDIR opendir
#endif
#ifndef CVS_READDIR
#define CVS_READDIR readdir
#endif
#ifndef CVS_CLOSEDIR
#define CVS_CLOSEDIR closedir
#endif

View File

@ -1,849 +0,0 @@
/* Support routines for GNU DIFF.
Copyright (C) 1988, 1989, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
This file is part of GNU DIFF.
GNU DIFF is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU DIFF is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
*/
#include "diff.h"
#if __STDC__
#include <stdarg.h>
#else
#include <varargs.h>
#endif
#ifndef strerror
extern char *strerror ();
#endif
/* Queue up one-line messages to be printed at the end,
when -l is specified. Each message is recorded with a `struct msg'. */
struct msg
{
struct msg *next;
char const *format;
char const *arg1;
char const *arg2;
char const *arg3;
char const *arg4;
};
/* Head of the chain of queued messages. */
static struct msg *msg_chain;
/* Tail of the chain of queued messages. */
static struct msg **msg_chain_end = &msg_chain;
/* Use when a system call returns non-zero status.
TEXT should normally be the file name. */
void
perror_with_name (text)
char const *text;
{
int e = errno;
if (callbacks && callbacks->error)
(*callbacks->error) ("%s: %s", text, strerror (e));
else
{
fprintf (stderr, "%s: ", diff_program_name);
errno = e;
perror (text);
}
}
/* Use when a system call returns non-zero status and that is fatal. */
void
pfatal_with_name (text)
char const *text;
{
int e = errno;
print_message_queue ();
if (callbacks && callbacks->error)
(*callbacks->error) ("%s: %s", text, strerror (e));
else
{
fprintf (stderr, "%s: ", diff_program_name);
errno = e;
perror (text);
}
DIFF_ABORT (2);
}
/* Print an error message from the format-string FORMAT
with args ARG1 and ARG2. */
void
diff_error (format, arg, arg1)
char const *format, *arg, *arg1;
{
if (callbacks && callbacks->error)
(*callbacks->error) (format, arg, arg1);
else
{
fprintf (stderr, "%s: ", diff_program_name);
fprintf (stderr, format, arg, arg1);
fprintf (stderr, "\n");
}
}
/* Print an error message containing the string TEXT, then exit. */
void
fatal (m)
char const *m;
{
print_message_queue ();
diff_error ("%s", m, 0);
DIFF_ABORT (2);
}
/* Like printf, except if -l in effect then save the message and print later.
This is used for things like "binary files differ" and "Only in ...". */
void
message (format, arg1, arg2)
char const *format, *arg1, *arg2;
{
message5 (format, arg1, arg2, 0, 0);
}
void
message5 (format, arg1, arg2, arg3, arg4)
char const *format, *arg1, *arg2, *arg3, *arg4;
{
if (paginate_flag)
{
struct msg *new = (struct msg *) xmalloc (sizeof (struct msg));
new->format = format;
new->arg1 = concat (arg1, "", "");
new->arg2 = concat (arg2, "", "");
new->arg3 = arg3 ? concat (arg3, "", "") : 0;
new->arg4 = arg4 ? concat (arg4, "", "") : 0;
new->next = 0;
*msg_chain_end = new;
msg_chain_end = &new->next;
}
else
{
if (sdiff_help_sdiff)
write_output (" ", 1);
printf_output (format, arg1, arg2, arg3, arg4);
}
}
/* Output all the messages that were saved up by calls to `message'. */
void
print_message_queue ()
{
struct msg *m;
for (m = msg_chain; m; m = m->next)
printf_output (m->format, m->arg1, m->arg2, m->arg3, m->arg4);
}
/* Call before outputting the results of comparing files NAME0 and NAME1
to set up OUTFILE, the stdio stream for the output to go to.
Usually, OUTFILE is just stdout. But when -l was specified
we fork off a `pr' and make OUTFILE a pipe to it.
`pr' then outputs to our stdout. */
static char const *current_name0;
static char const *current_name1;
static int current_depth;
static int output_in_progress = 0;
void
setup_output (name0, name1, depth)
char const *name0, *name1;
int depth;
{
current_name0 = name0;
current_name1 = name1;
current_depth = depth;
}
#if HAVE_FORK && defined (PR_PROGRAM)
static pid_t pr_pid;
#endif
void
begin_output ()
{
char *name;
if (output_in_progress)
return;
output_in_progress = 1;
/* Construct the header of this piece of diff. */
name = xmalloc (strlen (current_name0) + strlen (current_name1)
+ strlen (switch_string) + 7);
/* Posix.2 section 4.17.6.1.1 specifies this format. But there is a
bug in the first printing (IEEE Std 1003.2-1992 p 251 l 3304):
it says that we must print only the last component of the pathnames.
This requirement is silly and does not match historical practice. */
sprintf (name, "diff%s %s %s", switch_string, current_name0, current_name1);
if (paginate_flag && callbacks && callbacks->write_output)
fatal ("can't paginate when using library callbacks");
if (paginate_flag)
{
/* Make OUTFILE a pipe to a subsidiary `pr'. */
#ifdef PR_PROGRAM
# if HAVE_FORK
int pipes[2];
if (pipe (pipes) != 0)
pfatal_with_name ("pipe");
fflush (stdout);
pr_pid = vfork ();
if (pr_pid < 0)
pfatal_with_name ("vfork");
if (pr_pid == 0)
{
close (pipes[1]);
if (pipes[0] != STDIN_FILENO)
{
if (dup2 (pipes[0], STDIN_FILENO) < 0)
pfatal_with_name ("dup2");
close (pipes[0]);
}
execl (PR_PROGRAM, PR_PROGRAM, "-f", "-h", name, 0);
pfatal_with_name (PR_PROGRAM);
}
else
{
close (pipes[0]);
outfile = fdopen (pipes[1], "w");
if (!outfile)
pfatal_with_name ("fdopen");
}
# else /* ! HAVE_FORK */
char *command = xmalloc (4 * strlen (name) + strlen (PR_PROGRAM) + 10);
char *p;
char const *a = name;
sprintf (command, "%s -f -h ", PR_PROGRAM);
p = command + strlen (command);
SYSTEM_QUOTE_ARG (p, a);
*p = 0;
outfile = popen (command, "w");
if (!outfile)
pfatal_with_name (command);
free (command);
# endif /* ! HAVE_FORK */
#else
fatal ("This port does not support the --paginate option to diff.");
#endif
}
else
{
/* If -l was not specified, output the diff straight to `stdout'. */
/* If handling multiple files (because scanning a directory),
print which files the following output is about. */
if (current_depth > 0)
printf_output ("%s\n", name);
}
free (name);
/* A special header is needed at the beginning of context output. */
switch (output_style)
{
case OUTPUT_CONTEXT:
print_context_header (files, 0);
break;
case OUTPUT_UNIFIED:
print_context_header (files, 1);
break;
default:
break;
}
}
/* Call after the end of output of diffs for one file.
If -l was given, close OUTFILE and get rid of the `pr' subfork. */
void
finish_output ()
{
if (paginate_flag && outfile != 0 && outfile != stdout)
{
#ifdef PR_PROGRAM
int wstatus, w;
if (ferror (outfile))
fatal ("write error");
# if ! HAVE_FORK
wstatus = pclose (outfile);
# else /* HAVE_FORK */
if (fclose (outfile) != 0)
pfatal_with_name ("write error");
while ((w = waitpid (pr_pid, &wstatus, 0)) < 0 && errno == EINTR)
;
if (w < 0)
pfatal_with_name ("waitpid");
# endif /* HAVE_FORK */
if (wstatus != 0)
fatal ("subsidiary pr failed");
#else
fatal ("internal error in finish_output");
#endif
}
output_in_progress = 0;
}
/* Write something to the output file. */
void
write_output (text, len)
char const *text;
size_t len;
{
if (callbacks && callbacks->write_output)
(*callbacks->write_output) (text, len);
else if (len == 1)
putc (*text, outfile);
else
fwrite (text, sizeof (char), len, outfile);
}
/* Printf something to the output file. */
#if __STDC__
#define VA_START(args, lastarg) va_start(args, lastarg)
#else /* ! __STDC__ */
#define VA_START(args, lastarg) va_start(args)
#endif /* __STDC__ */
void
#if __STDC__
printf_output (const char *format, ...)
#else
printf_output (format, va_alist)
char const *format;
va_dcl
#endif
{
va_list args;
VA_START (args, format);
if (callbacks && callbacks->write_output)
{
/* We implement our own limited printf-like functionality (%s, %d,
and %c only). Callers who want something fancier can use
sprintf. */
const char *p = format;
char *q;
char *str;
int num;
int ch;
char buf[100];
while ((q = strchr (p, '%')) != NULL)
{
static const char msg[] =
"\ninternal error: bad % in printf_output\n";
(*callbacks->write_output) (p, q - p);
switch (q[1])
{
case 's':
str = va_arg (args, char *);
(*callbacks->write_output) (str, strlen (str));
break;
case 'd':
num = va_arg (args, int);
sprintf (buf, "%d", num);
(*callbacks->write_output) (buf, strlen (buf));
break;
case 'c':
ch = va_arg (args, int);
buf[0] = ch;
(*callbacks->write_output) (buf, 1);
break;
default:
(*callbacks->write_output) (msg, sizeof (msg) - 1);
/* Don't just keep going, because q + 1 might point to the
terminating '\0'. */
goto out;
}
p = q + 2;
}
(*callbacks->write_output) (p, strlen (p));
}
else
vfprintf (outfile, format, args);
out:
va_end (args);
}
/* Flush the output file. */
void
flush_output ()
{
if (callbacks && callbacks->flush_output)
(*callbacks->flush_output) ();
else
fflush (outfile);
}
/* Compare two lines (typically one from each input file)
according to the command line options.
For efficiency, this is invoked only when the lines do not match exactly
but an option like -i might cause us to ignore the difference.
Return nonzero if the lines differ. */
int
line_cmp (s1, s2)
char const *s1, *s2;
{
register unsigned char const *t1 = (unsigned char const *) s1;
register unsigned char const *t2 = (unsigned char const *) s2;
while (1)
{
register unsigned char c1 = *t1++;
register unsigned char c2 = *t2++;
/* Test for exact char equality first, since it's a common case. */
if (c1 != c2)
{
/* Ignore horizontal white space if -b or -w is specified. */
if (ignore_all_space_flag)
{
/* For -w, just skip past any white space. */
while (ISSPACE (c1) && c1 != '\n') c1 = *t1++;
while (ISSPACE (c2) && c2 != '\n') c2 = *t2++;
}
else if (ignore_space_change_flag)
{
/* For -b, advance past any sequence of white space in line 1
and consider it just one Space, or nothing at all
if it is at the end of the line. */
if (ISSPACE (c1))
{
while (c1 != '\n')
{
c1 = *t1++;
if (! ISSPACE (c1))
{
--t1;
c1 = ' ';
break;
}
}
}
/* Likewise for line 2. */
if (ISSPACE (c2))
{
while (c2 != '\n')
{
c2 = *t2++;
if (! ISSPACE (c2))
{
--t2;
c2 = ' ';
break;
}
}
}
if (c1 != c2)
{
/* If we went too far when doing the simple test
for equality, go back to the first non-white-space
character in both sides and try again. */
if (c2 == ' ' && c1 != '\n'
&& (unsigned char const *) s1 + 1 < t1
&& ISSPACE(t1[-2]))
{
--t1;
continue;
}
if (c1 == ' ' && c2 != '\n'
&& (unsigned char const *) s2 + 1 < t2
&& ISSPACE(t2[-2]))
{
--t2;
continue;
}
}
}
/* Lowercase all letters if -i is specified. */
if (ignore_case_flag)
{
if (ISUPPER (c1))
c1 = tolower (c1);
if (ISUPPER (c2))
c2 = tolower (c2);
}
if (c1 != c2)
break;
}
if (c1 == '\n')
return 0;
}
return (1);
}
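/* The -w branch above skips past any white space on both sides before
   comparing characters.  A simplified standalone illustration (using
   NUL-terminated strings instead of newline-terminated buffers, and
   leaving out the -b and -i cases handled above): */
#include <ctype.h>
#include <stdio.h>
static int cmp_ignoring_all_space (const char *s1, const char *s2)
{
  for (;;)
    {
      while (*s1 && isspace ((unsigned char) *s1)) s1++;
      while (*s2 && isspace ((unsigned char) *s2)) s2++;
      if (*s1 != *s2)
        return 1;                      /* lines differ */
      if (*s1 == '\0')
        return 0;                      /* both exhausted: equal under -w */
      s1++, s2++;
    }
}
int main (void)
{
  printf ("%d\n", cmp_ignoring_all_space ("int  a = 1;", "int a=1;"));   /* 0 */
  printf ("%d\n", cmp_ignoring_all_space ("int a = 1;", "int b = 1;"));  /* 1 */
  return 0;
}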
/* Find the consecutive changes at the start of the script START.
Return the last link before the first gap. */
struct change *
find_change (start)
struct change *start;
{
return start;
}
struct change *
find_reverse_change (start)
struct change *start;
{
return start;
}
/* Divide SCRIPT into pieces by calling HUNKFUN and
print each piece with PRINTFUN.
Both functions take one arg, an edit script.
HUNKFUN is called with the tail of the script
and returns the last link that belongs together with the start
of the tail.
PRINTFUN takes a subscript which belongs together (with a null
link at the end) and prints it. */
void
print_script (script, hunkfun, printfun)
struct change *script;
struct change * (*hunkfun) PARAMS((struct change *));
void (*printfun) PARAMS((struct change *));
{
struct change *next = script;
while (next)
{
struct change *this, *end;
/* Find a set of changes that belong together. */
this = next;
end = (*hunkfun) (next);
/* Disconnect them from the rest of the changes,
making them a hunk, and remember the rest for next iteration. */
next = end->link;
end->link = 0;
#ifdef DEBUG
debug_script (this);
#endif
/* Print this hunk. */
(*printfun) (this);
/* Reconnect the script so it will all be freed properly. */
end->link = next;
}
}
/* Print the text of a single line LINE,
flagging it with the characters in LINE_FLAG (which say whether
the line is inserted, deleted, changed, etc.). */
void
print_1_line (line_flag, line)
char const *line_flag;
char const * const *line;
{
char const *text = line[0], *limit = line[1]; /* Help the compiler. */
char const *flag_format = 0;
/* If -T was specified, use a Tab between the line-flag and the text.
Otherwise use a Space (as Unix diff does).
Print neither space nor tab if line-flags are empty. */
if (line_flag && *line_flag)
{
flag_format = tab_align_flag ? "%s\t" : "%s ";
printf_output (flag_format, line_flag);
}
output_1_line (text, limit, flag_format, line_flag);
if ((!line_flag || line_flag[0]) && limit[-1] != '\n')
printf_output ("\n\\ No newline at end of file\n");
}
/* Output a line from TEXT up to LIMIT. Without -t, output verbatim.
With -t, expand white space characters to spaces, and if FLAG_FORMAT
is nonzero, output it with argument LINE_FLAG after every
internal carriage return, so that tab stops continue to line up. */
void
output_1_line (text, limit, flag_format, line_flag)
char const *text, *limit, *flag_format, *line_flag;
{
if (!tab_expand_flag)
write_output (text, limit - text);
else
{
register unsigned char c;
register char const *t = text;
register unsigned column = 0;
/* CC is used to avoid taking the address of the register
variable C. */
char cc;
while (t < limit)
switch ((c = *t++))
{
case '\t':
{
unsigned spaces = TAB_WIDTH - column % TAB_WIDTH;
column += spaces;
do
write_output (" ", 1);
while (--spaces);
}
break;
case '\r':
write_output ("\r", 1);
if (flag_format && t < limit && *t != '\n')
printf_output (flag_format, line_flag);
column = 0;
break;
case '\b':
if (column == 0)
continue;
column--;
write_output ("\b", 1);
break;
default:
if (ISPRINT (c))
column++;
cc = c;
write_output (&cc, 1);
break;
}
}
}
int
change_letter (inserts, deletes)
int inserts, deletes;
{
if (!inserts)
return 'd';
else if (!deletes)
return 'a';
else
return 'c';
}
/* Translate an internal line number (an index into diff's table of lines)
into an actual line number in the input file.
The internal line number is LNUM. FILE points to the data on the file.
Internal line numbers count from 0 starting after the prefix.
Actual line numbers count from 1 within the entire file. */
int
translate_line_number (file, lnum)
struct file_data const *file;
int lnum;
{
return lnum + file->prefix_lines + 1;
}
void
translate_range (file, a, b, aptr, bptr)
struct file_data const *file;
int a, b;
int *aptr, *bptr;
{
*aptr = translate_line_number (file, a - 1) + 1;
*bptr = translate_line_number (file, b + 1) - 1;
}
/* Print a pair of line numbers with SEPCHAR, translated for file FILE.
If the two numbers are identical, print just one number.
Args A and B are internal line numbers.
We print the translated (real) line numbers. */
void
print_number_range (sepchar, file, a, b)
int sepchar;
struct file_data *file;
int a, b;
{
int trans_a, trans_b;
translate_range (file, a, b, &trans_a, &trans_b);
/* Note: we can have B < A in the case of a range of no lines.
In this case, we should print the line number before the range,
which is B. */
if (trans_b > trans_a)
printf_output ("%d%c%d", trans_a, sepchar, trans_b);
else
printf_output ("%d", trans_b);
}
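/* A worked restatement of the two routines above (standalone sketch; the
   real code takes a struct file_data).  Since translate_line_number (L)
   is L + prefix_lines + 1, the -1/+1 adjustments in translate_range cancel
   and the printed pair is simply a + prefix_lines + 1, b + prefix_lines + 1. */
#include <stdio.h>
static void sketch_translate_range (int prefix_lines, int a, int b,
                                    int *aptr, int *bptr)
{
  *aptr = (a - 1) + prefix_lines + 1 + 1;  /* translate_line_number (a-1) + 1 */
  *bptr = (b + 1) + prefix_lines + 1 - 1;  /* translate_line_number (b+1) - 1 */
}
int main (void)
{
  int lo, hi;
  /* Internal lines 2..4 of a file with a 10-line common prefix are
     reported as 13,15. */
  sketch_translate_range (10, 2, 4, &lo, &hi);
  printf ("%d,%d\n", lo, hi);              /* prints 13,15 */
  /* An empty range (b < a) yields hi < lo, and print_number_range then
     prints only hi, the line just before the range. */
  sketch_translate_range (10, 5, 4, &lo, &hi);
  printf ("%d\n", hi);                     /* prints 15 */
  return 0;
}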
/* Look at a hunk of edit script and report the range of lines in each file
that it applies to. HUNK is the start of the hunk, which is a chain
of `struct change'. The first and last line numbers of file 0 are stored in
*FIRST0 and *LAST0, and likewise for file 1 in *FIRST1 and *LAST1.
Note that these are internal line numbers that count from 0.
If no lines from file 0 are deleted, then FIRST0 is LAST0+1.
Also set *DELETES nonzero if any lines of file 0 are deleted
and set *INSERTS nonzero if any lines of file 1 are inserted.
If only ignorable lines are inserted or deleted, both are
set to 0. */
void
analyze_hunk (hunk, first0, last0, first1, last1, deletes, inserts)
struct change *hunk;
int *first0, *last0, *first1, *last1;
int *deletes, *inserts;
{
int l0, l1, show_from, show_to;
int i;
int trivial = ignore_blank_lines_flag || ignore_regexp_list;
struct change *next;
show_from = show_to = 0;
*first0 = hunk->line0;
*first1 = hunk->line1;
next = hunk;
do
{
l0 = next->line0 + next->deleted - 1;
l1 = next->line1 + next->inserted - 1;
show_from += next->deleted;
show_to += next->inserted;
for (i = next->line0; i <= l0 && trivial; i++)
if (!ignore_blank_lines_flag || files[0].linbuf[i][0] != '\n')
{
struct regexp_list *r;
char const *line = files[0].linbuf[i];
int len = files[0].linbuf[i + 1] - line;
for (r = ignore_regexp_list; r; r = r->next)
if (0 <= re_search (&r->buf, line, len, 0, len, 0))
break; /* Found a match. Ignore this line. */
/* If we got all the way through the regexp list without
finding a match, then it's nontrivial. */
if (!r)
trivial = 0;
}
for (i = next->line1; i <= l1 && trivial; i++)
if (!ignore_blank_lines_flag || files[1].linbuf[i][0] != '\n')
{
struct regexp_list *r;
char const *line = files[1].linbuf[i];
int len = files[1].linbuf[i + 1] - line;
for (r = ignore_regexp_list; r; r = r->next)
if (0 <= re_search (&r->buf, line, len, 0, len, 0))
break; /* Found a match. Ignore this line. */
/* If we got all the way through the regexp list without
finding a match, then it's nontrivial. */
if (!r)
trivial = 0;
}
}
while ((next = next->link) != 0);
*last0 = l0;
*last1 = l1;
/* If all inserted or deleted lines are ignorable,
tell the caller to ignore this hunk. */
if (trivial)
show_from = show_to = 0;
*deletes = show_from;
*inserts = show_to;
}
/* Concatenate three strings, returning a newly malloc'd string. */
char *
concat (s1, s2, s3)
char const *s1, *s2, *s3;
{
size_t len = strlen (s1) + strlen (s2) + strlen (s3);
char *new = xmalloc (len + 1);
sprintf (new, "%s%s%s", s1, s2, s3);
return new;
}
/* Yield the newly malloc'd pathname
of the file in DIR whose filename is FILE. */
char *
dir_file_pathname (dir, file)
char const *dir, *file;
{
char const *p = filename_lastdirchar (dir);
return concat (dir, "/" + (p && !p[1]), file);
}
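/* The expression  "/" + (p && !p[1])  above is a compact separator choice:
   when DIR already ends in a slash, P points at that final '/', P[1] is
   '\0', and the string literal is advanced past its '/' to the empty
   string, so no doubled slash is produced.  A standalone restatement
   (hypothetical helper, assuming the default filename_lastdirchar, i.e.
   strrchr on '/'): */
#include <stdio.h>
#include <string.h>
static const char *separator_for (const char *dir)
{
  const char *p = strrchr (dir, '/');
  return "/" + (p != NULL && p[1] == '\0');
}
int main (void)
{
  printf ("[%s] [%s]\n", separator_for ("/usr/src"), separator_for ("/usr/src/"));
  /* prints [/] [] */
  return 0;
}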
void
debug_script (sp)
struct change *sp;
{
fflush (stdout);
for (; sp; sp = sp->link)
fprintf (stderr, "%3d %3d delete %d insert %d\n",
sp->line0, sp->line1, sp->deleted, sp->inserted);
fflush (stderr);
}


@ -1,5 +0,0 @@
/* Version number of GNU diff. */
#include <config.h>
char const diff_version_string[] = "2.7";

File diff suppressed because it is too large


@ -1,38 +0,0 @@
Thu Sep 15 14:19:50 1994 david d `zoo' zuhn <zoo@monad.armadillo.com>
* Makefile.in: define TEXI2DVI
Sat Dec 18 01:23:39 1993 david d zuhn (zoo@monad.armadillo.com)
* cvs.texinfo: document -k SUBST options to 'cvs import';
regularize use @sc{cvs}
* Makefile.in (VPATH): don't use $(srcdir), but @srcdir@ instead
(install-info): grab all info files, not just *.info
Mon Oct 11 16:23:54 1993 Jim Kingdon (kingdon@lioth.cygnus.com)
* cvsclient.texi: New node TODO; various other changes.
Wed Feb 26 18:04:40 1992 K. Richard Pixley (rich@cygnus.com)
* Makefile.in, configure.in: removed traces of namesubdir,
-subdirs, $(subdir), $(unsubdir), some rcs triggers. Forced
copyrights to '92, changed some from Cygnus to FSF.
Tue Dec 10 04:07:10 1991 K. Richard Pixley (rich at rtl.cygnus.com)
* Makefile.in: infodir belongs in datadir.
Thu Dec 5 22:46:01 1991 K. Richard Pixley (rich at rtl.cygnus.com)
* Makefile.in: idestdir and ddestdir go away. Added copyrights
and shift gpl to v2. Added ChangeLog if it didn't exist. docdir
and mandir now keyed off datadir by default.
Wed Nov 27 02:45:18 1991 K. Richard Pixley (rich at sendai)
* brought Makefile.in's up to standards.text.
* fresh changelog.


@ -1,46 +0,0 @@
Here's some of the texinfo conventions the CVS documentation uses:
@code{ ... } command usage & command snippets, including
command names.
@var{ ... } variables - text which the user is expected to
replace with some meaningful text of their own
in actual usage.
@file{ ... } file names
@samp{ ... } for most anything else you need quotes around
(often still misused for command snippets)
@example ... @end example example command usage and output, etc.
@emph{ ... } emphasis - warnings, stress, etc. This will be
bracketed by underline characters in info files
(_ ... _) and in italics in PDF & probably in
postscript & HTML.
@strong{ ... } Similar to @emph{}, but the effect is to
bracket with asterisks in info files (* ... *)
and in bold in PDF & probably in postscript &
HTML.
@noindent Suppresses indentation of the following
paragraph. This can occasionally be useful
after examples and the like.
@cindex ... Add a tag to the index.
@pxref{ ... } Cross reference in parentheses.
@xref{ ... } Cross reference.
Preformatted text should be marked as such (use @example... there may be other
ways) since many of the final output formats can use proportional fonts otherwise
and marking it as formatted should restrict it to a fixed width font. Keep
this sort of text to 80 characters or less per line since larger may not be
properly viewable for some info users.
There are dictionary lists and function definition markers. Scan cvs.texinfo
for their usage. There may be table definitions as well but I haven't used
them.
Use lots of index markers. Scan the index for the current style. Try to reuse
an existing entry if the meaning is similar.
`makeinfo' 3.11 or greater is required for output generation since earlier
versions do not support the @ifnottex & @ifnothtml commands. There may be
other commands used in `cvs.texinfo' that are unsupported by earlier versions
of `makeinfo' by the time you read this.
For more on using texinfo docs, see the `info texinfo' documentation or
http://www.gnu.org/manual/texinfo/texinfo.html .


@ -1,121 +0,0 @@
## Process this file with automake to produce Makefile.in
# Makefile for GNU CVS documentation (excluding man pages - see ../man).
#
# Copyright (C) 1986-2005 The Free Software Foundation, Inc.
#
# Portions Copyright (C) 1998-2005 Derek Price, Ximbiot <http://ximbiot.com>,
# and others.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
info_TEXINFOS = cvs.texinfo cvsclient.texi
man_MANS = $(srcdir)/cvs.1
PSS = \
cvs.ps \
cvs-paper.ps \
cvsclient.ps
PDFS = \
cvs.pdf \
$(srcdir)/cvs-paper.pdf \
cvsclient.pdf
TXTS = \
cvs.txt \
cvsclient.txt
EXTRA_DIST = \
.cvsignore \
ChangeLog.fsf \
RCSFILES \
mdate-sh \
$(srcdir)/cvs.1 \
cvs-paper.ms \
cvs.man.header \
cvs.man.footer \
$(PDFS)
MOSTLYCLEANFILES =
CLEANFILES = \
$(PSS) \
$(TXTS)
MAINTAINERCLEANFILES = \
$(PDFS) \
$(srcdir)/cvs.1
doc: info pdf
.PHONY: doc
txt: $(TXTS)
.PHONY: txt
dvi: cvs.dvi cvsclient.dvi
.PHONY: dvi
# FIXME-AUTOMAKE:
# For some reason if I remove version.texi, it doesn't get built automatically.
# This needs to be fixed in automake.
cvs.txt: cvs.texinfo $(srcdir)/version.texi
cvsclient.txt: cvsclient.texi $(srcdir)/version-client.texi
# The cvs-paper.pdf target needs to be very specific so that the other PDFs get
# generated correctly. If a more generic .ps.pdf implicit target is defined,
# and cvs.ps is made before cvs.pdf, then cvs.pdf can be generated from the
# .ps.pdf target and the PS source, which contains less information (hyperlinks
# and such) than the usual texinfo source.
#
# It is possible that an implicit .ms.ps target could be safely defined. I
# don't recall looking into it.
cvs-paper.ps: cvs-paper.ms
$(ROFF) -t -p -ms -Tps $(srcdir)/cvs-paper.ms >cvs-paper.ps-t
cp cvs-paper.ps-t $@
-@rm -f cvs-paper.ps-t
# This rule introduces some redundancy, but `make distcheck' requires that
# nothing in $(srcdir) be rebuilt, and this will always be rebuilt when it
# is dependent on cvs-paper.ps and cvs-paper.ps isn't distributed.
$(srcdir)/cvs-paper.pdf: cvs-paper.ms
$(ROFF) -t -p -ms -Tps $(srcdir)/cvs-paper.ms >cvs-paper.ps-t
ps2pdf cvs-paper.ps-t cvs-paper.pdf-t
cp cvs-paper.pdf-t $@
-@rm -f cvs-paper.pdf-t cvs-paper.ps-t
MOSTLYCLEANFILES += cvs-paper.pdf-t cvs-paper.ps-t
# Targets to build a man page from cvs.texinfo.
$(srcdir)/cvs.1: @MAINTAINER_MODE_TRUE@ mkman cvs.man.header cvs.texinfo cvs.man.footer
$(PERL) ./mkman $(srcdir)/cvs.man.header $(srcdir)/cvs.texinfo \
$(srcdir)/cvs.man.footer >cvs.tmp
cp cvs.tmp $(srcdir)/cvs.1
-@rm -f cvs.tmp
# texinfo based targets automake neglects to include
SUFFIXES = .txt
.texinfo.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
.txi.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
.texi.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
##
## MAINTAINER Targets
##
# for backwards compatibility with the old makefiles
realclean: maintainer-clean
.PHONY: realclean


@ -1,816 +0,0 @@
# Makefile.in generated by automake 1.10 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
# Makefile for GNU CVS documentation (excluding man pages - see ../man).
#
# Copyright (C) 1986-2005 The Free Software Foundation, Inc.
#
# Portions Copyright (C) 1998-2005 Derek Price, Ximbiot <http://ximbiot.com>,
# and others.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = doc
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
$(srcdir)/mkman.pl $(srcdir)/stamp-1 $(srcdir)/stamp-vti \
$(srcdir)/version-client.texi $(srcdir)/version.texi ChangeLog \
mdate-sh texinfo.tex
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.in
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES = mkman
SOURCES =
DIST_SOURCES =
INFO_DEPS = $(srcdir)/cvs.info $(srcdir)/cvsclient.info
am__TEXINFO_TEX_DIR = $(srcdir)
DVIS = cvs.dvi cvsclient.dvi
HTMLS = cvs.html cvsclient.html
TEXINFOS = cvs.texinfo cvsclient.texi
TEXI2PDF = $(TEXI2DVI) --pdf --batch
MAKEINFOHTML = $(MAKEINFO) --html
AM_MAKEINFOHTMLFLAGS = $(AM_MAKEINFOFLAGS)
DVIPS = dvips
am__installdirs = "$(DESTDIR)$(infodir)" "$(DESTDIR)$(man1dir)"
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
man1dir = $(mandir)/man1
NROFF = nroff
MANS = $(man_MANS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CSH = @CSH@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EDITOR = @EDITOR@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
KRB4 = @KRB4@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
MAINT = @MAINT@
MAKEINFO = @MAKEINFO@
MKDIR_P = @MKDIR_P@
MKTEMP = @MKTEMP@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PERL = @PERL@
PR = @PR@
PS2PDF = @PS2PDF@
RANLIB = @RANLIB@
ROFF = @ROFF@
SENDMAIL = @SENDMAIL@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TEXI2DVI = @TEXI2DVI@
VERSION = @VERSION@
YACC = @YACC@
YFLAGS = @YFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_CC = @ac_ct_CC@
ac_prefix_program = @ac_prefix_program@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
htmldir = @htmldir@
includedir = @includedir@
includeopt = @includeopt@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
with_default_rsh = @with_default_rsh@
with_default_ssh = @with_default_ssh@
info_TEXINFOS = cvs.texinfo cvsclient.texi
man_MANS = $(srcdir)/cvs.1
PSS = \
cvs.ps \
cvs-paper.ps \
cvsclient.ps
PDFS = \
cvs.pdf \
$(srcdir)/cvs-paper.pdf \
cvsclient.pdf
TXTS = \
cvs.txt \
cvsclient.txt
EXTRA_DIST = \
.cvsignore \
ChangeLog.fsf \
RCSFILES \
mdate-sh \
$(srcdir)/cvs.1 \
cvs-paper.ms \
cvs.man.header \
cvs.man.footer \
$(PDFS)
MOSTLYCLEANFILES = cvs-paper.pdf-t cvs-paper.ps-t
CLEANFILES = \
$(PSS) \
$(TXTS)
MAINTAINERCLEANFILES = \
$(PDFS) \
$(srcdir)/cvs.1
# texinfo based targets automake neglects to include
SUFFIXES = .txt
all: all-am
.SUFFIXES:
.SUFFIXES: .txt .dvi .html .info .pdf .ps .texi .texinfo .txi
$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu doc/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
mkman: $(top_builddir)/config.status $(srcdir)/mkman.pl
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
.texinfo.info:
restore=: && backupdir="$(am__leading_dot)am$$$$" && \
am__cwd=`pwd` && cd $(srcdir) && \
rm -rf $$backupdir && mkdir $$backupdir && \
if ($(MAKEINFO) --version) >/dev/null 2>&1; then \
for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \
if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \
done; \
else :; fi && \
cd "$$am__cwd"; \
if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
-o $@ $<; \
then \
rc=0; \
cd $(srcdir); \
else \
rc=$$?; \
cd $(srcdir) && \
$$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \
fi; \
rm -rf $$backupdir; exit $$rc
.texinfo.dvi:
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
$(TEXI2DVI) $<
.texinfo.pdf:
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
$(TEXI2PDF) $<
.texinfo.html:
rm -rf $(@:.html=.htp)
if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
-o $(@:.html=.htp) $<; \
then \
rm -rf $@; \
if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \
else \
if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \
exit 1; \
fi
$(srcdir)/cvs.info: cvs.texinfo $(srcdir)/version.texi
cvs.dvi: cvs.texinfo $(srcdir)/version.texi
cvs.pdf: cvs.texinfo $(srcdir)/version.texi
cvs.html: cvs.texinfo $(srcdir)/version.texi
$(srcdir)/version.texi: @MAINTAINER_MODE_TRUE@ $(srcdir)/stamp-vti
$(srcdir)/stamp-vti: cvs.texinfo $(top_srcdir)/configure
@(dir=.; test -f ./cvs.texinfo || dir=$(srcdir); \
set `$(SHELL) $(srcdir)/mdate-sh $$dir/cvs.texinfo`; \
echo "@set UPDATED $$1 $$2 $$3"; \
echo "@set UPDATED-MONTH $$2 $$3"; \
echo "@set EDITION $(VERSION)"; \
echo "@set VERSION $(VERSION)") > vti.tmp
@cmp -s vti.tmp $(srcdir)/version.texi \
|| (echo "Updating $(srcdir)/version.texi"; \
cp vti.tmp $(srcdir)/version.texi)
-@rm -f vti.tmp
@cp $(srcdir)/version.texi $@
mostlyclean-vti:
-rm -f vti.tmp
maintainer-clean-vti:
@MAINTAINER_MODE_TRUE@ -rm -f $(srcdir)/stamp-vti $(srcdir)/version.texi
.texi.info:
restore=: && backupdir="$(am__leading_dot)am$$$$" && \
am__cwd=`pwd` && cd $(srcdir) && \
rm -rf $$backupdir && mkdir $$backupdir && \
if ($(MAKEINFO) --version) >/dev/null 2>&1; then \
for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \
if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \
done; \
else :; fi && \
cd "$$am__cwd"; \
if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
-o $@ $<; \
then \
rc=0; \
cd $(srcdir); \
else \
rc=$$?; \
cd $(srcdir) && \
$$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \
fi; \
rm -rf $$backupdir; exit $$rc
.texi.dvi:
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
$(TEXI2DVI) $<
.texi.pdf:
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
$(TEXI2PDF) $<
.texi.html:
rm -rf $(@:.html=.htp)
if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
-o $(@:.html=.htp) $<; \
then \
rm -rf $@; \
if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \
else \
if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \
exit 1; \
fi
$(srcdir)/cvsclient.info: cvsclient.texi $(srcdir)/version-client.texi
cvsclient.dvi: cvsclient.texi $(srcdir)/version-client.texi
cvsclient.pdf: cvsclient.texi $(srcdir)/version-client.texi
cvsclient.html: cvsclient.texi $(srcdir)/version-client.texi
$(srcdir)/version-client.texi: @MAINTAINER_MODE_TRUE@ $(srcdir)/stamp-1
$(srcdir)/stamp-1: cvsclient.texi $(top_srcdir)/configure
@(dir=.; test -f ./cvsclient.texi || dir=$(srcdir); \
set `$(SHELL) $(srcdir)/mdate-sh $$dir/cvsclient.texi`; \
echo "@set UPDATED $$1 $$2 $$3"; \
echo "@set UPDATED-MONTH $$2 $$3"; \
echo "@set EDITION $(VERSION)"; \
echo "@set VERSION $(VERSION)") > 1.tmp
@cmp -s 1.tmp $(srcdir)/version-client.texi \
|| (echo "Updating $(srcdir)/version-client.texi"; \
cp 1.tmp $(srcdir)/version-client.texi)
-@rm -f 1.tmp
@cp $(srcdir)/version-client.texi $@
mostlyclean-1:
-rm -f 1.tmp
maintainer-clean-1:
@MAINTAINER_MODE_TRUE@ -rm -f $(srcdir)/stamp-1 $(srcdir)/version-client.texi
.dvi.ps:
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
$(DVIPS) -o $@ $<
uninstall-dvi-am:
@$(NORMAL_UNINSTALL)
@list='$(DVIS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(dvidir)/$$f'"; \
rm -f "$(DESTDIR)$(dvidir)/$$f"; \
done
uninstall-html-am:
@$(NORMAL_UNINSTALL)
@list='$(HTMLS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -rf '$(DESTDIR)$(htmldir)/$$f'"; \
rm -rf "$(DESTDIR)$(htmldir)/$$f"; \
done
uninstall-info-am:
@$(PRE_UNINSTALL)
@if test -d '$(DESTDIR)$(infodir)' && \
(install-info --version && \
install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
list='$(INFO_DEPS)'; \
for file in $$list; do \
relfile=`echo "$$file" | sed 's|^.*/||'`; \
echo " install-info --info-dir='$(DESTDIR)$(infodir)' --remove '$(DESTDIR)$(infodir)/$$relfile'"; \
install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$$relfile"; \
done; \
else :; fi
@$(NORMAL_UNINSTALL)
@list='$(INFO_DEPS)'; \
for file in $$list; do \
relfile=`echo "$$file" | sed 's|^.*/||'`; \
relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \
(if test -d "$(DESTDIR)$(infodir)" && cd "$(DESTDIR)$(infodir)"; then \
echo " cd '$(DESTDIR)$(infodir)' && rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]"; \
rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \
else :; fi); \
done
uninstall-pdf-am:
@$(NORMAL_UNINSTALL)
@list='$(PDFS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(pdfdir)/$$f'"; \
rm -f "$(DESTDIR)$(pdfdir)/$$f"; \
done
uninstall-ps-am:
@$(NORMAL_UNINSTALL)
@list='$(PSS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(psdir)/$$f'"; \
rm -f "$(DESTDIR)$(psdir)/$$f"; \
done
dist-info: $(INFO_DEPS)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
list='$(INFO_DEPS)'; \
for base in $$list; do \
case $$base in \
$(srcdir)/*) base=`echo "$$base" | sed "s|^$$srcdirstrip/||"`;; \
esac; \
if test -f $$base; then d=.; else d=$(srcdir); fi; \
base_i=`echo "$$base" | sed 's|\.info$$||;s|$$|.i|'`; \
for file in $$d/$$base $$d/$$base-[0-9] $$d/$$base-[0-9][0-9] $$d/$$base_i[0-9] $$d/$$base_i[0-9][0-9]; do \
if test -f $$file; then \
relfile=`expr "$$file" : "$$d/\(.*\)"`; \
test -f $(distdir)/$$relfile || \
cp -p $$file $(distdir)/$$relfile; \
else :; fi; \
done; \
done
mostlyclean-aminfo:
-rm -rf cvs.aux cvs.cp cvs.cps cvs.fn cvs.fns cvs.ky cvs.kys cvs.log cvs.pg \
cvs.pgs cvs.tmp cvs.toc cvs.tp cvs.tps cvs.vr cvs.vrs \
cvs.dvi cvs.pdf cvs.ps cvs.html cvsclient.aux cvsclient.cp \
cvsclient.cps cvsclient.fn cvsclient.fns cvsclient.ky \
cvsclient.kys cvsclient.log cvsclient.pg cvsclient.pgs \
cvsclient.tmp cvsclient.toc cvsclient.tp cvsclient.tps \
cvsclient.vr cvsclient.vrs cvsclient.dvi cvsclient.pdf \
cvsclient.ps cvsclient.html
maintainer-clean-aminfo:
@list='$(INFO_DEPS)'; for i in $$list; do \
i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \
echo " rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \
rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \
done
install-man1: $(man1_MANS) $(man_MANS)
@$(NORMAL_INSTALL)
test -z "$(man1dir)" || $(MKDIR_P) "$(DESTDIR)$(man1dir)"
@list='$(man1_MANS) $(dist_man1_MANS) $(nodist_man1_MANS)'; \
l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
for i in $$l2; do \
case "$$i" in \
*.1*) list="$$list $$i" ;; \
esac; \
done; \
for i in $$list; do \
if test -f $(srcdir)/$$i; then file=$(srcdir)/$$i; \
else file=$$i; fi; \
ext=`echo $$i | sed -e 's/^.*\\.//'`; \
case "$$ext" in \
1*) ;; \
*) ext='1' ;; \
esac; \
inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
inst=`echo $$inst | sed -e 's/^.*\///'`; \
inst=`echo $$inst | sed '$(transform)'`.$$ext; \
echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man1dir)/$$inst'"; \
$(INSTALL_DATA) "$$file" "$(DESTDIR)$(man1dir)/$$inst"; \
done
uninstall-man1:
@$(NORMAL_UNINSTALL)
@list='$(man1_MANS) $(dist_man1_MANS) $(nodist_man1_MANS)'; \
l2='$(man_MANS) $(dist_man_MANS) $(nodist_man_MANS)'; \
for i in $$l2; do \
case "$$i" in \
*.1*) list="$$list $$i" ;; \
esac; \
done; \
for i in $$list; do \
ext=`echo $$i | sed -e 's/^.*\\.//'`; \
case "$$ext" in \
1*) ;; \
*) ext='1' ;; \
esac; \
inst=`echo $$i | sed -e 's/\\.[0-9a-z]*$$//'`; \
inst=`echo $$inst | sed -e 's/^.*\///'`; \
inst=`echo $$inst | sed '$(transform)'`.$$ext; \
echo " rm -f '$(DESTDIR)$(man1dir)/$$inst'"; \
rm -f "$(DESTDIR)$(man1dir)/$$inst"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$(top_distdir)" distdir="$(distdir)" \
dist-info
check-am: all-am
check: check-am
all-am: Makefile $(INFO_DEPS) $(MANS)
installdirs:
for dir in "$(DESTDIR)$(infodir)" "$(DESTDIR)$(man1dir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
-test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
clean-generic:
-test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
-test -z "$(MAINTAINERCLEANFILES)" || rm -f $(MAINTAINERCLEANFILES)
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi-am: $(DVIS)
html: html-am
html-am: $(HTMLS)
info: info-am
info-am: $(INFO_DEPS)
install-data-am: install-info-am install-man
install-dvi: install-dvi-am
install-dvi-am: $(DVIS)
@$(NORMAL_INSTALL)
test -z "$(dvidir)" || $(MKDIR_P) "$(DESTDIR)$(dvidir)"
@list='$(DVIS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(dvidir)/$$f'"; \
$(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(dvidir)/$$f"; \
done
install-exec-am:
install-html: install-html-am
install-html-am: $(HTMLS)
@$(NORMAL_INSTALL)
test -z "$(htmldir)" || $(MKDIR_P) "$(DESTDIR)$(htmldir)"
@list='$(HTMLS)'; for p in $$list; do \
if test -f "$$p" || test -d "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
if test -d "$$d$$p"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)/$$f'"; \
$(MKDIR_P) "$(DESTDIR)$(htmldir)/$$f" || exit 1; \
echo " $(INSTALL_DATA) '$$d$$p'/* '$(DESTDIR)$(htmldir)/$$f'"; \
$(INSTALL_DATA) "$$d$$p"/* "$(DESTDIR)$(htmldir)/$$f"; \
else \
echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(htmldir)/$$f'"; \
$(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(htmldir)/$$f"; \
fi; \
done
install-info: install-info-am
install-info-am: $(INFO_DEPS)
@$(NORMAL_INSTALL)
test -z "$(infodir)" || $(MKDIR_P) "$(DESTDIR)$(infodir)"
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
list='$(INFO_DEPS)'; \
for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
esac; \
if test -f $$file; then d=.; else d=$(srcdir); fi; \
file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \
for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \
$$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \
if test -f $$ifile; then \
relfile=`echo "$$ifile" | sed 's|^.*/||'`; \
echo " $(INSTALL_DATA) '$$ifile' '$(DESTDIR)$(infodir)/$$relfile'"; \
$(INSTALL_DATA) "$$ifile" "$(DESTDIR)$(infodir)/$$relfile"; \
else : ; fi; \
done; \
done
@$(POST_INSTALL)
@if (install-info --version && \
install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
list='$(INFO_DEPS)'; \
for file in $$list; do \
relfile=`echo "$$file" | sed 's|^.*/||'`; \
echo " install-info --info-dir='$(DESTDIR)$(infodir)' '$(DESTDIR)$(infodir)/$$relfile'";\
install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$$relfile" || :;\
done; \
else : ; fi
install-man: install-man1
install-pdf: install-pdf-am
install-pdf-am: $(PDFS)
@$(NORMAL_INSTALL)
test -z "$(pdfdir)" || $(MKDIR_P) "$(DESTDIR)$(pdfdir)"
@list='$(PDFS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(pdfdir)/$$f'"; \
$(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(pdfdir)/$$f"; \
done
install-ps: install-ps-am
install-ps-am: $(PSS)
@$(NORMAL_INSTALL)
test -z "$(psdir)" || $(MKDIR_P) "$(DESTDIR)$(psdir)"
@list='$(PSS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(INSTALL_DATA) '$$d$$p' '$(DESTDIR)$(psdir)/$$f'"; \
$(INSTALL_DATA) "$$d$$p" "$(DESTDIR)$(psdir)/$$f"; \
done
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-1 \
maintainer-clean-aminfo maintainer-clean-generic \
maintainer-clean-vti
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-1 mostlyclean-aminfo mostlyclean-generic \
mostlyclean-vti
pdf: pdf-am
pdf-am: $(PDFS)
ps: ps-am
ps-am: $(PSS)
uninstall-am: uninstall-dvi-am uninstall-html-am uninstall-info-am \
uninstall-man uninstall-pdf-am uninstall-ps-am
uninstall-man: uninstall-man1
.MAKE: install-am install-strip
.PHONY: all all-am check check-am clean clean-generic dist-info \
distclean distclean-generic distdir dvi dvi-am html html-am \
info info-am install install-am install-data install-data-am \
install-dvi install-dvi-am install-exec install-exec-am \
install-html install-html-am install-info install-info-am \
install-man install-man1 install-pdf install-pdf-am install-ps \
install-ps-am install-strip installcheck installcheck-am \
installdirs maintainer-clean maintainer-clean-1 \
maintainer-clean-aminfo maintainer-clean-generic \
maintainer-clean-vti mostlyclean mostlyclean-1 \
mostlyclean-aminfo mostlyclean-generic mostlyclean-vti pdf \
pdf-am ps ps-am uninstall uninstall-am uninstall-dvi-am \
uninstall-html-am uninstall-info-am uninstall-man \
uninstall-man1 uninstall-pdf-am uninstall-ps-am
doc: info pdf
.PHONY: doc
txt: $(TXTS)
.PHONY: txt
dvi: cvs.dvi cvsclient.dvi
.PHONY: dvi
# FIXME-AUTOMAKE:
# For some reason if I remove version.texi, it doesn't get built automatically.
# This needs to be fixed in automake.
cvs.txt: cvs.texinfo $(srcdir)/version.texi
cvsclient.txt: cvsclient.texi $(srcdir)/version-client.texi
# The cvs-paper.pdf target needs to be very specific so that the other PDFs get
# generated correctly. If a more generic .ps.pdf implicit target is defined,
# and cvs.ps is made before cvs.pdf, then cvs.pdf can be generated from the
# .ps.pdf target and the PS source, which contains less information (hyperlinks
# and such) than the usual texinfo source.
#
# It is possible that an implicit .ms.ps target could be safely defined. I
# don't recall looking into it.
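# For reference, the kind of generic implicit rule being warned against would
# look roughly like the following sketch (deliberately not defined here):
#	.ps.pdf:
#		ps2pdf $< $@
# With such a rule present, make may satisfy cvs.pdf from cvs.ps rather than
# from the Texinfo source, and the result then lacks the hyperlinks.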
cvs-paper.ps: cvs-paper.ms
$(ROFF) -t -p -ms -Tps $(srcdir)/cvs-paper.ms >cvs-paper.ps-t
cp cvs-paper.ps-t $@
-@rm -f cvs-paper.ps-t
# This rule introduces some redundancy, but `make distcheck' requires that
# nothing in $(srcdir) be rebuilt, and this will always be rebuilt when it
# is dependent on cvs-paper.ps and cvs-paper.ps isn't distributed.
$(srcdir)/cvs-paper.pdf: cvs-paper.ms
$(ROFF) -t -p -ms -Tps $(srcdir)/cvs-paper.ms >cvs-paper.ps-t
ps2pdf cvs-paper.ps-t cvs-paper.pdf-t
cp cvs-paper.pdf-t $@
-@rm -f cvs-paper.pdf-t cvs-paper.ps-t
# Targets to build a man page from cvs.texinfo.
$(srcdir)/cvs.1: @MAINTAINER_MODE_TRUE@ mkman cvs.man.header cvs.texinfo cvs.man.footer
$(PERL) ./mkman $(srcdir)/cvs.man.header $(srcdir)/cvs.texinfo \
$(srcdir)/cvs.man.footer >cvs.tmp
cp cvs.tmp $(srcdir)/cvs.1
-@rm -f cvs.tmp
.texinfo.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
.txi.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
.texi.txt:
$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
--no-headers -o $@ `test -f '$<' || echo '$(srcdir)/'`$<
# for backwards compatibility with the old makefiles
realclean: maintainer-clean
.PHONY: realclean
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:


@ -1,276 +0,0 @@
It would be nice if the RCS file format (which is implemented by a
great many tools, both free and non-free, both by calling GNU RCS and
by reimplementing access to RCS files) were documented in some
standard separate from any one tool. But as far as I know no such
standard exists. Hence this file.
The place to start is the rcsfile.5 manpage in the GNU RCS 5.7
distribution. Then look at the diff at the end of this file (which
contains a few fixes and clarifications to that manpage).
If you are interested in MKS RCS, src/ci.c in GNU RCS 5.7 has a
comment about their date format. However, as far as we know there
isn't really any document describing MKS's changes to the RCS file
format.
The rcsfile.5 manpage does not document what goes in the "text" field
for each revision. The answer is that the head revision contains the
contents of that revision and every other revision contains a bunch of
edits to produce that revision ("a" and "d" lines). The GNU diff
manual (the version I looked at was for GNU diff 2.4) documents this
format somewhat (as the "RCS output format"), but the presentation is
a bit confusing as it is all tangled up with the documentation of
several other output formats. If you just want some source code to
look at, the part of CVS which applies these is RCS_deltas in
src/rcs.c.
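As a rough, invented illustration of that edit format: the deltatext section
for a non-head trunk revision such as 1.1 holds an edit script rather than
full text, where a line `dN M' deletes M lines starting at line N of the
next newer revision and `aN M' appends the M lines that follow it:
1.1
log
@Initial revision
@
text
@d1 1
a1 1
this is how the first line read back in revision 1.1
@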
The rcsfile.5 documentation only _very_ briefly touches on the order
of the revisions. The order _is_ important and CVS relies on it.
Here is an example of what I was able to find, based on the join3
sanity.sh testcase (and the behavior I am documenting here seems to be
the same for RCS 5.7 and CVS 1.9.27):
1.1 -----------------> 1.2
  \---> 1.1.2.1          \---> 1.2.2.1
Here is how this shows up in the RCS file (omitting irrelevant parts):
admin: head 1.2;
deltas:
1.2 branches 1.2.2.1; next 1.1;
1.1 branches 1.1.2.1; next;
1.1.2.1 branches; next;
1.2.2.1 branches; next;
deltatexts:
1.2
1.2.2.1
1.1
1.1.2.1
Yes, the order seems to differ between the deltas and the deltatexts.
I have no idea how much of this should actually be considered part of
the RCS file format, and how much programs reading it should expect to
encounter any order.
The rcsfile.5 grammar shows the {num} after "next" as optional; if it
is omitted then there is no next delta node (for example 1.1 or the
head of a branch will typically have no next).
There is one case where CVS uses CVS-specific, non-compatible changes
to the RCS file format, and this is magic branches. See cvs.texinfo
for more information on them. CVS also sets the RCS state to "dead"
to indicate that a file does not exist in a given revision (this is
stored just as any other RCS state is).
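For instance, a delta node for a revision in which the file has been removed
is an ordinary node whose state field happens to be "dead" (revision number,
date, and author are invented here):
1.3
date 2007.05.07.12.00.00; author jrandom; state dead;
branches;
next 1.2;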
The RCS file format allows quite a variety of extensions to be added
in a compatible manner by use of the "newphrase" feature documented in
rcsfile.5. We won't try to document extensions not used by CVS in any
detail, but we will briefly list them. Each occurrence of a newphrase
begins with an identifier, which is what we list here. Future
designers of extensions are strongly encouraged to pick
non-conflicting identifiers. Note that newphrase occurs several
places in the RCS grammar, and a given extension may not be legal in
all locations. However, it seems better to reserve a particular
identifier for all locations, to avoid confusion and complicated
rules.
Identifier Used by
---------- -------
namespace RCS library done at Silicon Graphics Inc. (SGI) in 1996
(a modified RCS 5.7--not sure it has any other name).
dead A set of RCS patches developed by Rich Pixley at
Cygnus about 1992. These were for CVS, and predated
the current CVS death support, which uses a state "dead"
rather than a "dead" newphrase.
CVS does use newphrases to implement the `PreservePermissions'
extension introduced in CVS 1.9.26. The following new keywords are
defined when PreservePermissions=yes:
owner
group
permissions
special
symlink
hardlinks
The contents of the `owner' and `group' field should be a numeric uid
and a numeric gid, respectively, representing the user and group who
own the file. The `permissions' field contains an octal integer,
representing the permissions that should be applied to the file. The
`special' field contains two words; the first must be either `block'
or `character', and the second is the file's device number. The
`symlink' field should be present only in files which are symbolic
links to other files, and absent on all regular files. The
`hardlinks' field contains a list of filenames to which the current
file is linked, in alphabetical order. Because filenames often contain
characters special to RCS, like `.' and sometimes even contain spaces
or eight-bit characters, the filenames in the hardlinks field will
usually be enclosed in RCS strings. For example:
hardlinks README @install.txt@ @Installation Notes@;
The hardlinks field should always include the name of the current
file. That is, in the repository file README,v, any hardlinks fields
in the delta nodes should include `README'; CVS will not operate
properly if this is not done.
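Putting these together, the extra newphrases in one delta node of README,v
might read as follows (the numeric ids and the second link name are invented
for illustration):
owner 1000;
group 1000;
permissions 644;
hardlinks README @readme.txt@;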
The rules regarding keyword expansion are not documented along with
the rest of the RCS file format; they are documented in the co(1)
manpage in the RCS 5.7 distribution. See also the "Keyword
substitution" chapter of cvs.texinfo. The co(1) manpage refers to
special behavior if the log prefix for the $Log keyword is /* or (*.
RCS 5.7 produces a warning whenever it behaves that way, and current
versions of CVS do not handle this case in a special way (CVS 1.9 and
earlier invoke RCS to perform keyword expansion).
Note that if the "expand" keyword is omitted from the RCS file, the
default is "kv".
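For example, an admin section that requests the default behavior explicitly
would carry the line `expand @kv@;', and with "kv" a keyword such as
$Revision$ in the working file is expanded to something like
$Revision: 1.2 $ (the revision number here is purely illustrative).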
Note that the "comment {string};" syntax from rcsfile.5 specifies a
comment leader, which affects expansion of the $Log keyword for old
versions of RCS. The comment leader is not used by RCS 5.7 or current
versions of CVS.
Both RCS 5.7 and current versions of CVS handle the $Log keyword in a
different way if the log message starts with "checked in with -k by ".
I don't think this behavior is documented anywhere.
Here is a clarification regarding characters versus bytes in certain
character sets like JIS and Big5:
The RCS file format, as described in the rcsfile(5) man page, is
actually byte-oriented, not character-oriented, despite hints to
the contrary in the man page. This distinction is important for
multibyte characters. For example, if a multibyte character
contains a `@' byte, the `@' must be doubled within strings in RCS
files, since RCS uses `@' bytes as escapes.
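The doubling rule itself is easiest to see with a plain ASCII `@': a log
message containing one literal `@' is stored inside the RCS string with that
byte doubled, for example (the address is invented):
log
@note the new contact address, admin@@example.org
@
The multibyte problem is this same doubling applied to an `@' byte that
happens to fall in the middle of a character.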
This point is not an issue for encodings like ISO 8859, which do
not have multibyte characters. Nor is it an issue for encodings
like UTF-8 and EUC-JIS, which never use ASCII bytes within a
multibyte character. It is an issue only for multibyte encodings
like JIS and BIG5, which _do_ usurp ASCII bytes.
If `@' doubling occurs within a multibyte char, the resulting RCS
file is not a properly encoded text file. Instead, it is a byte
stream that does not use a consistent character encoding that can
be understood by the usual text tools, since doubling `@' messes
up the encoding. This point affects only programs that examine
the RCS files -- it doesn't affect the external RCS interface, as
the RCS commands always give you the properly encoded text files
and logs (assuming that you always check in properly encoded
text).
CVS 1.10 (and earlier) probably has some bugs in this area on
systems where a C "char" is signed and where the data contains
bytes with the eighth bit set.
One common concern about the RCS file format is the fact that to get
the head of a branch, one must apply deltas from the head of the trunk
to the branchpoint, and then from the branchpoint to the head of the
branch. While more detailed analyses might be worth doing, we will
note:
* The performance bottleneck for CVS generally is figuring out which
files to operate on and that sort of thing, not applying deltas.
* Here is one quick test (probably not a very good test; a better test
would use a normally sized file (say 50-200K) instead of a small one):
I just did a quick test with a small file (on a Sun Ultra 1/170E
running Solaris 5.5.1), with 1000 revisions on the main branch and
1000 revisions on branch that forked at the root (i.e., RCS revisions
1.1, 1.2, ..., 1.1000, and branch revisions 1.1.1.1, 1.1.1.2, ...,
1.1.1.1000). It took about 0.15 seconds real time to check in the
first revision, and about 0.6 seconds to check in and 0.3 seconds to
retrieve revision 1.1.1.1000 (the worst case).
* Any attempt to "fix" this problem should be careful not to interfere
with other features, such as lightweight creation of branches
(particularly using CVS magic branches).
Diff follows:
(Note that in the following diff the old value for the Id keyword was:
Id: rcsfile.5in,v 5.6 1995/06/05 08:28:35 eggert Exp
and the new one was:
Id: rcsfile.5in,v 5.7 1996/12/09 17:31:44 eggert Exp
but since this file itself might be subject to keyword expansion I
haven't included a diff for that fact).
===================================================================
RCS file: RCS/rcsfile.5in,v
retrieving revision 5.6
retrieving revision 5.7
diff -u -r5.6 -r5.7
--- rcsfile.5in 1995/06/05 08:28:35 5.6
+++ rcsfile.5in 1996/12/09 17:31:44 5.7
@@ -85,7 +85,8 @@
.LP
\f2sym\fP ::= {\f2digit\fP}* \f2idchar\fP {\f2idchar\fP | \f2digit\fP}*
.LP
-\f2idchar\fP ::= any visible graphic character except \f2special\fP
+\f2idchar\fP ::= any visible graphic character,
+ except \f2digit\fP or \f2special\fP
.LP
\f2special\fP ::= \f3$\fP | \f3,\fP | \f3.\fP | \f3:\fP | \f3;\fP | \f3@\fP
.LP
@@ -119,12 +120,23 @@
the minute (00\-59),
and
.I ss
-the second (00\-60).
+the second (00\-59).
+If
.I Y
-contains just the last two digits of the year
-for years from 1900 through 1999,
-and all the digits of years thereafter.
-Dates use the Gregorian calendar; times use UTC.
+contains exactly two digits,
+they are the last two digits of a year from 1900 through 1999;
+otherwise,
+.I Y
+contains all the digits of the year.
+Dates use the Gregorian calendar.
+Times use UTC, except that for portability's sake leap seconds are not allowed;
+implementations that support leap seconds should output
+.B 59
+for
+.I ss
+during an inserted leap second, and should accept
+.B 59
+for a deleted leap second.
.PP
The
.I newphrase
@@ -144,16 +156,23 @@
field in order of decreasing numbers.
The
.B head
-field in the
-.I admin
-node points to the head of that sequence (i.e., contains
+field points to the head of that sequence (i.e., contains
the highest pair).
The
.B branch
-node in the admin node indicates the default
+field indicates the default
branch (or revision) for most \*r operations.
If empty, the default
branch is the highest branch on the trunk.
+The
+.B symbols
+field associates symbolic names with revisions.
+For example, if the file contains
+.B "symbols rr:1.1;"
+then
+.B rr
+is a name for revision
+.BR 1.1 .
.PP
All
.I delta

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -1,58 +0,0 @@
.SH "AUTHORS"
.TP
Dick Grune
Original author of the
.B cvs
shell script version posted to
.B comp.sources.unix
in the volume 6 release of December, 1986.
Credited with much of the
.B cvs
conflict resolution algorithms.
.TP
Brian Berliner
Coder and designer of the
.B cvs
program itself in April, 1989, based on the original work done by Dick.
.TP
Jeff Polk
Helped Brian with the design of the
.B cvs
module and vendor branch support and author of the
.BR checkin ( 1 )
shell script (the ancestor of \fBcvs import\fP).
.TP
Larry Jones, Derek R. Price, and Mark D. Baushke
Have helped maintain
.B cvs
for many years.
.TP
And many others too numerous to mention here.
.SH "SEE ALSO"
The most comprehensive manual for CVS is
Version Management with CVS by Per Cederqvist et al. Depending on
your system, you may be able to get it with the
.B info CVS
command or it may be available as cvs.pdf (Portable Document Format),
cvs.ps (PostScript), cvs.texinfo (Texinfo source), or cvs.html.
.SP
For CVS updates, more information on documentation, software related
to CVS, development of CVS, and more, see:
.in +1i
.SP
.PD 0
.IP "" 4
.B http://cvs.nongnu.org
.in -1i
.SP
.BR ci ( 1 ),
.BR co ( 1 ),
.BR cvs ( 5 ),
.BR cvsbug ( 8 ),
.BR diff ( 1 ),
.BR grep ( 1 ),
.BR patch ( 1 ),
.BR rcs ( 1 ),
.BR rcsdiff ( 1 ),
.BR rcsmerge ( 1 ),
.BR rlog ( 1 ).


@ -1,61 +0,0 @@
.\" This is the man page for CVS. It is auto-generated from the
.\" cvs.man.header, cvs.texinfo, & cvs.man.footer files. Please make changes
.\" there. A full copyright & license notice may also be found in cvs.texinfo.
.\"
.\" Man page autogeneration, including this header file, is
.\" Copyright 2004-2005 The Free Software Foundation, Inc.,
.\" Derek R. Price, & Ximbiot <http://ximbiot.com>.
.\"
.\" This documentation is free software; you can redistribute it and/or modify
.\" it under the terms of the GNU General Public License as published by
.\" the Free Software Foundation; either version 2, or (at your option)
.\" any later version.
.\"
.\" This documentation is distributed in the hope that it will be useful,
.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
.\" GNU General Public License for more details.
.\"
.\" You should have received a copy of the GNU General Public License
.\" along with this documentation; if not, write to the Free Software
.\" Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
.de Id
.ds Rv \\$3
.ds Dt \\$4
..
.TH CVS 1 "\*(Dt"
.\" Full space in nroff; half space in troff
.de SP
.if n .sp
.if t .sp .5
..
.\" quoted command
.de `
.RB ` "\|\\$1\|" '\\$2
..
.SH "NAME"
cvs \- Concurrent Versions System
.SH "SYNOPSIS"
.TP
\fBcvs\fP [ \fIcvs_options\fP ]
.I cvs_command
[
.I command_options
] [
.I command_args
]
.SH "NOTE"
.IX "revision control system" "\fLcvs\fR"
.IX cvs "" "\fLcvs\fP \- concurrent versions system"
.IX "concurrent versions system \- \fLcvs\fP"
.IX "release control system" "cvs command" "" "\fLcvs\fP \- concurrent versions system"
.IX "source control system" "cvs command" "" "\fLcvs\fP \- concurrent versions system"
.IX revisions "cvs command" "" "\fLcvs\fP \- source control"
This manpage is a summary of some of the features of
\fBcvs\fP. It is auto-generated from an appendix of the CVS manual.
For more in-depth documentation, please consult the
Cederqvist manual (via the
.B info CVS
command or otherwise,
as described in the SEE ALSO section of this manpage). Cross-references
in this man page refer to nodes in the same manual.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -1,201 +0,0 @@
#!/bin/sh
# Get modification time of a file or directory and pretty-print it.
scriptversion=2005-06-29.22
# Copyright (C) 1995, 1996, 1997, 2003, 2004, 2005 Free Software
# Foundation, Inc.
# written by Ulrich Drepper <drepper@gnu.ai.mit.edu>, June 1995
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
case $1 in
'')
echo "$0: No file. Try \`$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: mdate-sh [--help] [--version] FILE
Pretty-print the modification time of FILE.
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "mdate-sh $scriptversion"
exit $?
;;
esac
# Prevent date giving response in another language.
LANG=C
export LANG
LC_ALL=C
export LC_ALL
LC_TIME=C
export LC_TIME
# GNU ls changes its time format in response to the TIME_STYLE
# variable. Since we cannot assume `unset' works, revert this
# variable to its documented default.
if test "${TIME_STYLE+set}" = set; then
TIME_STYLE=posix-long-iso
export TIME_STYLE
fi
save_arg1=$1
# Find out how to get the extended ls output of a file or directory.
if ls -L /dev/null 1>/dev/null 2>&1; then
ls_command='ls -L -l -d'
else
ls_command='ls -l -d'
fi
# A `ls -l' line looks as follows on OS/2.
# drwxrwx--- 0 Aug 11 2001 foo
# This differs from Unix, which adds ownership information.
# drwxrwx--- 2 root root 4096 Aug 11 2001 foo
#
# To find the date, we split the line on spaces and iterate on words
# until we find a month. This cannot work with files whose owner is a
# user named `Jan', or `Feb', etc. However, it's unlikely that `/'
# will be owned by a user whose name is a month. So we first look at
# the extended ls output of the root directory to decide how many
# words should be skipped to get the date.
# On HPUX /bin/sh, "set" interprets "-rw-r--r--" as options, so the "x" below.
set x`ls -l -d /`
# Find which argument is the month.
month=
command=
until test $month
do
shift
# Add another shift to the command.
command="$command shift;"
case $1 in
Jan) month=January; nummonth=1;;
Feb) month=February; nummonth=2;;
Mar) month=March; nummonth=3;;
Apr) month=April; nummonth=4;;
May) month=May; nummonth=5;;
Jun) month=June; nummonth=6;;
Jul) month=July; nummonth=7;;
Aug) month=August; nummonth=8;;
Sep) month=September; nummonth=9;;
Oct) month=October; nummonth=10;;
Nov) month=November; nummonth=11;;
Dec) month=December; nummonth=12;;
esac
done
# Get the extended ls output of the file or directory.
set dummy x`eval "$ls_command \"\$save_arg1\""`
# Remove all preceding arguments
eval $command
# Because of the dummy argument above, month is in $2.
#
# On a POSIX system, we should have
#
# $# = 5
# $1 = file size
# $2 = month
# $3 = day
# $4 = year or time
# $5 = filename
#
# On Darwin 7.7.0 and 7.6.0, we have
#
# $# = 4
# $1 = day
# $2 = month
# $3 = year or time
# $4 = filename
# Get the month.
case $2 in
Jan) month=January; nummonth=1;;
Feb) month=February; nummonth=2;;
Mar) month=March; nummonth=3;;
Apr) month=April; nummonth=4;;
May) month=May; nummonth=5;;
Jun) month=June; nummonth=6;;
Jul) month=July; nummonth=7;;
Aug) month=August; nummonth=8;;
Sep) month=September; nummonth=9;;
Oct) month=October; nummonth=10;;
Nov) month=November; nummonth=11;;
Dec) month=December; nummonth=12;;
esac
case $3 in
???*) day=$1;;
*) day=$3; shift;;
esac
# Here we have to deal with the problem that the ls output gives either
# the time of day or the year.
case $3 in
*:*) set `date`; eval year=\$$#
case $2 in
Jan) nummonthtod=1;;
Feb) nummonthtod=2;;
Mar) nummonthtod=3;;
Apr) nummonthtod=4;;
May) nummonthtod=5;;
Jun) nummonthtod=6;;
Jul) nummonthtod=7;;
Aug) nummonthtod=8;;
Sep) nummonthtod=9;;
Oct) nummonthtod=10;;
Nov) nummonthtod=11;;
Dec) nummonthtod=12;;
esac
# For the first six months of the year the time notation can also
# be used for files modified in the last year.
if (expr $nummonth \> $nummonthtod) > /dev/null;
then
year=`expr $year - 1`
fi;;
*) year=$3;;
esac
# The result.
echo $day $month $year
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

View File

@ -1,372 +0,0 @@
#! @PERL@
#
# Generate a man page from sections of a Texinfo manual.
#
# Copyright 2004, 2006
# The Free Software Foundation,
# Derek R. Price,
# & Ximbiot <http://ximbiot.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# Need Perl 5.005 or greater for re 'eval'.
require 5.005;
# The usual.
use strict;
use IO::File;
###
### GLOBALS
###
my $texi_num = 0; # Keep track of how many texinfo files have been encountered.
my @parent; # This needs to be global to be used inside of a regex later.
my $nk; # Ditto.
my $ret; # The RE match Type, used in debug prints.
my $debug = 0; # Debug mode?
###
### FUNCTIONS
###
sub debug_print
{
print @_ if $debug;
}
sub keyword_mode
{
my ($keyword, $file) = @_;
return "\\fR"
if $keyword =~ /^(|r|t)$/;
return "\\fB"
if $keyword =~ /^(strong|sc|code|file|samp)$/;
return "\\fI"
if $keyword =~ /^(emph|var|dfn)$/;
die "no handler for keyword \`$keyword', found at line $. of file \`$file'\n";
}
# Return replacement for \@$keyword{$content}.
sub do_keyword
{
my ($file, $parent, $keyword, $content) = @_;
return "`$content\\(aq in the CVS manual"
if $keyword eq "ref";
return "see node `$content\\(aq in the CVS manual"
if $keyword =~ /^p?xref$/;
return "\\fP\\fP$content"
if $keyword =~ /^splitrcskeyword$/;
my $endmode = keyword_mode $parent;
my $startmode = keyword_mode $keyword, $file;
return "$startmode$content$endmode";
}
###
### MAIN
###
for my $file (@ARGV)
{
my $fh = new IO::File "< $file"
or die "Failed to open file \`$file': $!";
if ($file !~ /\.(texinfo|texi|txi)$/)
{
print stderr "Passing \`$file' through unprocessed.\n";
# Just cat any file that doesn't look like a Texinfo source.
while (my $line = $fh->getline)
{
print $line;
}
next;
}
print stderr "Processing \`$file'.\n";
$texi_num++;
my $gotone = 0;
my $inblank = 0;
my $indent = 0;
my $inexample = 0;
my $inmenu = 0;
my $intable = 0;
my $last_header = "";
my @table_headers;
my @table_footers;
my $table_header = "";
my $table_footer = "";
my $last;
while ($_ = $fh->getline)
{
if (!$gotone && /^\@c ----- START MAN $texi_num -----$/)
{
$gotone = 1;
next;
}
# Skip ahead until our man section.
next unless $gotone;
# If we find the end tag we are done.
last if /^\@c ----- END MAN $texi_num -----$/;
# Need to do this everywhere. i.e., before we print example
# lines, since literal back slashes can appear there too.
s/\\/\\\\/g;
s/^\./\\&./;
s/([\s])\./$1\\&./;
s/'/\\(aq/g;
s/`/\\`/g;
s/(?<!-)---(?!-)/\\(em/g;
s/\@bullet({}|\b)/\\(bu/g;
s/\@dots({}|\b)/\\&.../g;
# Examples should be indented and otherwise untouched
if (/^\@example$/)
{
$indent += 2;
print qq{.SP\n.PD 0\n};
$inexample = 1;
next;
}
if ($inexample)
{
if (/^\@end example$/)
{
$indent -= 2;
print qq{\n.PD\n.IP "" $indent\n};
$inexample = 0;
next;
}
if (/^[ ]*$/)
{
print ".SP\n";
next;
}
# Preserve the newline.
$_ = qq{.IP "" $indent\n} . $_;
}
# Compress blank lines into a single line. This and its
# corresponding skip purposely bracket the @menu and comment
# removal so that blanks on either side of a menu are
# compressed after the menu is removed.
if (/^[ ]*$/)
{
$inblank = 1;
next;
}
# Not used
if (/^\@(ignore|menu)$/)
{
$inmenu++;
next;
}
# Delete menu contents.
if ($inmenu)
{
next unless /^\@end (ignore|menu)$/;
$inmenu--;
next;
}
# Remove comments
next if /^\@c(omment)?\b/;
# Ignore includes.
next if /^\@include\b/;
# It's okay to ignore this keyword - we're not using any
# first-line indent commands at all.
next if s/^\@noindent\s*$//;
# @need is only significant in printed manuals.
next if s/^\@need\s+.*$//;
# If we didn't hit the previous check and $inblank is set, then
# we just finished with some number of blanks. Print the man
# page blank symbol before continuing processing of this line.
if ($inblank)
{
print ".SP\n";
$inblank = 0;
}
# Chapter headers.
$last_header = $1 if s/^\@node\s+(.*)$/.SH "$1"/;
if (/^\@appendix\w*\s+(.*)$/)
{
my $content = $1;
$content =~ s/^$last_header(\\\(em|\s+)?//;
next if $content =~ /^\s*$/;
s/^\@appendix\w*\s+.*$/.SS "$content"/;
}
# Tables are similar to examples, except we need to handle the
# keywords.
if (/^\@(itemize|table)(\s+(.*))?$/)
{
$indent += 2;
push @table_headers, $table_header;
push @table_footers, $table_footer;
my $content = $3;
if (/^\@itemize/)
{
my $bullet = $content;
$table_header = qq{.IP "$bullet" $indent\n};
$table_footer = "";
}
else
{
my $hi = $indent - 2;
$table_header = qq{.IP "" $hi\n};
$table_footer = qq{\n.IP "" $indent};
if ($content)
{
$table_header .= "$content\{";
$table_footer = "\}$table_footer";
}
}
$intable++;
next;
}
if ($intable)
{
if (/^\@end (itemize|table)$/)
{
$table_header = pop @table_headers;
$table_footer = pop @table_footers;
$indent -= 2;
$intable--;
next;
}
s/^\@itemx?(\s+(.*))?$/$table_header$2$table_footer/;
# Fall through so the rest of the table lines are
# processed normally.
}
# Index entries.
s/^\@cindex\s+(.*)$/.IX "$1"/;
$_ = "$last$_" if $last;
undef $last;
# Trap keywords
$nk = qr/
\@(\w+)\{
(?{ debug_print "$ret MATCHED $&\nPUSHING $1\n";
push @parent, $1; }) # Keep track of the last keyword
# keyword we encountered.
((?>
[^{}]|(?<=\@)[{}] # Non-braces...
| # ...or...
(??{ $nk }) # ...nested keywords...
)*) # ...without backtracking.
\}
(?{ debug_print "$ret MATCHED $&\nPOPPING ",
pop (@parent), "\n"; }) # Lose track of the current keyword.
/x;
$ret = "m//";
if (/\@\w+\{(?:[^{}]|(?<=\@)[{}]|(??{ $nk }))*$/)
{
# If there is an opening keyword on this line without a
# close bracket, we need to find the close bracket
# before processing the line. Set $last to append the
# next line in the next pass.
$last = $_;
next;
}
# Okay, the following works somewhat counter-intuitively. $nk
# processes the whole line, so @parent gets loaded properly,
# then, since no closing brackets have been found for the
# outermost matches, the innermost matches match and get
# replaced first.
#
# For example:
#
# Processing the line:
#
# yadda yadda @code{yadda @var{foo} yadda @var{bar} yadda}
#
# Happens something like this:
#
# 1. Ignores "yadda yadda "
# 2. Sees "@code{" and pushes "code" onto @parent.
# 3. Ignores "yadda " (backtracks and ignores "yadda yadda
# @code{yadda "?)
# 4. Sees "@var{" and pushes "var" onto @parent.
# 5. Sees "foo}", pops "var", and realizes that "@var{foo}"
# matches the overall pattern ($nk).
# 6. Replaces "@var{foo}" with the result of:
#
# do_keyword $file, $parent[$#parent], $1, $2;
#
# which would be "\Ifoo\B", in this case, because "var"
# signals a request for italics, or "\I", and "code" is
# still on the stack, which means the previous style was
# bold, or "\B".
#
# Then the while loop restarts and a similar series of events
# replaces "@var{bar}" with "\Ibar\B".
#
# Then the while loop restarts and a similar series of events
# replaces "@code{yadda \Ifoo\B yadda \Ibar\B yadda}" with
# "\Byadda \Ifoo\B yadda \Ibar\B yadda\R".
#
$ret = "s///";
@parent = ("");
while (s/$nk/do_keyword $file, $parent[$#parent], $1, $2/e)
{
# Do nothing except reset our last-replacement
# tracker - the replacement regex above is handling
# everything else.
debug_print "FINAL MATCH $&\n";
@parent = ("");
}
# Finally, unprotect texinfo special characters.
s/\@://g;
s/\@([{}])/$1/g;
# Verify we haven't left commands unprocessed.
die "Unprocessed command at line $. of file \`$file': "
. ($1 ? "$1\n" : "<EOL>\n")
if /^(?>(?:[^\@]|\@\@)*)\@(\w+|.|$)/;
# Unprotect @@.
s/\@\@/\@/g;
# And print whatever's left.
print $_;
}
}


@ -1,4 +0,0 @@
@set UPDATED 7 May 2007
@set UPDATED-MONTH May 2007
@set EDITION 1.11.22.1
@set VERSION 1.11.22.1


@ -1,4 +0,0 @@
@set UPDATED 27 January 2008
@set UPDATED-MONTH January 2008
@set EDITION 1.11.22.1
@set VERSION 1.11.22.1


@ -1,4 +0,0 @@
@set UPDATED 7 May 2007
@set UPDATED-MONTH May 2007
@set EDITION 1.11.22.1
@set VERSION 1.11.22.1


@ -1,4 +0,0 @@
@set UPDATED 27 January 2008
@set UPDATED-MONTH January 2008
@set EDITION 1.11.22.1
@set VERSION 1.11.22.1


@ -1,507 +0,0 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2006-10-14.15
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# `make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
nl='
'
IFS=" "" $nl"
# set DOITPROG to echo to test this script
# Don't use :- since 4.3BSD and earlier shells don't like it.
doit="${DOITPROG-}"
if test -z "$doit"; then
doit_exec=exec
else
doit_exec=$doit
fi
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
mvprog="${MVPROG-mv}"
cpprog="${CPPROG-cp}"
chmodprog="${CHMODPROG-chmod}"
chownprog="${CHOWNPROG-chown}"
chgrpprog="${CHGRPPROG-chgrp}"
stripprog="${STRIPPROG-strip}"
rmprog="${RMPROG-rm}"
mkdirprog="${MKDIRPROG-mkdir}"
posix_glob=
posix_mkdir=
# Desired mode of installed file.
mode=0755
chmodcmd=$chmodprog
chowncmd=
chgrpcmd=
stripcmd=
rmcmd="$rmprog -f"
mvcmd="$mvprog"
src=
dst=
dir_arg=
dstarg=
no_target_directory=
usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
-c (ignored)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-s $stripprog installed files.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
--help display this help and exit.
--version display version info and exit.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG
"
while test $# -ne 0; do
case $1 in
-c) shift
continue;;
-d) dir_arg=true
shift
continue;;
-g) chgrpcmd="$chgrpprog $2"
shift
shift
continue;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
shift
shift
case $mode in
*' '* | *' '* | *'
'* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
continue;;
-o) chowncmd="$chownprog $2"
shift
shift
continue;;
-s) stripcmd=$stripprog
shift
continue;;
-t) dstarg=$2
shift
shift
continue;;
-T) no_target_directory=true
shift
continue;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
done
if test $# -ne 0 && test -z "$dir_arg$dstarg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dstarg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dstarg"
shift # fnord
fi
shift # arg
dstarg=$arg
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call `install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
trap '(exit $?); exit' 1 2 13 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names starting with `-'.
case $src in
-*) src=./$src ;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dstarg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dstarg
# Protect names starting with `-'.
case $dst in
-*) dst=./$dst ;;
esac
# If destination is a directory, append the input filename; won't work
# if double slashes aren't ignored.
if test -d "$dst"; then
if test -n "$no_target_directory"; then
echo "$0: $dstarg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dst=$dstdir/`basename "$src"`
dstdir_status=0
else
# Prefer dirname, but fall back on a substitute if dirname fails.
dstdir=`
(dirname "$dst") 2>/dev/null ||
expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \
X"$dst" : 'X\(//\)[^/]' \| \
X"$dst" : 'X\(//\)$' \| \
X"$dst" : 'X\(/\)' \| . 2>/dev/null ||
echo X"$dst" |
sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{
s//\1/
q
}
/^X\(\/\/\)[^/].*/{
s//\1/
q
}
/^X\(\/\/\)$/{
s//\1/
q
}
/^X\(\/\).*/{
s//\1/
q
}
s/.*/./; q'
`
test -d "$dstdir"
dstdir_status=$?
fi
fi
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# Create intermediate dirs using mode 755 as modified by the umask.
# This is like FreeBSD 'install' as of 1997-10-28.
umask=`umask`
case $stripcmd.$umask in
# Optimize common cases.
*[2367][2367]) mkdir_umask=$umask;;
.*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;;
*[0-7])
mkdir_umask=`expr $umask + 22 \
- $umask % 100 % 40 + $umask % 20 \
- $umask % 10 % 4 + $umask % 2
`;;
*) mkdir_umask=$umask,go-w;;
esac
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
case $umask in
*[123567][0-7][0-7])
# POSIX mkdir -p sets u+wx bits regardless of umask, which
# is incompatible with FreeBSD 'install' when (umask & 300) != 0.
;;
*)
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0
if (umask $mkdir_umask &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writeable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
ls_ld_tmpdir=`ls -ld "$tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/d" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null
fi
trap '' 0;;
esac;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# The umask is ridiculous, or mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix=/ ;;
-*) prefix=./ ;;
*) prefix= ;;
esac
case $posix_glob in
'')
if (set -f) 2>/dev/null; then
posix_glob=true
else
posix_glob=false
fi ;;
esac
oIFS=$IFS
IFS=/
$posix_glob && set -f
set fnord $dstdir
shift
$posix_glob && set +f
IFS=$oIFS
prefixes=
for d
do
test -z "$d" && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask=$mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=$dstdir/_inst.$$_
rmtmp=$dstdir/_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \
&& { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \
&& { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \
&& { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# Now rename the file to the real destination.
{ $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null \
|| {
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
if test -f "$dst"; then
$doit $rmcmd -f "$dst" 2>/dev/null \
|| { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null \
&& { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; }; }\
|| {
echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
else
:
fi
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
} || exit 1
trap '' 0
fi
done
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

File diff suppressed because it is too large.


@ -1,90 +0,0 @@
Thu Sep 15 00:18:26 1994 david d `zoo' zuhn <zoo@monad.armadillo.com>
* system.h: remove a bunch of "extern int " declarations of system
functions (could conflict with vendor header files, and didn't
do anything *too* useful to begin with).
* Makefile.in: update getdate.y message (now has 10 s/r conflicts)
Wed Sep 14 22:12:21 1994 david d `zoo' zuhn <zoo@monad.armadillo.com>
* strerror.c: more complete, from the Cygnus libiberty package
* error.c (strerror): removed, functionality is in strerror.c
* cvs.h: remove duplicate prototype for Reader_Lock
* history.c: printf argument mismatch
(Both fixes thanks to J.T. Conklin (jtc@cygnus.com))
Sat Jul 30 13:50:11 1994 david d `zoo' zuhn (zoo@monad.armadillo.com)
* getopt1.c, getopt.c, getopt.h, getdate.y: latest versions from FSF
Wed Jul 13 22:11:17 1994 david d `zoo' zuhn (zoo@monad.armadillo.com)
* system.h: don't set PATH_MAX to pathconf(), since PATH_MAX is
used to size arrays. (thanks to kingdon@cygnus.com)
* getopt1.c: remove #ifdef __STDC__ around const usages (which
isn't correct and weren't complete)
Wed Apr 20 14:57:16 1994 Ian Lance Taylor (ian@tweedledumb.cygnus.com)
* getopt.h: Prevent multiple inclusion.
Tue Jan 25 17:34:42 1994 david d zuhn (zoo@monad.armadillo.com)
* Makefile.in: make sure that no blank lines are in the $(OBJECTS)
list (from Brad Figg)
Mon Jan 24 12:27:13 1994 david d zuhn (zoo@monad.armadillo.com)
* system.h: remove alloca checks (added to src/cvs.h); revamped
the MAXPATHLEN and PATH_MAX tests (from Brad Figg
<bradf@wv.MENTORG.COM>); handle index,rindex,bcmp,bzero better
(don't redefine if already defined); added S_IWRITE, S_IWGRP,
S_IWOTH definitions (header file reorganization)
* strippath.c: use strchr, not index
* getopt1.c: match prototypes when __STDC__ compiler (lint fixes)
* getdate.c: alloca checks for when using bison
* Makefile.in: added CC and YACC definitions; use YACC not BISON;
better getdate.c tests (also from Brad Figg)
Sat Dec 18 00:55:43 1993 david d zuhn (zoo@monad.armadillo.com)
* Makefile.in (VPATH): don't use $(srcdir), but @srcdir@ instead
* memmove.c: new file, implements memmove in terms of bcopy
* wait.h: include <sys/wait.h> if HAVE_SYS_WAIT_H, not if POSIX
Thu Sep 9 18:02:11 1993 david d `zoo' zuhn (zoo@rtl.cygnus.com)
* system.h: only #undef PATH_MAX if not on an Alpha. The #undef
causes problems with the Alpha C compiler.
Thu Apr 8 12:39:56 1993 Ian Lance Taylor (ian@cygnus.com)
* system.h: Removed several incorrect declarations which fail
on Solaris.
Wed Jan 20 17:57:24 1993 K. Richard Pixley (rich@rtl.cygnus.com)
* system.h: add externs for sun4 so that gcc -Wall becomes useful
again.
Wed Feb 26 18:04:40 1992 K. Richard Pixley (rich@cygnus.com)
* Makefile.in, configure.in: removed traces of namesubdir,
-subdirs, $(subdir), $(unsubdir), some rcs triggers. Forced
copyrights to '92, changed some from Cygnus to FSF.
Sat Dec 28 02:42:06 1991 K. Richard Pixley (rich at cygnus.com)
* mkdir.c, rename.c: change fork() to vfork().

Some files were not shown because too many files have changed in this diff.