Update svnlite from 1.8.10 to 1.8.14. This is mostly for client-side bug fixes and quality-of-life improvements.

While there are security issues in this time frame that affect usage as a server (e.g. linked into Apache), that is not possible here.
Peter Wemm 2015-08-09 05:22:53 +00:00
commit 19f868bc90
76 changed files with 4046 additions and 2457 deletions


@ -1,3 +1,138 @@
Version 1.8.14
(5 Aug 2015, from /branches/1.8.x)
http://svn.apache.org/repos/asf/subversion/tags/1.8.14
User-visible changes:
- Client-side bugfixes:
* document svn:autoprops (r1678494 et al.)
* cp: fix 'svn cp ^/A/D/H@1 ^/A' to properly create A (r1674455, r1674456)
* resolve: improve conflict prompts for binary files (r1667228 et al.)
* ls: improve performance of '-v' on tag directories (r1673153)
* fixed Sqlite 3.8.9 query performance regression on externals (r1672295 et al.)
* fixed issue #4580: 'svn -v st' on file externals reports "?" instead
of user and revision after 'svn up' (r1680242)
- Server-side bugfixes:
* mod_dav_svn: do not ignore skel parsing errors (r1658168)
* detect invalid svndiff data earlier (r1684077)
* prevent possible repository corruption on power/disk failures (r1680819)
* fixed issue #4577: Read error with nodes whose DELTA chain starts with
a PLAIN rep (r1676667, r1677267)
* fixed issue #4531: server-side copy (over dav) is slow and uses
too much memory (r1674627)
Developer-visible changes:
- General:
* support building on Windows with Visual Studio 2015 (r1692785)
* avoid failing some tests on versions of Python with a very old sqlite (r1674522)
* fix Ruby tests so they don't use the user's real configuration (r1597884)
- Bindings:
* swig-pl: fix some stack memory problems (r1668618, r1671388)
Version 1.8.13
(31 Mar 2015, from /branches/1.8.x)
http://svn.apache.org/repos/asf/subversion/tags/1.8.13
User-visible changes:
- Client-side bugfixes:
* ra_serf: prevent abort of commits that have already succeeded (r1659867)
* ra_serf: support case-insensitivity in HTTP headers (r1650481, r1650489)
* better error message if an external is shadowed (r1655712, r1655738)
* ra_svn: fix reporting of directory read errors (r1656713)
* fix a redirect handling bug in 'svn log' over HTTP (r1650531)
* properly copy tree conflict information (r1658115, r1659553, r1659554)
* fix 'svn patch' output for reordered hunks (issue #4533)
* svnrdump load: don't load wrong props with no-deltas dump (issue #4551)
* fix working copy corruption with relative file external (issue #4411)
* don't crash if config file is unreadable (r1590751, r1660350)
* svn resolve: don't ask a question with only one answer (r1658417)
* fix assertion failure in svn move (r1651963 et al)
* working copy performance improvements (r1664531, r1664476, et al)
* handle existing working copies which become externals (r1660071)
* fix recording of WC meta-data for foreign repos copies (r1660593)
* fix calculating repository path of replaced directories (r1660646)
* fix calculating repository path after commit of switched nodes (r1663991)
* svnrdump: don't provide HEAD+1 as base revision for deletes (r1664684)
* don't leave conflict markers on files that are moved (r1660220, r1665874)
* avoid unnecessary subtree mergeinfo recording (r1666690)
* fix diff of a locally copied directory with props (r1619380 et al)
- Server-side bugfixes:
* fsfs: fix a problem verifying pre-1.4 repos used with 1.8 (r1561419)
* svnadmin freeze: fix memory allocation error (r1653039)
* svnadmin load: tolerate invalid mergeinfo at r0 (r1643074, issue #4476)
* svnadmin load: strip references to r1 from mergeinfo (issue #4538)
* svnsync: strip any r0 references from mergeinfo (issue #4476)
* fsfs: reduce memory consumption when operating on dag nodes (r1655651)
* reject invalid get-location-segments requests in mod_dav_svn and
svnserve (r1667233)
* mod_dav_svn: reject invalid txnprop change requests (r1667235)
- Client-side and server-side bugfixes:
* fix undefined behaviour in string buffer routines (r1650834)
* fix consistency issues with APR r/w locks on Windows (r1611380 et al)
* fix occasional SEGV if threads load DSOs in parallel (r1659013, r1659315)
* properly duplicate svn error objects (r1651759)
* fix use-after-free in config parser (r1646785, r1646786, r1646797)
Developer-visible changes:
* add lock file config for testing against HTTPD 2.4+ (r1544302, r1544303)
* make sqlite amalgamated build work with sqlite 3.8.x+ (r1659399)
* fix build with Ruby 2 (r1596882)
* process 'svnadmin dump' output as binary in the test suite (r1592987)
* simplify Windows resource compilation to avoid warnings (r1532287)
Version 1.8.12
(Not released, see changes for 1.8.13.)
Version 1.8.11
(15 Dec 2014, from /branches/1.8.x)
http://svn.apache.org/repos/asf/subversion/tags/1.8.11
User-visible changes:
- Client-side bugfixes:
* checkout/update: fix file externals failing to follow history and
subsequently silently failing (issue #4185)
* patch: don't skip targets in valid --git diffs (r1592014, r1592034)
* diff: make property output in diffs stable (r1589360)
* diff: fix diff of local copied directory with props (r1619380, r1619393)
* diff: fix changelist filter for repos-WC and WC-WC (r1621978, r1621981)
* remove broken conflict resolver menu options that always error out
(r1620332)
* improve gpg-agent support (r1600331, r1600348, r1600368, r1600563,
r1600781)
* fix crash in eclipse IDE with GNOME Keyring (issue #3498)
* fix externals shadowing a versioned directory (issue #4085)
* fix problems working on unix file systems that don't support
permissions (r1612225)
* upgrade: keep external registrations (issue #4519)
* cleanup: improve performance of recorded timestamp fixups (r1633126)
* translation updates for German
- Server-side bugfixes:
* disable revprop caching feature due to cache invalidation problems
(r1543594, r1619774, r1619105, r1619118, r1619153, r1619802)
* skip generating uniquifiers if rep-sharing is not supported (r1561426)
* mod_dav_svn: reject requests with missing repository paths (r1643409)
* mod_dav_svn: reject requests with invalid virtual transaction names
(r1643437)
* mod_dav_svn: avoid unneeded memory growth in resource walking
(issue #4531)
Developer-visible changes:
- General:
* make sure all members of the repos layer notify struct are valid,
fixes crashes in API users using all members (r1616131)
* properly generate a version resource when building on Windows (r1542610,
r1564576, r1568180)
* fix LIBTOOL_M4 and LIBTOOL_CONFIG variables not being evaluated properly
during a unix build (r1637826)
* allow the use of libtool 2.4.3 (r1640862, r1640873, r1643793)
Version 1.8.10
(11 Aug 2014, from /branches/1.8.x)
http://svn.apache.org/repos/asf/subversion/tags/1.8.10
@ -316,7 +451,7 @@ http://svn.apache.org/repos/asf/subversion/tags/1.8.1
* merge: rename 'automatic merge' to 'complete merge' (r1491432)
* mergeinfo: reduce network usage for '--show-revs' (r1492005)
* ra_serf: improve http status handling (r1495104)
* merge: avoid unneeded ra session (r1493475)
* merge: avoid unneeded RA session (r1493475)
* merge: reduce network usage (r1478987)
* merge: remove duplicated ancestry check (r1493424, r1495597)
* ra_serf: fix 'Accept-Encoding' header for IIS interoperability (r1497551)
@ -729,6 +864,81 @@ http://svn.apache.org/repos/asf/subversion/tags/1.8.0
* fix some reference counting bugs in swig-py bindings (r1464899, r1466524)
Version 1.7.21
(5 Aug 2015, from /branches/1.7.x)
http://svn.apache.org/repos/asf/subversion/tags/1.7.21
User-visible changes:
- Client-side bugfixes:
* cp: fix 'svn cp ^/A/D/H@1 ^/A' to properly create A (r1674455, r1674456)
* fix issue #4551: svnrdump load commits wrong properties, or fails, on a
non-deltas dumpfile (r1652182 et al.)
- Server-side bugfixes:
* fix 'svnadmin recover' for pre-1.4 FSFS repositories (r1561419)
Developer-visible changes:
- General:
* support building on Windows with Visual Studio 2012, 2013 and 2015 (r1687158, r1692783)
- Bindings:
* swig-pl: fix some stack memory problems (r1668618, r1671388)
Version 1.7.20
(31 Mar 2015, from /branches/1.7.x)
http://svn.apache.org/repos/asf/subversion/tags/1.7.20
User-visible changes:
- Client-side bugfixes:
* fix 'svn patch' output for reordered hunks (issue #4533)
- Server-side bugfixes:
* reject invalid get-location-segments requests in mod_dav_svn and
svnserve (r1667233)
* mod_dav_svn: reject invalid txnprop change requests (r1667235)
- Client-side and server-side bugfixes:
* properly duplicate svn error objects (r1651759)
* fix use-after-free in config parser (r1646785, r1646786, r1646797)
Developer-visible changes:
* add lock file config for testing against HTTPD 2.4+ (r1544302, r1544303)
* fix build with absolute path to neon install (r1664789)
Version 1.7.19
(15 Dec 2014, from /branches/1.7.x)
http://svn.apache.org/repos/asf/subversion/tags/1.7.19
User-visible changes:
- Client-side bugfixes:
* rm: display the proper URL in commit log editor (r1591123)
* diff: fix invalid read during suffix scanning (issue #4339)
* fix crash in eclipse IDE with GNOME Keyring (issue #3498)
* checkout/update: fix file externals failing to follow history and
subsequently silently failing (issue #4185)
- Server-side bugfixes:
* svnadmin dump: don't let invalid mergeinfo prevent a dump (issue #4476)
* mod_dav_svn: reject requests with missing repository paths (r1643409)
* mod_dav_svn: reject requests with invalid virtual transaction names
(r1643437)
* mod_dav_svn: avoid unneeded memory growth in resource walking
(issue #4531)
Developer-visible changes:
- General:
* properly generate a version resource when building on Windows (r1542610,
r1564576, r1568180)
* fix a problem with the unix build that could result in linking to the
wrong Subversion libraries at build or at run time (r1594157)
* use a proper intermediate directory when building with Visual Studio
2003-2008 (r1595431)
* fix LIBTOOL_M4 and LIBTOOL_CONFIG variables not being evaluated properly
during a unix build (r1637826)
* allow the use of libtool 2.4.3 (r1640862, r1640873, r1643793)
Version 1.7.18
(11 Aug 2014, from /branches/1.7.x)
http://svn.apache.org/repos/asf/subversion/tags/1.7.18


@ -357,6 +357,7 @@ TEST_SHLIB_VAR_SWIG_RB=\
fi;
APXS = @APXS@
HTTPD_VERSION = @HTTPD_VERSION@
PYTHON = @PYTHON@
PERL = @PERL@
@ -509,6 +510,9 @@ check: bin @TRANSFORM_LIBTOOL_SCRIPTS@ $(TEST_DEPS) @BDB_TEST_DEPS@
if test "$(HTTP_LIBRARY)" != ""; then \
flags="--http-library $(HTTP_LIBRARY) $$flags"; \
fi; \
if test "$(HTTPD_VERSION)" != ""; then \
flags="--httpd-version $(HTTPD_VERSION) $$flags"; \
fi; \
if test "$(SERVER_MINOR_VERSION)" != ""; then \
flags="--server-minor-version $(SERVER_MINOR_VERSION) $$flags"; \
fi; \


@ -1,5 +1,5 @@
Apache Subversion
Copyright 2014 The Apache Software Foundation
Copyright 2015 The Apache Software Foundation
This product includes software developed by many people, and distributed
under Contributor License Agreements to The Apache Software Foundation


@ -23,6 +23,10 @@
### Run this to produce everything needed for configuration. ###
# Some shells can produce output when running 'cd' which interferes
# with the construct 'abs=`cd dir && pwd`'.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
# Run tests to ensure that our build requirements are met
RELEASE_MODE=""
RELEASE_ARGS=""
@ -71,48 +75,80 @@ rm -f build/config.guess build/config.sub
$libtoolize --copy --automake --force
ltpath="`dirname $libtoolize`"
ltfile=${LIBTOOL_M4-`cd $ltpath/../share/aclocal ; pwd`/libtool.m4}
if [ ! -f $ltfile ]; then
echo "$ltfile not found (try setting the LIBTOOL_M4 environment variable)"
if [ "x$LIBTOOL_M4" = "x" ]; then
ltm4_error='(try setting the LIBTOOL_M4 environment variable)'
if [ -d "$ltpath/../share/aclocal/." ]; then
ltm4=`cd "$ltpath/../share/aclocal" && pwd`
else
echo "Libtool helper path not found $ltm4_error"
echo " expected at: '$ltpath/../share/aclocal'"
exit 1
fi
else
ltm4_error="(the LIBTOOL_M4 environment variable is: $LIBTOOL_M4)"
ltm4="$LIBTOOL_M4"
fi
ltfile="$ltm4/libtool.m4"
if [ ! -f "$ltfile" ]; then
echo "$ltfile not found $ltm4_error"
exit 1
fi
echo "Copying libtool helper: $ltfile"
echo "Copying libtool helper: $ltfile"
# An ancient helper might already be present from previous builds,
# and it might be write-protected (e.g. mode 444, seen on FreeBSD).
# This would cause cp to fail and print an error message, but leave
# behind a potentially outdated libtool helper. So, remove before
# copying:
rm -f build/libtool.m4
cp $ltfile build/libtool.m4
cp "$ltfile" build/libtool.m4
for file in ltoptions.m4 ltsugar.m4 ltversion.m4 lt~obsolete.m4; do
rm -f build/$file
if [ $lt_major_version -ge 2 ]; then
ltfile=${LIBTOOL_M4-`cd $ltpath/../share/aclocal ; pwd`/$file}
ltfile="$ltm4/$file"
if [ ! -f $ltfile ]; then
echo "$ltfile not found (try setting the LIBTOOL_M4 environment variable)"
if [ ! -f "$ltfile" ]; then
echo "$ltfile not found $ltm4_error"
exit 1
fi
echo "Copying libtool helper: $ltfile"
cp $ltfile build/$file
echo "Copying libtool helper: $ltfile"
cp "$ltfile" "build/$file"
fi
done
if [ $lt_major_version -ge 2 ]; then
for file in config.guess config.sub; do
configfile=${LIBTOOL_CONFIG-`cd $ltpath/../share/libtool/config ; pwd`/$file}
if [ "x$LIBTOOL_CONFIG" = "x" ]; then
ltconfig_error='(try setting the LIBTOOL_CONFIG environment variable)'
if [ -d "$ltpath/../share/libtool/config/." ]; then
ltconfig=`cd "$ltpath/../share/libtool/config" && pwd`
elif [ -d "$ltpath/../share/libtool/build-aux/." ]; then
ltconfig=`cd "$ltpath/../share/libtool/build-aux" && pwd`
else
echo "Autoconf helper path not found $ltconfig_error"
echo " expected at: '$ltpath/../share/libtool/config'"
echo " or: '$ltpath/../share/libtool/build-aux'"
exit 1
fi
else
ltconfig_error="(the LIBTOOL_CONFIG environment variable is: $LIBTOOL_CONFIG)"
ltconfig="$LIBTOOL_CONFIG"
fi
if [ ! -f $configfile ]; then
echo "$configfile not found (try setting the LIBTOOL_CONFIG environment variable)"
for file in config.guess config.sub; do
configfile="$ltconfig/$file"
if [ ! -f "$configfile" ]; then
echo "$configfile not found $ltconfig_error"
exit 1
fi
cp $configfile build/$file
echo "Copying autoconf helper: $configfile"
cp "$configfile" build/$file
done
fi

File diff suppressed because one or more lines are too long


@ -1357,6 +1357,7 @@ install = tools
libs = libsvn_repos libsvn_fs libsvn_subr apr
[svn-populate-node-origins-index]
description = Tool to populate the node origins index of a repository
type = exe
path = tools/server-side
sources = svn-populate-node-origins-index.c

File diff suppressed because it is too large


@ -1246,7 +1246,7 @@ AC_PATH_PROG(PERL, perl, none)
if test -n "$RUBY"; then
AC_PATH_PROG(RUBY, "$RUBY", none)
else
AC_PATH_PROGS(RUBY, ruby ruby1.8 ruby18 ruby1.9 ruby1 ruby1.9.3 ruby193, none)
AC_PATH_PROGS(RUBY, ruby ruby1.8 ruby18 ruby1.9 ruby1 ruby1.9.3 ruby193 ruby2.0 ruby2.1, none)
fi
if test "$RUBY" != "none"; then
AC_MSG_CHECKING([rb_hash_foreach])
@ -1255,7 +1255,7 @@ if test "$RUBY" != "none"; then
if test -n "$RDOC"; then
AC_PATH_PROG(RDOC, "$RDOC", none)
else
AC_PATH_PROGS(RDOC, rdoc rdoc1.8 rdoc18 rdoc1.9 rdoc19 rdoc1.9.3 rdoc193, none)
AC_PATH_PROGS(RDOC, rdoc rdoc1.8 rdoc18 rdoc1.9 rdoc19 rdoc1.9.3 rdoc193 rdoc2.0 rdoc2.1, none)
fi
AC_CACHE_CHECK([for Ruby major version], [svn_cv_ruby_major],[
svn_cv_ruby_major="`$RUBY -rrbconfig -e 'print RbConfig::CONFIG.fetch(%q(MAJOR))'`"


@ -33,7 +33,7 @@
APR_VERSION=${APR_VERSION:-"1.4.6"}
APU_VERSION=${APU_VERSION:-"1.5.1"}
SERF_VERSION=${SERF_VERSION:-"1.3.4"}
SERF_VERSION=${SERF_VERSION:-"1.3.8"}
ZLIB_VERSION=${ZLIB_VERSION:-"1.2.8"}
SQLITE_VERSION=${SQLITE_VERSION:-"3.7.15.1"}
GTEST_VERSION=${GTEST_VERSION:-"1.6.0"}


@ -97,7 +97,7 @@ svn_diff__unidiff_write_header(svn_stream_t *output_stream,
* merged or reverse merged; otherwise (or if the mergeinfo property values
* don't parse correctly) display them just like any other property.
*
* Use @a pool for temporary allocations.
* Use @a scratch_pool for temporary allocations.
*/
svn_error_t *
svn_diff__display_prop_diffs(svn_stream_t *outstream,
@ -105,7 +105,7 @@ svn_diff__display_prop_diffs(svn_stream_t *outstream,
const apr_array_header_t *propchanges,
apr_hash_t *original_props,
svn_boolean_t pretty_print_mergeinfo,
apr_pool_t *pool);
apr_pool_t *scratch_pool);
#ifdef __cplusplus


@ -65,6 +65,27 @@ svn_error_t *
svn_rangelist__combine_adjacent_ranges(svn_rangelist_t *rangelist,
apr_pool_t *scratch_pool);
/** Canonicalize the @a rangelist: sort the ranges, and combine adjacent or
* overlapping ranges into single ranges where possible.
*
* If overlapping ranges have different inheritability, return an error.
*
* Modify @a rangelist in place. Use @a scratch_pool for temporary
* allocations.
*/
svn_error_t *
svn_rangelist__canonicalize(svn_rangelist_t *rangelist,
apr_pool_t *scratch_pool);
/** Canonicalize the revision range lists in the @a mergeinfo.
*
* Modify @a mergeinfo in place. Use @a scratch_pool for temporary
* allocations.
*/
svn_error_t *
svn_mergeinfo__canonicalize_ranges(svn_mergeinfo_t mergeinfo,
apr_pool_t *scratch_pool);
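A hedged usage sketch (not part of this commit) of the new svn_rangelist__canonicalize() helper; it assumes only the public svn_merge_range_t fields (start, end, inheritable) and the svn_rangelist_t typedef from svn_mergeinfo.h, and the function name canonicalize_example is purely illustrative:
static svn_error_t *
canonicalize_example(apr_pool_t *pool)
{
  /* Build a deliberately unsorted, overlapping rangelist: (15,30] then (10,20]. */
  svn_rangelist_t *rl = apr_array_make(pool, 2, sizeof(svn_merge_range_t *));
  svn_merge_range_t *r1 = apr_pcalloc(pool, sizeof(*r1));
  svn_merge_range_t *r2 = apr_pcalloc(pool, sizeof(*r2));
  r1->start = 15; r1->end = 30; r1->inheritable = TRUE;
  r2->start = 10; r2->end = 20; r2->inheritable = TRUE;
  APR_ARRAY_PUSH(rl, svn_merge_range_t *) = r1;
  APR_ARRAY_PUSH(rl, svn_merge_range_t *) = r2;
  /* Sorts and merges in place; rl now holds the single range (10,30]. */
  SVN_ERR(svn_rangelist__canonicalize(rl, pool));
  return SVN_NO_ERROR;
}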
/* Set inheritability of all rangelists in MERGEINFO to INHERITABLE.
If MERGEINFO is NULL do nothing. If a rangelist in MERGEINFO is
NULL leave it alone. */


@ -113,6 +113,10 @@ svn_repos__replay_ev2(svn_fs_root_t *root,
void *authz_read_baton,
apr_pool_t *scratch_pool);
/* A private addition to svn_repos_notify_warning_t. */
#define svn_repos__notify_warning_invalid_mergeinfo \
((svn_repos_notify_warning_t)(-1))
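A brief sketch (not from this commit) of how a notification callback might recognize the private warning value defined above; the svn_repos_notify_t fields (action, warning, warning_str) and the svn_repos_notify_warning action are assumed from the public svn_repos.h, and the fprintf logging is purely illustrative:
#include <stdio.h>
static void
notify_cb(void *baton, const svn_repos_notify_t *notify,
          apr_pool_t *scratch_pool)
{
  if (notify->action == svn_repos_notify_warning
      && notify->warning == svn_repos__notify_warning_invalid_mergeinfo)
    {
      /* e.g. tally or log invalid mergeinfo found while loading/verifying */
      fprintf(stderr, "invalid mergeinfo: %s\n",
              notify->warning_str ? notify->warning_str : "(no details)");
    }
}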
#ifdef __cplusplus
}


@ -484,6 +484,32 @@ svn_sqlite__with_immediate_transaction(svn_sqlite__db_t *db,
SVN_ERR(svn_sqlite__finish_savepoint(svn_sqlite__db, svn_sqlite__err)); \
} while (0)
/* Evaluate the expression EXPR1..EXPR4 within a 'savepoint'. Savepoints can
* be nested.
*
* Begin a savepoint in DB; evaluate the expression EXPR1, which would
* typically be a function call that does some work in DB; if no error occurred,
* run EXPR2; if no error occurred EXPR3; ... and finally release
* the savepoint if EXPR evaluated to SVN_NO_ERROR, otherwise roll back
* to the savepoint and then release it.
*/
#define SVN_SQLITE__WITH_LOCK4(expr1, expr2, expr3, expr4, db) \
do { \
svn_sqlite__db_t *svn_sqlite__db = (db); \
svn_error_t *svn_sqlite__err; \
\
SVN_ERR(svn_sqlite__begin_savepoint(svn_sqlite__db)); \
svn_sqlite__err = (expr1); \
if (!svn_sqlite__err) \
svn_sqlite__err = (expr2); \
if (!svn_sqlite__err) \
svn_sqlite__err = (expr3); \
if (!svn_sqlite__err) \
svn_sqlite__err = (expr4); \
SVN_ERR(svn_sqlite__finish_savepoint(svn_sqlite__db, svn_sqlite__err)); \
} while (0)
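A minimal usage sketch of the new four-expression variant (not part of this commit); the step_*() helpers are hypothetical svn_error_t*-returning functions:
static svn_error_t *
do_four_steps(svn_sqlite__db_t *db, apr_pool_t *scratch_pool)
{
  /* All four steps run inside one savepoint; an error in any of them
     rolls the whole group back before the error is returned. */
  SVN_SQLITE__WITH_LOCK4(step_one(db, scratch_pool),
                         step_two(db, scratch_pool),
                         step_three(db, scratch_pool),
                         step_four(db, scratch_pool),
                         db);
  return SVN_NO_ERROR;
}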
/* Helper function to handle several SQLite operations inside a shared lock.
This callback is similar to svn_sqlite__with_transaction(), but can be
nested (even with a transaction).


@ -1148,6 +1148,8 @@ svn_stream_read(svn_stream_t *stream,
* of reads or a simple seek operation. If the stream implementation has
* not provided a skip function, this will read from the stream and
* discard the data.
*
* @since New in 1.7.
*/
svn_error_t *
svn_stream_skip(svn_stream_t *stream,


@ -28,15 +28,13 @@
#define SVN_VERSION_H
/* Hack to prevent the resource compiler from including
apr_general.h. It doesn't resolve the include paths
correctly and blows up without this.
*/
#ifndef APR_STRINGIFY
apr and other headers. */
#ifndef SVN_WIN32_RESOURCE_COMPILATION
#include <apr_general.h>
#endif
#include <apr_tables.h>
#include "svn_types.h"
#endif
#ifdef __cplusplus
extern "C" {
@ -72,7 +70,7 @@ extern "C" {
*
* @since New in 1.1.
*/
#define SVN_VER_PATCH 10
#define SVN_VER_PATCH 14
/** @deprecated Provided for backward compatibility with the 1.0 API. */
@ -95,7 +93,7 @@ extern "C" {
*
* Always change this at the same time as SVN_VER_NUMTAG.
*/
#define SVN_VER_TAG " (r1615264)"
#define SVN_VER_TAG " (r1692801)"
/** Number tag: a string describing the version.
@ -121,7 +119,7 @@ extern "C" {
* When rolling a tarball, we automatically replace it with what we
* guess to be the correct revision number.
*/
#define SVN_VER_REVISION 1615264
#define SVN_VER_REVISION 1692801
/* Version strings composed from the above definitions. */
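A small illustrative guard (not part of the commit) showing how downstream C code can key off the bumped patch level; HAVE_SVN_1_8_14_FIXES is a hypothetical name, while SVN_VER_MAJOR and SVN_VER_MINOR come from earlier in this header:
#if SVN_VER_MAJOR == 1 && SVN_VER_MINOR == 8 && SVN_VER_PATCH >= 14
/* The 1.8.14 client-side fixes listed in CHANGES are available. */
#define HAVE_SVN_1_8_14_FIXES 1
#endif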


@ -49,120 +49,19 @@
/*-----------------------------------------------------------------------*/
struct gnome_keyring_baton
{
const char *keyring_name;
GnomeKeyringInfo *info;
GMainLoop *loop;
};
/* Callback function to destroy gnome_keyring_baton. */
static void
callback_destroy_data_keyring(void *data)
{
struct gnome_keyring_baton *key_info = data;
if (data == NULL)
return;
free((void*)key_info->keyring_name);
key_info->keyring_name = NULL;
if (key_info->info)
{
gnome_keyring_info_free(key_info->info);
key_info->info = NULL;
}
return;
}
/* Callback function to complete the keyring operation. */
static void
callback_done(GnomeKeyringResult result,
gpointer data)
{
struct gnome_keyring_baton *key_info = data;
g_main_loop_quit(key_info->loop);
return;
}
/* Callback function to get the keyring info. */
static void
callback_get_info_keyring(GnomeKeyringResult result,
GnomeKeyringInfo *info,
void *data)
{
struct gnome_keyring_baton *key_info = data;
if (result == GNOME_KEYRING_RESULT_OK && info != NULL)
{
key_info->info = gnome_keyring_info_copy(info);
}
else
{
if (key_info->info != NULL)
gnome_keyring_info_free(key_info->info);
key_info->info = NULL;
}
g_main_loop_quit(key_info->loop);
return;
}
/* Callback function to get the default keyring string name. */
static void
callback_default_keyring(GnomeKeyringResult result,
const char *string,
void *data)
{
struct gnome_keyring_baton *key_info = data;
if (result == GNOME_KEYRING_RESULT_OK && string != NULL)
{
key_info->keyring_name = strdup(string);
}
else
{
free((void*)key_info->keyring_name);
key_info->keyring_name = NULL;
}
g_main_loop_quit(key_info->loop);
return;
}
/* Returns the default keyring name, allocated in RESULT_POOL. */
static char*
get_default_keyring_name(apr_pool_t *result_pool)
{
char *def = NULL;
struct gnome_keyring_baton key_info;
char *name, *def;
GnomeKeyringResult gkr;
key_info.info = NULL;
key_info.keyring_name = NULL;
gkr = gnome_keyring_get_default_keyring_sync(&name);
if (gkr != GNOME_KEYRING_RESULT_OK)
return NULL;
/* Finds default keyring. */
key_info.loop = g_main_loop_new(NULL, FALSE);
gnome_keyring_get_default_keyring(callback_default_keyring, &key_info, NULL);
g_main_loop_run(key_info.loop);
if (key_info.keyring_name == NULL)
{
callback_destroy_data_keyring(&key_info);
return NULL;
}
def = apr_pstrdup(result_pool, key_info.keyring_name);
callback_destroy_data_keyring(&key_info);
def = apr_pstrdup(result_pool, name);
g_free(name);
return def;
}
@ -171,28 +70,22 @@ get_default_keyring_name(apr_pool_t *result_pool)
static svn_boolean_t
check_keyring_is_locked(const char *keyring_name)
{
struct gnome_keyring_baton key_info;
GnomeKeyringInfo *info;
svn_boolean_t locked;
GnomeKeyringResult gkr;
key_info.info = NULL;
key_info.keyring_name = NULL;
/* Get details about the default keyring. */
key_info.loop = g_main_loop_new(NULL, FALSE);
gnome_keyring_get_info(keyring_name, callback_get_info_keyring, &key_info,
NULL);
g_main_loop_run(key_info.loop);
if (key_info.info == NULL)
{
callback_destroy_data_keyring(&key_info);
return FALSE;
}
/* Check if keyring is locked. */
if (gnome_keyring_info_get_is_locked(key_info.info))
return TRUE;
else
gkr = gnome_keyring_get_info_sync(keyring_name, &info);
if (gkr != GNOME_KEYRING_RESULT_OK)
return FALSE;
if (gnome_keyring_info_get_is_locked(info))
locked = TRUE;
else
locked = FALSE;
gnome_keyring_info_free(info);
return locked;
}
/* Unlock the KEYRING_NAME with the KEYRING_PASSWORD. If KEYRING was
@ -202,34 +95,19 @@ unlock_gnome_keyring(const char *keyring_name,
const char *keyring_password,
apr_pool_t *pool)
{
struct gnome_keyring_baton key_info;
GnomeKeyringInfo *info;
GnomeKeyringResult gkr;
key_info.info = NULL;
key_info.keyring_name = NULL;
/* Get details about the default keyring. */
key_info.loop = g_main_loop_new(NULL, FALSE);
gnome_keyring_get_info(keyring_name, callback_get_info_keyring,
&key_info, NULL);
g_main_loop_run(key_info.loop);
if (key_info.info == NULL)
{
callback_destroy_data_keyring(&key_info);
return FALSE;
}
else
{
key_info.loop = g_main_loop_new(NULL, FALSE);
gnome_keyring_unlock(keyring_name, keyring_password,
callback_done, &key_info, NULL);
g_main_loop_run(key_info.loop);
}
callback_destroy_data_keyring(&key_info);
if (check_keyring_is_locked(keyring_name))
gkr = gnome_keyring_get_info_sync(keyring_name, &info);
if (gkr != GNOME_KEYRING_RESULT_OK)
return FALSE;
return TRUE;
gkr = gnome_keyring_unlock_sync(keyring_name, keyring_password);
gnome_keyring_info_free(info);
if (gkr != GNOME_KEYRING_RESULT_OK)
return FALSE;
return check_keyring_is_locked(keyring_name);
}


@ -1006,7 +1006,10 @@ repos_to_repos_copy(const apr_array_header_t *copy_pairs,
&& (relpath != NULL && *relpath != '\0'))
{
info->resurrection = TRUE;
top_url = svn_uri_dirname(top_url, pool);
top_url = svn_uri_get_longest_ancestor(
top_url,
svn_uri_dirname(pair->dst_abspath_or_url, pool),
pool);
SVN_ERR(svn_ra_reparent(ra_session, top_url, pool));
}
}


@ -146,6 +146,7 @@ relegate_dir_external(svn_wc_context_t *wc_ctx,
static svn_error_t *
switch_dir_external(const char *local_abspath,
const char *url,
const char *url_from_externals_definition,
const svn_opt_revision_t *peg_revision,
const svn_opt_revision_t *revision,
const char *defining_abspath,
@ -169,6 +170,46 @@ switch_dir_external(const char *local_abspath,
if (revision->kind == svn_opt_revision_number)
external_rev = revision->value.number;
/*
* The code below assumes existing versioned paths are *not* part of
* the external's defining working copy.
* The working copy library does not support registering externals
* on top of existing BASE nodes and will error out if we try.
* So if the external target is part of the defining working copy's
* BASE tree, don't attempt to create the external. Doing so would
* leave behind a switched path instead of an external (since the
* switch succeeds but registration of the external in the DB fails).
* The working copy then cannot be updated until the path is switched back.
* See issue #4085.
*/
SVN_ERR(svn_wc__node_get_base(&kind, NULL, NULL,
&repos_root_url, &repos_uuid,
NULL, ctx->wc_ctx, local_abspath,
TRUE, /* ignore_enoent */
TRUE, /* show hidden */
pool, pool));
if (kind != svn_node_unknown)
{
const char *wcroot_abspath;
const char *defining_wcroot_abspath;
SVN_ERR(svn_wc__get_wcroot(&wcroot_abspath, ctx->wc_ctx,
local_abspath, pool, pool));
SVN_ERR(svn_wc__get_wcroot(&defining_wcroot_abspath, ctx->wc_ctx,
defining_abspath, pool, pool));
if (strcmp(wcroot_abspath, defining_wcroot_abspath) == 0)
return svn_error_createf(SVN_ERR_WC_PATH_UNEXPECTED_STATUS, NULL,
_("The external '%s' defined in %s at '%s' "
"cannot be checked out because '%s' is "
"already a versioned path."),
url_from_externals_definition,
SVN_PROP_EXTERNALS,
svn_dirent_local_style(defining_abspath,
pool),
svn_dirent_local_style(local_abspath,
pool));
}
/* If path is a directory, try to update/switch to the correct URL
and revision. */
SVN_ERR(svn_io_check_path(local_abspath, &kind, pool));
@ -201,6 +242,20 @@ switch_dir_external(const char *local_abspath,
FALSE, TRUE,
timestamp_sleep,
ctx, subpool));
/* We just decided that this existing directory is an external,
so update the external registry with this information, like
when checking out an external */
SVN_ERR(svn_wc__external_register(ctx->wc_ctx,
defining_abspath,
local_abspath, svn_node_dir,
repos_root_url, repos_uuid,
svn_uri_skip_ancestor(repos_root_url,
url, pool),
external_peg_rev,
external_rev,
pool));
svn_pool_destroy(subpool);
goto cleanup;
}
@ -460,7 +515,10 @@ switch_file_external(const char *local_abspath,
svn_dirent_split(&dir_abspath, &target, local_abspath, scratch_pool);
/* Open an RA session to 'source' URL */
/* ### Why do we open a new session? RA_SESSION is a valid
### session -- the caller used it to call svn_ra_check_path on
### this very URL, the caller also did the resolving and
### reparenting that is repeated here. */
SVN_ERR(svn_client__ra_session_from_path2(&ra_session, &switch_loc,
url, dir_abspath,
peg_revision, revision,
@ -497,7 +555,7 @@ switch_file_external(const char *local_abspath,
invalid revnum, that means RA will use the latest revision. */
SVN_ERR(svn_ra_do_switch3(ra_session, &reporter, &report_baton,
switch_loc->rev,
target, svn_depth_unknown, url,
target, svn_depth_unknown, switch_loc->url,
FALSE /* send_copyfrom */,
TRUE /* ignore_ancestry */,
switch_editor, switch_baton,
@ -738,6 +796,7 @@ handle_external_item_change(svn_client_ctx_t *ctx,
{
case svn_node_dir:
SVN_ERR(switch_dir_external(local_abspath, new_loc->url,
new_item->url,
&(new_item->peg_revision),
&(new_item->revision),
parent_dir_abspath,


@ -814,10 +814,12 @@ svn_client_log5(const apr_array_header_t *targets,
svn_ra_session_t *ra_session;
const char *old_session_url;
const char *ra_target;
const char *path_or_url;
svn_opt_revision_t youngest_opt_rev;
svn_revnum_t youngest_rev;
svn_revnum_t oldest_rev;
svn_opt_revision_t peg_rev;
svn_client__pathrev_t *ra_session_loc;
svn_client__pathrev_t *actual_loc;
apr_array_header_t *log_segments;
apr_array_header_t *revision_ranges;
@ -837,7 +839,7 @@ svn_client_log5(const apr_array_header_t *targets,
SVN_ERR(resolve_log_targets(&relative_targets, &ra_target, &peg_rev,
targets, ctx, pool, pool));
SVN_ERR(svn_client__ra_session_from_path2(&ra_session, &actual_loc,
SVN_ERR(svn_client__ra_session_from_path2(&ra_session, &ra_session_loc,
ra_target, NULL, &peg_rev, &peg_rev,
ctx, pool));
@ -851,11 +853,22 @@ svn_client_log5(const apr_array_header_t *targets,
opt_rev_ranges, &peg_rev,
ctx, pool, pool));
/* For some peg revisions we must resolve revision and url via a local path
so use the original RA_TARGET. For others, use the potentially corrected
(redirected) ra session URL. */
if (peg_rev.kind == svn_opt_revision_previous ||
peg_rev.kind == svn_opt_revision_base ||
peg_rev.kind == svn_opt_revision_committed ||
peg_rev.kind == svn_opt_revision_working)
path_or_url = ra_target;
else
path_or_url = ra_session_loc->url;
/* Make ACTUAL_LOC and RA_SESSION point to the youngest operative rev. */
youngest_opt_rev.kind = svn_opt_revision_number;
youngest_opt_rev.value.number = youngest_rev;
SVN_ERR(svn_client__resolve_rev_and_url(&actual_loc, ra_session,
ra_target, &peg_rev,
path_or_url, &peg_rev,
&youngest_opt_rev, ctx, pool));
SVN_ERR(svn_client__ensure_ra_session_url(&old_session_url, ra_session,
actual_loc->url, pool));


@ -1258,13 +1258,14 @@ record_skip(merge_cmd_baton_t *merge_b,
svn_node_kind_t kind,
svn_wc_notify_action_t action,
svn_wc_notify_state_t state,
struct merge_dir_baton_t *pdb,
apr_pool_t *scratch_pool)
{
if (merge_b->record_only)
return SVN_NO_ERROR; /* ### Why? - Legacy compatibility */
if (merge_b->merge_source.ancestral
|| merge_b->reintegrate_merge)
if ((merge_b->merge_source.ancestral || merge_b->reintegrate_merge)
&& !(pdb && pdb->shadowed))
{
store_path(merge_b->skipped_abspaths, local_abspath);
}
@ -1979,7 +1980,8 @@ merge_file_changed(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_file,
svn_wc_notify_update_shadowed_update,
fb->skip_reason, scratch_pool));
fb->skip_reason, fb->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -2148,7 +2150,8 @@ merge_file_added(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_file,
svn_wc_notify_update_shadowed_add,
fb->skip_reason, scratch_pool));
fb->skip_reason, fb->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -2359,7 +2362,8 @@ merge_file_deleted(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_file,
svn_wc_notify_update_shadowed_delete,
fb->skip_reason, scratch_pool));
fb->skip_reason, fb->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -2723,6 +2727,12 @@ merge_dir_opened(void **new_dir_baton,
/* Set a tree conflict */
db->shadowed = TRUE;
db->tree_conflict_reason = svn_wc_conflict_reason_obstructed;
if ((merge_b->merge_source.ancestral || merge_b->reintegrate_merge)
&& !(pdb && pdb->shadowed))
{
store_path(merge_b->skipped_abspaths, local_abspath);
}
}
}
@ -2847,7 +2857,8 @@ merge_dir_changed(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_dir,
svn_wc_notify_update_shadowed_update,
db->skip_reason, scratch_pool));
db->skip_reason, db->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -2931,7 +2942,8 @@ merge_dir_added(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_dir,
svn_wc_notify_update_shadowed_add,
db->skip_reason, scratch_pool));
db->skip_reason, db->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -3098,7 +3110,8 @@ merge_dir_deleted(const char *relpath,
/* We haven't notified for this node yet: report a skip */
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_dir,
svn_wc_notify_update_shadowed_delete,
db->skip_reason, scratch_pool));
db->skip_reason, db->parent_baton,
scratch_pool));
}
return SVN_NO_ERROR;
@ -3278,13 +3291,14 @@ merge_node_absent(const char *relpath,
apr_pool_t *scratch_pool)
{
merge_cmd_baton_t *merge_b = processor->baton;
struct merge_dir_baton_t *db = dir_baton;
const char *local_abspath = svn_dirent_join(merge_b->target->abspath,
relpath, scratch_pool);
SVN_ERR(record_skip(merge_b, local_abspath, svn_node_unknown,
svn_wc_notify_skip, svn_wc_notify_state_missing,
scratch_pool));
db, scratch_pool));
return SVN_NO_ERROR;
}


@ -2057,6 +2057,56 @@ send_patch_notification(const patch_target_t *target,
return SVN_NO_ERROR;
}
static void
svn_sort__array(apr_array_header_t *array,
int (*comparison_func)(const void *,
const void *))
{
qsort(array->elts, array->nelts, array->elt_size, comparison_func);
}
/* Implements the callback for svn_sort__array. Puts hunks that match
before hunks that do not match, puts hunks that match in order
based on position matched, puts hunks that do not match in order
based on original position. */
static int
sort_matched_hunks(const void *a, const void *b)
{
const hunk_info_t *item1 = *((const hunk_info_t * const *)a);
const hunk_info_t *item2 = *((const hunk_info_t * const *)b);
svn_boolean_t matched1 = !item1->rejected && !item1->already_applied;
svn_boolean_t matched2 = !item2->rejected && !item2->already_applied;
svn_linenum_t original1, original2;
if (matched1 && matched2)
{
/* Both match so use order matched in file. */
if (item1->matched_line > item2->matched_line)
return 1;
else if (item1->matched_line == item2->matched_line)
return 0;
else
return -1;
}
else if (matched2)
/* Only second matches, put it before first. */
return 1;
else if (matched1)
/* Only first matches, put it before second. */
return -1;
/* Neither matches, sort by original_start. */
original1 = svn_diff_hunk_get_original_start(item1->hunk);
original2 = svn_diff_hunk_get_original_start(item2->hunk);
if (original1 > original2)
return 1;
else if (original1 == original2)
return 0;
else
return -1;
}
/* Apply a PATCH to a working copy at ABS_WC_PATH and put the result
* into temporary files, to be installed in the working copy later.
* Return information about the patch target in *PATCH_TARGET, allocated
@ -2138,6 +2188,10 @@ apply_one_patch(patch_target_t **patch_target, svn_patch_t *patch,
APR_ARRAY_PUSH(target->content->hunks, hunk_info_t *) = hi;
}
/* Hunks are applied in the order determined by the matched line and
this may be different from the order of the original lines. */
svn_sort__array(target->content->hunks, sort_matched_hunks);
/* Apply or reject hunks. */
for (i = 0; i < target->content->hunks->nelts; i++)
{


@ -82,6 +82,14 @@ fetch_repos_info(const char **repos_root,
return SVN_NO_ERROR;
}
/* Forward definition. Upgrades svn:externals properties in the working copy
LOCAL_ABSPATH to the WC-NG storage.
*/
static svn_error_t *
upgrade_externals_from_properties(svn_client_ctx_t *ctx,
const char *local_abspath,
apr_pool_t *scratch_pool);
svn_error_t *
svn_client_upgrade(const char *path,
svn_client_ctx_t *ctx,
@ -89,10 +97,6 @@ svn_client_upgrade(const char *path,
{
const char *local_abspath;
apr_hash_t *externals;
apr_hash_index_t *hi;
apr_pool_t *iterpool;
apr_pool_t *iterpool2;
svn_opt_revision_t rev = {svn_opt_revision_unspecified, {0}};
struct repos_info_baton info_baton;
info_baton.state_pool = scratch_pool;
@ -111,6 +115,80 @@ svn_client_upgrade(const char *path,
ctx->notify_func2, ctx->notify_baton2,
scratch_pool));
SVN_ERR(svn_wc__externals_defined_below(&externals,
ctx->wc_ctx, local_abspath,
scratch_pool, scratch_pool));
if (apr_hash_count(externals) > 0)
{
apr_pool_t *iterpool = svn_pool_create(scratch_pool);
apr_hash_index_t *hi;
/* We are upgrading from >= 1.7. No need to upgrade from
svn:externals properties. And by that avoiding the removal
of recorded externals information (issue #4519)
Only directory externals need an explicit upgrade */
for (hi = apr_hash_first(scratch_pool, externals);
hi;
hi = apr_hash_next(hi))
{
const char *ext_abspath;
svn_node_kind_t kind;
svn_pool_clear(iterpool);
ext_abspath = svn__apr_hash_index_key(hi);
SVN_ERR(svn_wc__read_external_info(&kind, NULL, NULL, NULL, NULL,
ctx->wc_ctx, local_abspath,
ext_abspath, FALSE,
iterpool, iterpool));
if (kind == svn_node_dir)
{
svn_error_t *err = svn_client_upgrade(ext_abspath, ctx, iterpool);
if (err)
{
svn_wc_notify_t *notify =
svn_wc_create_notify(ext_abspath,
svn_wc_notify_failed_external,
iterpool);
notify->err = err;
ctx->notify_func2(ctx->notify_baton2,
notify, iterpool);
svn_error_clear(err);
/* Next external node, please... */
}
}
}
svn_pool_destroy(iterpool);
}
else
{
/* Upgrading from <= 1.6, or no svn:properties defined.
(There is no way to detect the difference from libsvn_client :( ) */
SVN_ERR(upgrade_externals_from_properties(ctx, local_abspath,
scratch_pool));
}
return SVN_NO_ERROR;
}
static svn_error_t *
upgrade_externals_from_properties(svn_client_ctx_t *ctx,
const char *local_abspath,
apr_pool_t *scratch_pool)
{
apr_hash_index_t *hi;
apr_pool_t *iterpool;
apr_pool_t *iterpool2;
apr_hash_t *externals;
svn_opt_revision_t rev = {svn_opt_revision_unspecified, {0}};
struct repos_info_baton info_baton;
/* Now it's time to upgrade the externals too. We do it after the wc
upgrade to avoid that errors in the externals causes the wc upgrade to
fail. Thanks to caching the performance penalty of walking the wc a
@ -163,7 +241,7 @@ svn_client_upgrade(const char *path,
iterpool);
if (!err)
err = svn_wc_parse_externals_description3(
&externals_p, svn_dirent_dirname(path, iterpool),
&externals_p, svn_dirent_dirname(local_abspath, iterpool),
external_desc->data, FALSE, iterpool);
if (err)
{


@ -830,23 +830,23 @@ write_handler(void *baton,
p = decode_file_offset(&sview_offset, p, end);
if (p == NULL)
return SVN_NO_ERROR;
break;
p = decode_size(&sview_len, p, end);
if (p == NULL)
return SVN_NO_ERROR;
break;
p = decode_size(&tview_len, p, end);
if (p == NULL)
return SVN_NO_ERROR;
break;
p = decode_size(&inslen, p, end);
if (p == NULL)
return SVN_NO_ERROR;
break;
p = decode_size(&newlen, p, end);
if (p == NULL)
return SVN_NO_ERROR;
break;
if (tview_len > SVN_DELTA_WINDOW_SIZE ||
sview_len > SVN_DELTA_WINDOW_SIZE ||
@ -904,7 +904,15 @@ write_handler(void *baton,
db->subpool = newpool;
}
/* NOTREACHED */
/* At this point we processed all integral windows and DB->BUFFER is empty
or contains partially read window header.
Check that unprocessed data is not larger than theoretical maximum
window header size. */
if (db->buffer->len > 5 * MAX_ENCODED_INT_LEN)
return svn_error_create(SVN_ERR_SVNDIFF_CORRUPT_WINDOW, NULL,
_("Svndiff contains a too-large window header"));
return SVN_NO_ERROR;
}
/* Minimal svn_stream_t write handler, doing nothing */


@ -1313,6 +1313,7 @@ svn_diff_parse_next_patch(svn_patch_t **patch,
line_after_tree_header_read = TRUE;
}
else if (! valid_header_line && state != state_start
&& state != state_git_diff_seen
&& !starts_with(line->data, "index "))
{
/* We've encountered an invalid diff header.


@ -34,6 +34,7 @@
#include "svn_diff.h"
#include "svn_types.h"
#include "svn_ctype.h"
#include "svn_sorts.h"
#include "svn_utf.h"
#include "svn_version.h"
@ -486,23 +487,37 @@ display_mergeinfo_diff(const char *old_mergeinfo_val,
return SVN_NO_ERROR;
}
/* qsort callback handling svn_prop_t by name */
static int
propchange_sort(const void *k1, const void *k2)
{
const svn_prop_t *propchange1 = k1;
const svn_prop_t *propchange2 = k2;
return strcmp(propchange1->name, propchange2->name);
}
svn_error_t *
svn_diff__display_prop_diffs(svn_stream_t *outstream,
const char *encoding,
const apr_array_header_t *propchanges,
apr_hash_t *original_props,
svn_boolean_t pretty_print_mergeinfo,
apr_pool_t *pool)
apr_pool_t *scratch_pool)
{
apr_pool_t *pool = scratch_pool;
apr_pool_t *iterpool = svn_pool_create(pool);
apr_array_header_t *changes = apr_array_copy(scratch_pool, propchanges);
int i;
for (i = 0; i < propchanges->nelts; i++)
qsort(changes->elts, changes->nelts, changes->elt_size, propchange_sort);
for (i = 0; i < changes->nelts; i++)
{
const char *action;
const svn_string_t *original_value;
const svn_prop_t *propchange
= &APR_ARRAY_IDX(propchanges, i, svn_prop_t);
= &APR_ARRAY_IDX(changes, i, svn_prop_t);
if (original_props)
original_value = svn_hash_gets(original_props, propchange->name);


@ -89,7 +89,7 @@ read_config(svn_memcache_t **memcache_p,
fs_fs_data_t *ffd = fs->fsap_data;
SVN_ERR(svn_cache__make_memcache_from_config(memcache_p, ffd->config,
fs->pool));
fs->pool));
/* No cache namespace by default. I.e. all FS instances share the
* cached data. If you specify different namespaces, the data will
@ -129,23 +129,9 @@ read_config(svn_memcache_t **memcache_p,
SVN_FS_CONFIG_FSFS_CACHE_FULLTEXTS,
TRUE);
/* don't cache revprops by default.
* Revprop caching significantly speeds up operations like
* svn ls -v. However, it requires synchronization that may
* not be available or efficient in the current server setup.
*
* If the caller chose option "2", enable revprop caching if
* the required API support is there to make it efficient.
/* For now, always disable revprop caching.
*/
if (strcmp(svn_hash__get_cstring(fs->config,
SVN_FS_CONFIG_FSFS_CACHE_REVPROPS,
""), "2"))
*cache_revprops
= svn_hash__get_bool(fs->config,
SVN_FS_CONFIG_FSFS_CACHE_REVPROPS,
FALSE);
else
*cache_revprops = svn_named_atomic__is_efficient();
*cache_revprops = FALSE;
return svn_config_get_bool(ffd->config, fail_stop,
CONFIG_SECTION_CACHES, CONFIG_OPTION_FAIL_STOP,


@ -3988,17 +3988,23 @@ write_non_packed_revprop(const char **final_path,
apr_hash_t *proplist,
apr_pool_t *pool)
{
apr_file_t *file;
svn_stream_t *stream;
*final_path = path_revprops(fs, rev, pool);
/* ### do we have a directory sitting around already? we really shouldn't
### have to get the dirname here. */
SVN_ERR(svn_stream_open_unique(&stream, tmp_path,
svn_dirent_dirname(*final_path, pool),
svn_io_file_del_none, pool, pool));
SVN_ERR(svn_io_open_unique_file3(&file, tmp_path,
svn_dirent_dirname(*final_path, pool),
svn_io_file_del_none, pool, pool));
stream = svn_stream_from_aprfile2(file, TRUE, pool);
SVN_ERR(svn_hash_write2(proplist, stream, SVN_HASH_TERMINATOR, pool));
SVN_ERR(svn_stream_close(stream));
/* Flush temporary file to disk and close it. */
SVN_ERR(svn_io_file_flush_to_disk(file, pool));
SVN_ERR(svn_io_file_close(file, pool));
return SVN_NO_ERROR;
}
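The switch from svn_stream_open_unique() to svn_io_open_unique_file3() above is what makes the explicit flush possible; here is a minimal sketch of the resulting write, flush-to-disk, close pattern (write_durably is a hypothetical helper, not part of the commit):
static svn_error_t *
write_durably(const char **tmp_path, const char *dirpath,
              const svn_string_t *contents, apr_pool_t *pool)
{
  apr_file_t *file;
  /* Create a uniquely named temp file that survives close ... */
  SVN_ERR(svn_io_open_unique_file3(&file, tmp_path, dirpath,
                                   svn_io_file_del_none, pool, pool));
  /* ... write everything ... */
  SVN_ERR(svn_io_file_write_full(file, contents->data, contents->len,
                                 NULL, pool));
  /* ... and force it to stable storage before closing, so a later rename
     into place cannot expose a partially written file after power loss. */
  SVN_ERR(svn_io_file_flush_to_disk(file, pool));
  SVN_ERR(svn_io_file_close(file, pool));
  return SVN_NO_ERROR;
}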
@ -4085,7 +4091,7 @@ serialize_revprops_header(svn_stream_t *stream,
return SVN_NO_ERROR;
}
/* Writes the a pack file to FILE_STREAM. It copies the serialized data
/* Writes the a pack file to FILE. It copies the serialized data
* from REVPROPS for the indexes [START,END) except for index CHANGED_INDEX.
*
* The data for the latter is taken from NEW_SERIALIZED. Note, that
@ -4103,7 +4109,7 @@ repack_revprops(svn_fs_t *fs,
int changed_index,
svn_stringbuf_t *new_serialized,
apr_off_t new_total_size,
svn_stream_t *file_stream,
apr_file_t *file,
apr_pool_t *pool)
{
fs_fs_data_t *ffd = fs->fsap_data;
@ -4151,9 +4157,11 @@ repack_revprops(svn_fs_t *fs,
? SVN_DELTA_COMPRESSION_LEVEL_DEFAULT
: SVN_DELTA_COMPRESSION_LEVEL_NONE));
/* finally, write the content to the target stream and close it */
SVN_ERR(svn_stream_write(file_stream, compressed->data, &compressed->len));
SVN_ERR(svn_stream_close(file_stream));
/* finally, write the content to the target file, flush and close it */
SVN_ERR(svn_io_file_write_full(file, compressed->data, compressed->len,
NULL, pool));
SVN_ERR(svn_io_file_flush_to_disk(file, pool));
SVN_ERR(svn_io_file_close(file, pool));
return SVN_NO_ERROR;
}
@ -4161,23 +4169,22 @@ repack_revprops(svn_fs_t *fs,
/* Allocate a new pack file name for revisions
* [REVPROPS->START_REVISION + START, REVPROPS->START_REVISION + END - 1]
* of REVPROPS->MANIFEST. Add the name of old file to FILES_TO_DELETE,
* auto-create that array if necessary. Return an open file stream to
* the new file in *STREAM allocated in POOL.
* auto-create that array if necessary. Return an open file *FILE that is
* allocated in POOL.
*/
static svn_error_t *
repack_stream_open(svn_stream_t **stream,
svn_fs_t *fs,
packed_revprops_t *revprops,
int start,
int end,
apr_array_header_t **files_to_delete,
apr_pool_t *pool)
repack_file_open(apr_file_t **file,
svn_fs_t *fs,
packed_revprops_t *revprops,
int start,
int end,
apr_array_header_t **files_to_delete,
apr_pool_t *pool)
{
apr_int64_t tag;
const char *tag_string;
svn_string_t *new_filename;
int i;
apr_file_t *file;
int manifest_offset
= (int)(revprops->start_revision - revprops->manifest_start);
@ -4209,12 +4216,11 @@ repack_stream_open(svn_stream_t **stream,
APR_ARRAY_IDX(revprops->manifest, i + manifest_offset, const char*)
= new_filename->data;
/* create a file stream for the new file */
SVN_ERR(svn_io_file_open(&file, svn_dirent_join(revprops->folder,
new_filename->data,
pool),
/* open the file */
SVN_ERR(svn_io_file_open(file, svn_dirent_join(revprops->folder,
new_filename->data,
pool),
APR_WRITE | APR_CREATE, APR_OS_DEFAULT, pool));
*stream = svn_stream_from_aprfile2(file, FALSE, pool);
return SVN_NO_ERROR;
}
@ -4238,6 +4244,7 @@ write_packed_revprop(const char **final_path,
packed_revprops_t *revprops;
apr_int64_t generation = 0;
svn_stream_t *stream;
apr_file_t *file;
svn_stringbuf_t *serialized;
apr_off_t new_total_size;
int changed_index;
@ -4273,11 +4280,11 @@ write_packed_revprop(const char **final_path,
*final_path = svn_dirent_join(revprops->folder, revprops->filename,
pool);
SVN_ERR(svn_stream_open_unique(&stream, tmp_path, revprops->folder,
svn_io_file_del_none, pool, pool));
SVN_ERR(svn_io_open_unique_file3(&file, tmp_path, revprops->folder,
svn_io_file_del_none, pool, pool));
SVN_ERR(repack_revprops(fs, revprops, 0, revprops->sizes->nelts,
changed_index, serialized, new_total_size,
stream, pool));
file, pool));
}
else
{
@ -4323,50 +4330,53 @@ write_packed_revprop(const char **final_path,
/* write the new, split files */
if (left_count)
{
SVN_ERR(repack_stream_open(&stream, fs, revprops, 0,
left_count, files_to_delete, pool));
SVN_ERR(repack_file_open(&file, fs, revprops, 0,
left_count, files_to_delete, pool));
SVN_ERR(repack_revprops(fs, revprops, 0, left_count,
changed_index, serialized, new_total_size,
stream, pool));
file, pool));
}
if (left_count + right_count < revprops->sizes->nelts)
{
SVN_ERR(repack_stream_open(&stream, fs, revprops, changed_index,
changed_index + 1, files_to_delete,
pool));
SVN_ERR(repack_file_open(&file, fs, revprops, changed_index,
changed_index + 1, files_to_delete,
pool));
SVN_ERR(repack_revprops(fs, revprops, changed_index,
changed_index + 1,
changed_index, serialized, new_total_size,
stream, pool));
file, pool));
}
if (right_count)
{
SVN_ERR(repack_stream_open(&stream, fs, revprops,
revprops->sizes->nelts - right_count,
revprops->sizes->nelts,
files_to_delete, pool));
SVN_ERR(repack_file_open(&file, fs, revprops,
revprops->sizes->nelts - right_count,
revprops->sizes->nelts,
files_to_delete, pool));
SVN_ERR(repack_revprops(fs, revprops,
revprops->sizes->nelts - right_count,
revprops->sizes->nelts, changed_index,
serialized, new_total_size, stream,
serialized, new_total_size, file,
pool));
}
/* write the new manifest */
*final_path = svn_dirent_join(revprops->folder, PATH_MANIFEST, pool);
SVN_ERR(svn_stream_open_unique(&stream, tmp_path, revprops->folder,
svn_io_file_del_none, pool, pool));
SVN_ERR(svn_io_open_unique_file3(&file, tmp_path, revprops->folder,
svn_io_file_del_none, pool, pool));
for (i = 0; i < revprops->manifest->nelts; ++i)
{
const char *filename = APR_ARRAY_IDX(revprops->manifest, i,
const char*);
SVN_ERR(svn_stream_printf(stream, pool, "%s\n", filename));
SVN_ERR(svn_io_file_write_full(file, filename, strlen(filename),
NULL, pool));
SVN_ERR(svn_io_file_putc('\n', file, pool));
}
SVN_ERR(svn_stream_close(stream));
SVN_ERR(svn_io_file_flush_to_disk(file, pool));
SVN_ERR(svn_io_file_close(file, pool));
}
return SVN_NO_ERROR;
@ -5062,9 +5072,11 @@ get_combined_window(svn_stringbuf_t **result,
/* Maybe, we've got a PLAIN start representation. If we do, read
as much data from it as the needed for the txdelta window's source
view.
Note that BUF / SOURCE may only be NULL in the first iteration. */
Note that BUF / SOURCE may only be NULL in the first iteration.
Also note that we may have short-cut reading the delta chain --
in which case SRC_OPS is 0 and it might not be a PLAIN rep. */
source = buf;
if (source == NULL && rb->src_state != NULL)
if (source == NULL && rb->src_state != NULL && window->src_ops)
SVN_ERR(read_plain_window(&source, rb->src_state, window->sview_len,
pool));
@ -6966,8 +6978,13 @@ svn_fs_fs__set_entry(svn_fs_t *fs,
rep = apr_pcalloc(pool, sizeof(*rep));
rep->revision = SVN_INVALID_REVNUM;
rep->txn_id = txn_id;
SVN_ERR(get_new_txn_node_id(&unique_suffix, fs, txn_id, pool));
rep->uniquifier = apr_psprintf(pool, "%s/%s", txn_id, unique_suffix);
if (ffd->format >= SVN_FS_FS__MIN_REP_SHARING_FORMAT)
{
SVN_ERR(get_new_txn_node_id(&unique_suffix, fs, txn_id, pool));
rep->uniquifier = apr_psprintf(pool, "%s/%s", txn_id, unique_suffix);
}
parent_noderev->data_rep = rep;
SVN_ERR(svn_fs_fs__put_node_revision(fs, parent_noderev->id,
parent_noderev, FALSE, pool));
@ -7551,6 +7568,7 @@ rep_write_contents_close(void *baton)
representation_t *rep;
representation_t *old_rep;
apr_off_t offset;
fs_fs_data_t *ffd = b->fs->fsap_data;
rep = apr_pcalloc(b->parent_pool, sizeof(*rep));
rep->offset = b->rep_offset;
@ -7567,9 +7585,13 @@ rep_write_contents_close(void *baton)
/* Fill in the rest of the representation field. */
rep->expanded_size = b->rep_size;
rep->txn_id = svn_fs_fs__id_txn_id(b->noderev->id);
SVN_ERR(get_new_txn_node_id(&unique_suffix, b->fs, rep->txn_id, b->pool));
rep->uniquifier = apr_psprintf(b->parent_pool, "%s/%s", rep->txn_id,
unique_suffix);
if (ffd->format >= SVN_FS_FS__MIN_REP_SHARING_FORMAT)
{
SVN_ERR(get_new_txn_node_id(&unique_suffix, b->fs, rep->txn_id, b->pool));
rep->uniquifier = apr_psprintf(b->parent_pool, "%s/%s", rep->txn_id,
unique_suffix);
}
rep->revision = SVN_INVALID_REVNUM;
/* Finalize the checksum. */
@ -7842,7 +7864,7 @@ write_hash_rep(representation_t *rep,
/* update the representation */
rep->size = whb->size;
rep->expanded_size = 0;
rep->expanded_size = whb->size;
}
return SVN_NO_ERROR;
@ -9070,7 +9092,9 @@ recover_find_max_ids(svn_fs_t *fs, svn_revnum_t rev,
stored in the representation. */
baton.file = rev_file;
baton.pool = pool;
baton.remaining = data_rep->expanded_size;
baton.remaining = data_rep->expanded_size
? data_rep->expanded_size
: data_rep->size;
stream = svn_stream_create(&baton, pool);
svn_stream_set_read(stream, read_handler_recover);
@ -10912,6 +10936,9 @@ hotcopy_update_current(svn_revnum_t *dst_youngest,
{
apr_off_t root_offset;
apr_file_t *rev_file;
char max_node_id[MAX_KEY_SIZE] = "0";
char max_copy_id[MAX_KEY_SIZE] = "0";
apr_size_t len;
if (dst_ffd->format >= SVN_FS_FS__MIN_PACKED_FORMAT)
SVN_ERR(update_min_unpacked_rev(dst_fs, scratch_pool));
@ -10921,9 +10948,15 @@ hotcopy_update_current(svn_revnum_t *dst_youngest,
SVN_ERR(get_root_changes_offset(&root_offset, NULL, rev_file,
dst_fs, new_youngest, scratch_pool));
SVN_ERR(recover_find_max_ids(dst_fs, new_youngest, rev_file,
root_offset, next_node_id, next_copy_id,
root_offset, max_node_id, max_copy_id,
scratch_pool));
SVN_ERR(svn_io_file_close(rev_file, scratch_pool));
/* We store the _next_ ids. */
len = strlen(max_node_id);
svn_fs_fs__next_key(max_node_id, &len, next_node_id);
len = strlen(max_copy_id);
svn_fs_fs__next_key(max_copy_id, &len, next_copy_id);
}
/* Update 'current'. */


@ -1,4 +1,4 @@
/* This file is automatically generated from rep-cache-db.sql and .dist_sandbox/subversion-1.8.10/subversion/libsvn_fs_fs/token-map.h.
/* This file is automatically generated from rep-cache-db.sql and .dist_sandbox/subversion-1.8.14/subversion/libsvn_fs_fs/token-map.h.
* Do not edit this file -- edit the source and rerun gen-make.py */
#define STMT_CREATE_SCHEMA 0


@ -127,7 +127,6 @@ typedef struct fs_txn_root_data_t
static svn_error_t * get_dag(dag_node_t **dag_node_p,
svn_fs_root_t *root,
const char *path,
svn_boolean_t needs_lock_cache,
apr_pool_t *pool);
static svn_fs_root_t *make_revision_root(svn_fs_t *fs, svn_revnum_t rev,
@ -178,34 +177,10 @@ typedef struct cache_entry_t
*/
enum { BUCKET_COUNT = 256 };
/* Each pool that has received a DAG node, will hold at least on lock on
our cache to ensure that the node remains valid despite being allocated
in the cache's pool. This is the structure to represent the lock.
*/
typedef struct cache_lock_t
{
/* pool holding the lock */
apr_pool_t *pool;
/* cache being locked */
fs_fs_dag_cache_t *cache;
/* next lock. NULL at EOL */
struct cache_lock_t *next;
/* previous lock. NULL at list head. Only then this==cache->first_lock */
struct cache_lock_t *prev;
} cache_lock_t;
/* The actual cache structure. All nodes will be allocated in POOL.
When the number of INSERTIONS (i.e. objects created form that pool)
exceeds a certain threshold, the pool will be cleared and the cache
with it.
To ensure that nodes returned from this structure remain valid, the
cache will get locked for the lifetime of the _receiving_ pools (i.e.
those in which we would allocate the node if there was no cache.).
The cache will only be cleared FIRST_LOCK is 0.
*/
struct fs_fs_dag_cache_t
{
@ -221,106 +196,23 @@ struct fs_fs_dag_cache_t
/* Property lookups etc. have a very high locality (75% re-hit).
Thus, remember the last hit location for optimistic lookup. */
apr_size_t last_hit;
/* List of receiving pools that are still alive. */
cache_lock_t *first_lock;
};
/* Cleanup function to be called when a receiving pool gets cleared.
Unlocks the cache once.
*/
static apr_status_t
unlock_cache(void *baton_void)
{
cache_lock_t *lock = baton_void;
/* remove lock from chain. Update the head */
if (lock->next)
lock->next->prev = lock->prev;
if (lock->prev)
lock->prev->next = lock->next;
else
lock->cache->first_lock = lock->next;
return APR_SUCCESS;
}
/* Cleanup function to be called when the cache itself gets destroyed.
In that case, we must unregister all unlock requests.
*/
static apr_status_t
unregister_locks(void *baton_void)
{
fs_fs_dag_cache_t *cache = baton_void;
cache_lock_t *lock;
for (lock = cache->first_lock; lock; lock = lock->next)
apr_pool_cleanup_kill(lock->pool,
lock,
unlock_cache);
return APR_SUCCESS;
}
fs_fs_dag_cache_t*
svn_fs_fs__create_dag_cache(apr_pool_t *pool)
{
fs_fs_dag_cache_t *result = apr_pcalloc(pool, sizeof(*result));
result->pool = svn_pool_create(pool);
apr_pool_cleanup_register(pool,
result,
unregister_locks,
apr_pool_cleanup_null);
return result;
}
/* Prevent the entries in CACHE from being destroyed, for as long as the
POOL lives.
*/
static void
lock_cache(fs_fs_dag_cache_t* cache, apr_pool_t *pool)
{
/* we only need to lock / unlock once per pool. Since we will often ask
for multiple nodes with the same pool, we can reduce the overhead.
However, if e.g. pools are being used in an alternating pattern,
we may lock the cache more than once for the same pool (and register
just as many cleanup actions).
*/
cache_lock_t *lock = cache->first_lock;
/* try to find an existing lock for POOL.
But limit the time spent on chasing pointers. */
int limiter = 8;
while (lock && --limiter)
if (lock->pool == pool)
return;
/* create a new lock and put it at the beginning of the lock chain */
lock = apr_palloc(pool, sizeof(*lock));
lock->cache = cache;
lock->pool = pool;
lock->next = cache->first_lock;
lock->prev = NULL;
if (cache->first_lock)
cache->first_lock->prev = lock;
cache->first_lock = lock;
/* instruct POOL to remove the lock upon cleanup */
apr_pool_cleanup_register(pool,
lock,
unlock_cache,
apr_pool_cleanup_null);
}
/* Clears the CACHE at regular intervals (destroying all cached nodes)
*/
static void
auto_clear_dag_cache(fs_fs_dag_cache_t* cache)
{
if (cache->first_lock == NULL && cache->insertions > BUCKET_COUNT)
if (cache->insertions > BUCKET_COUNT)
{
svn_pool_clear(cache->pool);
@ -433,18 +325,12 @@ locate_cache(svn_cache__t **cache,
}
}
/* Return NODE for PATH from ROOT's node cache, or NULL if the node
isn't cached; read it from the FS. *NODE remains valid until either
POOL or the FS gets cleared or destroyed (whichever comes first).
Since locking can be expensive and POOL may be long-living, for
nodes that will not need to survive the next call to this function,
set NEEDS_LOCK_CACHE to FALSE. */
/* Return NODE_P for PATH from ROOT's node cache, or NULL if the node
isn't cached; read it from the FS. *NODE_P is allocated in POOL. */
static svn_error_t *
dag_node_cache_get(dag_node_t **node_p,
svn_fs_root_t *root,
const char *path,
svn_boolean_t needs_lock_cache,
apr_pool_t *pool)
{
svn_boolean_t found;
@ -466,25 +352,23 @@ dag_node_cache_get(dag_node_t **node_p,
if (bucket->node == NULL)
{
locate_cache(&cache, &key, root, path, pool);
SVN_ERR(svn_cache__get((void **)&node, &found, cache, key,
ffd->dag_node_cache->pool));
SVN_ERR(svn_cache__get((void **)&node, &found, cache, key, pool));
if (found && node)
{
/* Patch up the FS, since this might have come from an old FS
* object. */
svn_fs_fs__dag_set_fs(node, root->fs);
bucket->node = node;
/* Retain the DAG node in L1 cache. */
bucket->node = svn_fs_fs__dag_dup(node,
ffd->dag_node_cache->pool);
}
}
else
{
node = bucket->node;
/* Copy the node from L1 cache into the passed-in POOL. */
node = svn_fs_fs__dag_dup(bucket->node, pool);
}
/* if we found a node, make sure it remains valid at least as long
as it would when allocated in POOL. */
if (node && needs_lock_cache)
lock_cache(ffd->dag_node_cache, pool);
}
else
{
@ -822,7 +706,7 @@ get_copy_inheritance(copy_id_inherit_t *inherit_p,
SVN_ERR(svn_fs_fs__dag_get_copyroot(&copyroot_rev, &copyroot_path,
child->node));
SVN_ERR(svn_fs_fs__revision_root(&copyroot_root, fs, copyroot_rev, pool));
SVN_ERR(get_dag(&copyroot_node, copyroot_root, copyroot_path, FALSE, pool));
SVN_ERR(get_dag(&copyroot_node, copyroot_root, copyroot_path, pool));
copyroot_id = svn_fs_fs__dag_get_id(copyroot_node);
if (svn_fs_fs__id_compare(copyroot_id, child_id) == -1)
@ -938,7 +822,7 @@ open_path(parent_path_t **parent_path_p,
{
directory = svn_dirent_dirname(path, pool);
if (directory[1] != 0) /* root nodes are covered anyway */
SVN_ERR(dag_node_cache_get(&here, root, directory, TRUE, pool));
SVN_ERR(dag_node_cache_get(&here, root, directory, pool));
}
/* did the shortcut work? */
@ -998,8 +882,8 @@ open_path(parent_path_t **parent_path_p,
element if we already know the lookup to fail for the
complete path. */
if (next || !(flags & open_path_uncached))
SVN_ERR(dag_node_cache_get(&cached_node, root, path_so_far,
TRUE, pool));
SVN_ERR(dag_node_cache_get(&cached_node, root, path_so_far, pool));
if (cached_node)
child = cached_node;
else
@ -1136,8 +1020,7 @@ make_path_mutable(svn_fs_root_t *root,
parent_path->node));
SVN_ERR(svn_fs_fs__revision_root(&copyroot_root, root->fs,
copyroot_rev, pool));
SVN_ERR(get_dag(&copyroot_node, copyroot_root, copyroot_path,
FALSE, pool));
SVN_ERR(get_dag(&copyroot_node, copyroot_root, copyroot_path, pool));
child_id = svn_fs_fs__dag_get_id(parent_path->node);
copyroot_id = svn_fs_fs__dag_get_id(copyroot_node);
@ -1174,16 +1057,11 @@ make_path_mutable(svn_fs_root_t *root,
/* Open the node identified by PATH in ROOT. Set DAG_NODE_P to the
node we find, allocated in POOL. Return the error
SVN_ERR_FS_NOT_FOUND if this node doesn't exist.
Since locking can be expensive and POOL may be long-living, for
nodes that will not need to survive the next call to this function,
set NEEDS_LOCK_CACHE to FALSE. */
SVN_ERR_FS_NOT_FOUND if this node doesn't exist. */
static svn_error_t *
get_dag(dag_node_t **dag_node_p,
svn_fs_root_t *root,
const char *path,
svn_boolean_t needs_lock_cache,
apr_pool_t *pool)
{
parent_path_t *parent_path;
@ -1192,7 +1070,7 @@ get_dag(dag_node_t **dag_node_p,
/* First we look for the DAG in our cache
(if the path may be canonical). */
if (*path == '/')
SVN_ERR(dag_node_cache_get(&node, root, path, needs_lock_cache, pool));
SVN_ERR(dag_node_cache_get(&node, root, path, pool));
if (! node)
{
@ -1202,8 +1080,7 @@ get_dag(dag_node_t **dag_node_p,
path = svn_fs__canonicalize_abspath(path, pool);
/* Try again with the corrected path. */
SVN_ERR(dag_node_cache_get(&node, root, path, needs_lock_cache,
pool));
SVN_ERR(dag_node_cache_get(&node, root, path, pool));
}
if (! node)
@ -1281,7 +1158,7 @@ svn_fs_fs__node_id(const svn_fs_id_t **id_p,
{
dag_node_t *node;
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
*id_p = svn_fs_fs__id_copy(svn_fs_fs__dag_get_id(node), pool);
}
return SVN_NO_ERROR;
@ -1296,7 +1173,7 @@ svn_fs_fs__node_created_rev(svn_revnum_t *revision,
{
dag_node_t *node;
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
return svn_fs_fs__dag_get_revision(revision, node, pool);
}
@ -1311,7 +1188,7 @@ fs_node_created_path(const char **created_path,
{
dag_node_t *node;
SVN_ERR(get_dag(&node, root, path, TRUE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
*created_path = svn_fs_fs__dag_get_created_path(node);
return SVN_NO_ERROR;
@ -1375,7 +1252,7 @@ fs_node_prop(svn_string_t **value_p,
dag_node_t *node;
apr_hash_t *proplist;
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
SVN_ERR(svn_fs_fs__dag_get_proplist(&proplist, node, pool));
*value_p = NULL;
if (proplist)
@ -1398,7 +1275,7 @@ fs_node_proplist(apr_hash_t **table_p,
apr_hash_t *table;
dag_node_t *node;
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
SVN_ERR(svn_fs_fs__dag_get_proplist(&table, node, pool));
*table_p = table ? table : apr_hash_make(pool);
@ -1515,8 +1392,8 @@ fs_props_changed(svn_boolean_t *changed_p,
(SVN_ERR_FS_GENERAL, NULL,
_("Cannot compare property value between two different filesystems"));
SVN_ERR(get_dag(&node1, root1, path1, TRUE, pool));
SVN_ERR(get_dag(&node2, root2, path2, TRUE, pool));
SVN_ERR(get_dag(&node1, root1, path1, pool));
SVN_ERR(get_dag(&node2, root2, path2, pool));
return svn_fs_fs__dag_things_different(changed_p, NULL,
node1, node2);
}
@ -1529,7 +1406,7 @@ fs_props_changed(svn_boolean_t *changed_p,
static svn_error_t *
get_root(dag_node_t **node, svn_fs_root_t *root, apr_pool_t *pool)
{
return get_dag(node, root, "/", TRUE, pool);
return get_dag(node, root, "/", pool);
}
@ -2193,7 +2070,7 @@ fs_dir_entries(apr_hash_t **table_p,
dag_node_t *node;
/* Get the entries for this path in the caller's pool. */
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
return svn_fs_fs__dag_dir_entries(table_p, node, pool);
}
@ -2365,7 +2242,7 @@ copy_helper(svn_fs_root_t *from_root,
_("Copy from mutable tree not currently supported"));
/* Get the NODE for FROM_PATH in FROM_ROOT.*/
SVN_ERR(get_dag(&from_node, from_root, from_path, TRUE, pool));
SVN_ERR(get_dag(&from_node, from_root, from_path, pool));
/* Build up the parent path from TO_PATH in TO_ROOT. If the last
component does not exist, it's not that big a deal. We'll just
@ -2442,7 +2319,7 @@ copy_helper(svn_fs_root_t *from_root,
pool));
/* Make a record of this modification in the changes table. */
SVN_ERR(get_dag(&new_node, to_root, to_path, TRUE, pool));
SVN_ERR(get_dag(&new_node, to_root, to_path, pool));
SVN_ERR(add_change(to_root->fs, txn_id, to_path,
svn_fs_fs__dag_get_id(new_node), kind, FALSE, FALSE,
svn_fs_fs__dag_node_kind(from_node),
@ -2553,7 +2430,7 @@ fs_copied_from(svn_revnum_t *rev_p,
{
/* There is no cached entry, look it up the old-fashioned
way. */
SVN_ERR(get_dag(&node, root, path, TRUE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
SVN_ERR(svn_fs_fs__dag_get_copyfrom_rev(&copyfrom_rev, node));
SVN_ERR(svn_fs_fs__dag_get_copyfrom_path(&copyfrom_path, node));
}
@ -2628,7 +2505,7 @@ fs_file_length(svn_filesize_t *length_p,
dag_node_t *file;
/* First create a dag_node_t from the root/path pair. */
SVN_ERR(get_dag(&file, root, path, FALSE, pool));
SVN_ERR(get_dag(&file, root, path, pool));
/* Now fetch its length */
return svn_fs_fs__dag_file_length(length_p, file, pool);
@ -2647,7 +2524,7 @@ fs_file_checksum(svn_checksum_t **checksum,
{
dag_node_t *file;
SVN_ERR(get_dag(&file, root, path, FALSE, pool));
SVN_ERR(get_dag(&file, root, path, pool));
return svn_fs_fs__dag_file_checksum(checksum, file, kind, pool);
}
@ -2666,7 +2543,7 @@ fs_file_contents(svn_stream_t **contents,
svn_stream_t *file_stream;
/* First create a dag_node_t from the root/path pair. */
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
/* Then create a readable stream from the dag_node_t. */
SVN_ERR(svn_fs_fs__dag_get_contents(&file_stream, node, pool));
@ -2689,7 +2566,7 @@ fs_try_process_file_contents(svn_boolean_t *success,
apr_pool_t *pool)
{
dag_node_t *node;
SVN_ERR(get_dag(&node, root, path, FALSE, pool));
SVN_ERR(get_dag(&node, root, path, pool));
return svn_fs_fs__dag_try_process_file_contents(success, node,
processor, baton, pool);
@ -3071,8 +2948,8 @@ fs_contents_changed(svn_boolean_t *changed_p,
(SVN_ERR_FS_GENERAL, NULL, _("'%s' is not a file"), path2);
}
SVN_ERR(get_dag(&node1, root1, path1, TRUE, pool));
SVN_ERR(get_dag(&node2, root2, path2, TRUE, pool));
SVN_ERR(get_dag(&node1, root1, path1, pool));
SVN_ERR(get_dag(&node2, root2, path2, pool));
return svn_fs_fs__dag_things_different(NULL, changed_p,
node1, node2);
}
@ -3092,10 +2969,10 @@ fs_get_file_delta_stream(svn_txdelta_stream_t **stream_p,
dag_node_t *source_node, *target_node;
if (source_root && source_path)
SVN_ERR(get_dag(&source_node, source_root, source_path, TRUE, pool));
SVN_ERR(get_dag(&source_node, source_root, source_path, pool));
else
source_node = NULL;
SVN_ERR(get_dag(&target_node, target_root, target_path, TRUE, pool));
SVN_ERR(get_dag(&target_node, target_root, target_path, pool));
/* Create a delta stream that turns the source into the target. */
return svn_fs_fs__dag_get_file_delta_stream(stream_p, source_node,
@ -3588,7 +3465,7 @@ history_prev(void *baton, apr_pool_t *pool)
SVN_ERR(svn_fs_fs__revision_root(&copyroot_root, fs, copyroot_rev,
pool));
SVN_ERR(get_dag(&node, copyroot_root, copyroot_path, FALSE, pool));
SVN_ERR(get_dag(&node, copyroot_root, copyroot_path, pool));
copy_dst = svn_fs_fs__dag_get_created_path(node);
/* If our current path was the very destination of the copy,
@ -3785,7 +3662,7 @@ crawl_directory_dag_for_mergeinfo(svn_fs_root_t *root,
svn_pool_clear(iterpool);
kid_path = svn_fspath__join(this_path, dirent->name, iterpool);
SVN_ERR(get_dag(&kid_dag, root, kid_path, TRUE, iterpool));
SVN_ERR(get_dag(&kid_dag, root, kid_path, iterpool));
SVN_ERR(svn_fs_fs__dag_has_mergeinfo(&has_mergeinfo, kid_dag));
SVN_ERR(svn_fs_fs__dag_has_descendants_with_mergeinfo(&go_down, kid_dag));
@ -4031,7 +3908,7 @@ add_descendant_mergeinfo(svn_mergeinfo_catalog_t result_catalog,
dag_node_t *this_dag;
svn_boolean_t go_down;
SVN_ERR(get_dag(&this_dag, root, path, TRUE, scratch_pool));
SVN_ERR(get_dag(&this_dag, root, path, scratch_pool));
SVN_ERR(svn_fs_fs__dag_has_descendants_with_mergeinfo(&go_down,
this_dag));
if (go_down)
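
These tree.c hunks drop the pool-lifetime cache locking and instead duplicate cached DAG nodes into the caller's pool via svn_fs_fs__dag_dup(), so clearing the cache can no longer invalidate a node the caller still holds. A minimal plain-C sketch of that copy-on-retrieval idea; the cache and node types below are invented for illustration and are not the libsvn_fs_fs structures:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct node_t { char path[64]; int revision; } node_t;
typedef struct cache_t { node_t slots[16]; int used; } cache_t;

/* Return a private copy of the cached entry; the caller owns it, so
   clearing or reusing the cache afterwards cannot invalidate it. */
static node_t *cache_get_dup(const cache_t *cache, const char *path)
{
  int i;
  for (i = 0; i < cache->used; i++)
    if (strcmp(cache->slots[i].path, path) == 0)
      {
        node_t *copy = malloc(sizeof(*copy));
        if (copy)
          *copy = cache->slots[i];            /* duplicate, don't alias */
        return copy;
      }
  return NULL;                                /* not cached */
}

int main(void)
{
  cache_t cache = { { { "/trunk", 42 } }, 1 };
  node_t *node = cache_get_dup(&cache, "/trunk");

  memset(&cache, 0, sizeof(cache));           /* "clear" the cache */
  if (node)
    printf("%s@%d still valid after cache clear\n", node->path, node->revision);
  free(node);
  return 0;
}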

View File

@ -2234,6 +2234,7 @@ close_edit(void *edit_baton,
ctx->activity_url ? ctx->activity_url : ctx->txn_url;
const svn_commit_info_t *commit_info;
int response_code;
svn_error_t *err = NULL;
/* MERGE our activity */
SVN_ERR(svn_ra_serf__run_merge(&commit_info, &response_code,
@ -2252,9 +2253,11 @@ close_edit(void *edit_baton,
response_code);
}
ctx->txn_url = NULL; /* If HTTPv2, the txn is now done */
/* Inform the WC that we did a commit. */
if (ctx->callback)
SVN_ERR(ctx->callback(commit_info, ctx->callback_baton, pool));
err = ctx->callback(commit_info, ctx->callback_baton, pool);
/* If we're using activities, DELETE our completed activity. */
if (ctx->activity_url)
@ -2271,11 +2274,17 @@ close_edit(void *edit_baton,
handler->response_handler = svn_ra_serf__expect_empty_body;
handler->response_baton = handler;
SVN_ERR(svn_ra_serf__context_run_one(handler, pool));
ctx->activity_url = NULL; /* Don't try again in abort_edit() on fail */
SVN_ERR(svn_error_compose_create(
err,
svn_ra_serf__context_run_one(handler, pool)));
SVN_ERR_ASSERT(handler->sline.code == 204);
}
SVN_ERR(err);
return SVN_NO_ERROR;
}
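
The reworked close_edit() above stores the commit callback's error instead of returning immediately, still runs the activity DELETE, and hands both results to svn_error_compose_create() so neither outcome is lost. A tiny sketch of that "remember the error, still run the cleanup" shape, with a string standing in for svn_error_t:

#include <stdio.h>

typedef const char *err_t;              /* stand-in for svn_error_t *; NULL == success */

static err_t notify_wc(void)        { return "commit callback failed"; }
static err_t delete_activity(void)  { return NULL; }   /* cleanup succeeded */

/* Keep the earlier error but do not skip the cleanup; report whichever
   error happened first, roughly what svn_error_compose_create() does. */
static err_t compose(err_t first, err_t second)
{
  return first ? first : second;
}

int main(void)
{
  err_t err = notify_wc();                 /* don't return yet ...        */
  err = compose(err, delete_activity());   /* ... the cleanup still runs  */
  printf("result: %s\n", err ? err : "OK");
  return 0;
}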

View File

@ -33,6 +33,7 @@
#include "svn_ra.h"
#include "svn_dav.h"
#include "svn_xml.h"
#include "svn_ctype.h"
#include "../libsvn_ra/ra_loader.h"
#include "svn_private_config.h"
@ -227,7 +228,9 @@ capabilities_headers_iterator_callback(void *baton,
}
/* SVN-specific headers -- if present, server supports HTTP protocol v2 */
else if (strncmp(key, "SVN", 3) == 0)
else if (!svn_ctype_casecmp(key[0], 'S')
&& !svn_ctype_casecmp(key[1], 'V')
&& !svn_ctype_casecmp(key[2], 'N'))
{
/* If we've not yet seen any information about supported POST
requests, we'll initialize the list/hash with "create-txn"

View File

@ -777,6 +777,13 @@ close_edit(void *edit_baton,
post_commit_err = svn_repos__post_commit_error_str(err, pool);
svn_error_clear(err);
}
/* Make sure a future abort doesn't perform
any work. This may occur if the commit
callback returns an error! */
eb->txn = NULL;
eb->txn_root = NULL;
}
else
{

View File

@ -40,6 +40,7 @@
#include <apr_lib.h>
#include "private/svn_repos_private.h"
#include "private/svn_fspath.h"
#include "private/svn_dep_compat.h"
#include "private/svn_mergeinfo_private.h"
@ -61,7 +62,7 @@ struct parse_baton
const char *parent_dir; /* repository relpath, or NULL */
svn_repos_notify_func_t notify_func;
void *notify_baton;
svn_repos_notify_t *notify;
apr_pool_t *notify_pool; /* scratch pool for notifications */
apr_pool_t *pool;
/* Start and end (inclusive) of revision range we'll pay attention
@ -329,16 +330,7 @@ renumber_mergeinfo_revs(svn_string_t **final_val,
SVN_ERR(svn_mergeinfo_merge2(final_mergeinfo, predates_stream_mergeinfo,
subpool, subpool));
SVN_ERR(svn_mergeinfo_sort(final_mergeinfo, subpool));
/* Mergeinfo revision sources for r0 and r1 are invalid; you can't merge r0
or r1. However, svndumpfilter can be abused to produce r1 merge source
revs. So if we encounter any, then strip them out, no need to put them
into the load target. */
SVN_ERR(svn_mergeinfo__filter_mergeinfo_by_ranges(&final_mergeinfo,
final_mergeinfo,
1, 0, FALSE,
subpool, subpool));
SVN_ERR(svn_mergeinfo__canonicalize_ranges(final_mergeinfo, subpool));
SVN_ERR(svn_mergeinfo_to_string(final_val, final_mergeinfo, pool));
svn_pool_destroy(subpool);
@ -502,9 +494,14 @@ new_revision_record(void **revision_baton,
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_txn_start;
pb->notify->old_revision = rb->rev;
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_txn_start,
pb->notify_pool);
notify->old_revision = rb->rev;
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
/* Stash the oldest "old" revision committed from the load stream. */
@ -515,9 +512,14 @@ new_revision_record(void **revision_baton,
/* If we're skipping this revision, try to notify someone. */
if (rb->skipped && pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_skipped_rev;
pb->notify->old_revision = rb->rev;
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_skipped_rev,
pb->notify_pool);
notify->old_revision = rb->rev;
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
/* If we're parsing revision 0, only the revision are (possibly)
@ -586,8 +588,13 @@ maybe_add_with_history(struct node_baton *nb,
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_copied_node;
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_copied_node,
pb->notify_pool);
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
}
@ -656,10 +663,14 @@ new_node_record(void **node_baton,
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_node_start;
pb->notify->node_action = nb->action;
pb->notify->path = nb->path;
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_node_start,
pb->notify_pool);
notify->path = nb->path;
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
switch (nb->action)
@ -726,6 +737,67 @@ set_revision_property(void *baton,
}
/* Adjust mergeinfo:
* - normalize line endings (if all CRLF, change to LF; but error if mixed);
* - adjust revision numbers (see renumber_mergeinfo_revs());
* - adjust paths (see prefix_mergeinfo_paths()).
*/
static svn_error_t *
adjust_mergeinfo_property(struct revision_baton *rb,
svn_string_t **new_value_p,
const svn_string_t *old_value,
apr_pool_t *result_pool)
{
struct parse_baton *pb = rb->pb;
svn_string_t prop_val = *old_value;
/* Tolerate mergeinfo with "\r\n" line endings because some
dumpstream sources might contain as much. If so normalize
the line endings to '\n' and make a notification to
PARSE_BATON->FEEDBACK_STREAM that we have made this
correction. */
if (strstr(prop_val.data, "\r"))
{
const char *prop_eol_normalized;
SVN_ERR(svn_subst_translate_cstring2(prop_val.data,
&prop_eol_normalized,
"\n", /* translate to LF */
FALSE, /* no repair */
NULL, /* no keywords */
FALSE, /* no expansion */
result_pool));
prop_val.data = prop_eol_normalized;
prop_val.len = strlen(prop_eol_normalized);
if (pb->notify_func)
{
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify
= svn_repos_notify_create(
svn_repos_notify_load_normalized_mergeinfo,
pb->notify_pool);
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
}
/* Renumber mergeinfo as appropriate. */
SVN_ERR(renumber_mergeinfo_revs(new_value_p, &prop_val, rb,
result_pool));
if (pb->parent_dir)
{
/* Prefix the merge source paths with PB->parent_dir. */
/* ASSUMPTION: All source paths are included in the dump stream. */
SVN_ERR(prefix_mergeinfo_paths(new_value_p, *new_value_p,
pb->parent_dir, result_pool));
}
return SVN_NO_ERROR;
}
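
adjust_mergeinfo_property() leans on svn_subst_translate_cstring2() to turn stray CR/CRLF line endings into plain LF before renumbering. A self-contained sketch of just that normalization step in plain C; the helper name below is made up:

#include <stdio.h>

/* Rewrite S in place so that both "\r\n" and bare "\r" become "\n".
   A stand-in for what svn_subst_translate_cstring2() is asked to do above. */
static void normalize_eols(char *s)
{
  char *out = s;
  while (*s)
    {
      if (*s == '\r')
        {
          *out++ = '\n';
          s += (s[1] == '\n') ? 2 : 1;      /* swallow the LF of a CRLF pair */
        }
      else
        *out++ = *s++;
    }
  *out = '\0';
}

int main(void)
{
  char val[] = "/trunk:1-10\r\n/branches/b:12\r";
  normalize_eols(val);
  fputs(val, stdout);
  return 0;
}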
static svn_error_t *
set_node_property(void *baton,
const char *name,
@ -739,51 +811,42 @@ set_node_property(void *baton,
if (rb->skipped)
return SVN_NO_ERROR;
/* Adjust mergeinfo. If this fails, presumably because the mergeinfo
property has an ill-formed value, then we must not fail to load
the repository (at least if it's a simple load with no revision
offset adjustments, path changes, etc.) so just warn and leave it
as it is. */
if (strcmp(name, SVN_PROP_MERGEINFO) == 0)
{
svn_string_t *renumbered_mergeinfo;
/* ### Need to cast away const. We cannot change the declaration of
* ### this function since it is part of svn_repos_parse_fns2_t. */
svn_string_t *prop_val = (svn_string_t *)value;
svn_string_t *new_value;
svn_error_t *err;
/* Tolerate mergeinfo with "\r\n" line endings because some
dumpstream sources might contain as much. If so normalize
the line endings to '\n' and make a notification to
PARSE_BATON->FEEDBACK_STREAM that we have made this
correction. */
if (strstr(prop_val->data, "\r"))
err = adjust_mergeinfo_property(rb, &new_value, value, nb->pool);
if (err)
{
const char *prop_eol_normalized;
SVN_ERR(svn_subst_translate_cstring2(prop_val->data,
&prop_eol_normalized,
"\n", /* translate to LF */
FALSE, /* no repair */
NULL, /* no keywords */
FALSE, /* no expansion */
nb->pool));
prop_val->data = prop_eol_normalized;
prop_val->len = strlen(prop_eol_normalized);
if (pb->validate_props)
{
return svn_error_quick_wrap(
err,
_("Invalid svn:mergeinfo value"));
}
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_normalized_mergeinfo;
pb->notify_func(pb->notify_baton, pb->notify, nb->pool);
}
}
svn_repos_notify_t *notify
= svn_repos_notify_create(svn_repos_notify_warning,
pb->notify_pool);
/* Renumber mergeinfo as appropriate. */
SVN_ERR(renumber_mergeinfo_revs(&renumbered_mergeinfo, prop_val, rb,
nb->pool));
value = renumbered_mergeinfo;
if (pb->parent_dir)
notify->warning = svn_repos__notify_warning_invalid_mergeinfo;
notify->warning_str = _("Invalid svn:mergeinfo value; "
"leaving unchanged");
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
svn_error_clear(err);
}
else
{
/* Prefix the merge source paths with PB->parent_dir. */
/* ASSUMPTION: All source paths are included in the dump stream. */
svn_string_t *mergeinfo_val;
SVN_ERR(prefix_mergeinfo_paths(&mergeinfo_val, value,
pb->parent_dir, nb->pool));
value = mergeinfo_val;
value = new_value;
}
}
@ -896,8 +959,13 @@ close_node(void *baton)
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_node_done;
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_node_done,
pb->notify_pool);
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
return SVN_NO_ERROR;
@ -1016,12 +1084,17 @@ close_revision(void *baton)
if (pb->notify_func)
{
pb->notify->action = svn_repos_notify_load_txn_committed;
pb->notify->new_revision = committed_rev;
pb->notify->old_revision = ((committed_rev == rb->rev)
/* ### TODO: Use proper scratch pool instead of pb->notify_pool */
svn_repos_notify_t *notify = svn_repos_notify_create(
svn_repos_notify_load_txn_committed,
pb->notify_pool);
notify->new_revision = committed_rev;
notify->old_revision = ((committed_rev == rb->rev)
? SVN_INVALID_REVNUM
: rb->rev);
pb->notify_func(pb->notify_baton, pb->notify, rb->pool);
pb->notify_func(pb->notify_baton, notify, pb->notify_pool);
svn_pool_clear(pb->notify_pool);
}
return SVN_NO_ERROR;
@ -1079,10 +1152,10 @@ svn_repos_get_fs_build_parser4(const svn_repos_parse_fns3_t **callbacks,
pb->validate_props = validate_props;
pb->notify_func = notify_func;
pb->notify_baton = notify_baton;
pb->notify = svn_repos_notify_create(svn_repos_notify_load_txn_start, pool);
pb->uuid_action = uuid_action;
pb->parent_dir = parent_dir;
pb->pool = pool;
pb->notify_pool = svn_pool_create(pool);
pb->rev_map = apr_hash_make(pool);
pb->oldest_old_rev = SVN_INVALID_REVNUM;
pb->last_rev_mapped = SVN_INVALID_REVNUM;

View File

@ -726,23 +726,6 @@ svn_repos_trace_node_locations(svn_fs_t *fs,
if (! prev_path)
break;
if (authz_read_func)
{
svn_boolean_t readable;
svn_fs_root_t *tmp_root;
SVN_ERR(svn_fs_revision_root(&tmp_root, fs, revision, currpool));
SVN_ERR(authz_read_func(&readable, tmp_root, path,
authz_read_baton, currpool));
if (! readable)
{
svn_pool_destroy(lastpool);
svn_pool_destroy(currpool);
return SVN_NO_ERROR;
}
}
/* Assign the current path to all younger revisions until we reach
the copy target rev. */
while ((revision_ptr < revision_ptr_end)
@ -765,6 +748,20 @@ svn_repos_trace_node_locations(svn_fs_t *fs,
path = prev_path;
revision = prev_rev;
if (authz_read_func)
{
svn_boolean_t readable;
SVN_ERR(svn_fs_revision_root(&root, fs, revision, currpool));
SVN_ERR(authz_read_func(&readable, root, path,
authz_read_baton, currpool));
if (!readable)
{
svn_pool_destroy(lastpool);
svn_pool_destroy(currpool);
return SVN_NO_ERROR;
}
}
/* Clear last pool and switch. */
svn_pool_clear(lastpool);
tmppool = lastpool;

View File

@ -101,6 +101,21 @@
* on their hash key.
*/
/* APR's read-write lock implementation on Windows is horribly inefficient.
* Even with very low contention a runtime overhead of 35% has been
* measured for 'svn-bench null-export' over ra_serf.
*
* Use a simple mutex on Windows. Because there is one mutex per segment,
* large machines should (and usually can) be configured with large caches
* such that read contention is kept low. This is basically the situation
* we had before 1.8.
*/
#ifdef WIN32
# define USE_SIMPLE_MUTEX 1
#else
# define USE_SIMPLE_MUTEX 0
#endif
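
The comment above motivates swapping APR's r/w lock for a plain mutex on Windows while keeping a single locking interface. A compile-time-switch sketch of the same idea using pthreads; the DEMO_* names are invented and error checking is omitted for brevity:

#include <pthread.h>
#include <stdio.h>

/* Pick one lock flavour at compile time but keep a single call site,
   mirroring the USE_SIMPLE_MUTEX switch above. */
#define DEMO_SIMPLE_MUTEX 1

#if DEMO_SIMPLE_MUTEX
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static void demo_read_lock(void)  { pthread_mutex_lock(&demo_lock); }
static void demo_write_lock(void) { pthread_mutex_lock(&demo_lock); }
static void demo_unlock(void)     { pthread_mutex_unlock(&demo_lock); }
#else
static pthread_rwlock_t demo_lock = PTHREAD_RWLOCK_INITIALIZER;
static void demo_read_lock(void)  { pthread_rwlock_rdlock(&demo_lock); }
static void demo_write_lock(void) { pthread_rwlock_wrlock(&demo_lock); }
static void demo_unlock(void)     { pthread_rwlock_unlock(&demo_lock); }
#endif

int main(void)
{
  demo_read_lock();
  puts("read section");
  demo_unlock();

  demo_write_lock();
  puts("write section");
  demo_unlock();
  return 0;
}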
/* A 16-way associative cache seems to be a good compromise between
* performance (worst-case lookups) and efficiency-loss due to collisions.
*
@ -465,11 +480,15 @@ struct svn_membuffer_t
* the cache's creator doesn't feel the cache needs to be
* thread-safe.
*/
# if USE_SIMPLE_MUTEX
svn_mutex__t *lock;
# else
apr_thread_rwlock_t *lock;
# endif
/* If set, write accesses will wait until they get exclusive access.
* Otherwise, they will become no-ops if the segment is currently
* read-locked.
* read-locked. Only used when LOCK is an r/w lock.
*/
svn_boolean_t allow_blocking_writes;
#endif
@ -489,12 +508,16 @@ static svn_error_t *
read_lock_cache(svn_membuffer_t *cache)
{
#if APR_HAS_THREADS
# if USE_SIMPLE_MUTEX
return svn_mutex__lock(cache->lock);
# else
if (cache->lock)
{
apr_status_t status = apr_thread_rwlock_rdlock(cache->lock);
if (status)
return svn_error_wrap_apr(status, _("Can't lock cache mutex"));
}
# endif
#endif
return SVN_NO_ERROR;
}
@ -505,6 +528,12 @@ static svn_error_t *
write_lock_cache(svn_membuffer_t *cache, svn_boolean_t *success)
{
#if APR_HAS_THREADS
# if USE_SIMPLE_MUTEX
return svn_mutex__lock(cache->lock);
# else
if (cache->lock)
{
apr_status_t status;
@ -526,6 +555,8 @@ write_lock_cache(svn_membuffer_t *cache, svn_boolean_t *success)
return svn_error_wrap_apr(status,
_("Can't write-lock cache mutex"));
}
# endif
#endif
return SVN_NO_ERROR;
}
@ -537,10 +568,18 @@ static svn_error_t *
force_write_lock_cache(svn_membuffer_t *cache)
{
#if APR_HAS_THREADS
# if USE_SIMPLE_MUTEX
return svn_mutex__lock(cache->lock);
# else
apr_status_t status = apr_thread_rwlock_wrlock(cache->lock);
if (status)
return svn_error_wrap_apr(status,
_("Can't write-lock cache mutex"));
# endif
#endif
return SVN_NO_ERROR;
}
@ -552,6 +591,12 @@ static svn_error_t *
unlock_cache(svn_membuffer_t *cache, svn_error_t *err)
{
#if APR_HAS_THREADS
# if USE_SIMPLE_MUTEX
return svn_mutex__unlock(cache->lock, err);
# else
if (cache->lock)
{
apr_status_t status = apr_thread_rwlock_unlock(cache->lock);
@ -561,6 +606,8 @@ unlock_cache(svn_membuffer_t *cache, svn_error_t *err)
if (status)
return svn_error_wrap_apr(status, _("Can't unlock cache mutex"));
}
# endif
#endif
return err;
}
@ -1290,6 +1337,12 @@ svn_cache__membuffer_cache_create(svn_membuffer_t **cache,
* the cache's creator doesn't feel the cache needs to be
* thread-safe.
*/
# if USE_SIMPLE_MUTEX
SVN_ERR(svn_mutex__init(&c[seg].lock, thread_safe, pool));
# else
c[seg].lock = NULL;
if (thread_safe)
{
@ -1299,6 +1352,8 @@ svn_cache__membuffer_cache_create(svn_membuffer_t **cache,
return svn_error_wrap_apr(status, _("Can't create cache mutex"));
}
# endif
/* Select the behavior of write operations.
*/
c[seg].allow_blocking_writes = allow_blocking_writes;

View File

@ -487,14 +487,15 @@ make_string_from_option(const char **valuep, svn_config_t *cfg,
expand_option_value(cfg, section, opt->value, &opt->x_value, tmp_pool);
opt->expanded = TRUE;
if (!x_pool)
if (x_pool != cfg->x_pool)
{
/* Grab the fully expanded value from tmp_pool before its
disappearing act. */
if (opt->x_value)
opt->x_value = apr_pstrmemdup(cfg->x_pool, opt->x_value,
strlen(opt->x_value));
svn_pool_destroy(tmp_pool);
if (!x_pool)
svn_pool_destroy(tmp_pool);
}
}
else

View File

@ -28,6 +28,7 @@
#include "svn_private_config.h"
#include "private/svn_mutex.h"
#include "private/svn_atomic.h"
/* A mutex to protect our global pool and cache. */
static svn_mutex__t *dso_mutex = NULL;
@ -41,18 +42,18 @@ static apr_hash_t *dso_cache;
/* Just an arbitrary location in memory... */
static int not_there_sentinel;
static volatile svn_atomic_t atomic_init_status = 0;
/* A specific value we store in the dso_cache to indicate that the
library wasn't found. This keeps us from allocating extra memory
from dso_pool when trying to find libraries we already know aren't
there. */
#define NOT_THERE ((void *) &not_there_sentinel)
svn_error_t *
svn_dso_initialize2(void)
static svn_error_t *
atomic_init_func(void *baton,
apr_pool_t *pool)
{
if (dso_pool)
return SVN_NO_ERROR;
dso_pool = svn_pool_create(NULL);
SVN_ERR(svn_mutex__init(&dso_mutex, TRUE, dso_pool));
@ -61,6 +62,15 @@ svn_dso_initialize2(void)
return SVN_NO_ERROR;
}
svn_error_t *
svn_dso_initialize2(void)
{
SVN_ERR(svn_atomic__init_once(&atomic_init_status, atomic_init_func,
NULL, NULL));
return SVN_NO_ERROR;
}
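
svn_dso_initialize2() now funnels its setup through svn_atomic__init_once(), so racing first calls can no longer double-initialize the global pool and mutex. A minimal sketch of the once-only pattern, with pthread_once() standing in for the svn_atomic helper:

#include <pthread.h>
#include <stdio.h>

/* Run the expensive setup exactly once no matter how many threads race
   into it; a stand-in for what svn_atomic__init_once() provides above. */
static pthread_once_t init_once = PTHREAD_ONCE_INIT;
static int initialized;

static void do_init(void)
{
  initialized = 1;            /* e.g. create the global pool, mutex, cache */
  puts("initialized once");
}

static void demo_initialize(void)
{
  pthread_once(&init_once, do_init);
}

int main(void)
{
  demo_initialize();
  demo_initialize();          /* second call is a no-op */
  printf("initialized = %d\n", initialized);
  return 0;
}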
#if APR_HAS_DSO
static svn_error_t *
svn_dso_load_internal(apr_dso_handle_t **dso, const char *fname)
@ -107,8 +117,7 @@ svn_dso_load_internal(apr_dso_handle_t **dso, const char *fname)
svn_error_t *
svn_dso_load(apr_dso_handle_t **dso, const char *fname)
{
if (! dso_pool)
SVN_ERR(svn_dso_initialize2());
SVN_ERR(svn_dso_initialize2());
SVN_MUTEX__WITH_LOCK(dso_mutex, svn_dso_load_internal(dso, fname));

View File

@ -289,6 +289,8 @@ svn_error_compose(svn_error_t *chain, svn_error_t *new_err)
*chain = *new_err;
if (chain->message)
chain->message = apr_pstrdup(pool, new_err->message);
if (chain->file)
chain->file = apr_pstrdup(pool, new_err->file);
chain->pool = pool;
#if defined(SVN_DEBUG)
if (! new_err->child)
@ -358,6 +360,8 @@ svn_error_dup(svn_error_t *err)
tmp_err->pool = pool;
if (tmp_err->message)
tmp_err->message = apr_pstrdup(pool, tmp_err->message);
if (tmp_err->file)
tmp_err->file = apr_pstrdup(pool, tmp_err->file);
}
#if defined(SVN_DEBUG)

View File

@ -72,6 +72,9 @@
#include "svn_cmdline.h"
#include "svn_checksum.h"
#include "svn_string.h"
#include "svn_hash.h"
#include "svn_user.h"
#include "svn_dirent_uri.h"
#include "private/svn_auth_private.h"
@ -80,6 +83,7 @@
#ifdef SVN_HAVE_GPG_AGENT
#define BUFFER_SIZE 1024
#define ATTEMPT_PARAMETER "svn.simple.gpg_agent.attempt"
/* Modify STR in-place such that blanks are escaped as required by the
* gpg-agent protocol. Return a pointer to STR. */
@ -98,6 +102,24 @@ escape_blanks(char *str)
return str;
}
/* Set *CACHE_ID_P to a cache ID derived from REALMSTRING, allocated in
* RESULT_POOL; use SCRATCH_POOL for temporary allocations. This is similar
* to other password caching mechanisms. */
static svn_error_t *
get_cache_id(const char **cache_id_p, const char *realmstring,
apr_pool_t *scratch_pool, apr_pool_t *result_pool)
{
const char *cache_id = NULL;
svn_checksum_t *digest = NULL;
SVN_ERR(svn_checksum(&digest, svn_checksum_md5, realmstring,
strlen(realmstring), scratch_pool));
cache_id = svn_checksum_to_cstring(digest, result_pool);
*cache_id_p = cache_id;
return SVN_NO_ERROR;
}
/* Attempt to read a gpg-agent response message from the socket SD into
* buffer BUF. Buf is assumed to be N bytes large. Return TRUE if a response
* message could be read that fits into the buffer. Else return FALSE.
@ -156,6 +178,17 @@ send_option(int sd, char *buf, size_t n, const char *option, const char *value,
return (strncmp(buf, "OK", 2) == 0);
}
/* Send the BYE command and disconnect from the gpg-agent. Doing this avoids
* gpg-agent emitting a "Connection reset by peer" log message with some
* versions of gpg-agent. */
static void
bye_gpg_agent(int sd)
{
/* don't bother to check the result of the write, it either worked or it
* didn't, but either way we're closing. */
write(sd, "BYE\n", 4);
close(sd);
}
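
bye_gpg_agent() says BYE on the agent's Unix-domain socket before closing, to keep gpg-agent from logging a reset connection. A standalone POSIX sketch of that connect/BYE/close flow; the socket path is hard-coded purely for illustration, whereas the real code derives it from GPG_AGENT_INFO or ~/.gnupg/S.gpg-agent:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
  struct sockaddr_un addr;
  int sd = socket(AF_UNIX, SOCK_STREAM, 0);
  if (sd == -1)
    return 1;

  memset(&addr, 0, sizeof(addr));
  addr.sun_family = AF_UNIX;
  strncpy(addr.sun_path, "/tmp/S.gpg-agent", sizeof(addr.sun_path) - 1);

  if (connect(sd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
    {
      /* Ignore the write result; either way we're closing. */
      (void) write(sd, "BYE\n", 4);
    }
  close(sd);
  return 0;
}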
/* Locate a running GPG Agent, and return an open file descriptor
* for communication with the agent in *NEW_SD. If no running agent
@ -173,17 +206,34 @@ find_running_gpg_agent(int *new_sd, apr_pool_t *pool)
*new_sd = -1;
/* This implements the method of finding the socket as described in
* the gpg-agent man page under the --use-standard-socket option.
* The man page misleadingly says the standard socket is
* "named 'S.gpg-agent' located in the home directory." The standard
* socket path is actually in the .gnupg directory in the home directory,
* i.e. ~/.gnupg/S.gpg-agent */
gpg_agent_info = getenv("GPG_AGENT_INFO");
if (gpg_agent_info != NULL)
{
apr_array_header_t *socket_details;
/* For reference GPG_AGENT_INFO consists of 3 : separated fields.
* The path to the socket, the pid of the gpg-agent process and
* finally the version of the protocol the agent talks. */
socket_details = svn_cstring_split(gpg_agent_info, ":", TRUE,
pool);
socket_name = APR_ARRAY_IDX(socket_details, 0, const char *);
}
else
return SVN_NO_ERROR;
{
const char *homedir = svn_user_get_homedir(pool);
if (!homedir)
return SVN_NO_ERROR;
socket_name = svn_dirent_join_many(pool, homedir, ".gnupg",
"S.gpg-agent", NULL);
}
if (socket_name != NULL)
{
@ -210,13 +260,13 @@ find_running_gpg_agent(int *new_sd, apr_pool_t *pool)
buffer = apr_palloc(pool, BUFFER_SIZE);
if (!receive_from_gpg_agent(sd, buffer, BUFFER_SIZE))
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (strncmp(buffer, "OK", 2) != 0)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
@ -226,19 +276,19 @@ find_running_gpg_agent(int *new_sd, apr_pool_t *pool)
request = "GETINFO socket_name\n";
if (write(sd, request, strlen(request)) == -1)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (!receive_from_gpg_agent(sd, buffer, BUFFER_SIZE))
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (strncmp(buffer, "D", 1) == 0)
p = &buffer[2];
if (!p)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
ep = strchr(p, '\n');
@ -246,18 +296,18 @@ find_running_gpg_agent(int *new_sd, apr_pool_t *pool)
*ep = '\0';
if (strcmp(socket_name, p) != 0)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
/* The agent will terminate its response with "OK". */
if (!receive_from_gpg_agent(sd, buffer, BUFFER_SIZE))
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (strncmp(buffer, "OK", 2) != 0)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
@ -265,6 +315,55 @@ find_running_gpg_agent(int *new_sd, apr_pool_t *pool)
return SVN_NO_ERROR;
}
static svn_boolean_t
send_options(int sd, char *buf, size_t n, apr_pool_t *scratch_pool)
{
const char *tty_name;
const char *tty_type;
const char *lc_ctype;
const char *display;
/* Send TTY_NAME to the gpg-agent daemon. */
tty_name = getenv("GPG_TTY");
if (tty_name != NULL)
{
if (!send_option(sd, buf, n, "ttyname", tty_name, scratch_pool))
return FALSE;
}
/* Send TTY_TYPE to the gpg-agent daemon. */
tty_type = getenv("TERM");
if (tty_type != NULL)
{
if (!send_option(sd, buf, n, "ttytype", tty_type, scratch_pool))
return FALSE;
}
/* Compute LC_CTYPE. */
lc_ctype = getenv("LC_ALL");
if (lc_ctype == NULL)
lc_ctype = getenv("LC_CTYPE");
if (lc_ctype == NULL)
lc_ctype = getenv("LANG");
/* Send LC_CTYPE to the gpg-agent daemon. */
if (lc_ctype != NULL)
{
if (!send_option(sd, buf, n, "lc-ctype", lc_ctype, scratch_pool))
return FALSE;
}
/* Send DISPLAY to the gpg-agent daemon. */
display = getenv("DISPLAY");
if (display != NULL)
{
if (!send_option(sd, buf, n, "display", display, scratch_pool))
return FALSE;
}
return TRUE;
}
/* Implementation of svn_auth__password_get_t that retrieves the password
from gpg-agent */
static svn_error_t *
@ -283,101 +382,59 @@ password_get_gpg_agent(svn_boolean_t *done,
char *buffer;
const char *request = NULL;
const char *cache_id = NULL;
const char *tty_name;
const char *tty_type;
const char *lc_ctype;
const char *display;
svn_checksum_t *digest = NULL;
char *password_prompt;
char *realm_prompt;
char *error_prompt;
int *attempt;
*done = FALSE;
attempt = svn_hash_gets(parameters, ATTEMPT_PARAMETER);
SVN_ERR(find_running_gpg_agent(&sd, pool));
if (sd == -1)
return SVN_NO_ERROR;
buffer = apr_palloc(pool, BUFFER_SIZE);
/* Send TTY_NAME to the gpg-agent daemon. */
tty_name = getenv("GPG_TTY");
if (tty_name != NULL)
if (!send_options(sd, buffer, BUFFER_SIZE, pool))
{
if (!send_option(sd, buffer, BUFFER_SIZE, "ttyname", tty_name, pool))
{
close(sd);
return SVN_NO_ERROR;
}
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
/* Send TTY_TYPE to the gpg-agent daemon. */
tty_type = getenv("TERM");
if (tty_type != NULL)
{
if (!send_option(sd, buffer, BUFFER_SIZE, "ttytype", tty_type, pool))
{
close(sd);
return SVN_NO_ERROR;
}
}
/* Compute LC_CTYPE. */
lc_ctype = getenv("LC_ALL");
if (lc_ctype == NULL)
lc_ctype = getenv("LC_CTYPE");
if (lc_ctype == NULL)
lc_ctype = getenv("LANG");
/* Send LC_CTYPE to the gpg-agent daemon. */
if (lc_ctype != NULL)
{
if (!send_option(sd, buffer, BUFFER_SIZE, "lc-ctype", lc_ctype, pool))
{
close(sd);
return SVN_NO_ERROR;
}
}
/* Send DISPLAY to the gpg-agent daemon. */
display = getenv("DISPLAY");
if (display != NULL)
{
if (!send_option(sd, buffer, BUFFER_SIZE, "display", display, pool))
{
close(sd);
return SVN_NO_ERROR;
}
}
/* Create the CACHE_ID which will be generated based on REALMSTRING similar
to other password caching mechanisms. */
SVN_ERR(svn_checksum(&digest, svn_checksum_md5, realmstring,
strlen(realmstring), pool));
cache_id = svn_checksum_to_cstring(digest, pool);
SVN_ERR(get_cache_id(&cache_id, realmstring, pool, pool));
password_prompt = apr_psprintf(pool, _("Password for '%s': "), username);
realm_prompt = apr_psprintf(pool, _("Enter your Subversion password for %s"),
realmstring);
if (*attempt == 1)
/* X means no error to the gpg-agent protocol */
error_prompt = apr_pstrdup(pool, "X");
else
error_prompt = apr_pstrdup(pool, _("Authentication failed"));
request = apr_psprintf(pool,
"GET_PASSPHRASE --data %s--repeat=1 "
"%s X %s %s\n",
"GET_PASSPHRASE --data %s"
"%s %s %s %s\n",
non_interactive ? "--no-ask " : "",
cache_id,
escape_blanks(error_prompt),
escape_blanks(password_prompt),
escape_blanks(realm_prompt));
if (write(sd, request, strlen(request)) == -1)
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (!receive_from_gpg_agent(sd, buffer, BUFFER_SIZE))
{
close(sd);
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
close(sd);
bye_gpg_agent(sd);
if (strncmp(buffer, "ERR", 3) == 0)
return SVN_NO_ERROR;
@ -424,7 +481,7 @@ password_set_gpg_agent(svn_boolean_t *done,
if (sd == -1)
return SVN_NO_ERROR;
close(sd);
bye_gpg_agent(sd);
*done = TRUE;
return SVN_NO_ERROR;
@ -440,11 +497,108 @@ simple_gpg_agent_first_creds(void **credentials,
const char *realmstring,
apr_pool_t *pool)
{
return svn_auth__simple_creds_cache_get(credentials, iter_baton,
provider_baton, parameters,
realmstring, password_get_gpg_agent,
SVN_AUTH__GPG_AGENT_PASSWORD_TYPE,
pool);
svn_error_t *err;
int *attempt = apr_palloc(pool, sizeof(*attempt));
*attempt = 1;
svn_hash_sets(parameters, ATTEMPT_PARAMETER, attempt);
err = svn_auth__simple_creds_cache_get(credentials, iter_baton,
provider_baton, parameters,
realmstring, password_get_gpg_agent,
SVN_AUTH__GPG_AGENT_PASSWORD_TYPE,
pool);
*iter_baton = attempt;
return err;
}
/* An implementation of svn_auth_provider_t::next_credentials() */
static svn_error_t *
simple_gpg_agent_next_creds(void **credentials,
void *iter_baton,
void *provider_baton,
apr_hash_t *parameters,
const char *realmstring,
apr_pool_t *pool)
{
int *attempt = (int *)iter_baton;
int sd;
char *buffer;
const char *cache_id = NULL;
const char *request = NULL;
*credentials = NULL;
/* The user's previous credentials failed, so first remove the cached entry
* before trying to retrieve them again. Because gpg-agent stores cached
* credentials immediately upon retrieving them, this gives us the
* opportunity to remove the invalid credentials and prompt the
* user again. While it's possible that server side issues could trigger
* this, this cache is ephemeral so at worst we're just speeding up
* when the user would need to re-enter their password. */
if (svn_hash_gets(parameters, SVN_AUTH_PARAM_NON_INTERACTIVE))
{
/* In this case since we're running non-interactively we do not
* want to clear the cache since the user was never prompted by
* gpg-agent to set a password. */
return SVN_NO_ERROR;
}
*attempt = *attempt + 1;
SVN_ERR(find_running_gpg_agent(&sd, pool));
if (sd == -1)
return SVN_NO_ERROR;
buffer = apr_palloc(pool, BUFFER_SIZE);
if (!send_options(sd, buffer, BUFFER_SIZE, pool))
{
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
SVN_ERR(get_cache_id(&cache_id, realmstring, pool, pool));
request = apr_psprintf(pool, "CLEAR_PASSPHRASE %s\n", cache_id);
if (write(sd, request, strlen(request)) == -1)
{
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (!receive_from_gpg_agent(sd, buffer, BUFFER_SIZE))
{
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
if (strncmp(buffer, "OK\n", 3) != 0)
{
bye_gpg_agent(sd);
return SVN_NO_ERROR;
}
/* TODO: This attempt limit hard codes it at 3 attempts (or 2 retries)
* which matches svn command line client's retry_limit as set in
* svn_cmdline_create_auth_baton(). It would be nice to have that
* limit reflected here but that violates the boundary between the
* prompt provider and the cache provider. gpg-agent is acting as
* both here due to the peculiarities of its design, so we'll have to
* live with this for now. Note that when these failures get exceeded
* it'll eventually fall back on the retry limits of whatever prompt
* provider is in effect, so this effectively doubles the limit. */
if (*attempt < 4)
return svn_auth__simple_creds_cache_get(credentials, &iter_baton,
provider_baton, parameters,
realmstring,
password_get_gpg_agent,
SVN_AUTH__GPG_AGENT_PASSWORD_TYPE,
pool);
return SVN_NO_ERROR;
}
@ -468,7 +622,7 @@ simple_gpg_agent_save_creds(svn_boolean_t *saved,
static const svn_auth_provider_t gpg_agent_simple_provider = {
SVN_AUTH_CRED_SIMPLE,
simple_gpg_agent_first_creds,
NULL,
simple_gpg_agent_next_creds,
simple_gpg_agent_save_creds
};

View File

@ -1,4 +1,4 @@
/* This file is automatically generated from internal_statements.sql and .dist_sandbox/subversion-1.8.10/subversion/libsvn_subr/token-map.h.
/* This file is automatically generated from internal_statements.sql and .dist_sandbox/subversion-1.8.14/subversion/libsvn_subr/token-map.h.
* Do not edit this file -- edit the source and rerun gen-make.py */
#define STMT_INTERNAL_SAVEPOINT_SVN 0

View File

@ -4675,8 +4675,25 @@ svn_io_open_unique_file3(apr_file_t **file,
* case, but only if the umask allows it. */
if (!using_system_temp_dir)
{
svn_error_t *err;
SVN_ERR(merge_default_file_perms(tempfile, &perms, scratch_pool));
SVN_ERR(file_perms_set2(tempfile, perms, scratch_pool));
err = file_perms_set2(tempfile, perms, scratch_pool);
if (err)
{
if (APR_STATUS_IS_INCOMPLETE(err->apr_err) ||
APR_STATUS_IS_ENOTIMPL(err->apr_err))
svn_error_clear(err);
else
{
const char *message;
message = apr_psprintf(scratch_pool,
_("Can't set permissions on '%s'"),
svn_dirent_local_style(tempname,
scratch_pool));
return svn_error_quick_wrap(err, message);
}
}
}
#endif

View File

@ -611,6 +611,43 @@ svn_rangelist__parse(svn_rangelist_t **rangelist,
return SVN_NO_ERROR;
}
/* Return TRUE if all ranges in RANGELIST are in ascending order and do
* not overlap and are not adjacent.
*
* ### Can yield false negatives: ranges of differing inheritance are
* allowed to be adjacent.
*
* If this returns FALSE, you probably want to qsort() the
* ranges and then call svn_rangelist__combine_adjacent_ranges().
*/
static svn_boolean_t
is_rangelist_normalized(svn_rangelist_t *rangelist)
{
int i;
svn_merge_range_t **ranges = (svn_merge_range_t **)rangelist->elts;
for (i = 0; i < rangelist->nelts-1; ++i)
if (ranges[i]->end >= ranges[i+1]->start)
return FALSE;
return TRUE;
}
svn_error_t *
svn_rangelist__canonicalize(svn_rangelist_t *rangelist,
apr_pool_t *scratch_pool)
{
if (! is_rangelist_normalized(rangelist))
{
qsort(rangelist->elts, rangelist->nelts, rangelist->elt_size,
svn_sort_compare_ranges);
SVN_ERR(svn_rangelist__combine_adjacent_ranges(rangelist, scratch_pool));
}
return SVN_NO_ERROR;
}
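
svn_rangelist__canonicalize() only pays for qsort() and the adjacent-range merge when is_rangelist_normalized() says the list is actually out of shape. A toy version of the same check-then-normalize flow; the range type and inclusive-bound semantics are simplified stand-ins, not svn_merge_range_t:

#include <stdio.h>
#include <stdlib.h>

typedef struct range_t { long start, end; } range_t;

/* Ascending, non-overlapping, non-adjacent? */
static int is_normalized(const range_t *r, int n)
{
  int i;
  for (i = 0; i < n - 1; i++)
    if (r[i].end >= r[i + 1].start)
      return 0;
  return 1;
}

static int cmp_range(const void *a, const void *b)
{
  long d = ((const range_t *)a)->start - ((const range_t *)b)->start;
  return (d > 0) - (d < 0);
}

/* Sort and fold overlapping/adjacent ranges; return the new count. */
static int canonicalize(range_t *r, int n)
{
  int i, out = 0;
  if (n == 0 || is_normalized(r, n))
    return n;                          /* common case: already clean */
  qsort(r, n, sizeof(*r), cmp_range);
  for (i = 1; i < n; i++)
    if (r[i].start <= r[out].end)      /* merge into the previous range */
      { if (r[i].end > r[out].end) r[out].end = r[i].end; }
    else
      r[++out] = r[i];
  return out + 1;
}

int main(void)
{
  range_t rl[] = { { 10, 20 }, { 1, 5 }, { 15, 30 } };
  int i, n = canonicalize(rl, 3);
  for (i = 0; i < n; i++)
    printf("%ld-%ld\n", rl[i].start, rl[i].end);
  return 0;
}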
svn_error_t *
svn_rangelist__combine_adjacent_ranges(svn_rangelist_t *rangelist,
apr_pool_t *scratch_pool)
@ -692,15 +729,11 @@ parse_revision_line(const char **input, const char *end, svn_mergeinfo_t hash,
if (*input != end)
*input = *input + 1;
/* Sort the rangelist, combine adjacent ranges into single ranges,
and make sure there are no overlapping ranges. */
if (rangelist->nelts > 1)
{
qsort(rangelist->elts, rangelist->nelts, rangelist->elt_size,
svn_sort_compare_ranges);
SVN_ERR(svn_rangelist__combine_adjacent_ranges(rangelist, scratch_pool));
}
/* Sort the rangelist, combine adjacent ranges into single ranges, and
make sure there are no overlapping ranges. Luckily, most data in
svn:mergeinfo will already be in normalized form and this will be quick.
*/
SVN_ERR(svn_rangelist__canonicalize(rangelist, scratch_pool));
/* Handle any funky mergeinfo with relative merge source paths that
might exist due to issue #3547. It's possible that this issue allowed
@ -1973,6 +2006,22 @@ svn_mergeinfo_sort(svn_mergeinfo_t input, apr_pool_t *pool)
return SVN_NO_ERROR;
}
svn_error_t *
svn_mergeinfo__canonicalize_ranges(svn_mergeinfo_t mergeinfo,
apr_pool_t *scratch_pool)
{
apr_hash_index_t *hi;
for (hi = apr_hash_first(scratch_pool, mergeinfo); hi; hi = apr_hash_next(hi))
{
apr_array_header_t *rl = svn__apr_hash_index_val(hi);
SVN_ERR(svn_rangelist__canonicalize(rl, scratch_pool));
}
return SVN_NO_ERROR;
}
svn_mergeinfo_catalog_t
svn_mergeinfo_catalog_dup(svn_mergeinfo_catalog_t mergeinfo_catalog,
apr_pool_t *pool)

View File

@ -24,7 +24,7 @@
/* Include sqlite3 inline, making all symbols private. */
#ifdef SVN_SQLITE_INLINE
# define SQLITE_OMIT_DEPRECATED
# define SQLITE_OMIT_DEPRECATED 1
# define SQLITE_API static
# if __GNUC__ > 4 || (__GNUC__ == 4 && (__GNUC_MINOR__ >= 6 || __APPLE_CC__))
# if !__APPLE_CC__ || __GNUC_MINOR__ >= 6

View File

@ -619,7 +619,7 @@ svn_stringbuf_insert(svn_stringbuf_t *str,
if (bytes + count > str->data && bytes < str->data + str->blocksize)
{
/* special case: BYTES overlaps with this string -> copy the source */
const char *temp = apr_pstrndup(str->pool, bytes, count);
const char *temp = apr_pmemdup(str->pool, bytes, count);
svn_stringbuf_insert(str, pos, temp, count);
}
else
@ -659,7 +659,7 @@ svn_stringbuf_replace(svn_stringbuf_t *str,
if (bytes + new_count > str->data && bytes < str->data + str->blocksize)
{
/* special case: BYTES overlaps with this string -> copy the source */
const char *temp = apr_pstrndup(str->pool, bytes, new_count);
const char *temp = apr_pmemdup(str->pool, bytes, new_count);
svn_stringbuf_replace(str, pos, old_count, temp, new_count);
}
else

View File

@ -136,7 +136,7 @@ svn_version_extended(svn_boolean_t verbose,
info->build_time = NULL;
info->build_host = SVN_BUILD_HOST;
info->copyright = apr_pstrdup
(pool, _("Copyright (C) 2014 The Apache Software Foundation.\n"
(pool, _("Copyright (C) 2015 The Apache Software Foundation.\n"
"This software consists of contributions made by many people;\n"
"see the NOTICE file for more information.\n"
"Subversion is open source software, see "

View File

@ -965,7 +965,8 @@ svn_wc_add4(svn_wc_context_t *wc_ctx,
repos_relpath,
repos_root_url, repos_uuid,
copyfrom_rev,
NULL /* children */, FALSE, depth,
NULL /* children */, depth,
FALSE /* is_move */,
NULL /* conflicts */,
NULL /* work items */,
scratch_pool));

View File

@ -67,69 +67,13 @@ can_be_cleaned(int *wc_format,
return SVN_NO_ERROR;
}
/* Do a modified check for LOCAL_ABSPATH, and all working children, to force
timestamp repair. */
/* Dummy svn_wc_status_func4_t implementation */
static svn_error_t *
repair_timestamps(svn_wc__db_t *db,
const char *local_abspath,
svn_cancel_func_t cancel_func,
void *cancel_baton,
apr_pool_t *scratch_pool)
status_dummy_callback(void *baton,
const char *local_abspath,
const svn_wc_status3_t *status,
apr_pool_t *scratch_pool)
{
svn_node_kind_t kind;
svn_wc__db_status_t status;
if (cancel_func)
SVN_ERR(cancel_func(cancel_baton));
SVN_ERR(svn_wc__db_read_info(&status, &kind,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL,
db, local_abspath, scratch_pool, scratch_pool));
if (status == svn_wc__db_status_server_excluded
|| status == svn_wc__db_status_deleted
|| status == svn_wc__db_status_excluded
|| status == svn_wc__db_status_not_present)
return SVN_NO_ERROR;
if (kind == svn_node_file
|| kind == svn_node_symlink)
{
svn_boolean_t modified;
SVN_ERR(svn_wc__internal_file_modified_p(&modified,
db, local_abspath, FALSE,
scratch_pool));
}
else if (kind == svn_node_dir)
{
apr_pool_t *iterpool = svn_pool_create(scratch_pool);
const apr_array_header_t *children;
int i;
SVN_ERR(svn_wc__db_read_children_of_working_node(&children, db,
local_abspath,
scratch_pool,
iterpool));
for (i = 0; i < children->nelts; ++i)
{
const char *child_abspath;
svn_pool_clear(iterpool);
child_abspath = svn_dirent_join(local_abspath,
APR_ARRAY_IDX(children, i,
const char *),
iterpool);
SVN_ERR(repair_timestamps(db, child_abspath,
cancel_func, cancel_baton, iterpool));
}
svn_pool_destroy(iterpool);
}
return SVN_NO_ERROR;
}
@ -184,8 +128,17 @@ cleanup_internal(svn_wc__db_t *db,
SVN_ERR(svn_wc__db_pristine_cleanup(db, dir_abspath, scratch_pool));
}
SVN_ERR(repair_timestamps(db, dir_abspath, cancel_func, cancel_baton,
scratch_pool));
/* Instead of implementing a separate repair step here, use the standard
status walker's optimized implementation, which performs repairs when
there is a lock. */
SVN_ERR(svn_wc__internal_walk_status(db, dir_abspath, svn_depth_infinity,
FALSE /* get_all */,
FALSE /* no_ignore */,
FALSE /* ignore_text_mods */,
NULL /* ignore patterns */,
status_dummy_callback, NULL,
cancel_func, cancel_baton,
scratch_pool));
/* All done, toss the lock */
SVN_ERR(svn_wc__db_wclock_release(db, dir_abspath, scratch_pool));
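
cleanup_internal() now gets its timestamp repairs as a side effect of the ordinary status walk, driving it with a do-nothing callback. A small sketch of that pattern: the walker fixes entries as it visits them, and a caller that only wants the side effect passes a no-op visitor. Everything below is an invented toy, not the svn_wc API:

#include <stdio.h>

typedef struct entry_t { const char *path; long recorded_mtime, actual_mtime; } entry_t;

typedef void (*visit_fn)(void *baton, const entry_t *entry);

/* The walker repairs stale recorded timestamps while visiting. */
static void walk(entry_t *entries, int n, visit_fn callback, void *baton)
{
  int i;
  for (i = 0; i < n; i++)
    {
      if (entries[i].recorded_mtime != entries[i].actual_mtime)
        entries[i].recorded_mtime = entries[i].actual_mtime;  /* repair */
      callback(baton, &entries[i]);
    }
}

static void dummy_callback(void *baton, const entry_t *entry)
{
  (void) baton; (void) entry;     /* we only want the walker's side effects */
}

int main(void)
{
  entry_t wc[] = { { "A/mu", 100, 150 }, { "iota", 200, 200 } };
  walk(wc, 2, dummy_callback, NULL);
  printf("%s repaired to %ld\n", wc[0].path, wc[0].recorded_mtime);
  return 0;
}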

View File

@ -1642,7 +1642,13 @@ eval_text_conflict_func_result(svn_skel_t **work_items,
}
}
SVN_ERR_ASSERT(install_from_abspath != NULL);
if (install_from_abspath == NULL)
return svn_error_createf(SVN_ERR_WC_CONFLICT_RESOLVER_FAILURE, NULL,
_("Conflict on '%s' could not be resolved "
"because the chosen version of the file "
"is not available."),
svn_dirent_local_style(local_abspath,
scratch_pool));
{
svn_skel_t *work_item;
@ -1761,6 +1767,7 @@ resolve_text_conflict(svn_skel_t **work_items,
svn_skel_t *work_item;
svn_wc_conflict_description2_t *cdesc;
apr_hash_t *props;
const char *mime_type;
*work_items = NULL;
*was_resolved = FALSE;
@ -1773,8 +1780,9 @@ resolve_text_conflict(svn_skel_t **work_items,
cdesc = svn_wc_conflict_description_create_text2(local_abspath,
scratch_pool);
cdesc->is_binary = FALSE;
cdesc->mime_type = svn_prop_get_value(props, SVN_PROP_MIME_TYPE);
mime_type = svn_prop_get_value(props, SVN_PROP_MIME_TYPE);
cdesc->is_binary = mime_type ? svn_mime_type_is_binary(mime_type) : FALSE;
cdesc->mime_type = mime_type;
cdesc->base_abspath = left_abspath;
cdesc->their_abspath = right_abspath;
cdesc->my_abspath = detranslated_target;
@ -2262,6 +2270,8 @@ svn_wc__read_conflicts(const apr_array_header_t **conflicts,
if (text_conflicted)
{
apr_hash_t *props;
const char *mime_type;
svn_wc_conflict_description2_t *desc;
desc = svn_wc_conflict_description_create_text2(local_abspath,
result_pool);
@ -2270,6 +2280,12 @@ svn_wc__read_conflicts(const apr_array_header_t **conflicts,
desc->src_left_version = left_version;
desc->src_right_version = right_version;
SVN_ERR(svn_wc__db_read_props(&props, db, local_abspath,
scratch_pool, scratch_pool));
mime_type = svn_prop_get_value(props, SVN_PROP_MIME_TYPE);
desc->is_binary = mime_type ? svn_mime_type_is_binary(mime_type) : FALSE;
desc->mime_type = mime_type;
SVN_ERR(svn_wc__conflict_read_text_conflict(&desc->my_abspath,
&desc->base_abspath,
&desc->their_abspath,
@ -2913,6 +2929,13 @@ conflict_status_walker(void *baton,
cd = APR_ARRAY_IDX(conflicts, i, const svn_wc_conflict_description2_t *);
if ((cd->kind == svn_wc_conflict_kind_property && !cswb->resolve_prop)
|| (cd->kind == svn_wc_conflict_kind_text && !cswb->resolve_text)
|| (cd->kind == svn_wc_conflict_kind_tree && !cswb->resolve_tree))
{
continue; /* Easy out. Don't call resolver func and ignore result */
}
svn_pool_clear(iterpool);
if (my_choice == svn_wc_conflict_choose_unspecified)

View File

@ -891,18 +891,18 @@ remove_node_conflict_markers(svn_wc__db_t *db,
{
const char *marker_abspath;
const char *child_relpath;
const char *child_abpath;
const char *child_abspath;
marker_abspath = APR_ARRAY_IDX(markers, i, const char *);
child_relpath = svn_dirent_is_child(src_dir, marker_abspath, NULL);
child_relpath = svn_dirent_skip_ancestor(src_dir, marker_abspath);
if (child_relpath)
{
child_abpath = svn_dirent_join(dst_dir, child_relpath,
scratch_pool);
child_abspath = svn_dirent_join(dst_dir, child_relpath,
scratch_pool);
SVN_ERR(svn_io_remove_file2(child_abpath, TRUE, scratch_pool));
SVN_ERR(svn_io_remove_file2(child_abspath, TRUE, scratch_pool));
}
}
}
@ -922,7 +922,7 @@ remove_node_conflict_markers(svn_wc__db_t *db,
static svn_error_t *
remove_all_conflict_markers(svn_wc__db_t *db,
const char *src_dir_abspath,
const char *wc_dir_abspath,
const char *dst_dir_abspath,
apr_pool_t *scratch_pool)
{
apr_pool_t *iterpool = svn_pool_create(scratch_pool);
@ -951,7 +951,7 @@ remove_all_conflict_markers(svn_wc__db_t *db,
SVN_ERR(remove_node_conflict_markers(
db,
svn_dirent_join(src_dir_abspath, name, iterpool),
svn_dirent_join(wc_dir_abspath, name, iterpool),
svn_dirent_join(dst_dir_abspath, name, iterpool),
iterpool));
}
if (info->kind == svn_node_dir)
@ -960,7 +960,7 @@ remove_all_conflict_markers(svn_wc__db_t *db,
SVN_ERR(remove_all_conflict_markers(
db,
svn_dirent_join(src_dir_abspath, name, iterpool),
svn_dirent_join(wc_dir_abspath, name, iterpool),
svn_dirent_join(dst_dir_abspath, name, iterpool),
iterpool));
}
}
@ -1033,8 +1033,16 @@ svn_wc__move2(svn_wc_context_t *wc_ctx,
scratch_pool));
if (conflicted)
SVN_ERR(remove_node_conflict_markers(db, src_abspath, dst_abspath,
scratch_pool));
{
/* When we moved a directory, we moved the conflict markers
with the target... if we moved a file we only moved the
file itself and the markers are still in the old location */
SVN_ERR(remove_node_conflict_markers(db, src_abspath,
(kind == svn_node_dir)
? dst_abspath
: src_abspath,
scratch_pool));
}
SVN_ERR(svn_wc__db_op_delete(db, src_abspath,
move_degraded_to_copy ? NULL : dst_abspath,

View File

@ -47,9 +47,6 @@ extern "C" {
svn_wc__db_status_added. When DIFF_PRISTINE is TRUE, report the pristine
version of LOCAL_ABSPATH as ADDED. In this case an
svn_wc__db_status_deleted may shadow an added or deleted node.
If CHANGELIST_HASH is not NULL and LOCAL_ABSPATH's changelist is not
in the changelist, don't report the node.
*/
svn_error_t *
svn_wc__diff_local_only_file(svn_wc__db_t *db,
@ -57,7 +54,6 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
const char *relpath,
const svn_diff_tree_processor_t *processor,
void *processor_parent_baton,
apr_hash_t *changelist_hash,
svn_boolean_t diff_pristine,
svn_cancel_func_t cancel_func,
void *cancel_baton,
@ -73,9 +69,6 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
svn_wc__db_status_added. When DIFF_PRISTINE is TRUE, report the pristine
version of LOCAL_ABSPATH as ADDED. In this case an
svn_wc__db_status_deleted may shadow an added or deleted node.
If CHANGELIST_HASH is not NULL and LOCAL_ABSPATH's changelist is not
in the changelist, don't report the node.
*/
svn_error_t *
svn_wc__diff_local_only_dir(svn_wc__db_t *db,
@ -84,7 +77,6 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
svn_depth_t depth,
const svn_diff_tree_processor_t *processor,
void *processor_parent_baton,
apr_hash_t *changelist_hash,
svn_boolean_t diff_pristine,
svn_cancel_func_t cancel_func,
void *cancel_baton,
@ -132,7 +124,6 @@ svn_wc__diff_base_working_diff(svn_wc__db_t *db,
const char *local_abspath,
const char *relpath,
svn_revnum_t revision,
apr_hash_t *changelist_hash,
const svn_diff_tree_processor_t *processor,
void *processor_dir_baton,
svn_boolean_t diff_pristine,
@ -140,6 +131,32 @@ svn_wc__diff_base_working_diff(svn_wc__db_t *db,
void *cancel_baton,
apr_pool_t *scratch_pool);
/* Return a tree processor filter that filters by changelist membership.
*
* This filter only passes on the changes for a file if the file's path
* (in the WC) is assigned to one of the changelists in @a changelist_hash.
* It also passes on the opening and closing of each directory that contains
* such a change, and possibly also of other directories, but not addition
* or deletion or changes to a directory.
*
* If @a changelist_hash is null then no filtering is performed and the
* returned diff processor is driven exactly like the input @a processor.
*
* @a wc_ctx is the WC context and @a root_local_abspath is the WC path of
* the root of the diff (for which relpath = "" in the diff processor).
*
* Allocate the returned diff processor in @a result_pool, or if no
* filtering is required then the input pointer @a processor itself may be
* returned.
*/
const svn_diff_tree_processor_t *
svn_wc__changelist_filter_tree_processor_create(
const svn_diff_tree_processor_t *processor,
svn_wc_context_t *wc_ctx,
const char *root_local_abspath,
apr_hash_t *changelist_hash,
apr_pool_t *result_pool);
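For illustration, a caller wires the new filter in roughly as follows, mirroring the call sites this change adds in diff_editor.c and diff_local.c; the variable names here (changelist_filter, wc_ctx, anchor_abspath, diff_processor, result_pool) are the caller's and only assumptions:
  apr_hash_t *changelist_hash = NULL;
  if (changelist_filter && changelist_filter->nelts)
    SVN_ERR(svn_hash_from_cstring_keys(&changelist_hash, changelist_filter,
                                       result_pool));
  diff_processor = svn_wc__changelist_filter_tree_processor_create(
                     diff_processor, wc_ctx, anchor_abspath,
                     changelist_hash, result_pool);
When changelist_hash stays NULL the creator hands back the input processor unchanged, so the wrapper costs nothing if no changelist filtering was requested.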
#ifdef __cplusplus
}


@ -114,9 +114,6 @@ struct edit_baton_t
/* Possibly diff repos against text-bases instead of working files. */
svn_boolean_t diff_pristine;
/* Hash whose keys are const char * changelist names. */
apr_hash_t *changelist_hash;
/* Cancel function/baton */
svn_cancel_func_t cancel_func;
void *cancel_baton;
@ -238,43 +235,26 @@ struct file_baton_t
* calculating diffs. USE_TEXT_BASE defines whether to compare
* against working files or text-bases. REVERSE_ORDER defines which
* direction to perform the diff.
*
* CHANGELIST_FILTER is a list of const char * changelist names, used to
* filter diff output responses to only those items in one of the
* specified changelists, empty (or NULL altogether) if no changelist
* filtering is requested.
*/
static svn_error_t *
make_edit_baton(struct edit_baton_t **edit_baton,
svn_wc__db_t *db,
const char *anchor_abspath,
const char *target,
const svn_wc_diff_callbacks4_t *callbacks,
void *callback_baton,
const svn_diff_tree_processor_t *processor,
svn_depth_t depth,
svn_boolean_t ignore_ancestry,
svn_boolean_t show_copies_as_adds,
svn_boolean_t use_text_base,
svn_boolean_t reverse_order,
const apr_array_header_t *changelist_filter,
svn_cancel_func_t cancel_func,
void *cancel_baton,
apr_pool_t *pool)
{
apr_hash_t *changelist_hash = NULL;
struct edit_baton_t *eb;
const svn_diff_tree_processor_t *processor;
SVN_ERR_ASSERT(svn_dirent_is_absolute(anchor_abspath));
if (changelist_filter && changelist_filter->nelts)
SVN_ERR(svn_hash_from_cstring_keys(&changelist_hash, changelist_filter,
pool));
SVN_ERR(svn_wc__wrap_diff_callbacks(&processor,
callbacks, callback_baton, TRUE,
pool, pool));
if (reverse_order)
processor = svn_diff__tree_processor_reverse_create(processor, NULL, pool);
@ -295,7 +275,6 @@ make_edit_baton(struct edit_baton_t **edit_baton,
eb->ignore_ancestry = ignore_ancestry;
eb->local_before_remote = reverse_order;
eb->diff_pristine = use_text_base;
eb->changelist_hash = changelist_hash;
eb->cancel_func = cancel_func;
eb->cancel_baton = cancel_baton;
eb->pool = pool;
@ -409,7 +388,6 @@ svn_wc__diff_base_working_diff(svn_wc__db_t *db,
const char *local_abspath,
const char *relpath,
svn_revnum_t revision,
apr_hash_t *changelist_hash,
const svn_diff_tree_processor_t *processor,
void *processor_dir_baton,
svn_boolean_t diff_pristine,
@ -436,12 +414,11 @@ svn_wc__diff_base_working_diff(svn_wc__db_t *db,
apr_hash_t *base_props;
apr_hash_t *local_props;
apr_array_header_t *prop_changes;
const char *changelist;
SVN_ERR(svn_wc__db_read_info(&status, NULL, &db_revision, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, &working_checksum, NULL,
NULL, NULL, NULL, NULL, NULL, &recorded_size,
&recorded_time, &changelist, NULL, NULL,
&recorded_time, NULL, NULL, NULL,
&had_props, &props_mod, NULL, NULL, NULL,
db, local_abspath, scratch_pool, scratch_pool));
checksum = working_checksum;
@ -450,12 +427,6 @@ svn_wc__diff_base_working_diff(svn_wc__db_t *db,
|| status == svn_wc__db_status_added
|| (status == svn_wc__db_status_deleted && diff_pristine));
/* If the item is not a member of a specified changelist (and there are
some specified changelists), skip it. */
if (changelist_hash && !svn_hash_gets(changelist_hash, changelist))
return SVN_NO_ERROR;
if (status != svn_wc__db_status_normal)
{
SVN_ERR(svn_wc__db_base_get_info(&base_status, NULL, &db_revision,
@ -780,7 +751,6 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
SVN_ERR(svn_wc__diff_local_only_file(db, child_abspath,
child_relpath,
eb->processor, dir_baton,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func,
eb->cancel_baton,
@ -790,7 +760,6 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
child_relpath,
depth_below_here,
eb->processor, dir_baton,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func,
eb->cancel_baton,
@ -826,7 +795,6 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
db, child_abspath,
child_relpath,
eb->revnum,
eb->changelist_hash,
eb->processor, dir_baton,
eb->diff_pristine,
eb->cancel_func,
@ -849,7 +817,6 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
SVN_ERR(svn_wc__diff_local_only_file(db, child_abspath,
child_relpath,
eb->processor, dir_baton,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func,
eb->cancel_baton,
@ -858,7 +825,6 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
SVN_ERR(svn_wc__diff_local_only_dir(db, child_abspath,
child_relpath, depth_below_here,
eb->processor, dir_baton,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func,
eb->cancel_baton,
@ -870,13 +836,9 @@ walk_local_nodes_diff(struct edit_baton_t *eb,
if (compared)
return SVN_NO_ERROR;
/* Check for local property mods on this directory, if we haven't
already reported them and we aren't changelist-filted.
### it should be noted that we do not currently allow directories
### to be part of changelists, so if a changelist is provided, the
### changelist check will always fail. */
/* Check for local property mods on this directory, if we haven't
already reported them. */
if (! skip
&& ! eb->changelist_hash
&& ! in_anchor_not_target
&& props_mod)
{
@ -919,7 +881,6 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
const char *relpath,
const svn_diff_tree_processor_t *processor,
void *processor_parent_baton,
apr_hash_t *changelist_hash,
svn_boolean_t diff_pristine,
svn_cancel_func_t cancel_func,
void *cancel_baton,
@ -932,7 +893,6 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
const svn_checksum_t *checksum;
const char *original_repos_relpath;
svn_revnum_t original_revision;
const char *changelist;
svn_boolean_t had_props;
svn_boolean_t props_mod;
apr_hash_t *pristine_props;
@ -948,7 +908,7 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
NULL, NULL, NULL, NULL, &checksum, NULL,
&original_repos_relpath, NULL, NULL,
&original_revision, NULL, NULL, NULL,
&changelist, NULL, NULL, &had_props,
NULL, NULL, NULL, &had_props,
&props_mod, NULL, NULL, NULL,
db, local_abspath,
scratch_pool, scratch_pool));
@ -959,10 +919,6 @@ svn_wc__diff_local_only_file(svn_wc__db_t *db,
|| (status == svn_wc__db_status_deleted && diff_pristine)));
if (changelist && changelist_hash
&& !svn_hash_gets(changelist_hash, changelist))
return SVN_NO_ERROR;
if (status == svn_wc__db_status_deleted)
{
assert(diff_pristine);
@ -1065,12 +1021,19 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
svn_depth_t depth,
const svn_diff_tree_processor_t *processor,
void *processor_parent_baton,
apr_hash_t *changelist_hash,
svn_boolean_t diff_pristine,
svn_cancel_func_t cancel_func,
void *cancel_baton,
apr_pool_t *scratch_pool)
{
svn_wc__db_status_t status;
svn_node_kind_t kind;
svn_boolean_t had_props;
svn_boolean_t props_mod;
const char *original_repos_relpath;
svn_revnum_t original_revision;
svn_diff_source_t *copyfrom_src = NULL;
apr_hash_t *pristine_props;
const apr_array_header_t *children;
int i;
apr_pool_t *iterpool;
@ -1083,6 +1046,47 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
apr_hash_t *nodes;
apr_hash_t *conflicts;
SVN_ERR(svn_wc__db_read_info(&status, &kind, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL,
&original_repos_relpath, NULL, NULL,
&original_revision, NULL, NULL, NULL,
NULL, NULL, NULL, &had_props,
&props_mod, NULL, NULL, NULL,
db, local_abspath,
scratch_pool, scratch_pool));
if (original_repos_relpath)
{
copyfrom_src = svn_diff__source_create(original_revision, scratch_pool);
copyfrom_src->repos_relpath = original_repos_relpath;
}
/* svn_wc__db_status_incomplete should never happen, as the result won't be
stable or guaranteed related to what is in the repository for this
revision, but without this it would be hard to diagnose that status... */
assert(kind == svn_node_dir
&& (status == svn_wc__db_status_normal
|| status == svn_wc__db_status_incomplete
|| status == svn_wc__db_status_added
|| (status == svn_wc__db_status_deleted && diff_pristine)));
if (status == svn_wc__db_status_deleted)
{
assert(diff_pristine);
SVN_ERR(svn_wc__db_read_pristine_info(NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, &had_props,
&pristine_props,
db, local_abspath,
scratch_pool, scratch_pool));
props_mod = FALSE;
}
else if (!had_props)
pristine_props = apr_hash_make(scratch_pool);
else
SVN_ERR(svn_wc__db_read_pristine_props(&pristine_props,
db, local_abspath,
scratch_pool, scratch_pool));
/* Report the addition of the directory's contents. */
iterpool = svn_pool_create(scratch_pool);
@ -1090,10 +1094,11 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
relpath,
NULL,
right_src,
NULL /* copyfrom_src */,
copyfrom_src,
processor_parent_baton,
processor,
scratch_pool, iterpool));
/* ### skip_children is not used */
SVN_ERR(svn_wc__db_read_children_info(&nodes, &conflicts, db, local_abspath,
scratch_pool, iterpool));
@ -1138,7 +1143,6 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
SVN_ERR(svn_wc__diff_local_only_file(db, child_abspath,
child_relpath,
processor, pdb,
changelist_hash,
diff_pristine,
cancel_func, cancel_baton,
scratch_pool));
@ -1150,7 +1154,6 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
SVN_ERR(svn_wc__diff_local_only_dir(db, child_abspath,
child_relpath, depth_below_here,
processor, pdb,
changelist_hash,
diff_pristine,
cancel_func, cancel_baton,
iterpool));
@ -1165,17 +1168,19 @@ svn_wc__diff_local_only_dir(svn_wc__db_t *db,
if (!skip)
{
apr_hash_t *right_props;
if (diff_pristine)
SVN_ERR(svn_wc__db_read_pristine_props(&right_props, db, local_abspath,
scratch_pool, scratch_pool));
if (props_mod && !diff_pristine)
SVN_ERR(svn_wc__db_read_props(&right_props, db, local_abspath,
scratch_pool, scratch_pool));
else
SVN_ERR(svn_wc__get_actual_props(&right_props, db, local_abspath,
scratch_pool, scratch_pool));
right_props = svn_prop_hash_dup(pristine_props, scratch_pool);
SVN_ERR(processor->dir_added(relpath,
NULL /* copyfrom_src */,
copyfrom_src,
right_src,
NULL,
copyfrom_src
? pristine_props
: NULL,
right_props,
pdb,
processor,
@ -1246,7 +1251,6 @@ handle_local_only(struct dir_baton_t *pb,
svn_relpath_join(pb->relpath, name, scratch_pool),
repos_delete ? svn_depth_infinity : depth,
eb->processor, pb->pdb,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func, eb->cancel_baton,
scratch_pool));
@ -1257,7 +1261,6 @@ handle_local_only(struct dir_baton_t *pb,
svn_dirent_join(pb->local_abspath, name, scratch_pool),
svn_relpath_join(pb->relpath, name, scratch_pool),
eb->processor, pb->pdb,
eb->changelist_hash,
eb->diff_pristine,
eb->cancel_func, eb->cancel_baton,
scratch_pool));
@ -2032,7 +2035,14 @@ close_file(void *file_baton,
const char *repos_file;
apr_hash_t *repos_props;
if (!fb->skip && expected_md5_digest != NULL)
if (fb->skip)
{
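/* The diff processor asked, via file_opened(), to skip this file: release
   its resources and let the parent directory finish. */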
svn_pool_destroy(fb->pool); /* destroys scratch_pool and fb */
SVN_ERR(maybe_done(pb));
return SVN_NO_ERROR;
}
if (expected_md5_digest != NULL)
{
svn_checksum_t *expected_checksum;
const svn_checksum_t *result_checksum;
@ -2087,11 +2097,7 @@ close_file(void *file_baton,
}
}
if (fb->skip)
{
/* Diff processor requested skipping information */
}
else if (fb->repos_only)
if (fb->repos_only)
{
SVN_ERR(eb->processor->file_deleted(fb->relpath,
fb->left_src,
@ -2271,6 +2277,7 @@ svn_wc__get_diff_editor(const svn_delta_editor_t **editor,
struct svn_wc__shim_fetch_baton_t *sfb;
svn_delta_shim_callbacks_t *shim_callbacks =
svn_delta_shim_callbacks_default(result_pool);
const svn_diff_tree_processor_t *diff_processor;
SVN_ERR_ASSERT(svn_dirent_is_absolute(anchor_abspath));
@ -2278,12 +2285,28 @@ svn_wc__get_diff_editor(const svn_delta_editor_t **editor,
if (use_git_diff_format)
show_copies_as_adds = TRUE;
SVN_ERR(svn_wc__wrap_diff_callbacks(&diff_processor,
callbacks, callback_baton, TRUE,
result_pool, scratch_pool));
/* Apply changelist filtering to the output */
if (changelist_filter && changelist_filter->nelts)
{
apr_hash_t *changelist_hash;
SVN_ERR(svn_hash_from_cstring_keys(&changelist_hash, changelist_filter,
result_pool));
diff_processor = svn_wc__changelist_filter_tree_processor_create(
diff_processor, wc_ctx, anchor_abspath,
changelist_hash, result_pool);
}
SVN_ERR(make_edit_baton(&eb,
wc_ctx->db,
anchor_abspath, target,
callbacks, callback_baton,
diff_processor,
depth, ignore_ancestry, show_copies_as_adds,
use_text_base, reverse_order, changelist_filter,
use_text_base, reverse_order,
cancel_func, cancel_baton,
result_pool));
@ -2390,8 +2413,8 @@ wrap_dir_opened(void **new_dir_baton,
wc_diff_wrap_baton_t *wb = processor->baton;
svn_boolean_t tree_conflicted = FALSE;
assert(left_source || right_source);
assert(!copyfrom_source || !right_source);
assert(left_source || right_source); /* Must exist at one point. */
assert(!left_source || !copyfrom_source); /* Either existed or added. */
/* Maybe store state and tree_conflicted in baton? */
if (left_source != NULL)
@ -2749,3 +2772,329 @@ svn_wc__wrap_diff_callbacks(const svn_diff_tree_processor_t **diff_processor,
*diff_processor = processor;
return SVN_NO_ERROR;
}
/* =====================================================================
* A tree processor filter that filters by changelist membership
* =====================================================================
*
* The current implementation queries the WC for the changelist of each
* file as it comes through, and sets the 'skip' flag for a non-matching
* file.
*
* (It doesn't set the 'skip' flag for a directory, as we need to receive
* the changed/added/deleted/closed call to know when it is closed, in
* order to preserve the strict open-close semantics for the wrapped tree
* processor.)
*
* It passes on the opening and closing of every directory, even if there
* are no file changes to be passed on inside that directory.
*/
typedef struct filter_tree_baton_t
{
const svn_diff_tree_processor_t *processor;
svn_wc_context_t *wc_ctx;
/* WC path of the root of the diff (where relpath = "") */
const char *root_local_abspath;
/* Hash whose keys are const char * changelist names. */
apr_hash_t *changelist_hash;
} filter_tree_baton_t;
static svn_error_t *
filter_dir_opened(void **new_dir_baton,
svn_boolean_t *skip,
svn_boolean_t *skip_children,
const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
const svn_diff_source_t *copyfrom_source,
void *parent_dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->dir_opened(new_dir_baton, skip, skip_children,
relpath,
left_source, right_source,
copyfrom_source,
parent_dir_baton,
fb->processor,
result_pool, scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_dir_added(const char *relpath,
const svn_diff_source_t *copyfrom_source,
const svn_diff_source_t *right_source,
/*const*/ apr_hash_t *copyfrom_props,
/*const*/ apr_hash_t *right_props,
void *dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
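/* A directory addition itself is never forwarded; only the matching close
   is passed on, preserving the strict open/close pairing described in the
   overview comment above. */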
SVN_ERR(fb->processor->dir_closed(relpath,
NULL,
right_source,
dir_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_dir_deleted(const char *relpath,
const svn_diff_source_t *left_source,
/*const*/ apr_hash_t *left_props,
void *dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->dir_closed(relpath,
left_source,
NULL,
dir_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_dir_changed(const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
/*const*/ apr_hash_t *left_props,
/*const*/ apr_hash_t *right_props,
const apr_array_header_t *prop_changes,
void *dir_baton,
const struct svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->dir_closed(relpath,
left_source,
right_source,
dir_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_dir_closed(const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
void *dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->dir_closed(relpath,
left_source,
right_source,
dir_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_file_opened(void **new_file_baton,
svn_boolean_t *skip,
const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
const svn_diff_source_t *copyfrom_source,
void *dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
const char *local_abspath
= svn_dirent_join(fb->root_local_abspath, relpath, scratch_pool);
/* Skip if not a member of a given changelist */
if (! svn_wc__changelist_match(fb->wc_ctx, local_abspath,
fb->changelist_hash, scratch_pool))
{
*skip = TRUE;
return SVN_NO_ERROR;
}
SVN_ERR(fb->processor->file_opened(new_file_baton,
skip,
relpath,
left_source,
right_source,
copyfrom_source,
dir_baton,
fb->processor,
result_pool,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_file_added(const char *relpath,
const svn_diff_source_t *copyfrom_source,
const svn_diff_source_t *right_source,
const char *copyfrom_file,
const char *right_file,
/*const*/ apr_hash_t *copyfrom_props,
/*const*/ apr_hash_t *right_props,
void *file_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->file_added(relpath,
copyfrom_source,
right_source,
copyfrom_file,
right_file,
copyfrom_props,
right_props,
file_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_file_deleted(const char *relpath,
const svn_diff_source_t *left_source,
const char *left_file,
/*const*/ apr_hash_t *left_props,
void *file_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->file_deleted(relpath,
left_source,
left_file,
left_props,
file_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_file_changed(const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
const char *left_file,
const char *right_file,
/*const*/ apr_hash_t *left_props,
/*const*/ apr_hash_t *right_props,
svn_boolean_t file_modified,
const apr_array_header_t *prop_changes,
void *file_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->file_changed(relpath,
left_source,
right_source,
left_file,
right_file,
left_props,
right_props,
file_modified,
prop_changes,
file_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_file_closed(const char *relpath,
const svn_diff_source_t *left_source,
const svn_diff_source_t *right_source,
void *file_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->file_closed(relpath,
left_source,
right_source,
file_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
static svn_error_t *
filter_node_absent(const char *relpath,
void *dir_baton,
const svn_diff_tree_processor_t *processor,
apr_pool_t *scratch_pool)
{
struct filter_tree_baton_t *fb = processor->baton;
SVN_ERR(fb->processor->node_absent(relpath,
dir_baton,
fb->processor,
scratch_pool));
return SVN_NO_ERROR;
}
const svn_diff_tree_processor_t *
svn_wc__changelist_filter_tree_processor_create(
const svn_diff_tree_processor_t *processor,
svn_wc_context_t *wc_ctx,
const char *root_local_abspath,
apr_hash_t *changelist_hash,
apr_pool_t *result_pool)
{
struct filter_tree_baton_t *fb;
svn_diff_tree_processor_t *filter;
if (! changelist_hash)
return processor;
fb = apr_pcalloc(result_pool, sizeof(*fb));
fb->processor = processor;
fb->wc_ctx = wc_ctx;
fb->root_local_abspath = root_local_abspath;
fb->changelist_hash = changelist_hash;
filter = svn_diff__tree_processor_create(fb, result_pool);
filter->dir_opened = filter_dir_opened;
filter->dir_added = filter_dir_added;
filter->dir_deleted = filter_dir_deleted;
filter->dir_changed = filter_dir_changed;
filter->dir_closed = filter_dir_closed;
filter->file_opened = filter_file_opened;
filter->file_added = filter_file_added;
filter->file_deleted = filter_file_deleted;
filter->file_changed = filter_file_changed;
filter->file_closed = filter_file_closed;
filter->node_absent = filter_node_absent;
return filter;
}


@ -92,9 +92,6 @@ struct diff_baton
/* Should this diff not compare copied files with their source? */
svn_boolean_t show_copies_as_adds;
/* Hash whose keys are const char * changelist names. */
apr_hash_t *changelist_hash;
/* Cancel function/baton */
svn_cancel_func_t cancel_func;
void *cancel_baton;
@ -252,11 +249,6 @@ diff_status_callback(void *baton,
if (eb->cur && eb->cur->skip_children)
return SVN_NO_ERROR;
if (eb->changelist_hash != NULL
&& (!status->changelist
|| ! svn_hash_gets(eb->changelist_hash, status->changelist)))
return SVN_NO_ERROR; /* Filtered via changelist */
/* This code does about the same thing as the inner body of
walk_local_nodes_diff() in diff_editor.c, except that
it is already filtered by the status walker, doesn't have to
@ -361,7 +353,6 @@ diff_status_callback(void *baton,
SVN_ERR(svn_wc__diff_base_working_diff(db, child_abspath,
child_relpath,
SVN_INVALID_REVNUM,
eb->changelist_hash,
eb->processor,
eb->cur
? eb->cur->baton
@ -405,7 +396,6 @@ diff_status_callback(void *baton,
child_relpath,
eb->processor,
eb->cur ? eb->cur->baton : NULL,
eb->changelist_hash,
FALSE,
eb->cancel_func,
eb->cancel_baton,
@ -415,7 +405,6 @@ diff_status_callback(void *baton,
child_relpath, depth_below_here,
eb->processor,
eb->cur ? eb->cur->baton : NULL,
eb->changelist_hash,
FALSE,
eb->cancel_func,
eb->cancel_baton,
@ -482,16 +471,24 @@ svn_wc_diff6(svn_wc_context_t *wc_ctx,
processor = svn_diff__tree_processor_copy_as_changed_create(processor,
scratch_pool);
/* Apply changelist filtering to the output */
if (changelist_filter && changelist_filter->nelts)
{
apr_hash_t *changelist_hash;
SVN_ERR(svn_hash_from_cstring_keys(&changelist_hash, changelist_filter,
scratch_pool));
processor = svn_wc__changelist_filter_tree_processor_create(
processor, wc_ctx, local_abspath,
changelist_hash, scratch_pool);
}
eb.db = wc_ctx->db;
eb.processor = processor;
eb.ignore_ancestry = ignore_ancestry;
eb.show_copies_as_adds = show_copies_as_adds;
eb.pool = scratch_pool;
if (changelist_filter && changelist_filter->nelts)
SVN_ERR(svn_hash_from_cstring_keys(&eb.changelist_hash, changelist_filter,
scratch_pool));
if (show_copies_as_adds || use_git_diff_format || !ignore_ancestry)
get_all = TRUE; /* We need unmodified descendants of copies */
else


@ -2140,17 +2140,18 @@ write_entry(struct write_baton **entry_node,
below_working_node->presence = svn_wc__db_status_normal;
below_working_node->kind = entry->kind;
below_working_node->repos_id = work->repos_id;
below_working_node->revision = work->revision;
/* This is just guessing. If the node below would have been switched
or if it was updated to a different version, the guess would
fail. But we don't have better information pre wc-ng :( */
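/* Illustration (hypothetical values): with work->repos_relpath "trunk/A"
   and a local_relpath basename of "B", the guess below yields "trunk/A/B";
   a switched child would really live elsewhere. */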
if (work->repos_relpath)
below_working_node->repos_relpath
= svn_relpath_join(work->repos_relpath, entry->name,
= svn_relpath_join(work->repos_relpath,
svn_relpath_basename(local_relpath, NULL),
result_pool);
else
below_working_node->repos_relpath = NULL;
below_working_node->revision = parent_node->work->revision;
/* The revert_base checksum isn't available in the entry structure,
so the caller provides it. */


@ -405,9 +405,10 @@ struct edit_baton
const apr_array_header_t *ext_patterns;
const char *diff3cmd;
const char *url;
const char *repos_root_url;
const char *repos_uuid;
const char *old_repos_relpath;
const char *new_repos_relpath;
const char *record_ancestor_abspath;
const char *recorded_repos_relpath;
@ -517,7 +518,8 @@ open_file(const char *path,
*file_baton = eb;
SVN_ERR(svn_wc__db_base_get_info(NULL, &kind, &eb->original_revision,
NULL, NULL, NULL, &eb->changed_rev,
&eb->old_repos_relpath, NULL, NULL,
&eb->changed_rev,
&eb->changed_date, &eb->changed_author,
NULL, &eb->original_checksum, NULL, NULL,
&eb->had_props, NULL, NULL,
@ -677,8 +679,6 @@ close_file(void *file_baton,
const svn_checksum_t *original_checksum = NULL;
svn_boolean_t added = !SVN_IS_VALID_REVNUM(eb->original_revision);
const char *repos_relpath = svn_uri_skip_ancestor(eb->repos_root_url,
eb->url, pool);
if (! added)
{
@ -853,14 +853,14 @@ close_file(void *file_baton,
svn_wc_conflict_version_create2(
eb->repos_root_url,
eb->repos_uuid,
repos_relpath,
eb->old_repos_relpath,
eb->original_revision,
svn_node_file,
pool),
svn_wc_conflict_version_create2(
eb->repos_root_url,
eb->repos_uuid,
repos_relpath,
eb->new_repos_relpath,
*eb->target_revision,
svn_node_file,
pool),
@ -878,7 +878,7 @@ close_file(void *file_baton,
eb->db,
eb->local_abspath,
eb->wri_abspath,
repos_relpath,
eb->new_repos_relpath,
eb->repos_root_url,
eb->repos_uuid,
*eb->target_revision,
@ -945,10 +945,15 @@ close_edit(void *edit_baton,
{
struct edit_baton *eb = edit_baton;
if (!eb->file_closed
|| eb->iprops)
if (!eb->file_closed)
{
apr_hash_t *wcroot_iprops = NULL;
/* The file wasn't updated, but its url or revision might have...
e.g. switch between branches for relative externals.
Just bump the information as that is just as expensive as
investigating when we should and shouldn't update it...
and avoid hard to debug edge cases */
if (eb->iprops)
{
@ -956,13 +961,15 @@ close_edit(void *edit_baton,
svn_hash_sets(wcroot_iprops, eb->local_abspath, eb->iprops);
}
/* The node wasn't updated, so we just have to bump its revision */
SVN_ERR(svn_wc__db_op_bump_revisions_post_update(eb->db,
eb->local_abspath,
svn_depth_infinity,
NULL, NULL, NULL,
eb->new_repos_relpath,
eb->repos_root_url,
eb->repos_uuid,
*eb->target_revision,
apr_hash_make(pool),
apr_hash_make(pool)
/* exclude_relpaths */,
wcroot_iprops,
eb->notify_func,
eb->notify_baton,
@ -1014,9 +1021,12 @@ svn_wc__get_file_external_editor(const svn_delta_editor_t **editor,
eb->name = svn_dirent_basename(eb->local_abspath, NULL);
eb->target_revision = target_revision;
eb->url = apr_pstrdup(edit_pool, url);
eb->repos_root_url = apr_pstrdup(edit_pool, repos_root_url);
eb->repos_uuid = apr_pstrdup(edit_pool, repos_uuid);
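/* Illustration (hypothetical URLs): with repos_root_url
   "http://svn.example.com/repos" and url
   "http://svn.example.com/repos/trunk/ext/file.c", new_repos_relpath below
   becomes "trunk/ext/file.c"; old_repos_relpath starts out the same and is
   refreshed from the BASE row in open_file(). */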
eb->new_repos_relpath = svn_uri_skip_ancestor(eb->repos_root_url, url, edit_pool);
eb->old_repos_relpath = eb->new_repos_relpath;
eb->original_revision = SVN_INVALID_REVNUM;
eb->iprops = iprops;


@ -2125,13 +2125,12 @@ add_directory(const char *path,
if (tree_conflict)
{
svn_wc_conflict_reason_t reason;
const char *move_src_op_root_abspath;
/* So this deletion wasn't just a deletion, it is actually a
replacement. Let's install a better tree conflict. */
/* ### Should store the conflict in DB to allow reinstalling
### with theoretically more data in close_directory() */
SVN_ERR(svn_wc__conflict_read_tree_conflict(&reason, NULL, NULL,
SVN_ERR(svn_wc__conflict_read_tree_conflict(&reason, NULL,
&move_src_op_root_abspath,
eb->db,
db->local_abspath,
tree_conflict,
@ -2143,7 +2142,7 @@ add_directory(const char *path,
tree_conflict,
eb->db, db->local_abspath,
reason, svn_wc_conflict_action_replace,
NULL,
move_src_op_root_abspath,
db->pool, db->pool));
/* And now stop checking for conflicts here and just perform
@ -3266,13 +3265,12 @@ add_file(const char *path,
if (tree_conflict)
{
svn_wc_conflict_reason_t reason;
const char *move_src_op_root_abspath;
/* So this deletion wasn't just a deletion, it is actually a
replacement. Let's install a better tree conflict. */
/* ### Should store the conflict in DB to allow reinstalling
### with theoretically more data in close_directory() */
SVN_ERR(svn_wc__conflict_read_tree_conflict(&reason, NULL, NULL,
SVN_ERR(svn_wc__conflict_read_tree_conflict(&reason, NULL,
&move_src_op_root_abspath,
eb->db,
fb->local_abspath,
tree_conflict,
@ -3284,7 +3282,7 @@ add_file(const char *path,
tree_conflict,
eb->db, fb->local_abspath,
reason, svn_wc_conflict_action_replace,
NULL,
move_src_op_root_abspath,
fb->pool, fb->pool));
/* And now stop checking for conflicts here and just perform
@ -5553,8 +5551,8 @@ svn_wc__complete_directory_add(svn_wc_context_t *wc_ctx,
original_repos_relpath, original_root_url,
original_uuid, original_revision,
NULL /* children */,
FALSE /* is_move */,
svn_depth_infinity,
FALSE /* is_move */,
NULL /* conflict */,
NULL /* work_items */,
scratch_pool));


@ -1,4 +1,4 @@
/* This file is automatically generated from wc-checks.sql and .dist_sandbox/subversion-1.8.10/subversion/libsvn_wc/token-map.h.
/* This file is automatically generated from wc-checks.sql and .dist_sandbox/subversion-1.8.14/subversion/libsvn_wc/token-map.h.
* Do not edit this file -- edit the source and rerun gen-make.py */
#define STMT_VERIFICATION_TRIGGERS 0


@ -1,4 +1,4 @@
/* This file is automatically generated from wc-metadata.sql and .dist_sandbox/subversion-1.8.10/subversion/libsvn_wc/token-map.h.
/* This file is automatically generated from wc-metadata.sql and .dist_sandbox/subversion-1.8.14/subversion/libsvn_wc/token-map.h.
* Do not edit this file -- edit the source and rerun gen-make.py */
#define STMT_CREATE_SCHEMA 0
@ -164,21 +164,25 @@
#define STMT_4 \
"ANALYZE sqlite_master; " \
"DELETE FROM sqlite_stat1 " \
"WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK', 'EXTERNALS'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'sqlite_autoindex_NODES_1', '8000 8000 2 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'I_NODES_PARENT', '8000 8000 10 2 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'I_NODES_MOVED', '8000 8000 1 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('ACTUAL_NODE', 'sqlite_autoindex_ACTUAL_NODE_1', '8000 8000 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('ACTUAL_NODE', 'I_ACTUAL_PARENT', '8000 8000 10 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('LOCK', 'sqlite_autoindex_LOCK_1', '100 100 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('WC_LOCK', 'sqlite_autoindex_WC_LOCK_1', '100 100 1'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('EXTERNALS','sqlite_autoindex_EXTERNALS_1', '100 100 1'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('EXTERNALS','I_EXTERNALS_DEFINED', '100 100 3 1'); " \
"ANALYZE sqlite_master; " \
""


@ -598,27 +598,32 @@ CREATE UNIQUE INDEX I_EXTERNALS_DEFINED ON EXTERNALS (wc_id,
ANALYZE sqlite_master; /* Creates empty sqlite_stat1 if necessary */
DELETE FROM sqlite_stat1
WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK');
WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK', 'EXTERNALS');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('NODES', 'sqlite_autoindex_NODES_1', '8000 8000 2 1');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('NODES', 'I_NODES_PARENT', '8000 8000 10 2 1');
/* Tell a lie: We ignore that 99.9% of all moved_to values are NULL */
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('NODES', 'I_NODES_MOVED', '8000 8000 1 1');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('ACTUAL_NODE', 'sqlite_autoindex_ACTUAL_NODE_1', '8000 8000 1');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('ACTUAL_NODE', 'I_ACTUAL_PARENT', '8000 8000 10 1');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('LOCK', 'sqlite_autoindex_LOCK_1', '100 100 1');
INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('WC_LOCK', 'sqlite_autoindex_WC_LOCK_1', '100 100 1');
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('EXTERNALS','sqlite_autoindex_EXTERNALS_1', '100 100 1');
INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES
('EXTERNALS','I_EXTERNALS_DEFINED', '100 100 3 1');
/* sqlite_autoindex_WORK_QUEUE_1 doesn't exist because WORK_QUEUE is
a INTEGER PRIMARY KEY AUTOINCREMENT table */


@ -1,4 +1,4 @@
/* This file is automatically generated from wc-queries.sql and .dist_sandbox/subversion-1.8.10/subversion/libsvn_wc/token-map.h.
/* This file is automatically generated from wc-queries.sql and .dist_sandbox/subversion-1.8.14/subversion/libsvn_wc/token-map.h.
* Do not edit this file -- edit the source and rerun gen-make.py */
#define STMT_SELECT_NODE_INFO 0
@ -2172,9 +2172,16 @@
" AND (inherited_props not null) " \
""
#define STMT_CREATE_SCHEMA 207
#define STMT_207_INFO {"STMT_CREATE_SCHEMA", NULL}
#define STMT_HAVE_STAT1_TABLE 207
#define STMT_207_INFO {"STMT_HAVE_STAT1_TABLE", NULL}
#define STMT_207 \
"SELECT 1 FROM sqlite_master WHERE name='sqlite_stat1' AND type='table' " \
"LIMIT 1 " \
""
#define STMT_CREATE_SCHEMA 208
#define STMT_208_INFO {"STMT_CREATE_SCHEMA", NULL}
#define STMT_208 \
"CREATE TABLE REPOSITORY ( " \
" id INTEGER PRIMARY KEY AUTOINCREMENT, " \
" root TEXT UNIQUE NOT NULL, " \
@ -2239,9 +2246,9 @@
"; " \
""
#define STMT_CREATE_NODES 208
#define STMT_208_INFO {"STMT_CREATE_NODES", NULL}
#define STMT_208 \
#define STMT_CREATE_NODES 209
#define STMT_209_INFO {"STMT_CREATE_NODES", NULL}
#define STMT_209 \
"CREATE TABLE NODES ( " \
" wc_id INTEGER NOT NULL REFERENCES WCROOT (id), " \
" local_relpath TEXT NOT NULL, " \
@ -2281,9 +2288,9 @@
" WHERE op_depth = 0; " \
""
#define STMT_CREATE_NODES_TRIGGERS 209
#define STMT_209_INFO {"STMT_CREATE_NODES_TRIGGERS", NULL}
#define STMT_209 \
#define STMT_CREATE_NODES_TRIGGERS 210
#define STMT_210_INFO {"STMT_CREATE_NODES_TRIGGERS", NULL}
#define STMT_210 \
"CREATE TRIGGER nodes_insert_trigger " \
"AFTER INSERT ON nodes " \
"WHEN NEW.checksum IS NOT NULL " \
@ -2309,9 +2316,9 @@
"END; " \
""
#define STMT_CREATE_EXTERNALS 210
#define STMT_210_INFO {"STMT_CREATE_EXTERNALS", NULL}
#define STMT_210 \
#define STMT_CREATE_EXTERNALS 211
#define STMT_211_INFO {"STMT_CREATE_EXTERNALS", NULL}
#define STMT_211 \
"CREATE TABLE EXTERNALS ( " \
" wc_id INTEGER NOT NULL REFERENCES WCROOT (id), " \
" local_relpath TEXT NOT NULL, " \
@ -2330,32 +2337,36 @@
" local_relpath); " \
""
#define STMT_INSTALL_SCHEMA_STATISTICS 211
#define STMT_211_INFO {"STMT_INSTALL_SCHEMA_STATISTICS", NULL}
#define STMT_211 \
#define STMT_INSTALL_SCHEMA_STATISTICS 212
#define STMT_212_INFO {"STMT_INSTALL_SCHEMA_STATISTICS", NULL}
#define STMT_212 \
"ANALYZE sqlite_master; " \
"DELETE FROM sqlite_stat1 " \
"WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"WHERE tbl in ('NODES', 'ACTUAL_NODE', 'LOCK', 'WC_LOCK', 'EXTERNALS'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'sqlite_autoindex_NODES_1', '8000 8000 2 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'I_NODES_PARENT', '8000 8000 10 2 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('NODES', 'I_NODES_MOVED', '8000 8000 1 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('ACTUAL_NODE', 'sqlite_autoindex_ACTUAL_NODE_1', '8000 8000 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('ACTUAL_NODE', 'I_ACTUAL_PARENT', '8000 8000 10 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('LOCK', 'sqlite_autoindex_LOCK_1', '100 100 1'); " \
"INSERT OR REPLACE INTO sqlite_stat1(tbl, idx, stat) VALUES " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('WC_LOCK', 'sqlite_autoindex_WC_LOCK_1', '100 100 1'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('EXTERNALS','sqlite_autoindex_EXTERNALS_1', '100 100 1'); " \
"INSERT INTO sqlite_stat1(tbl, idx, stat) VALUES " \
" ('EXTERNALS','I_EXTERNALS_DEFINED', '100 100 3 1'); " \
"ANALYZE sqlite_master; " \
""
#define STMT_UPGRADE_TO_20 212
#define STMT_212_INFO {"STMT_UPGRADE_TO_20", NULL}
#define STMT_212 \
#define STMT_UPGRADE_TO_20 213
#define STMT_213_INFO {"STMT_UPGRADE_TO_20", NULL}
#define STMT_213 \
"UPDATE BASE_NODE SET checksum = (SELECT checksum FROM pristine " \
" WHERE md5_checksum = BASE_NODE.checksum) " \
"WHERE EXISTS (SELECT 1 FROM pristine WHERE md5_checksum = BASE_NODE.checksum); " \
@ -2396,59 +2407,59 @@
"PRAGMA user_version = 20; " \
""
#define STMT_UPGRADE_TO_21 213
#define STMT_213_INFO {"STMT_UPGRADE_TO_21", NULL}
#define STMT_213 \
#define STMT_UPGRADE_TO_21 214
#define STMT_214_INFO {"STMT_UPGRADE_TO_21", NULL}
#define STMT_214 \
"PRAGMA user_version = 21; " \
""
#define STMT_UPGRADE_21_SELECT_OLD_TREE_CONFLICT 214
#define STMT_214_INFO {"STMT_UPGRADE_21_SELECT_OLD_TREE_CONFLICT", NULL}
#define STMT_214 \
#define STMT_UPGRADE_21_SELECT_OLD_TREE_CONFLICT 215
#define STMT_215_INFO {"STMT_UPGRADE_21_SELECT_OLD_TREE_CONFLICT", NULL}
#define STMT_215 \
"SELECT wc_id, local_relpath, tree_conflict_data " \
"FROM actual_node " \
"WHERE tree_conflict_data IS NOT NULL " \
""
#define STMT_UPGRADE_21_ERASE_OLD_CONFLICTS 215
#define STMT_215_INFO {"STMT_UPGRADE_21_ERASE_OLD_CONFLICTS", NULL}
#define STMT_215 \
#define STMT_UPGRADE_21_ERASE_OLD_CONFLICTS 216
#define STMT_216_INFO {"STMT_UPGRADE_21_ERASE_OLD_CONFLICTS", NULL}
#define STMT_216 \
"UPDATE actual_node SET tree_conflict_data = NULL " \
""
#define STMT_UPGRADE_TO_22 216
#define STMT_216_INFO {"STMT_UPGRADE_TO_22", NULL}
#define STMT_216 \
#define STMT_UPGRADE_TO_22 217
#define STMT_217_INFO {"STMT_UPGRADE_TO_22", NULL}
#define STMT_217 \
"UPDATE actual_node SET tree_conflict_data = conflict_data; " \
"UPDATE actual_node SET conflict_data = NULL; " \
"PRAGMA user_version = 22; " \
""
#define STMT_UPGRADE_TO_23 217
#define STMT_217_INFO {"STMT_UPGRADE_TO_23", NULL}
#define STMT_217 \
#define STMT_UPGRADE_TO_23 218
#define STMT_218_INFO {"STMT_UPGRADE_TO_23", NULL}
#define STMT_218 \
"PRAGMA user_version = 23; " \
""
#define STMT_UPGRADE_23_HAS_WORKING_NODES 218
#define STMT_218_INFO {"STMT_UPGRADE_23_HAS_WORKING_NODES", NULL}
#define STMT_218 \
#define STMT_UPGRADE_23_HAS_WORKING_NODES 219
#define STMT_219_INFO {"STMT_UPGRADE_23_HAS_WORKING_NODES", NULL}
#define STMT_219 \
"SELECT 1 FROM nodes WHERE op_depth > 0 " \
"LIMIT 1 " \
""
#define STMT_UPGRADE_TO_24 219
#define STMT_219_INFO {"STMT_UPGRADE_TO_24", NULL}
#define STMT_219 \
#define STMT_UPGRADE_TO_24 220
#define STMT_220_INFO {"STMT_UPGRADE_TO_24", NULL}
#define STMT_220 \
"UPDATE pristine SET refcount = " \
" (SELECT COUNT(*) FROM nodes " \
" WHERE checksum = pristine.checksum ); " \
"PRAGMA user_version = 24; " \
""
#define STMT_UPGRADE_TO_25 220
#define STMT_220_INFO {"STMT_UPGRADE_TO_25", NULL}
#define STMT_220 \
#define STMT_UPGRADE_TO_25 221
#define STMT_221_INFO {"STMT_UPGRADE_TO_25", NULL}
#define STMT_221 \
"DROP VIEW IF EXISTS NODES_CURRENT; " \
"CREATE VIEW NODES_CURRENT AS " \
" SELECT * FROM nodes " \
@ -2460,9 +2471,9 @@
"PRAGMA user_version = 25; " \
""
#define STMT_UPGRADE_TO_26 221
#define STMT_221_INFO {"STMT_UPGRADE_TO_26", NULL}
#define STMT_221 \
#define STMT_UPGRADE_TO_26 222
#define STMT_222_INFO {"STMT_UPGRADE_TO_26", NULL}
#define STMT_222 \
"DROP VIEW IF EXISTS NODES_BASE; " \
"CREATE VIEW NODES_BASE AS " \
" SELECT * FROM nodes " \
@ -2470,15 +2481,15 @@
"PRAGMA user_version = 26; " \
""
#define STMT_UPGRADE_TO_27 222
#define STMT_222_INFO {"STMT_UPGRADE_TO_27", NULL}
#define STMT_222 \
#define STMT_UPGRADE_TO_27 223
#define STMT_223_INFO {"STMT_UPGRADE_TO_27", NULL}
#define STMT_223 \
"PRAGMA user_version = 27; " \
""
#define STMT_UPGRADE_27_HAS_ACTUAL_NODES_CONFLICTS 223
#define STMT_223_INFO {"STMT_UPGRADE_27_HAS_ACTUAL_NODES_CONFLICTS", NULL}
#define STMT_223 \
#define STMT_UPGRADE_27_HAS_ACTUAL_NODES_CONFLICTS 224
#define STMT_224_INFO {"STMT_UPGRADE_27_HAS_ACTUAL_NODES_CONFLICTS", NULL}
#define STMT_224 \
"SELECT 1 FROM actual_node " \
"WHERE NOT ((prop_reject IS NULL) AND (conflict_old IS NULL) " \
" AND (conflict_new IS NULL) AND (conflict_working IS NULL) " \
@ -2486,18 +2497,18 @@
"LIMIT 1 " \
""
#define STMT_UPGRADE_TO_28 224
#define STMT_224_INFO {"STMT_UPGRADE_TO_28", NULL}
#define STMT_224 \
#define STMT_UPGRADE_TO_28 225
#define STMT_225_INFO {"STMT_UPGRADE_TO_28", NULL}
#define STMT_225 \
"UPDATE NODES SET checksum = (SELECT checksum FROM pristine " \
" WHERE md5_checksum = nodes.checksum) " \
"WHERE EXISTS (SELECT 1 FROM pristine WHERE md5_checksum = nodes.checksum); " \
"PRAGMA user_version = 28; " \
""
#define STMT_UPGRADE_TO_29 225
#define STMT_225_INFO {"STMT_UPGRADE_TO_29", NULL}
#define STMT_225 \
#define STMT_UPGRADE_TO_29 226
#define STMT_226_INFO {"STMT_UPGRADE_TO_29", NULL}
#define STMT_226 \
"DROP TRIGGER IF EXISTS nodes_update_checksum_trigger; " \
"DROP TRIGGER IF EXISTS nodes_insert_trigger; " \
"DROP TRIGGER IF EXISTS nodes_delete_trigger; " \
@ -2527,9 +2538,9 @@
"PRAGMA user_version = 29; " \
""
#define STMT_UPGRADE_TO_30 226
#define STMT_226_INFO {"STMT_UPGRADE_TO_30", NULL}
#define STMT_226 \
#define STMT_UPGRADE_TO_30 227
#define STMT_227_INFO {"STMT_UPGRADE_TO_30", NULL}
#define STMT_227 \
"CREATE UNIQUE INDEX IF NOT EXISTS I_NODES_MOVED " \
"ON NODES (wc_id, moved_to, op_depth); " \
"CREATE INDEX IF NOT EXISTS I_PRISTINE_MD5 ON PRISTINE (md5_checksum); " \
@ -2537,9 +2548,9 @@
"UPDATE nodes SET file_external=1 WHERE file_external IS NOT NULL; " \
""
#define STMT_UPGRADE_30_SELECT_CONFLICT_SEPARATE 227
#define STMT_227_INFO {"STMT_UPGRADE_30_SELECT_CONFLICT_SEPARATE", NULL}
#define STMT_227 \
#define STMT_UPGRADE_30_SELECT_CONFLICT_SEPARATE 228
#define STMT_228_INFO {"STMT_UPGRADE_30_SELECT_CONFLICT_SEPARATE", NULL}
#define STMT_228 \
"SELECT wc_id, local_relpath, " \
" conflict_old, conflict_working, conflict_new, prop_reject, tree_conflict_data " \
"FROM actual_node " \
@ -2551,24 +2562,24 @@
"ORDER by wc_id, local_relpath " \
""
#define STMT_UPGRADE_30_SET_CONFLICT 228
#define STMT_228_INFO {"STMT_UPGRADE_30_SET_CONFLICT", NULL}
#define STMT_228 \
#define STMT_UPGRADE_30_SET_CONFLICT 229
#define STMT_229_INFO {"STMT_UPGRADE_30_SET_CONFLICT", NULL}
#define STMT_229 \
"UPDATE actual_node SET conflict_data = ?3, conflict_old = NULL, " \
" conflict_working = NULL, conflict_new = NULL, prop_reject = NULL, " \
" tree_conflict_data = NULL " \
"WHERE wc_id = ?1 and local_relpath = ?2 " \
""
#define STMT_UPGRADE_TO_31_ALTER_TABLE 229
#define STMT_229_INFO {"STMT_UPGRADE_TO_31_ALTER_TABLE", NULL}
#define STMT_229 \
#define STMT_UPGRADE_TO_31_ALTER_TABLE 230
#define STMT_230_INFO {"STMT_UPGRADE_TO_31_ALTER_TABLE", NULL}
#define STMT_230 \
"ALTER TABLE NODES ADD COLUMN inherited_props BLOB; " \
""
#define STMT_UPGRADE_TO_31_FINALIZE 230
#define STMT_230_INFO {"STMT_UPGRADE_TO_31_FINALIZE", NULL}
#define STMT_230 \
#define STMT_UPGRADE_TO_31_FINALIZE 231
#define STMT_231_INFO {"STMT_UPGRADE_TO_31_FINALIZE", NULL}
#define STMT_231 \
"DROP INDEX IF EXISTS I_ACTUAL_CHANGELIST; " \
"DROP INDEX IF EXISTS I_EXTERNALS_PARENT; " \
"DROP INDEX I_NODES_PARENT; " \
@ -2580,9 +2591,9 @@
"PRAGMA user_version = 31; " \
""
#define STMT_UPGRADE_31_SELECT_WCROOT_NODES 231
#define STMT_231_INFO {"STMT_UPGRADE_31_SELECT_WCROOT_NODES", NULL}
#define STMT_231 \
#define STMT_UPGRADE_31_SELECT_WCROOT_NODES 232
#define STMT_232_INFO {"STMT_UPGRADE_31_SELECT_WCROOT_NODES", NULL}
#define STMT_232 \
"SELECT l.wc_id, l.local_relpath FROM nodes as l " \
"LEFT OUTER JOIN nodes as r " \
"ON l.wc_id = r.wc_id " \
@ -2594,9 +2605,9 @@
" OR (l.repos_path IS NOT (CASE WHEN (r.local_relpath) = '' THEN (CASE WHEN (r.repos_path) = '' THEN (l.local_relpath) WHEN (l.local_relpath) = '' THEN (r.repos_path) ELSE (r.repos_path) || '/' || (l.local_relpath) END) WHEN (r.repos_path) = '' THEN (CASE WHEN (r.local_relpath) = '' THEN (l.local_relpath) WHEN SUBSTR((l.local_relpath), 1, LENGTH(r.local_relpath)) = (r.local_relpath) THEN CASE WHEN LENGTH(r.local_relpath) = LENGTH(l.local_relpath) THEN '' WHEN SUBSTR((l.local_relpath), LENGTH(r.local_relpath)+1, 1) = '/' THEN SUBSTR((l.local_relpath), LENGTH(r.local_relpath)+2) END END) WHEN SUBSTR((l.local_relpath), 1, LENGTH(r.local_relpath)) = (r.local_relpath) THEN CASE WHEN LENGTH(r.local_relpath) = LENGTH(l.local_relpath) THEN (r.repos_path) WHEN SUBSTR((l.local_relpath), LENGTH(r.local_relpath)+1, 1) = '/' THEN (r.repos_path) || SUBSTR((l.local_relpath), LENGTH(r.local_relpath)+1) END END))) " \
""
#define STMT_UPGRADE_TO_32 232
#define STMT_232_INFO {"STMT_UPGRADE_TO_32", NULL}
#define STMT_232 \
#define STMT_UPGRADE_TO_32 233
#define STMT_233_INFO {"STMT_UPGRADE_TO_32", NULL}
#define STMT_233 \
"DROP INDEX IF EXISTS I_ACTUAL_CHANGELIST; " \
"DROP INDEX IF EXISTS I_EXTERNALS_PARENT; " \
"CREATE INDEX I_EXTERNALS_PARENT ON EXTERNALS (wc_id, parent_relpath); " \
@ -2649,9 +2660,9 @@
"DROP TABLE ACTUAL_NODE_BACKUP; " \
""
#define STMT_VERIFICATION_TRIGGERS 233
#define STMT_233_INFO {"STMT_VERIFICATION_TRIGGERS", NULL}
#define STMT_233 \
#define STMT_VERIFICATION_TRIGGERS 234
#define STMT_234_INFO {"STMT_VERIFICATION_TRIGGERS", NULL}
#define STMT_234 \
"CREATE TEMPORARY TRIGGER no_repository_updates BEFORE UPDATE ON repository " \
"BEGIN " \
" SELECT RAISE(FAIL, 'Updates to REPOSITORY are not allowed.'); " \
@ -2926,6 +2937,7 @@
STMT_231, \
STMT_232, \
STMT_233, \
STMT_234, \
NULL \
}
@ -3165,5 +3177,6 @@
STMT_231_INFO, \
STMT_232_INFO, \
STMT_233_INFO, \
STMT_234_INFO, \
{NULL, NULL} \
}


@ -1716,6 +1716,10 @@ WHERE wc_id = ?1
AND op_depth = 0
AND (inherited_props not null)
-- STMT_HAVE_STAT1_TABLE
SELECT 1 FROM sqlite_master WHERE name='sqlite_stat1' AND type='table'
LIMIT 1
/* ------------------------------------------------------------------------- */
/* Grab all the statements related to the schema. */


@ -190,6 +190,10 @@ extern "C" {
/* A version < this has no work queue (see workqueue.h). */
#define SVN_WC__HAS_WORK_QUEUE 13
/* While we still have this DB version we should verify if there is
sqlite_stat1 table on opening */
#define SVN_WC__ENSURE_STAT1_TABLE 31
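This marker pairs with the STMT_HAVE_STAT1_TABLE query above, letting the library check, when a format-31 working copy is opened, whether the sqlite_stat1 table really exists. A minimal sketch of such a check, reusing the private svn_sqlite__ wrappers seen elsewhere in this change (the helper name is hypothetical, not the actual implementation):
  static svn_error_t *
  have_stat1_table(svn_boolean_t *present,
                   svn_sqlite__db_t *sdb)
  {
    svn_sqlite__stmt_t *stmt;
    /* SELECT 1 FROM sqlite_master WHERE name='sqlite_stat1' AND type='table' */
    SVN_ERR(svn_sqlite__get_statement(&stmt, sdb, STMT_HAVE_STAT1_TABLE));
    SVN_ERR(svn_sqlite__step(present, stmt));
    return svn_error_trace(svn_sqlite__reset(stmt));
  }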
/* Return a string indicating the released version (or versions) of
* Subversion that used WC format number WC_FORMAT, or some other
* suitable string if no released version used WC_FORMAT.


@ -708,6 +708,7 @@ insert_base_node(const insert_base_baton_t *pibb,
svn_sqlite__stmt_t *stmt;
svn_filesize_t recorded_size = SVN_INVALID_FILESIZE;
apr_int64_t recorded_time;
svn_boolean_t present;
/* The directory at the WCROOT has a NULL parent_relpath. Otherwise,
bind the appropriate parent_relpath. */
@ -738,6 +739,9 @@ insert_base_node(const insert_base_baton_t *pibb,
SVN_ERR(svn_sqlite__reset(stmt));
}
present = (pibb->status == svn_wc__db_status_normal
|| pibb->status == svn_wc__db_status_incomplete);
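/* Only nodes that are actually present in this revision (normal or
   incomplete) carry depth, symlink target, checksum and property data;
   for other presence values those columns are left NULL below. */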
SVN_ERR(svn_sqlite__get_statement(&stmt, wcroot->sdb, STMT_INSERT_NODE));
SVN_ERR(svn_sqlite__bindf(stmt, "isdsisr"
"tstr" /* 8 - 11 */
@ -750,15 +754,16 @@ insert_base_node(const insert_base_baton_t *pibb,
pibb->repos_relpath,
pibb->revision,
presence_map, pibb->status, /* 8 */
(pibb->kind == svn_node_dir) ? /* 9 */
svn_token__to_word(depth_map, pibb->depth) : NULL,
(pibb->kind == svn_node_dir && present) /* 9 */
? svn_token__to_word(depth_map, pibb->depth)
: NULL,
kind_map, pibb->kind, /* 10 */
pibb->changed_rev, /* 11 */
pibb->changed_date, /* 12 */
pibb->changed_author, /* 13 */
(pibb->kind == svn_node_symlink) ?
(pibb->kind == svn_node_symlink && present) ?
pibb->target : NULL)); /* 19 */
if (pibb->kind == svn_node_file)
if (pibb->kind == svn_node_file && present)
{
if (!pibb->checksum
&& pibb->status != svn_wc__db_status_not_present
@ -783,11 +788,14 @@ insert_base_node(const insert_base_baton_t *pibb,
assert(pibb->status == svn_wc__db_status_normal
|| pibb->status == svn_wc__db_status_incomplete
|| pibb->props == NULL);
SVN_ERR(svn_sqlite__bind_properties(stmt, 15, pibb->props,
scratch_pool));
if (present)
{
SVN_ERR(svn_sqlite__bind_properties(stmt, 15, pibb->props,
scratch_pool));
SVN_ERR(svn_sqlite__bind_iprops(stmt, 23, pibb->iprops,
SVN_ERR(svn_sqlite__bind_iprops(stmt, 23, pibb->iprops,
scratch_pool));
}
if (pibb->dav_cache)
SVN_ERR(svn_sqlite__bind_properties(stmt, 18, pibb->dav_cache,
@ -992,6 +1000,7 @@ insert_working_node(const insert_working_baton_t *piwb,
const char *moved_to_relpath = NULL;
svn_sqlite__stmt_t *stmt;
svn_boolean_t have_row;
svn_boolean_t present;
SVN_ERR_ASSERT(piwb->op_depth > 0);
@ -1010,6 +1019,9 @@ insert_working_node(const insert_working_baton_t *piwb,
moved_to_relpath = svn_sqlite__column_text(stmt, 0, scratch_pool);
SVN_ERR(svn_sqlite__reset(stmt));
present = (piwb->presence == svn_wc__db_status_normal
|| piwb->presence == svn_wc__db_status_incomplete);
SVN_ERR(svn_sqlite__get_statement(&stmt, wcroot->sdb, STMT_INSERT_NODE));
SVN_ERR(svn_sqlite__bindf(stmt, "isdsnnntstrisn"
"nnnn" /* properties translated_size last_mod_time dav_cache */
@ -1018,14 +1030,14 @@ insert_working_node(const insert_working_baton_t *piwb,
piwb->op_depth,
parent_relpath,
presence_map, piwb->presence,
(piwb->kind == svn_node_dir)
(piwb->kind == svn_node_dir && present)
? svn_token__to_word(depth_map, piwb->depth) : NULL,
kind_map, piwb->kind,
piwb->changed_rev,
piwb->changed_date,
piwb->changed_author,
/* Note: incomplete nodes may have a NULL target. */
(piwb->kind == svn_node_symlink)
(piwb->kind == svn_node_symlink && present)
? piwb->target : NULL,
moved_to_relpath));
@ -1034,7 +1046,7 @@ insert_working_node(const insert_working_baton_t *piwb,
SVN_ERR(svn_sqlite__bind_int(stmt, 8, TRUE));
}
if (piwb->kind == svn_node_file)
if (piwb->kind == svn_node_file && present)
{
SVN_ERR(svn_sqlite__bind_checksum(stmt, 14, piwb->checksum,
scratch_pool));
@ -1051,7 +1063,8 @@ insert_working_node(const insert_working_baton_t *piwb,
assert(piwb->presence == svn_wc__db_status_normal
|| piwb->presence == svn_wc__db_status_incomplete
|| piwb->props == NULL);
SVN_ERR(svn_sqlite__bind_properties(stmt, 15, piwb->props, scratch_pool));
if (present && piwb->original_repos_relpath)
SVN_ERR(svn_sqlite__bind_properties(stmt, 15, piwb->props, scratch_pool));
SVN_ERR(svn_sqlite__insert(NULL, stmt));
@ -2615,17 +2628,21 @@ svn_wc__db_base_get_info(svn_wc__db_status_t *status,
local_abspath, scratch_pool, scratch_pool));
VERIFY_USABLE_WCROOT(wcroot);
SVN_ERR(svn_wc__db_base_get_info_internal(status, kind, revision,
SVN_WC__DB_WITH_TXN4(
svn_wc__db_base_get_info_internal(status, kind, revision,
repos_relpath, &repos_id,
changed_rev, changed_date,
changed_author, depth,
checksum, target, lock,
had_props, props, update_root,
wcroot, local_relpath,
result_pool, scratch_pool));
result_pool, scratch_pool),
svn_wc__db_fetch_repos_info(repos_root_url, repos_uuid,
wcroot->sdb, repos_id, result_pool),
SVN_NO_ERROR,
SVN_NO_ERROR,
wcroot);
SVN_ERR_ASSERT(repos_id != INVALID_REPOS_ID);
SVN_ERR(svn_wc__db_fetch_repos_info(repos_root_url, repos_uuid,
wcroot->sdb, repos_id, result_pool));
return SVN_NO_ERROR;
}
@ -5339,8 +5356,8 @@ svn_wc__db_op_copy_dir(svn_wc__db_t *db,
const char *original_uuid,
svn_revnum_t original_revision,
const apr_array_header_t *children,
svn_boolean_t is_move,
svn_depth_t depth,
svn_boolean_t is_move,
const svn_skel_t *conflict,
const svn_skel_t *work_items,
apr_pool_t *scratch_pool)
@ -5367,11 +5384,6 @@ svn_wc__db_op_copy_dir(svn_wc__db_t *db,
iwb.presence = svn_wc__db_status_normal;
iwb.kind = svn_node_dir;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
if (original_root_url != NULL)
{
SVN_ERR(create_repos_id(&iwb.original_repos_id,
@ -5379,6 +5391,11 @@ svn_wc__db_op_copy_dir(svn_wc__db_t *db,
wcroot->sdb, scratch_pool));
iwb.original_repos_relpath = original_repos_relpath;
iwb.original_revnum = original_revision;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
}
/* ### Should we do this inside the transaction? */
@ -5447,11 +5464,6 @@ svn_wc__db_op_copy_file(svn_wc__db_t *db,
iwb.presence = svn_wc__db_status_normal;
iwb.kind = svn_node_file;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
if (original_root_url != NULL)
{
SVN_ERR(create_repos_id(&iwb.original_repos_id,
@ -5459,6 +5471,11 @@ svn_wc__db_op_copy_file(svn_wc__db_t *db,
wcroot->sdb, scratch_pool));
iwb.original_repos_relpath = original_repos_relpath;
iwb.original_revnum = original_revision;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
}
/* ### Should we do this inside the transaction? */
@ -5501,6 +5518,7 @@ svn_wc__db_op_copy_symlink(svn_wc__db_t *db,
const char *original_uuid,
svn_revnum_t original_revision,
const char *target,
svn_boolean_t is_move,
const svn_skel_t *conflict,
const svn_skel_t *work_items,
apr_pool_t *scratch_pool)
@ -5525,11 +5543,6 @@ svn_wc__db_op_copy_symlink(svn_wc__db_t *db,
iwb.presence = svn_wc__db_status_normal;
iwb.kind = svn_node_symlink;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
iwb.moved_here = FALSE;
if (original_root_url != NULL)
{
@ -5538,6 +5551,11 @@ svn_wc__db_op_copy_symlink(svn_wc__db_t *db,
wcroot->sdb, scratch_pool));
iwb.original_repos_relpath = original_repos_relpath;
iwb.original_revnum = original_revision;
iwb.props = props;
iwb.changed_rev = changed_rev;
iwb.changed_date = changed_date;
iwb.changed_author = changed_author;
}
/* ### Should we do this inside the transaction? */
@ -5547,6 +5565,8 @@ svn_wc__db_op_copy_symlink(svn_wc__db_t *db,
wcroot, local_relpath, scratch_pool));
iwb.target = target;
iwb.moved_here = is_move && (parent_op_depth == 0 ||
iwb.op_depth == parent_op_depth);
iwb.work_items = work_items;
iwb.conflict = conflict;
@ -7518,7 +7538,11 @@ delete_update_movedto(svn_wc__db_wcroot_t *wcroot,
op_depth,
new_moved_to_relpath));
SVN_ERR(svn_sqlite__update(&affected, stmt));
assert(affected == 1);
#ifdef SVN_DEBUG
/* Not fatal in release mode. The move recording is broken,
but the rest of the working copy can handle this. */
SVN_ERR_ASSERT(affected == 1);
#endif
return SVN_NO_ERROR;
}
@ -7908,7 +7932,12 @@ delete_node(void *baton,
child_relpath))
child_op_depth = delete_op_depth;
else
child_op_depth = relpath_depth(child_relpath);
{
/* Calculate depth of the shadowing at the new location */
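/* Illustration (hypothetical paths): when A/B is moved to C, a child
   shadowed at op-depth 4 under A/B ends up at
   4 - relpath_depth("A/B") + relpath_depth("C") = 4 - 2 + 1 = 3. */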
child_op_depth = child_op_depth
- relpath_depth(local_relpath)
+ relpath_depth(b->moved_to_relpath);
}
fixup = TRUE;
}
@ -8734,19 +8763,22 @@ svn_wc__db_read_info(svn_wc__db_status_t *status,
local_abspath, scratch_pool, scratch_pool));
VERIFY_USABLE_WCROOT(wcroot);
SVN_ERR(read_info(status, kind, revision, repos_relpath, &repos_id,
SVN_WC__DB_WITH_TXN4(
read_info(status, kind, revision, repos_relpath, &repos_id,
changed_rev, changed_date, changed_author,
depth, checksum, target, original_repos_relpath,
&original_repos_id, original_revision, lock,
recorded_size, recorded_time, changelist, conflicted,
op_root, have_props, props_mod,
have_base, have_more_work, have_work,
wcroot, local_relpath, result_pool, scratch_pool));
SVN_ERR(svn_wc__db_fetch_repos_info(repos_root_url, repos_uuid,
wcroot->sdb, repos_id, result_pool));
SVN_ERR(svn_wc__db_fetch_repos_info(original_root_url, original_uuid,
wcroot, local_relpath, result_pool, scratch_pool),
svn_wc__db_fetch_repos_info(repos_root_url, repos_uuid,
wcroot->sdb, repos_id, result_pool),
svn_wc__db_fetch_repos_info(original_root_url, original_uuid,
wcroot->sdb, original_repos_id,
result_pool));
result_pool),
SVN_NO_ERROR,
wcroot);
return SVN_NO_ERROR;
}
@ -10865,59 +10897,86 @@ svn_wc__db_global_relocate(svn_wc__db_t *db,
}
/* Set *REPOS_ID and *REPOS_RELPATH to the BASE repository location of
/* Helper for commit_node()
Set *REPOS_ID and *REPOS_RELPATH to the BASE repository location of
(WCROOT, LOCAL_RELPATH), directly if its BASE row exists or implied from
its parent's BASE row if not. In the latter case, error if the parent
BASE row does not exist. */
static svn_error_t *
determine_repos_info(apr_int64_t *repos_id,
const char **repos_relpath,
svn_wc__db_wcroot_t *wcroot,
const char *local_relpath,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
determine_commit_repos_info(apr_int64_t *repos_id,
const char **repos_relpath,
svn_wc__db_wcroot_t *wcroot,
const char *local_relpath,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
{
svn_sqlite__stmt_t *stmt;
svn_boolean_t have_row;
const char *repos_parent_relpath;
const char *local_parent_relpath, *name;
/* ### is it faster to fetch fewer columns? */
int op_depth;
/* Prefer the current node's repository information. */
SVN_ERR(svn_sqlite__get_statement(&stmt, wcroot->sdb,
STMT_SELECT_BASE_NODE));
STMT_SELECT_NODE_INFO));
SVN_ERR(svn_sqlite__bindf(stmt, "is", wcroot->wc_id, local_relpath));
SVN_ERR(svn_sqlite__step(&have_row, stmt));
if (have_row)
if (!have_row)
return svn_error_createf(SVN_ERR_WC_PATH_NOT_FOUND,
svn_sqlite__reset(stmt),
_("The node '%s' was not found."),
path_for_error_message(wcroot, local_relpath,
scratch_pool));
op_depth = svn_sqlite__column_int(stmt, 0);
if (op_depth > 0)
{
SVN_ERR_ASSERT(!svn_sqlite__column_is_null(stmt, 0));
SVN_ERR_ASSERT(!svn_sqlite__column_is_null(stmt, 1));
svn_wc__db_status_t presence = svn_sqlite__column_token(stmt, 3,
presence_map);
*repos_id = svn_sqlite__column_int64(stmt, 0);
*repos_relpath = svn_sqlite__column_text(stmt, 1, result_pool);
if (presence == svn_wc__db_status_base_deleted)
{
SVN_ERR(svn_sqlite__step_row(stmt)); /* There must be a row */
op_depth = svn_sqlite__column_int(stmt, 0);
}
else
{
const char *parent_repos_relpath;
const char *parent_relpath;
const char *name;
return svn_error_trace(svn_sqlite__reset(stmt));
SVN_ERR(svn_sqlite__reset(stmt));
/* The repository relative path of an add/copy is based on its
ancestor, not on the shadowed base layer.
As this function is only used by the commit processing, we know the
parent directory has only a BASE row, so we can obtain the
information directly by recursing (once!). */
svn_relpath_split(&parent_relpath, &name, local_relpath,
scratch_pool);
SVN_ERR(determine_commit_repos_info(repos_id, &parent_repos_relpath,
wcroot, parent_relpath,
scratch_pool, scratch_pool));
*repos_relpath = svn_relpath_join(parent_repos_relpath, name,
result_pool);
return SVN_NO_ERROR;
}
}
SVN_ERR(svn_sqlite__reset(stmt));
/* This was a child node within this wcroot. We want to look at the
BASE node of the directory. */
svn_relpath_split(&local_parent_relpath, &name, local_relpath, scratch_pool);
SVN_ERR_ASSERT(op_depth == 0); /* And that row must be BASE */
/* The REPOS_ID will be the same (### until we support mixed-repos) */
SVN_ERR(svn_wc__db_base_get_info_internal(NULL, NULL, NULL,
&repos_parent_relpath, repos_id,
NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL,
wcroot, local_parent_relpath,
scratch_pool, scratch_pool));
SVN_ERR_ASSERT(!svn_sqlite__column_is_null(stmt, 1));
SVN_ERR_ASSERT(!svn_sqlite__column_is_null(stmt, 2));
*repos_relpath = svn_relpath_join(repos_parent_relpath, name, result_pool);
*repos_id = svn_sqlite__column_int64(stmt, 1);
*repos_relpath = svn_sqlite__column_text(stmt, 2, result_pool);
return SVN_NO_ERROR;
return svn_error_trace(svn_sqlite__reset(stmt));
}
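A hedged illustration of the single recursion described above, with invented paths: committing a node whose own row is an add or copy (op_depth > 0, not base-deleted) derives its repository location from its parent's BASE row:

local_relpath       "A/D/new.c"   (added; op_depth > 0)
parent "A/D" BASE   repos_relpath "trunk/A/D"
result              repos_relpath "trunk/A/D/new.c", repos_id taken from the parent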
/* Helper for svn_wc__db_global_commit()
@ -11103,9 +11162,9 @@ commit_node(svn_wc__db_wcroot_t *wcroot,
For existing nodes, we should retain the (potentially-switched)
repository information. */
SVN_ERR(determine_repos_info(&repos_id, &repos_relpath,
wcroot, local_relpath,
scratch_pool, scratch_pool));
SVN_ERR(determine_commit_repos_info(&repos_id, &repos_relpath,
wcroot, local_relpath,
scratch_pool, scratch_pool));
/* ### is it better to select only the data needed? */
SVN_ERR(svn_sqlite__get_statement(&stmt_info, wcroot->sdb,
View File
@ -1361,8 +1361,8 @@ svn_wc__db_op_copy_dir(svn_wc__db_t *db,
const char *original_uuid,
svn_revnum_t original_revision,
const apr_array_header_t *children,
svn_boolean_t is_move,
svn_depth_t depth,
svn_boolean_t is_move,
const svn_skel_t *conflict,
const svn_skel_t *work_items,
apr_pool_t *scratch_pool);
@ -1403,6 +1403,7 @@ svn_wc__db_op_copy_symlink(svn_wc__db_t *db,
const char *original_uuid,
svn_revnum_t original_revision,
const char *target,
svn_boolean_t is_move,
const svn_skel_t *conflict,
const svn_skel_t *work_items,
apr_pool_t *scratch_pool);
View File
@ -358,6 +358,18 @@ svn_wc__db_with_txn(svn_wc__db_wcroot_t *wcroot,
SVN_SQLITE__WITH_LOCK(expr, (wcroot)->sdb)
/* Evaluate the expressions EXPR1..EXPR4 within a transaction, returning the
* first error if an error occurs.
*
* Begin a transaction in WCROOT's DB; evaluate the expressions, which would
* typically be function calls that do some work in DB; finally commit
* the transaction if none of the expressions returned an error, otherwise
* roll back the transaction.
*/
#define SVN_WC__DB_WITH_TXN4(expr1, expr2, expr3, expr4, wcroot) \
SVN_SQLITE__WITH_LOCK4(expr1, expr2, expr3, expr4, (wcroot)->sdb)
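A minimal sketch of how the new four-expression wrapper is intended to be called, mirroring the svn_wc__db_read_info() and fetch_sdb_info() changes elsewhere in this commit; read_part() and fetch_part() are hypothetical stand-ins for real statement helpers, and unused slots are simply passed SVN_NO_ERROR:

/* Hypothetical statement helpers; each runs its own queries against
   WCROOT->SDB and returns an svn_error_t *. */
static svn_error_t *read_part(svn_wc__db_wcroot_t *wcroot,
                              apr_pool_t *scratch_pool);
static svn_error_t *fetch_part(svn_wc__db_wcroot_t *wcroot,
                               apr_pool_t *scratch_pool);

static svn_error_t *
read_and_fetch(svn_wc__db_wcroot_t *wcroot,
               apr_pool_t *scratch_pool)
{
  /* Both helpers run inside one transaction on wcroot->sdb; per the
     comment above, the first error is returned and the transaction is
     rolled back, otherwise everything is committed together. */
  SVN_WC__DB_WITH_TXN4(
    read_part(wcroot, scratch_pool),    /* expr1 */
    fetch_part(wcroot, scratch_pool),   /* expr2 */
    SVN_NO_ERROR,                       /* expr3: unused */
    SVN_NO_ERROR,                       /* expr4: unused */
    wcroot);
  return SVN_NO_ERROR;
}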
/* Return CHILDREN mapping const char * names to svn_node_kind_t * for the
children of LOCAL_RELPATH at OP_DEPTH. */
svn_error_t *
View File
@ -262,7 +262,7 @@ svn_wc__db_pdh_create_wcroot(svn_wc__db_wcroot_t **wcroot,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
{
if (sdb != NULL)
if (sdb && format == FORMAT_FROM_SDB)
SVN_ERR(svn_sqlite__read_schema_version(&format, sdb, scratch_pool));
/* If we construct a wcroot, then we better have a format. */
@ -414,6 +414,56 @@ read_link_target(const char **link_target_abspath,
return SVN_NO_ERROR;
}
/* Verify that the sqlite_stat1 table exists and, if it does not, try to
add it (ignoring any error from installing the schema statistics). */
static svn_error_t *
verify_stats_table(svn_sqlite__db_t *sdb,
int format,
apr_pool_t *scratch_pool)
{
svn_sqlite__stmt_t *stmt;
svn_boolean_t have_row;
if (format != SVN_WC__ENSURE_STAT1_TABLE)
return SVN_NO_ERROR;
SVN_ERR(svn_sqlite__get_statement(&stmt, sdb,
STMT_HAVE_STAT1_TABLE));
SVN_ERR(svn_sqlite__step(&have_row, stmt));
SVN_ERR(svn_sqlite__reset(stmt));
if (!have_row)
{
svn_error_clear(
svn_wc__db_install_schema_statistics(sdb, scratch_pool));
}
return SVN_NO_ERROR;
}
/* Sqlite transaction helper for opening the db in
svn_wc__db_wcroot_parse_local_abspath() to avoid multiple
db operations that each obtain and release a lock */
static svn_error_t *
fetch_sdb_info(apr_int64_t *wc_id,
int *format,
svn_sqlite__db_t *sdb,
apr_pool_t *scratch_pool)
{
*wc_id = -1;
*format = -1;
SVN_SQLITE__WITH_LOCK4(
svn_wc__db_util_fetch_wc_id(wc_id, sdb, scratch_pool),
svn_sqlite__read_schema_version(format, sdb, scratch_pool),
verify_stats_table(sdb, *format, scratch_pool),
SVN_NO_ERROR,
sdb);
return SVN_NO_ERROR;
}
svn_error_t *
svn_wc__db_wcroot_parse_local_abspath(svn_wc__db_wcroot_t **wcroot,
const char **local_relpath,
@ -654,9 +704,10 @@ svn_wc__db_wcroot_parse_local_abspath(svn_wc__db_wcroot_t **wcroot,
/* We finally found the database. Construct a wcroot_t for it. */
apr_int64_t wc_id;
int format;
svn_error_t *err;
err = svn_wc__db_util_fetch_wc_id(&wc_id, sdb, scratch_pool);
err = fetch_sdb_info(&wc_id, &format, sdb, scratch_pool);
if (err)
{
if (err->apr_err == SVN_ERR_WC_CORRUPT)
@ -677,7 +728,7 @@ svn_wc__db_wcroot_parse_local_abspath(svn_wc__db_wcroot_t **wcroot,
symlink_wcroot_abspath
? symlink_wcroot_abspath
: local_abspath),
sdb, wc_id, FORMAT_FROM_SDB,
sdb, wc_id, format,
db->verify_format, db->enforce_empty_wq,
db->state_pool, scratch_pool);
if (err && (err->apr_err == SVN_ERR_WC_UNSUPPORTED_FORMAT ||
View File
@ -509,24 +509,6 @@ static const resolver_option_t prop_conflict_options[] =
{ NULL }
};
/* Resolver options for an obstructed addition */
static const resolver_option_t obstructed_add_options[] =
{
{ "mf", N_("my version"), N_("accept pre-existing item (ignore "
"upstream addition) [mine-full]"),
svn_wc_conflict_choose_mine_full },
{ "tf", N_("their version"), N_("accept incoming item (overwrite "
"pre-existing item) [theirs-full]"),
svn_wc_conflict_choose_theirs_full },
{ "p", N_("postpone"), N_("mark the conflict to be resolved later"
" [postpone]"),
svn_wc_conflict_choose_postpone },
{ "q", N_("quit resolution"), N_("postpone all remaining conflicts"),
svn_wc_conflict_choose_postpone },
{ "h", N_("help"), N_("show this help (also '?')"), -1 },
{ NULL }
};
/* Resolver options for a tree conflict */
static const resolver_option_t tree_conflict_options[] =
{
@ -1132,56 +1114,6 @@ handle_tree_conflict(svn_wc_conflict_result_t *result,
return SVN_NO_ERROR;
}
/* Ask the user what to do about the obstructed add described by DESC.
* Return the answer in RESULT. B is the conflict baton for this
* conflict resolution session.
* SCRATCH_POOL is used for temporary allocations. */
static svn_error_t *
handle_obstructed_add(svn_wc_conflict_result_t *result,
const svn_wc_conflict_description2_t *desc,
svn_cl__interactive_conflict_baton_t *b,
apr_pool_t *scratch_pool)
{
apr_pool_t *iterpool;
SVN_ERR(svn_cmdline_fprintf(
stderr, scratch_pool,
_("Conflict discovered when trying to add '%s'.\n"
"An object of the same name already exists.\n"),
svn_cl__local_style_skip_ancestor(b->path_prefix,
desc->local_abspath,
scratch_pool)));
iterpool = svn_pool_create(scratch_pool);
while (1)
{
const resolver_option_t *opt;
svn_pool_clear(iterpool);
SVN_ERR(prompt_user(&opt, obstructed_add_options, NULL, b->pb,
iterpool));
if (! opt)
continue;
if (strcmp(opt->code, "q") == 0)
{
result->choice = opt->choice;
b->accept_which = svn_cl__accept_postpone;
b->quit = TRUE;
break;
}
else if (opt->choice != -1)
{
result->choice = opt->choice;
break;
}
}
svn_pool_destroy(iterpool);
return SVN_NO_ERROR;
}
/* The body of svn_cl__conflict_func_interactive(). */
static svn_error_t *
conflict_func_interactive(svn_wc_conflict_result_t **result,
@ -1330,29 +1262,6 @@ conflict_func_interactive(svn_wc_conflict_result_t **result,
SVN_ERR(handle_text_conflict(*result, desc, b, scratch_pool));
else if (desc->kind == svn_wc_conflict_kind_property)
SVN_ERR(handle_prop_conflict(*result, desc, b, result_pool, scratch_pool));
/*
Dealing with obstruction of additions can be tricky. The
obstructing item could be unversioned, versioned, or even
schedule-add. Here's a matrix of how the caller should behave,
based on results we return.
                   Unversioned        Versioned         Schedule-Add

   choose_mine     skip addition,     skip addition     skip addition
                   add existing item

   choose_theirs   destroy file,      schedule-delete,  revert add,
                   add new item.      add new item.     rm file,
                                                        add new item

   postpone            [              bail out               ]
*/
else if ((desc->action == svn_wc_conflict_action_add)
&& (desc->reason == svn_wc_conflict_reason_obstructed))
SVN_ERR(handle_obstructed_add(*result, desc, b, scratch_pool));
else if (desc->kind == svn_wc_conflict_kind_tree)
SVN_ERR(handle_tree_conflict(*result, desc, b, scratch_pool));
View File
@ -49,6 +49,12 @@ struct print_baton {
svn_boolean_t in_external;
};
/* Field flags required for this function */
static const apr_uint32_t print_dirent_fields = SVN_DIRENT_KIND;
static const apr_uint32_t print_dirent_fields_verbose = (
SVN_DIRENT_KIND | SVN_DIRENT_SIZE | SVN_DIRENT_TIME |
SVN_DIRENT_CREATED_REV | SVN_DIRENT_LAST_AUTHOR);
/* This implements the svn_client_list_func2_t API, printing a single
directory entry in text format. */
static svn_error_t *
@ -161,7 +167,10 @@ print_dirent(void *baton,
}
}
/* Field flags required for this function */
static const apr_uint32_t print_dirent_xml_fields = (
SVN_DIRENT_KIND | SVN_DIRENT_SIZE | SVN_DIRENT_TIME |
SVN_DIRENT_CREATED_REV | SVN_DIRENT_LAST_AUTHOR);
/* This implements the svn_client_list_func2_t API, printing a single dirent
in XML format. */
static svn_error_t *
@ -313,10 +322,12 @@ svn_cl__list(apr_getopt_t *os,
"mode"));
}
if (opt_state->verbose || opt_state->xml)
dirent_fields = SVN_DIRENT_ALL;
if (opt_state->xml)
dirent_fields = print_dirent_xml_fields;
else if (opt_state->verbose)
dirent_fields = print_dirent_fields_verbose;
else
dirent_fields = SVN_DIRENT_KIND; /* the only thing we actually need... */
dirent_fields = print_dirent_fields;
pb.ctx = ctx;
pb.verbose = opt_state->verbose;
View File
@ -1327,6 +1327,13 @@ const svn_opt_subcommand_desc2_t svn_cl__cmd_table[] =
" directory:\n"
" svn:ignore - A list of file glob patterns to ignore, one per line.\n"
" svn:global-ignores - Like svn:ignore, but inheritable.\n"
" svn:auto-props - Automatically set properties on files when they are\n"
" added or imported. Contains key-value pairs, one per line, in the format:\n"
" PATTERN = PROPNAME=VALUE[;PROPNAME=VALUE ...]\n"
" Example (where a literal ';' is escaped by adding another ';'):\n"
" *.html = svn:eol-style=native;svn:mime-type=text/html;; charset=UTF8\n"
" Applies recursively to all files added or imported under the directory\n"
" it is set on. See also [auto-props] in the client configuration file.\n"
" svn:externals - A list of module specifiers, one per line, in the\n"
" following format similar to the syntax of 'svn checkout':\n"
" [-r REV] URL[@PEG] LOCALPATH\n"
@ -2514,9 +2521,15 @@ sub_main(int argc, const char *argv[], apr_pool_t *pool)
if (APR_STATUS_IS_EACCES(err->apr_err)
|| SVN__APR_STATUS_IS_ENOTDIR(err->apr_err))
{
svn_config_t *empty_cfg;
svn_handle_warning2(stderr, err, "svn: ");
svn_error_clear(err);
cfg_hash = NULL;
cfg_hash = apr_hash_make(pool);
SVN_INT_ERR(svn_config_create2(&empty_cfg, FALSE, FALSE, pool));
svn_hash_sets(cfg_hash, SVN_CONFIG_CATEGORY_CONFIG, empty_cfg);
SVN_INT_ERR(svn_config_create2(&empty_cfg, FALSE, FALSE, pool));
svn_hash_sets(cfg_hash, SVN_CONFIG_CATEGORY_SERVERS, empty_cfg);
}
else
return EXIT_ERROR(err);
View File
@ -91,8 +91,7 @@
/* Define to 1 if you have the <zlib.h> header file. */
#undef HAVE_ZLIB_H
/* Define to the sub-directory in which libtool stores uninstalled libraries.
*/
/* Define to the sub-directory where libtool stores uninstalled libraries. */
#undef LT_OBJDIR
/* Define to the address where bug reports for this package should be sent. */
@ -116,6 +115,9 @@
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Defined to allow building against httpd 2.4 with broken auth */
#undef SVN_ALLOW_BROKEN_HTTPD_AUTH
/* Define to the Python/C API format character suitable for apr_int64_t */
#undef SVN_APR_INT64_T_PYCFMT
View File
@ -1084,7 +1084,7 @@ subcommand_freeze(apr_getopt_t *os, void *baton, apr_pool_t *pool)
}
b.command = APR_ARRAY_IDX(args, 0, const char *);
b.args = apr_palloc(pool, sizeof(char *) * args->nelts + 1);
b.args = apr_palloc(pool, sizeof(char *) * (args->nelts + 1));
for (i = 0; i < args->nelts; ++i)
b.args[i] = APR_ARRAY_IDX(args, i, const char *);
b.args[args->nelts] = NULL;
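The allocation fix above is clearer with a worked example (assuming 8-byte pointers and args->nelts == 3): because multiplication binds tighter than addition, the old expression sizeof(char *) * args->nelts + 1 allocates 8 * 3 + 1 = 25 bytes, while the argument vector actually needs room for 4 pointers (the 3 arguments plus the terminating NULL), which is what sizeof(char *) * (args->nelts + 1) = 32 bytes provides.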
View File
@ -791,8 +791,7 @@ adjust_mergeinfo(svn_string_t **final_val, const svn_string_t *initial_val,
start of all history. E.g. if we dump -r100:400 then dumpfilter the
result with --skip-missing-merge-sources, any mergeinfo with revision
100 implies a change of -r99:100, but r99 is part of the history we
want filtered. This is analogous to how r1 is always meaningless as
a merge source revision.
want filtered.
If the oldest rev is r0 then there is nothing to filter. */
if (rb->pb->skip_missing_merge_sources && rb->pb->oldest_original_rev > 0)
@ -852,7 +851,7 @@ adjust_mergeinfo(svn_string_t **final_val, const svn_string_t *initial_val,
svn_hash_sets(final_mergeinfo, merge_source, rangelist);
}
SVN_ERR(svn_mergeinfo_sort(final_mergeinfo, subpool));
SVN_ERR(svn_mergeinfo__canonicalize_ranges(final_mergeinfo, subpool));
SVN_ERR(svn_mergeinfo_to_string(final_val, final_mergeinfo, pool));
svn_pool_destroy(subpool);
View File
@ -41,6 +41,8 @@
#define SVNRDUMP_PROP_LOCK SVN_PROP_PREFIX "rdump-lock"
#define ARE_VALID_COPY_ARGS(p,r) ((p) && SVN_IS_VALID_REVNUM(r))
#if 0
#define LDR_DBG(x) SVN_DBG(x)
#else
@ -102,6 +104,15 @@ struct directory_baton
void *baton;
const char *relpath;
int depth;
/* The copy-from source of this directory, no matter whether it is
copied explicitly (the root node of a copy) or implicitly (being an
existing child of a copied directory). For a node that is newly
added (without history), even inside a copied parent, these are
NULL and SVN_INVALID_REVNUM. */
const char *copyfrom_path;
svn_revnum_t copyfrom_rev;
struct directory_baton *parent;
};
@ -115,12 +126,20 @@ struct node_baton
svn_node_kind_t kind;
enum svn_node_action action;
/* Is this directory explicitly added? If not, then it already existed
or is a child of a copy. */
svn_boolean_t is_added;
svn_revnum_t copyfrom_rev;
const char *copyfrom_path;
const char *copyfrom_url;
void *file_baton;
const char *base_checksum;
/* (const char *name) -> (svn_prop_t *) */
apr_hash_t *prop_changes;
struct revision_baton *rb;
};
@ -316,16 +335,7 @@ renumber_mergeinfo_revs(svn_string_t **final_val,
subpool, subpool));
}
SVN_ERR(svn_mergeinfo_sort(final_mergeinfo, subpool));
/* Mergeinfo revision sources for r0 and r1 are invalid; you can't merge r0
or r1. However, svndumpfilter can be abused to produce r1 merge source
revs. So if we encounter any, then strip them out, no need to put them
into the load target. */
SVN_ERR(svn_mergeinfo__filter_mergeinfo_by_ranges(&final_mergeinfo,
final_mergeinfo,
1, 0, FALSE,
subpool, subpool));
SVN_ERR(svn_mergeinfo__canonicalize_ranges(final_mergeinfo, subpool));
SVN_ERR(svn_mergeinfo_to_string(final_val, final_mergeinfo, pool));
svn_pool_destroy(subpool);
@ -550,6 +560,7 @@ new_revision_record(void **revision_baton,
pb = parse_baton;
rb->pool = svn_pool_create(pool);
rb->pb = pb;
rb->db = NULL;
for (hi = apr_hash_first(pool, headers); hi; hi = apr_hash_next(hi))
{
@ -601,6 +612,47 @@ uuid_record(const char *uuid,
return SVN_NO_ERROR;
}
/* Push information about another directory onto the linked list RB->db.
*
* CHILD_BATON is the baton returned by the commit editor. RELPATH is the
* repository-relative path of this directory. IS_ADDED is true iff this
* directory is being added (with or without history). If added with
* history, then COPYFROM_PATH/COPYFROM_REV are the copyfrom source;
* otherwise they are NULL/SVN_INVALID_REVNUM.
*/
static void
push_directory(struct revision_baton *rb,
void *child_baton,
const char *relpath,
svn_boolean_t is_added,
const char *copyfrom_path,
svn_revnum_t copyfrom_rev)
{
struct directory_baton *child_db = apr_pcalloc(rb->pool, sizeof (*child_db));
SVN_ERR_ASSERT_NO_RETURN(
is_added || (copyfrom_path == NULL && copyfrom_rev == SVN_INVALID_REVNUM));
/* If this node is an existing (not newly added) child of a copied node,
calculate where it was copied from. */
if (!is_added
&& ARE_VALID_COPY_ARGS(rb->db->copyfrom_path, rb->db->copyfrom_rev))
{
const char *name = svn_relpath_basename(relpath, NULL);
copyfrom_path = svn_relpath_join(rb->db->copyfrom_path, name,
rb->pool);
copyfrom_rev = rb->db->copyfrom_rev;
}
child_db->baton = child_baton;
child_db->relpath = relpath;
child_db->copyfrom_path = copyfrom_path;
child_db->copyfrom_rev = copyfrom_rev;
child_db->parent = rb->db;
rb->db = child_db;
}
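To illustrate the implicit copy-from bookkeeping above (names invented): a child that is opened, not added, inside a copied parent inherits a derived copy source, while a newly added child records none:

parent baton   copyfrom_path "branches/foo", copyfrom_rev 10
open "bar"     ->  copyfrom "branches/foo/bar" @ 10   (derived from the parent)
add  "new"     ->  NULL / SVN_INVALID_REVNUM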
static svn_error_t *
new_node_record(void **node_baton,
apr_hash_t *headers,
@ -611,17 +663,17 @@ new_node_record(void **node_baton,
const struct svn_delta_editor_t *commit_editor = rb->pb->commit_editor;
void *commit_edit_baton = rb->pb->commit_edit_baton;
struct node_baton *nb;
struct directory_baton *child_db;
apr_hash_index_t *hi;
void *child_baton;
char *relpath_compose;
const char *nb_dirname;
nb = apr_pcalloc(rb->pool, sizeof(*nb));
nb->rb = rb;
nb->is_added = FALSE;
nb->copyfrom_path = NULL;
nb->copyfrom_url = NULL;
nb->copyfrom_rev = SVN_INVALID_REVNUM;
nb->prop_changes = apr_hash_make(rb->pool);
/* If the creation of commit_editor is pending, create it now and
open_root on it; also create a top-level directory baton. */
@ -654,13 +706,9 @@ new_node_record(void **node_baton,
LDR_DBG(("Opened root %p\n", child_baton));
/* child_db corresponds to the root directory baton here */
child_db = apr_pcalloc(rb->pool, sizeof(*child_db));
child_db->baton = child_baton;
child_db->depth = 0;
child_db->relpath = "";
child_db->parent = NULL;
rb->db = child_db;
/* child_baton corresponds to the root directory baton here */
push_directory(rb, child_baton, "", TRUE /*is_added*/,
NULL, SVN_INVALID_REVNUM);
}
for (hi = apr_hash_first(rb->pool, headers); hi; hi = apr_hash_next(hi))
@ -730,7 +778,7 @@ new_node_record(void **node_baton,
for (i = 0; i < residual_open_path->nelts; i ++)
{
relpath_compose =
char *relpath_compose =
svn_relpath_join(rb->db->relpath,
APR_ARRAY_IDX(residual_open_path, i, const char *),
rb->pool);
@ -739,12 +787,8 @@ new_node_record(void **node_baton,
rb->rev - rb->rev_offset - 1,
rb->pool, &child_baton));
LDR_DBG(("Opened dir %p\n", child_baton));
child_db = apr_pcalloc(rb->pool, sizeof(*child_db));
child_db->baton = child_baton;
child_db->depth = rb->db->depth + 1;
child_db->relpath = relpath_compose;
child_db->parent = rb->db;
rb->db = child_db;
push_directory(rb, child_baton, relpath_compose, TRUE /*is_added*/,
NULL, SVN_INVALID_REVNUM);
}
}
@ -772,7 +816,7 @@ new_node_record(void **node_baton,
if (rb->pb->parent_dir)
nb->copyfrom_path = svn_relpath_join(rb->pb->parent_dir,
nb->copyfrom_path, rb->pool);
nb->copyfrom_path = svn_path_url_add_component2(rb->pb->root_url,
nb->copyfrom_url = svn_path_url_add_component2(rb->pb->root_url,
nb->copyfrom_path,
rb->pool);
}
@ -783,18 +827,20 @@ new_node_record(void **node_baton,
case svn_node_action_delete:
case svn_node_action_replace:
LDR_DBG(("Deleting entry %s in %p\n", nb->path, rb->db->baton));
SVN_ERR(commit_editor->delete_entry(nb->path, rb->rev - rb->rev_offset,
SVN_ERR(commit_editor->delete_entry(nb->path,
rb->rev - rb->rev_offset - 1,
rb->db->baton, rb->pool));
if (nb->action == svn_node_action_delete)
break;
else
/* FALL THROUGH */;
case svn_node_action_add:
nb->is_added = TRUE;
switch (nb->kind)
{
case svn_node_file:
SVN_ERR(commit_editor->add_file(nb->path, rb->db->baton,
nb->copyfrom_path,
nb->copyfrom_url,
nb->copyfrom_rev,
rb->pool, &(nb->file_baton)));
LDR_DBG(("Added file %s to dir %p as %p\n",
@ -802,17 +848,13 @@ new_node_record(void **node_baton,
break;
case svn_node_dir:
SVN_ERR(commit_editor->add_directory(nb->path, rb->db->baton,
nb->copyfrom_path,
nb->copyfrom_url,
nb->copyfrom_rev,
rb->pool, &child_baton));
LDR_DBG(("Added dir %s to dir %p as %p\n",
nb->path, rb->db->baton, child_baton));
child_db = apr_pcalloc(rb->pool, sizeof(*child_db));
child_db->baton = child_baton;
child_db->depth = rb->db->depth + 1;
child_db->relpath = apr_pstrdup(rb->pool, nb->path);
child_db->parent = rb->db;
rb->db = child_db;
push_directory(rb, child_baton, nb->path, TRUE /*is_added*/,
nb->copyfrom_path, nb->copyfrom_rev);
break;
default:
break;
@ -830,12 +872,8 @@ new_node_record(void **node_baton,
SVN_ERR(commit_editor->open_directory(nb->path, rb->db->baton,
rb->rev - rb->rev_offset - 1,
rb->pool, &child_baton));
child_db = apr_pcalloc(rb->pool, sizeof(*child_db));
child_db->baton = child_baton;
child_db->depth = rb->db->depth + 1;
child_db->relpath = apr_pstrdup(rb->pool, nb->path);
child_db->parent = rb->db;
rb->db = child_db;
push_directory(rb, child_baton, nb->path, FALSE /*is_added*/,
NULL, SVN_INVALID_REVNUM);
break;
}
break;
@ -887,8 +925,8 @@ set_node_property(void *baton,
const svn_string_t *value)
{
struct node_baton *nb = baton;
const struct svn_delta_editor_t *commit_editor = nb->rb->pb->commit_editor;
apr_pool_t *pool = nb->rb->pool;
svn_prop_t *prop;
if (value && strcmp(name, SVN_PROP_MERGEINFO) == 0)
{
@ -938,21 +976,11 @@ set_node_property(void *baton,
SVN_ERR(svn_repos__validate_prop(name, value, pool));
switch (nb->kind)
{
case svn_node_file:
LDR_DBG(("Applying properties on %p\n", nb->file_baton));
SVN_ERR(commit_editor->change_file_prop(nb->file_baton, name,
value, pool));
break;
case svn_node_dir:
LDR_DBG(("Applying properties on %p\n", nb->rb->db->baton));
SVN_ERR(commit_editor->change_dir_prop(nb->rb->db->baton, name,
value, pool));
break;
default:
break;
}
prop = apr_palloc(nb->rb->pool, sizeof (*prop));
prop->name = apr_pstrdup(pool, name);
prop->value = value ? svn_string_dup(value, pool) : NULL;
svn_hash_sets(nb->prop_changes, prop->name, prop);
return SVN_NO_ERROR;
}
@ -961,44 +989,84 @@ delete_node_property(void *baton,
const char *name)
{
struct node_baton *nb = baton;
const struct svn_delta_editor_t *commit_editor = nb->rb->pb->commit_editor;
apr_pool_t *pool = nb->rb->pool;
svn_prop_t *prop;
SVN_ERR(svn_repos__validate_prop(name, NULL, pool));
if (nb->kind == svn_node_file)
SVN_ERR(commit_editor->change_file_prop(nb->file_baton, name,
NULL, pool));
else
SVN_ERR(commit_editor->change_dir_prop(nb->rb->db->baton, name,
NULL, pool));
prop = apr_palloc(pool, sizeof (*prop));
prop->name = apr_pstrdup(pool, name);
prop->value = NULL;
svn_hash_sets(nb->prop_changes, prop->name, prop);
return SVN_NO_ERROR;
}
/* Delete all the properties of the node, if any.
*
* The commit editor doesn't have a method to delete a node's properties
* without knowing what they are, so we have to first find out what
* properties the node would have had. If it's copied (explicitly or
* implicitly), we look at the copy source. If it's only being changed,
* we look at the node's current path in the head revision.
*/
static svn_error_t *
remove_node_props(void *baton)
{
struct node_baton *nb = baton;
struct revision_baton *rb = nb->rb;
apr_pool_t *pool = nb->rb->pool;
apr_hash_index_t *hi;
apr_hash_t *props;
const char *orig_path;
svn_revnum_t orig_rev;
/* Find the path and revision that has the node's original properties */
if (ARE_VALID_COPY_ARGS(nb->copyfrom_path, nb->copyfrom_rev))
{
LDR_DBG(("using nb->copyfrom %s@%ld", nb->copyfrom_path, nb->copyfrom_rev));
orig_path = nb->copyfrom_path;
orig_rev = nb->copyfrom_rev;
}
else if (!nb->is_added
&& ARE_VALID_COPY_ARGS(rb->db->copyfrom_path, rb->db->copyfrom_rev))
{
/* If this is a dir, then it's described by rb->db;
if this is a file, then it's a child of the dir in rb->db. */
LDR_DBG(("using rb->db->copyfrom (k=%d) %s@%ld",
nb->kind, rb->db->copyfrom_path, rb->db->copyfrom_rev));
orig_path = (nb->kind == svn_node_dir)
? rb->db->copyfrom_path
: svn_relpath_join(rb->db->copyfrom_path,
svn_relpath_basename(nb->path, NULL),
rb->pool);
orig_rev = rb->db->copyfrom_rev;
}
else
{
LDR_DBG(("using self.path@head %s@%ld", nb->path, SVN_INVALID_REVNUM));
/* ### Should we query at a known, fixed, "head" revision number
instead of passing SVN_INVALID_REVNUM and getting a moving target? */
orig_path = nb->path;
orig_rev = SVN_INVALID_REVNUM;
}
LDR_DBG(("Trying %s@%ld", orig_path, orig_rev));
if ((nb->action == svn_node_action_add
|| nb->action == svn_node_action_replace)
&& ! SVN_IS_VALID_REVNUM(nb->copyfrom_rev))
&& ! ARE_VALID_COPY_ARGS(orig_path, orig_rev))
/* Add-without-history; no "old" properties to worry about. */
return SVN_NO_ERROR;
if (nb->kind == svn_node_file)
{
SVN_ERR(svn_ra_get_file(nb->rb->pb->aux_session, nb->path,
SVN_INVALID_REVNUM, NULL, NULL, &props, pool));
SVN_ERR(svn_ra_get_file(nb->rb->pb->aux_session,
orig_path, orig_rev, NULL, NULL, &props, pool));
}
else /* nb->kind == svn_node_dir */
{
SVN_ERR(svn_ra_get_dir2(nb->rb->pb->aux_session, NULL, NULL, &props,
nb->path, SVN_INVALID_REVNUM, 0, pool));
orig_path, orig_rev, 0, pool));
}
for (hi = apr_hash_first(pool, props); hi; hi = apr_hash_next(hi))
@ -1052,6 +1120,29 @@ close_node(void *baton)
{
struct node_baton *nb = baton;
const struct svn_delta_editor_t *commit_editor = nb->rb->pb->commit_editor;
apr_pool_t *pool = nb->rb->pool;
apr_hash_index_t *hi;
for (hi = apr_hash_first(pool, nb->prop_changes);
hi; hi = apr_hash_next(hi))
{
const char *name = svn__apr_hash_index_key(hi);
svn_prop_t *prop = svn__apr_hash_index_val(hi);
switch (nb->kind)
{
case svn_node_file:
SVN_ERR(commit_editor->change_file_prop(nb->file_baton,
name, prop->value, pool));
break;
case svn_node_dir:
SVN_ERR(commit_editor->change_dir_prop(nb->rb->db->baton,
name, prop->value, pool));
break;
default:
break;
}
}
/* Pass a file node closure through to the editor *unless* we
deleted the file (which doesn't require us to open it). */
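A hedged example of the source selection in remove_node_props() above (paths invented): clearing the properties of an existing file child of a copied directory now reads the old properties from the copy source rather than from the node's own path at HEAD:

copied parent   rb->db->copyfrom = "trunk/dir" @ 5
child file      nb->path = "dir/file.c"   (not added)
old props read from "trunk/dir/file.c" @ 5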
View File
@ -1710,6 +1710,11 @@ static svn_error_t *get_dir(svn_ra_svn_conn_t *conn, apr_pool_t *pool,
&ab, root, full_path,
pool));
/* Fetch the directory's entries before starting the response, to allow
proper error handling in cases such as FULL_PATH not existing. */
if (want_contents)
SVN_CMD_ERR(svn_fs_dir_entries(&entries, root, full_path, pool));
/* Begin response ... */
SVN_ERR(svn_ra_svn__write_tuple(conn, pool, "w(r(!", "success", rev));
SVN_ERR(svn_ra_svn__write_proplist(conn, pool, props));
@ -1721,8 +1726,6 @@ static svn_error_t *get_dir(svn_ra_svn_conn_t *conn, apr_pool_t *pool,
/* Use epoch for a placeholder for a missing date. */
const char *missing_date = svn_time_to_cstring(0, pool);
SVN_CMD_ERR(svn_fs_dir_entries(&entries, root, full_path, pool));
/* Transform the hash table's FS entries into dirents. This probably
* belongs in libsvn_repos. */
subpool = svn_pool_create(pool);
@ -2465,9 +2468,30 @@ static svn_error_t *get_location_segments(svn_ra_svn_conn_t *conn,
abs_path = svn_fspath__join(b->fs_path->data, relative_path, pool);
if (SVN_IS_VALID_REVNUM(start_rev)
&& SVN_IS_VALID_REVNUM(end_rev)
&& (end_rev > start_rev))
SVN_ERR(trivial_auth_request(conn, pool, b));
SVN_ERR(log_command(baton, conn, pool, "%s",
svn_log__get_location_segments(abs_path, peg_revision,
start_rev, end_rev,
pool)));
/* No START_REV or PEG_REVISION? We'll use HEAD. */
if (!SVN_IS_VALID_REVNUM(start_rev) || !SVN_IS_VALID_REVNUM(peg_revision))
{
svn_revnum_t youngest;
SVN_CMD_ERR(svn_fs_youngest_rev(&youngest, b->fs, pool));
if (!SVN_IS_VALID_REVNUM(start_rev))
start_rev = youngest;
if (!SVN_IS_VALID_REVNUM(peg_revision))
peg_revision = youngest;
}
/* No END_REV? We'll use 0. */
if (!SVN_IS_VALID_REVNUM(end_rev))
end_rev = 0;
if (end_rev > start_rev)
{
err = svn_error_createf(SVN_ERR_INCORRECT_PARAMS, NULL,
"Get-location-segments end revision must not be "
@ -2475,9 +2499,7 @@ static svn_error_t *get_location_segments(svn_ra_svn_conn_t *conn,
return log_fail_and_flush(err, b, conn, pool);
}
if (SVN_IS_VALID_REVNUM(peg_revision)
&& SVN_IS_VALID_REVNUM(start_rev)
&& (start_rev > peg_revision))
if (start_rev > peg_revision)
{
err = svn_error_createf(SVN_ERR_INCORRECT_PARAMS, NULL,
"Get-location-segments start revision must not "
@ -2485,12 +2507,6 @@ static svn_error_t *get_location_segments(svn_ra_svn_conn_t *conn,
return log_fail_and_flush(err, b, conn, pool);
}
SVN_ERR(trivial_auth_request(conn, pool, b));
SVN_ERR(log_command(baton, conn, pool, "%s",
svn_log__get_location_segments(abs_path, peg_revision,
start_rev, end_rev,
pool)));
/* All the parameters are fine - let's perform the query against the
* repository. */
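A worked example of the new defaulting above (revision numbers invented): if the youngest revision is r500 and the client sends neither a start revision, an end revision nor a peg revision, the request proceeds with peg = start = 500 and end = 0; a request whose end revision is younger than its start revision is still rejected, as is one whose start revision is younger than its peg revision.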
View File
@ -34,6 +34,8 @@
#include "svn_subst.h"
#include "svn_string.h"
#include "private/svn_string_private.h"
#include "sync.h"
#include "svn_private_config.h"
@ -83,6 +85,92 @@ normalize_string(const svn_string_t **str,
return SVN_NO_ERROR;
}
/* Remove r0 references from the mergeinfo string *STR.
*
* r0 was never a valid mergeinfo reference and cannot be committed with
* recent servers, but can be committed through a server older than 1.6.18
* for HTTP or older than 1.6.17 for the other protocols. See issue #4476
* "Mergeinfo containing r0 makes svnsync and dump and load fail".
*
* Set *WAS_CHANGED to TRUE if *STR was changed, otherwise to FALSE.
*/
static svn_error_t *
remove_r0_mergeinfo(const svn_string_t **str,
svn_boolean_t *was_changed,
apr_pool_t *result_pool,
apr_pool_t *scratch_pool)
{
svn_stringbuf_t *new_str = svn_stringbuf_create_empty(result_pool);
apr_array_header_t *lines;
int i;
SVN_ERR_ASSERT(*str && (*str)->data);
*was_changed = FALSE;
/* for each line */
lines = svn_cstring_split((*str)->data, "\n", FALSE, scratch_pool);
for (i = 0; i < lines->nelts; i++)
{
char *line = APR_ARRAY_IDX(lines, i, char *);
char *colon;
char *rangelist;
/* split at the last colon */
colon = strrchr(line, ':');
if (! colon)
return svn_error_createf(SVN_ERR_MERGEINFO_PARSE_ERROR, NULL,
_("Missing colon in svn:mergeinfo "
"property"));
rangelist = colon + 1;
/* remove r0 */
if (colon[1] == '0')
{
if (strncmp(rangelist, "0*,", 3) == 0)
{
rangelist += 3;
}
else if (strcmp(rangelist, "0*") == 0
|| strncmp(rangelist, "0,", 2) == 0
|| strncmp(rangelist, "0-1*", 4) == 0
|| strncmp(rangelist, "0-1,", 4) == 0
|| strcmp(rangelist, "0-1") == 0)
{
rangelist += 2;
}
else if (strcmp(rangelist, "0") == 0)
{
rangelist += 1;
}
else if (strncmp(rangelist, "0-", 2) == 0)
{
rangelist[0] = '1';
}
}
/* reassemble */
if (rangelist[0])
{
if (new_str->len)
svn_stringbuf_appendbyte(new_str, '\n');
svn_stringbuf_appendbytes(new_str, line, colon + 1 - line);
svn_stringbuf_appendcstr(new_str, rangelist);
}
}
if (strcmp((*str)->data, new_str->data) != 0)
{
*was_changed = TRUE;
}
*str = svn_stringbuf__morph_into_string(new_str);
return SVN_NO_ERROR;
}
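A worked example of the rewriting performed by remove_r0_mergeinfo() above (branch names invented): a property value of

/branches/B:0-99,150
/branches/C:0*,5-10
/branches/D:0

becomes

/branches/B:1-99,150
/branches/C:5-10

The leading "0-" range is bumped to start at r1, the "0*," prefix is dropped, the line whose rangelist was only "0" disappears entirely, and *was_changed is set to TRUE because the resulting text differs.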
/* Normalize the encoding and line ending style of the values of properties
* in REV_PROPS that "need translation" (according to
@ -153,6 +241,7 @@ typedef struct edit_baton_t {
svn_boolean_t got_textdeltas;
svn_revnum_t base_revision;
svn_boolean_t quiet;
svn_boolean_t mergeinfo_tweaked; /* Did we tweak svn:mergeinfo? */
svn_boolean_t strip_mergeinfo; /* Are we stripping svn:mergeinfo? */
svn_boolean_t migrate_svnmerge; /* Are we converting svnmerge.py data? */
svn_boolean_t mergeinfo_stripped; /* Did we strip svn:mergeinfo? */
@ -414,8 +503,19 @@ change_file_prop(void *file_baton,
if (svn_prop_needs_translation(name))
{
svn_boolean_t was_normalized;
svn_boolean_t mergeinfo_tweaked = FALSE;
/* Normalize encoding to UTF-8, and EOL style to LF. */
SVN_ERR(normalize_string(&value, &was_normalized,
eb->source_prop_encoding, pool, pool));
/* Correct malformed mergeinfo. */
if (value && strcmp(name, SVN_PROP_MERGEINFO) == 0)
{
SVN_ERR(remove_r0_mergeinfo(&value, &mergeinfo_tweaked,
pool, pool));
if (mergeinfo_tweaked)
eb->mergeinfo_tweaked = TRUE;
}
if (was_normalized)
(*(eb->normalized_node_props_counter))++;
}
@ -513,8 +613,19 @@ change_dir_prop(void *dir_baton,
if (svn_prop_needs_translation(name))
{
svn_boolean_t was_normalized;
svn_boolean_t mergeinfo_tweaked = FALSE;
/* Normalize encoding to UTF-8, and EOL style to LF. */
SVN_ERR(normalize_string(&value, &was_normalized, eb->source_prop_encoding,
pool, pool));
/* Maybe adjust svn:mergeinfo. */
if (value && strcmp(name, SVN_PROP_MERGEINFO) == 0)
{
SVN_ERR(remove_r0_mergeinfo(&value, &mergeinfo_tweaked,
pool, pool));
if (mergeinfo_tweaked)
eb->mergeinfo_tweaked = TRUE;
}
if (was_normalized)
(*(eb->normalized_node_props_counter))++;
}
@ -548,6 +659,10 @@ close_edit(void *edit_baton,
{
if (eb->got_textdeltas)
SVN_ERR(svn_cmdline_printf(pool, "\n"));
if (eb->mergeinfo_tweaked)
SVN_ERR(svn_cmdline_printf(pool,
"NOTE: Adjusted Subversion mergeinfo in "
"this revision.\n"));
if (eb->mergeinfo_stripped)
SVN_ERR(svn_cmdline_printf(pool,
"NOTE: Dropped Subversion mergeinfo "
View File
@ -25,7 +25,7 @@
"""
# $HeadURL: http://svn.apache.org/repos/asf/subversion/branches/1.8.x/win-tests.py $
# $LastChangedRevision: 1492044 $
# $LastChangedRevision: 1692801 $
import os, sys, subprocess
import filecmp
@ -481,6 +481,7 @@ def __init__(self, abs_httpd_dir, abs_objdir, abs_builddir, httpd_port,
self.httpd_config = os.path.join(self.root, 'httpd.conf')
self.httpd_users = os.path.join(self.root, 'users')
self.httpd_mime_types = os.path.join(self.root, 'mime.types')
self.httpd_groups = os.path.join(self.root, 'groups')
self.abs_builddir = abs_builddir
self.abs_objdir = abs_objdir
self.service_name = 'svn-test-httpd-' + str(httpd_port)
@ -494,6 +495,7 @@ def __init__(self, abs_httpd_dir, abs_objdir, abs_builddir, httpd_port,
create_target_dir(self.root_dir)
self._create_users_file()
self._create_groups_file()
self._create_mime_types_file()
self._create_dontdothat_file()
@ -540,6 +542,8 @@ def __init__(self, abs_httpd_dir, abs_objdir, abs_builddir, httpd_port,
if self.httpd_ver >= 2.2:
fp.write(self._sys_module('auth_basic_module', 'mod_auth_basic.so'))
fp.write(self._sys_module('authn_file_module', 'mod_authn_file.so'))
fp.write(self._sys_module('authz_groupfile_module', 'mod_authz_groupfile.so'))
fp.write(self._sys_module('authz_host_module', 'mod_authz_host.so'))
else:
fp.write(self._sys_module('auth_module', 'mod_auth.so'))
fp.write(self._sys_module('alias_module', 'mod_alias.so'))
@ -562,6 +566,7 @@ def __init__(self, abs_httpd_dir, abs_objdir, abs_builddir, httpd_port,
# Define two locations for repositories
fp.write(self._svn_repo('repositories'))
fp.write(self._svn_repo('local_tmp'))
fp.write(self._svn_authz_repo())
# And two redirects for the redirect tests
fp.write('RedirectMatch permanent ^/svn-test-work/repositories/'
@ -592,6 +597,17 @@ def _create_users_file(self):
'jrandom', 'rayjandom'])
os.spawnv(os.P_WAIT, htpasswd, ['htpasswd.exe', '-bp', self.httpd_users,
'jconstant', 'rayjandom'])
os.spawnv(os.P_WAIT, htpasswd, ['htpasswd.exe', '-bp', self.httpd_users,
'JRANDOM', 'rayjandom'])
os.spawnv(os.P_WAIT, htpasswd, ['htpasswd.exe', '-bp', self.httpd_users,
'JCONSTANT', 'rayjandom'])
def _create_groups_file(self):
"Create groups for mod_authz_svn tests"
fp = open(self.httpd_groups, 'w')
fp.write('random: jrandom\n')
fp.write('constant: jconstant\n')
fp.close()
def _create_mime_types_file(self):
"Create empty mime.types file"
@ -652,6 +668,153 @@ def _svn_repo(self, name):
' DontDoThatConfigFile ' + self._quote(self.dontdothat_file) + '\n' \
'</Location>\n'
def _svn_authz_repo(self):
local_tmp = os.path.join(self.abs_builddir,
CMDLINE_TEST_SCRIPT_NATIVE_PATH,
'svn-test-work', 'local_tmp')
return \
'<Location /authz-test-work/anon>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' <IfModule mod_authz_core.c>' + '\n' \
' Require all granted' + '\n' \
' </IfModule>' + '\n' \
' <IfModule !mod_authz_core.c>' + '\n' \
' Allow from all' + '\n' \
' </IfModule>' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/mixed>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' Satisfy Any' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/mixed-noauthwhenanon>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' AuthzSVNNoAuthWhenAnonymousAllowed On' + '\n' \
' SVNPathAuthz On' + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/authn>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/authn-anonoff>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' AuthzSVNAnonymous Off' + '\n' \
' SVNPathAuthz On' + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/authn-lcuser>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' AuthzForceUsernameCase Lower' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/authn-lcuser>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' Require valid-user' + '\n' \
' AuthzForceUsernameCase Lower' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/authn-group>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' AuthGroupFile ' + self._quote(self.httpd_groups) + '\n' \
' Require group random' + '\n' \
' AuthzSVNAuthoritative Off' + '\n' \
' SVNPathAuthz On' + '\n' \
'</Location>' + '\n' \
'<IfModule mod_authz_core.c>' + '\n' \
'<Location /authz-test-work/sallrany>' + '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' AuthzSendForbiddenOnFailure On' + '\n' \
' Satisfy All' + '\n' \
' <RequireAny>' + '\n' \
' Require valid-user' + '\n' \
' Require expr req(\'ALLOW\') == \'1\'' + '\n' \
' </RequireAny>' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'<Location /authz-test-work/sallrall>'+ '\n' \
' DAV svn' + '\n' \
' SVNParentPath ' + local_tmp + '\n' \
' AuthzSVNAccessFile ' + self._quote(self.authz_file) + '\n' \
' SVNAdvertiseV2Protocol ' + self.httpv2_option + '\n' \
' SVNListParentPath On' + '\n' \
' AuthType Basic' + '\n' \
' AuthName "Subversion Repository"' + '\n' \
' AuthUserFile ' + self._quote(self.httpd_users) + '\n' \
' AuthzSendForbiddenOnFailure On' + '\n' \
' Satisfy All' + '\n' \
' <RequireAll>' + '\n' \
' Require valid-user' + '\n' \
' Require expr req(\'ALLOW\') == \'1\'' + '\n' \
' </RequireAll>' + '\n' \
' SVNPathAuthz ' + self.path_authz_option + '\n' \
'</Location>' + '\n' \
'</IfModule>' + '\n' \
def start(self):
if self.service:
self._start_service()
@ -786,6 +949,10 @@ def _stop_daemon(self):
log_file = os.path.join(abs_builddir, log)
fail_log_file = os.path.join(abs_builddir, faillog)
if run_httpd:
httpd_version = "%.1f" % daemon.httpd_ver
else:
httpd_version = None
th = run_tests.TestHarness(abs_srcdir, abs_builddir,
log_file,
fail_log_file,
@ -795,6 +962,7 @@ def _stop_daemon(self):
fsfs_sharding, fsfs_packing,
list_tests, svn_bin, mode_filter,
milestone_filter,
httpd_version=httpd_version,
set_log_level=log_level, ssl_cert=ssl_cert)
old_cwd = os.getcwd()
try:
View File
@ -105,7 +105,7 @@
#define PACKAGE_NAME "subversion"
/* Define to the full name and version of this package. */
#define PACKAGE_STRING "subversion 1.8.10"
#define PACKAGE_STRING "subversion 1.8.14"
/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "subversion"
@ -114,11 +114,14 @@
#define PACKAGE_URL ""
/* Define to the version of this package. */
#define PACKAGE_VERSION "1.8.10"
#define PACKAGE_VERSION "1.8.14"
/* Define to 1 if you have the ANSI C header files. */
#define STDC_HEADERS 1
/* Defined to allow building against httpd 2.4 with broken auth */
/* #undef SVN_ALLOW_BROKEN_HTTPD_AUTH */
/* Define to the Python/C API format character suitable for apr_int64_t */
#define SVN_APR_INT64_T_PYCFMT "l"
@ -129,10 +132,10 @@
#define SVN_BINDIR "/usr/bin"
/* Defined to the config.guess name of the build system */
#define SVN_BUILD_HOST "bikeshed-malachite-topaz-amber-freebsd"
#define SVN_BUILD_HOST "bikeshed-rgb-freebsd"
/* Defined to the config.guess name of the build target */
#define SVN_BUILD_TARGET "bikeshed-malachite-topaz-amber-freebsd"
#define SVN_BUILD_TARGET "bikeshed-rgb-freebsd"
/* The path of a default editor for the client. */
/* #undef SVN_CLIENT_EDITOR */
@ -153,7 +156,7 @@
#define SVN_FS_WANT_DB_PATCH 14
/* Define if compiler provides atomic builtins */
#define SVN_HAS_ATOMIC_BUILTINS 0
/* #undef SVN_HAS_ATOMIC_BUILTINS */
/* Is GNOME Keyring support enabled? */
/* #undef SVN_HAVE_GNOME_KEYRING */