git/t/perf/perf-lib.sh
Jeff King aad1d1c0d5 t/perf/perf-lib: fix assignment of TEST_OUTPUT_DIRECTORY
Using the perf suite's "run" helper in a vanilla build fails like this:

  $ make && (cd t/perf && ./run p0000-perf-lib-sanity.sh)
  === Running 1 tests in this tree ===
  perf 1 - test_perf_default_repo works: 1 2 3 ok
  perf 2 - test_checkout_worktree works: 1 2 3 ok
  ok 3 - test_export works
  perf 4 - export a weird var: 1 2 3 ok
  perf 5 - éḿíẗ ńöń-ÁŚĆÍÍ ćḧáŕáćẗéŕś: 1 2 3 ok
  ok 6 - test_export works with weird vars
  perf 7 - important variables available in subshells: 1 2 3 ok
  perf 8 - test-lib-functions correctly loaded in subshells: 1 2 3 ok
  # passed all 8 test(s)
  1..8
  cannot open test-results/p0000-perf-lib-sanity.subtests: No such file or directory at ./aggregate.perl line 159.

It is trying to aggregate results written into t/perf/test-results, but
the p0000 script did not write anything there.

The "run" script looks in $TEST_OUTPUT_DIRECTORY/test-results, or if
that variable is not set, in test-results in the current working
directory (which should be t/perf itself). It pulls the value of
$TEST_OUTPUT_DIRECTORY from the GIT-BUILD-OPTIONS file.

But that doesn't quite match the setup in perf-lib.sh (which is what
scripts like p0000 use). There we do this at the top of the script:

  TEST_OUTPUT_DIRECTORY=$(pwd)

and then let test-lib.sh append "/test-results" to that. Historically,
that made the vanilla case work: we'd always use t/perf/test-results.
But when $TEST_OUTPUT_DIRECTORY was set, it would break.

Commit 5756ccd181 (t/perf: fix benchmarks with out-of-tree builds,
2025-04-28) fixed that second case by loading GIT-BUILD-OPTIONS
ourselves. But that broke the vanilla case!

Now our setting of $TEST_OUTPUT_DIRECTORY in perf-lib.sh is ignored,
because it is overwritten by GIT-BUILD-OPTIONS. And when test-lib.sh
sees that the output directory is empty, it defaults to t/test-results,
rather than t/perf/test-results.

Nobody seems to have noticed, probably for two reasons:

  1. It only matters if you're trying to aggregate results (like the
     "run" script does). Just running "./p0000-perf-lib-sanity.sh"
     manually still produces useful output; the stored result files are
     just in an unexpected place.

  2. There might be leftover files in t/perf/test-results from previous
     runs (before 5756ccd181). In particular, the ".subtests" files
     don't tend to change, and the lack of that file is what causes it
     to barf completely. So it's possible that the aggregation could
     have been showing stale results that did not match the run that
     just happened.

We can fix it by setting TEST_OUTPUT_DIRECTORY only after we've loaded
GIT-BUILD-OPTIONS, so that we override its value and not the other way
around. And we'll do so only when the variable is not set, which should
retain the fix for that case from 5756ccd181.
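The ordering relies on the shell's ":=" expansion, which assigns the default only when the variable is unset or empty. A minimal sketch with hypothetical paths (the real code uses $(pwd), captured before test-lib.sh changes directory):

```shell
# Hypothetical stand-ins for the real values; the point is the ordering:
# GIT-BUILD-OPTIONS is sourced first, the perf default applied second.
TEST_OUTPUT_DIRECTORY=       # pretend GIT-BUILD-OPTIONS left it empty
perf_dir=/path/to/t/perf     # hypothetical; the real code uses $(pwd)
: ${TEST_OUTPUT_DIRECTORY:=$perf_dir}
echo "$TEST_OUTPUT_DIRECTORY"    # -> /path/to/t/perf
```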

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2026-01-07 08:56:36 +09:00



# Performance testing framework. Each perf script starts much like
# a normal test script, except it sources this library instead of
# test-lib.sh. See t/perf/README for documentation.
#
# Copyright (c) 2011 Thomas Rast
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see https://www.gnu.org/licenses/ .

# These variables must be set before the inclusion of test-lib.sh below,
# because it will change our working directory.
TEST_DIRECTORY=$(pwd)/..
perf_dir=$(pwd)
TEST_NO_CREATE_REPO=t
TEST_NO_MALLOC_CHECK=t
# GIT-BUILD-OPTIONS, sourced by test-lib.sh, overwrites the `GIT_PERF_*`
# values that are set by the user (if any). Let's stash them away as
# `eval`-able assignments.
git_perf_settings="$(env |
	sed -n "/^GIT_PERF_/{
		# escape all single-quotes in the value
		s/'/'\\\\''/g
		# turn this into an eval-able assignment
		s/^\\([^=]*=\\)\\(.*\\)/\\1'\\2'/p
	}")"
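The stash-and-restore trick can be exercised in isolation. Here GIT_PERF_DEMO is a hypothetical variable standing in for real knobs like GIT_PERF_REPEAT_COUNT, with a value that deliberately contains a single quote:

```shell
# GIT_PERF_DEMO is hypothetical; real GIT_PERF_* variables get the same
# treatment: quote the value, escape embedded quotes, stash, eval later.
GIT_PERF_DEMO="it's fast"
export GIT_PERF_DEMO
settings="$(env |
	sed -n "/^GIT_PERF_DEMO/{
		s/'/'\\\\''/g
		s/^\\([^=]*=\\)\\(.*\\)/\\1'\\2'/p
	}")"
unset GIT_PERF_DEMO          # simulate GIT-BUILD-OPTIONS clobbering it
eval "$settings"             # restore the saved assignment
echo "$GIT_PERF_DEMO"        # -> it's fast
```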
# While test-lib.sh computes the build directory for us, we also have to do the
# same thing in order to locate the script via GIT-BUILD-OPTIONS in the first
# place.
GIT_BUILD_DIR="${GIT_BUILD_DIR:-$TEST_DIRECTORY/..}"
if test -f "$GIT_BUILD_DIR/GIT-BUILD-DIR"
then
	GIT_BUILD_DIR="$(cat "$GIT_BUILD_DIR/GIT-BUILD-DIR")" || exit 1
	# On Windows, we must convert Windows paths lest they contain a colon
	case "$(uname -s)" in
	*MINGW*)
		GIT_BUILD_DIR="$(cygpath -au "$GIT_BUILD_DIR")"
		;;
	esac
fi

if test ! -f "$GIT_BUILD_DIR"/GIT-BUILD-OPTIONS
then
	echo >&2 'error: GIT-BUILD-OPTIONS missing (has Git been built?).'
	exit 1
fi
. "$GIT_BUILD_DIR"/GIT-BUILD-OPTIONS
: ${TEST_OUTPUT_DIRECTORY:=$perf_dir}
. "$GIT_SOURCE_DIR"/t/test-lib.sh
# Then restore GIT_PERF_* settings.
eval "$git_perf_settings"
unset GIT_CONFIG_NOSYSTEM
GIT_CONFIG_SYSTEM="$TEST_DIRECTORY/perf/config"
export GIT_CONFIG_SYSTEM
if test -n "$GIT_TEST_INSTALLED" && test -z "$PERF_SET_GIT_TEST_INSTALLED"
then
	error "Do not use GIT_TEST_INSTALLED with the perf tests.
Instead use:
    ./run <path-to-git> -- <tests>
See t/perf/README for details."
fi
# Variables from test-lib that are normally internal to the tests; we
# need to export them for test_perf subshells
export TEST_DIRECTORY TRASH_DIRECTORY GIT_BUILD_DIR GIT_TEST_CMP
MODERN_GIT=$GIT_BUILD_DIR/bin-wrappers/git
export MODERN_GIT
MODERN_SCALAR=$GIT_BUILD_DIR/bin-wrappers/scalar
export MODERN_SCALAR
perf_results_dir=$TEST_RESULTS_DIR
test -n "$GIT_PERF_SUBSECTION" && perf_results_dir="$perf_results_dir/$GIT_PERF_SUBSECTION"
mkdir -p "$perf_results_dir"
rm -f "$perf_results_dir"/$(basename "$0" .sh).subtests
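The per-script result files are keyed by the script name with its ".sh" suffix stripped via basename; a quick illustration with a hypothetical run of p0000-perf-lib-sanity.sh:

```shell
# basename's second argument removes the suffix, so each script owns a
# family of files like <name>.subtests and <name>.<n>.result.
name=$(basename "p0000-perf-lib-sanity.sh" .sh)
echo "$name.subtests"        # -> p0000-perf-lib-sanity.subtests
```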
die_if_build_dir_not_repo () {
	if ! ( cd "$TEST_DIRECTORY/.." &&
		git rev-parse --build-dir >/dev/null 2>&1 ); then
		error "No $1 defined, and your build directory is not a repo"
	fi
}

if test -z "$GIT_PERF_REPO"; then
	die_if_build_dir_not_repo '$GIT_PERF_REPO'
	GIT_PERF_REPO=$TEST_DIRECTORY/..
fi
if test -z "$GIT_PERF_LARGE_REPO"; then
	die_if_build_dir_not_repo '$GIT_PERF_LARGE_REPO'
	GIT_PERF_LARGE_REPO=$TEST_DIRECTORY/..
fi
test_perf_do_repo_symlink_config_ () {
	test_have_prereq SYMLINKS || git config core.symlinks false
}

test_perf_copy_repo_contents () {
	for stuff in "$1"/*
	do
		case "$stuff" in
		*/objects|*/hooks|*/config|*/commondir|*/gitdir|*/worktrees|*/fsmonitor--daemon*)
			;;
		*)
			cp -R "$stuff" "$repo/.git/" || exit 1
			;;
		esac
	done
}
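The skip list above can be checked standalone, here against a hypothetical file list rather than a real repository: administrative entries (objects, hooks, per-worktree state) are handled separately, everything else would be copied.

```shell
# Hypothetical repo contents; only the non-administrative entries land
# in the "copied" list.
copied=
for stuff in repo/objects repo/hooks repo/config repo/packed-refs repo/refs
do
	case "$stuff" in
	*/objects|*/hooks|*/config|*/commondir|*/gitdir|*/worktrees|*/fsmonitor--daemon*)
		;; # skipped: handled by dedicated logic elsewhere
	*)
		copied="$copied $stuff"
		;;
	esac
done
echo "copied:$copied"        # -> copied: repo/packed-refs repo/refs
```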
test_perf_create_repo_from () {
	test "$#" = 2 ||
	BUG "not 2 parameters to test-create-repo"
	repo="$1"
	source="$2"
	source_git="$("$MODERN_GIT" -C "$source" rev-parse --git-dir)"
	objects_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-path objects)"
	common_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-common-dir)"
	refformat="$("$MODERN_GIT" -C "$source" rev-parse --show-ref-format)"
	objectformat="$("$MODERN_GIT" -C "$source" rev-parse --show-object-format)"
	mkdir -p "$repo/.git"
	(
		cd "$source" &&
		{ cp -Rl "$objects_dir" "$repo/.git/" 2>/dev/null ||
		  cp -R "$objects_dir" "$repo/.git/"; } &&
		# common_dir must come first here, since we want source_git to
		# take precedence and overwrite any overlapping files
		test_perf_copy_repo_contents "$common_dir"
		if test "$source_git" != "$common_dir"
		then
			test_perf_copy_repo_contents "$source_git"
		fi
	) &&
	(
		cd "$repo" &&
		"$MODERN_GIT" init -q --ref-format="$refformat" --object-format="$objectformat" &&
		test_perf_do_repo_symlink_config_ &&
		mv .git/hooks .git/hooks-disabled 2>/dev/null &&
		if test -f .git/index.lock
		then
			# We may be copying a repo that can't run "git
			# status" due to a locked index. Since we have
			# a copy it's fine to remove the lock.
			rm .git/index.lock
		fi &&
		if test_bool_env GIT_PERF_USE_SCALAR false
		then
			"$MODERN_SCALAR" register
		fi
	) || error "failed to copy repository '$source' to '$repo'"
}
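The object store is copied with "cp -Rl" (recursive hardlinks, cheap and disk-sharing), falling back to a plain "cp -R" where hardlinks are unsupported, e.g. across filesystems or on cp implementations without -l. A standalone sketch of that fallback in a temporary directory:

```shell
# Build a tiny fake object store and copy it the same way: try
# hardlinks first, silently fall back to a real copy.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/objects" "$tmp/dst"
echo data >"$tmp/src/objects/loose"
{ cp -Rl "$tmp/src/objects" "$tmp/dst/" 2>/dev/null ||
  cp -R "$tmp/src/objects" "$tmp/dst/"; }
cat "$tmp/dst/objects/loose"     # -> data
```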
# call at least one of these to establish an appropriately-sized repository
test_perf_fresh_repo () {
	repo="${1:-$TRASH_DIRECTORY}"
	"$MODERN_GIT" init -q "$repo" &&
	(
		cd "$repo" &&
		test_perf_do_repo_symlink_config_ &&
		if test_bool_env GIT_PERF_USE_SCALAR false
		then
			"$MODERN_SCALAR" register
		fi
	)
}

test_perf_default_repo () {
	test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_REPO"
}

test_perf_large_repo () {
	if test "$GIT_PERF_LARGE_REPO" = "$GIT_BUILD_DIR"; then
		echo "warning: \$GIT_PERF_LARGE_REPO is \$GIT_BUILD_DIR." >&2
		echo "warning: This will work, but may not be a sufficiently large repo" >&2
		echo "warning: for representative measurements." >&2
	fi
	test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_LARGE_REPO"
}

test_checkout_worktree () {
	git checkout-index -u -a ||
	error "git checkout-index failed"
}
# Performance tests should never fail. If they do, stop immediately
immediate=t
# Perf tests require GNU time
case "$(uname -s)" in Darwin) GTIME="${GTIME:-gtime}";; esac
GTIME="${GTIME:-/usr/bin/time}"
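The lookup honors a user-supplied $GTIME, defaults to "gtime" on macOS (the usual installed name for GNU time there), and to /usr/bin/time elsewhere. A sketch with the uname result stubbed out:

```shell
# "os" stands in for $(uname -s) so the Darwin branch is exercised
# deterministically.
unset GTIME
os=Darwin
case "$os" in Darwin) GTIME="${GTIME:-gtime}";; esac
GTIME="${GTIME:-/usr/bin/time}"
echo "$GTIME"                # -> gtime
```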
test_run_perf_ () {
	test_cleanup=:
	test_export_="test_cleanup"
	export test_cleanup test_export_
	"$GTIME" -f "%E %U %S" -o test_time.$i "$TEST_SHELL_PATH" -c '
. '"$TEST_DIRECTORY"/test-lib-functions.sh'
test_export () {
	test_export_="$test_export_ $*"
}
'"$1"'
ret=$?
needles=
for v in $test_export_
do
	needles="$needles;s/^$v=/export $v=/p"
done
set | sed -n "s'"/'/'\\\\''/g"'$needles" >test_vars
exit $ret' >&3 2>&4
	eval_ret=$?

	if test $eval_ret = 0 || test -n "$expecting_failure"
	then
		test_eval_ "$test_cleanup"
		. ./test_vars || error "failed to load updated environment"
	fi
	if test "$verbose" = "t" && test -n "$HARNESS_ACTIVE"; then
		echo ""
	fi
	return "$eval_ret"
}
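The variable hand-off above works because the timed child shell writes eval-able "export NAME=..." lines for the selected variables into a file that the parent then sources. A stripped-down sketch ("result" is a hypothetical exported name):

```shell
# Child shell: pick out the wanted variable from "set" output and turn
# its assignment into an export line.
sh -c '
	result=42
	test_export_="result"
	: >test_vars
	for v in $test_export_
	do
		set | sed -n "s/^$v=/export $v=/p" >>test_vars
	done
'
# Parent shell: pull the child result back in.
. ./test_vars
echo "$result"               # -> 42
```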
test_wrapper_ () {
	local test_wrapper_func_="$1"; shift
	local test_title_="$1"; shift
	test_start_
	test_prereq=
	test_perf_setup_=
	while test $# != 0
	do
		case $1 in
		--prereq)
			test_prereq=$2
			shift
			;;
		--setup)
			test_perf_setup_=$2
			shift
			;;
		*)
			break
			;;
		esac
		shift
	done
	test "$#" = 1 || BUG "test_wrapper_ needs 2 positional parameters"
	export test_prereq
	export test_perf_setup_

	if ! test_skip "$test_title_" "$@"
	then
		base=$(basename "$0" .sh)
		echo "$test_count" >>"$perf_results_dir"/$base.subtests
		echo "$test_title_" >"$perf_results_dir"/$base.$test_count.descr
		base="$perf_results_dir"/"$PERF_RESULTS_PREFIX$(basename "$0" .sh)"."$test_count"
		"$test_wrapper_func_" "$test_title_" "$@"
	fi

	test_finish_
}
test_perf_ () {
	if test -z "$verbose"; then
		printf "%s" "perf $test_count - $1:"
	else
		echo "perf $test_count - $1:"
	fi
	for i in $(test_seq 1 $GIT_PERF_REPEAT_COUNT); do
		if test -n "$test_perf_setup_"
		then
			say >&3 "setup: $test_perf_setup_"
			if ! test_eval_ $test_perf_setup_
			then
				test_failure_ "$test_perf_setup_"
				break
			fi
		fi
		say >&3 "running: $2"
		if test_run_perf_ "$2"
		then
			if test -z "$verbose"; then
				printf " %s" "$i"
			else
				echo "* timing run $i/$GIT_PERF_REPEAT_COUNT:"
			fi
		else
			test -z "$verbose" && echo
			test_failure_ "$@"
			break
		fi
	done
	if test -z "$verbose"; then
		echo " ok"
	else
		test_ok_ "$1"
	fi
	"$PERL_PATH" "$TEST_DIRECTORY"/perf/min_time.perl test_time.* >"$base".result
	rm test_time.*
}
# Usage: test_perf 'title' [options] 'perf-test'
#	Run the performance test script specified in perf-test with
#	optional prerequisite and setup steps.
# Options:
#	--prereq prerequisites: Skip the test if prerequisites aren't met
#	--setup "setup-steps": Run setup steps prior to each measured iteration
#
test_perf () {
	test_wrapper_ test_perf_ "$@"
}
test_size_ () {
	if test -n "$test_perf_setup_"
	then
		say >&3 "setup: $test_perf_setup_"
		test_eval_ $test_perf_setup_
	fi

	say >&3 "running: $2"
	if test_eval_ "$2" 3>"$base".result; then
		test_ok_ "$1"
	else
		test_failure_ "$@"
	fi
}

# Usage: test_size 'title' [options] 'size-test'
#	Run the size test script specified in size-test with optional
#	prerequisites and setup steps. Returns the numeric value
#	returned by size-test.
# Options:
#	--prereq prerequisites: Skip the test if prerequisites aren't met
#	--setup "setup-steps": Run setup steps prior to the size measurement
test_size () {
	test_wrapper_ test_size_ "$@"
}
# We extend test_done to print timings at the end (./run disables this
# and does it after running everything)
test_at_end_hook_ () {
	if test -z "$GIT_PERF_AGGREGATING_LATER"; then
		(
			cd "$TEST_DIRECTORY"/perf &&
			"$PERL_PATH" "$GIT_SOURCE_DIR"/t/perf/aggregate.perl --results-dir="$TEST_RESULTS_DIR" $(basename "$0")
		)
	fi
}

test_export () {
	export "$@"
}
test_lazy_prereq PERF_EXTRA 'test_bool_env GIT_PERF_EXTRA false'