Code to maintain a mapping between object names computed with
multiple hash functions is being added, written in Rust.
* bc/sha1-256-interop-02:
object-file-convert: always make sure object ID algo is valid
rust: add a small wrapper around the hashfile code
rust: add a new binary object map format
rust: add functionality to hash an object
rust: add a build.rs script for tests
hash: expose hash context functions to Rust
write-or-die: add an fsync component for the object map
csum-file: define hashwrite's count as a uint32_t
rust: add additional helpers for ObjectID
hash: add a function to look up hash algo structs
rust: add a hash algorithm abstraction
rust: add a ObjectID struct
hash: use uint32_t for object_id algorithm
conversion: don't crash when no destination algo
repository: require Rust support for interoperability
"git replay" is taught to drop commits that become empty (not the
ones that are empty in the original).
* pw/replay-drop-empty:
replay: drop commits that become empty
"git history" history rewriting UI.
Comments?
* ps/history:
builtin/history: implement "reword" subcommand
builtin: add new "history" command
wt-status: provide function to expose status for trees
replay: yield the object ID of the final rewritten commit
replay: small set of cleanups
builtin/replay: move core logic into "libgit.a"
builtin/replay: extract core logic to replay revisions
* ps/ref-consistency-checks:
builtin/fsck: drop `fsck_head_link()`
builtin/fsck: move generic HEAD check into `refs_fsck()`
builtin/fsck: move generic object ID checks into `refs_fsck()`
refs/reftable: introduce generic checks for refs
refs/reftable: fix consistency checks with worktrees
refs/reftable: extract function to retrieve backend for worktree
refs/reftable: adapt includes to become consistent
refs/files: introduce function to perform normal ref checks
refs/files: extract generic symref target checks
fsck: drop unused fields from `struct fsck_ref_report`
refs/files: perform consistency checks for root refs
refs/files: improve error handling when verifying symrefs
refs/files: extract function to check single ref
refs/files: remove useless indirection
refs/files: remove `refs_check_dir` parameter
refs/files: move fsck functions into global scope
refs/files: simplify iterating through root refs
The function `fsck_head_link()` was historically used to perform a
couple of consistency checks for refs. (Almost) all of these checks have
now been moved into the refs subsystem. There's only a single check
remaining that verifies whether `refs_resolve_ref_unsafe()` returns a
`NULL` pointer. This may happen in a couple of cases:
- When `refname_is_safe()` declares the ref to be unsafe. We already have
checks for this as we verify refnames with `check_refname_format()`.
- When the ref doesn't exist. A repository without "HEAD" is
completely broken though, and we would notice this error ahead of
time already.
- In case the caller passes `RESOLVE_REF_READING` and the ref is a
symref that doesn't resolve. We don't pass this flag though.
As such, this check doesn't cover anything anymore that isn't already
covered by `refs_fsck()`. Drop it, which also allows us to inline the
call to `refs_resolve_ref_unsafe()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the check that detects "HEAD" refs that do not point at a branch
into `refs_fsck()`. This follows the same motivation as the preceding
commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While most of the logic that verifies the consistency of refs is
driven by `refs_fsck()`, we still have a small handful of checks in
`fsck_head_link()`. These checks don't use the git-fsck(1) reporting
infrastructure, and as such it is impossible to, for example, disable
some of those checks.
One such check detects refs that point to the all-zeroes object ID.
Extract this check into the generic `refs_fsck_ref()` function that is
used by both the "files" and "reftable" backends.
Note that this will cause us to not return an error code from
`fsck_head_link()` anymore in case this error was detected. This is fine
though: the only caller of this function does not check the error code
anyway. To demonstrate this, adapt the function to drop its return value
altogether. The function will be removed in a subsequent commit anyway.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a preceding commit we have extracted generic checks for both direct
and symbolic refs that apply for all backends. Wire up those checks for
the "reftable" backend.
Note that this is done by iterating through all refs manually with the
low-level reftable ref iterator. We explicitly don't want to use the
higher-level iterator that is exposed to users of the reftable backend
as that iterator may swallow for example broken refs.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ref consistency checks are driven via `cmd_refs_verify()`. That
function loops through all worktrees (including the main worktree) and
then checks the ref store for each of them individually. It follows that
the backend is expected to only verify refs that belong to the specified
worktree.
While the "files" backend handles this correctly, the "reftable" backend
doesn't. In fact, it completely ignores the passed worktree and instead
verifies refs of _all_ worktrees. The consequence is that we'll end up
checking every ref store N times, where N is the number of worktrees.
Or rather, that would be the case if we actually iterated through the
worktree reftable stacks correctly. We use `strmap_for_each_entry()`
to iterate through the stacks, but the map is in fact not even properly
populated. So instead of checking stacks N^2 times, we actually only end
up checking the reftable stack of the main worktree.
Fix this bug by only verifying the stack of the passed-in worktree and
constructing the backends via `backend_for_worktree()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pull out the logic to retrieve a backend for a given worktree. This
function will be used in a subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Adapt the includes to be sorted and to use include paths that are
relative to the "refs/" directory.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a subsequent commit we'll introduce new generic checks for direct
refs. These checks will be independent of the actual backend.
Introduce a new function `refs_fsck_ref()` that will be used for this
purpose. At the current point in time it's still empty, but it will get
populated in a subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The consistency checks for the "files" backend contain a couple of
verifications for symrefs that verify generic properties of the target
reference. These properties need to hold for every backend, no matter
whether it's using the "files" or "reftable" backend.
Reimplementing these checks for every single backend doesn't really make
sense. Extract it into a generic `refs_fsck_symref()` function that can
be used by other backends, as well. The "reftable" backend will be wired
up in a subsequent commit.
While at it, improve the consistency checks so that we don't complain
about refs pointing to a non-ref target in case the target refname
format does not verify. Otherwise it's very likely that we'll generate
both error messages, which feels somewhat redundant in this case.
Note that the function has a couple of `UNUSED` parameters. These will
become referenced in a subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `struct fsck_ref_report` has a couple of fields that are intended to
improve the error reporting for broken ref reports by showing which
object ID or target reference the ref points to. These fields are never
set though and are thus essentially unused.
Remove them.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While the "files" backend already knows to perform consistency checks
for the "refs/" hierarchy, it doesn't verify any of its root refs. Plug
this omission.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The error handling when verifying symbolic refs is a bit on the wild
side:
- `fsck_report_ref()` can be told to ignore specific errors. If an
error has been ignored and a previous check raised an unignored
error, then assigning `ret = fsck_report_ref()` will cause us to
swallow the previous error.
- When the target reference is not valid we bail out early without
checking for other errors.
Fix both of these issues by consistently or'ing the return value and not
bailing out early.
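To illustrate the difference, here is a minimal, self-contained sketch of
the two patterns; the check functions are hypothetical stand-ins for
fsck_report_ref() and friends, not Git's actual API:

    #include <stdio.h>

    static int check_refname(void) { return -1; } /* pretend this fails */
    static int check_target(void)  { return 0; }  /* pretend this passes */

    int main(void)
    {
        int ret;

        /* Buggy: the second assignment overwrites the first error. */
        ret = check_refname();
        ret = check_target();
        printf("assignment: %d (error swallowed)\n", ret);

        /* Fixed: or-ing accumulates errors, with no early bail-out. */
        ret = 0;
        ret |= check_refname();
        ret |= check_target();
        printf("or-ing: %d (error kept)\n", ret);
        return 0;
    }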
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When checking the consistency of references we create a directory
iterator and then verify each single reference in a loop. The logic to
perform the actual checks is embedded into that loop, which makes it
hard to reuse. But in a subsequent commit we're about to introduce a
second path that wants to verify references.
Prepare for this by extracting the logic to check a single reference
into a standalone function.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `files_fsck_refs()` only has a single callsite and forwards
all of its arguments as-is, so it's basically a useless indirection.
Inline the function call.
While at it, also remove the bitwise or that we have for return values.
We don't really want to or them at all, but rather just want to return
an error in case either of the functions has failed.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The parameter `refs_check_dir` determines which directory we want to
check references for. But as we always want to check the complete
refs hierarchy, this parameter is always set to "refs".
Drop the parameter and hardcode it.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When performing consistency checks we pass the functions that perform
the verification down the calling stack. This is somewhat unnecessary
though, as the set of functions doesn't ever change.
Simplify the code by moving the array into global scope and remove the
parameter.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When iterating through root refs we first need to determine the
directory in which the refs live. This is done by retrieving the root of
the loose refs via `refs->loose->root->name`, and putting it through
`files_ref_path()` to derive the final path.
This is somewhat redundant though: the root name of the loose files
cache is always going to be the empty string. As such, we always end up
passing that empty string to `files_ref_path()` as the ref hierarchy we
want to start from. And this actually makes sense: `files_ref_path()` already
computes the location of the root directory, so of course we need to
pass the empty string for the ref hierarchy itself. So going via the
loose ref cache to figure out that the root of a ref hierarchy is empty
is only causing confusion.
But next to the added confusion, it can also lead to a segfault. The
loose ref cache is populated lazily, so it may not always be set. It
seems to be sheer luck that this is a condition we do not currently hit.
The right thing to do would be to call `get_loose_ref_cache()`, which
knows to populate the cache if required.
Simplify the code and fix the potential segfault by simply removing the
indirection via the loose ref cache completely.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"auto filter" logic for large-object promisor remote.
Comments?
* cc/lop-filter-auto:
fetch-pack: wire up and enable auto filter logic
promisor-remote: keep advertised filter in memory
list-objects-filter-options: implement auto filter resolution
list-objects-filter-options: support 'auto' mode for --filter
doc: fetch: document `--filter=<filter-spec>` option
fetch: make filter_options local to cmd_fetch()
clone: make filter_options local to cmd_clone()
promisor-remote: allow a client to store fields
promisor-remote: refactor initialising field lists
"git status" learned to show comparison between the current branch
and its push destination as well as its upstream, when the two are
different (i.e., triangular workflow).
Comments?
* hn/status-compare-with-push:
status: show comparison with push remote tracking branch
refactor format_branch_comparison in preparation
Allow recording process ID of the process that holds the lock next
to a lockfile for diagnosis.
Comments?
* pc/lockfile-pid:
lockfile: add PID file for debugging stale locks
Invalidate control characters in sideband messages, to avoid
terminal state getting messed up.
Comments?
cf. <aS-D5lD2Kk6BHNIl@fruit.crustytoothpaste.net>
* js/neuter-sideband:
sideband: add options to allow more control sequences to be passed through
sideband: do allow ANSI color sequences by default
sideband: introduce an "escape hatch" to allow control characters
sideband: mask control characters
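A rough sketch of the masking idea behind this topic (assumed behaviour,
not Git's actual implementation): control bytes are replaced with a
visible placeholder, optionally letting ANSI SGR color sequences through.

    #include <stdio.h>

    static void mask_control_chars(char *buf, int allow_ansi)
    {
        for (char *p = buf; *p; p++) {
            unsigned char c = *p;

            if (c == '\n' || c == '\t')
                continue;               /* harmless whitespace */
            if (allow_ansi && c == '\033' && p[1] == '[') {
                /* skip a CSI sequence ending in 'm' (SGR/color) */
                char *q = p + 2;
                while (*q && ((*q >= '0' && *q <= '9') || *q == ';'))
                    q++;
                if (*q == 'm') {
                    p = q;
                    continue;
                }
            }
            if (c < 0x20 || c == 0x7f)
                *p = '^';               /* neuter the control byte */
        }
    }

    int main(void)
    {
        char msg[] = "remote: \033[31mred\033[0m\rinjected";

        mask_control_chars(msg, 1);
        printf("%s\n", msg);            /* the '\r' comes out as '^' */
        return 0;
    }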
"git repo info" learns "--keys" action to list known keys.
Comments?
* lo/repo-info-keys:
repo: add new flag --keys to git-repo-info
repo: add a default output format to enum output_format
The set of shallow boundary "git clone --shallow-since" leaves
contained commits that are not on the boundary, which has been
corrected.
Comments?
* sp/shallow-time-boundary:
shallow: set borders which are all reachable after clone shallow since
"git config --list --global", unlike "git config --list", did not
consult both of the two possible per-user sources of the
configuration files, i.e. $HOME/.gitconfig and the XDG one, which
has been corrected.
* dw/config-global-list:
config: keep bailing on unreadable global files
config: read global scope via config_sequence
config: test home and xdg files in `list --global`
cleanup_path: force forward slashes on Windows
Refactor code paths to run "interpret-trailers" from "git
commit/tag" and use it in "git rebase".
* lc/rebase-trailer:
rebase: support --trailer
trailer: append trailers in-process and drop the fork to `interpret-trailers`
trailer: move process_trailers to trailer.h
interpret-trailers: factor out buffer-based processing to process_trailers()
Document "git worktree add" and use of out-of-tree worktrees with
examples.
* ms/doc-worktree-side-by-side:
doc: git-worktree: Add side by side branch checkout example
doc: git-worktree: Link to examples
"git add ':(exclude)foo.o'" is clearly a request not to add 'foo.o',
but the command complained about listing an ignored path foo.o on
the command line, which has been corrected.
Comments?
* jc/exclude-with-gitignore:
dir.c: do not be fooled by :(exclude) pathspec elements
Further work on incremental repacking using MIDX/bitmap
* tb/incremental-midx-part-3.2:
midx: enable reachability bitmaps during MIDX compaction
midx: implement MIDX compaction
t/helper/test-read-midx.c: plug memory leak when selecting layer
midx-write.c: factor fanout layering from `compute_sorted_entries()`
midx-write.c: enumerate `pack_int_id` values directly
midx-write.c: extract `fill_pack_from_midx()`
midx-write.c: introduce `midx_pack_perm()` helper
git-compat-util.h: introduce `u32_add()`
midx: do not require packs to be sorted in lexicographic order
midx-write.c: introduce `struct write_midx_opts`
midx-write.c: don't use `pack_perm` when assigning `bitmap_pos`
t/t5319-multi-pack-index.sh: fix copy-and-paste error in t5319.39
git-multi-pack-index(1): align SYNOPSIS with 'git multi-pack-index -h'
git-multi-pack-index(1): remove non-existent incompatibility
builtin/multi-pack-index.c: make '--progress' a common option
midx: split `get_midx_checksum()` by adding `get_midx_hash()`
midx: mark `get_midx_checksum()` arguments as const
Preparation of xdiff/ codebase to work with Rust
Comments?
* en/xdiff-cleanup-3:
SQUASH??? cocci
xdiff: move xdl_cleanup_records() from xprepare.c to xdiffi.c
xdiff: remove dependence on xdlclassifier from xdl_cleanup_records()
xdiff: replace xdfile_t.dend with xdfenv_t.delta_end
xdiff: replace xdfile_t.dstart with xdfenv_t.delta_start
xdiff: cleanup xdl_trim_ends()
xdiff: use xdfenv_t in xdl_trim_ends() and xdl_cleanup_records()
xdiff: let patience and histogram benefit from xdl_trim_ends()
xdiff: don't waste time guessing the number of lines
xdiff: make classic diff explicit by creating xdl_do_classic_diff()
ivec: introduce the C side of ivec
The core.attributesFile variable is intended to be set per repository,
but was kept track of by a single global variable in-core, which has
been corrected by moving it to a per-repository data structure.
Comments?
* ob/core-attributesfile-in-repository:
environment: move "core.attributesFile" into repo-setting
* kh/doc-patch-id:
doc: patch-id: --verbatim locks in --stable
doc: patch-id: spell out the git-diff-tree(1) form
doc: patch-id: use definite article for the result
patch-id: use “patch ID” throughout
doc: patch-id: capitalize Git version
doc: patch-id: don’t use semicolon between bullet points
"git add -p" and friends notes what the current status of the hunk
being shown is.
Comments?
* aa/add-p-previous-decisions:
add -p: show user's hunk decision when selecting hunks
The iconv library on macOS fails to correctly handle stateful
ISO/IEC 2022 encoded strings. Work it around instead of replacing
it wholesale with the one from Homebrew.
RFC.
Needs to be rebased on top of the older rs/macos-iconv-workaround topic.
* tb/macos-iconv-workarounds:
utf8.c: Enable workaround for iconv under macOS 14/15
utf8.c: Prepare workaround for iconv under macOS 14/15
Additional tests were introduced to see the interaction of netrc
auth with auth failures on the HTTP transport.
Comments?
* ag/http-netrc-tests:
t5550: add netrc tests for http 401/403
The object-info API has been cleaned up.
Comments?
* ps/read-object-info-improvements:
packfile: drop repository parameter from `packed_object_info()`
packfile: skip unpacking object header for disk size requests
packfile: disentangle return value of `packed_object_info()`
packfile: always populate pack-specific info when reading object info
packfile: extend `is_delta` field to allow for "unknown" state
packfile: always declare object info to be OI_PACKED
object-file: always set OI_LOOSE when reading object info
"git fsck" used inconsistent set of refs to show a confused
warning, which has been corrected.
* en/fsck-snapshot-ref-state:
fsck: snapshot default refs before object walk
"git receive-pack", when namespace is involved, segfaulted when a
symbolic ref cross the namespace boundary.
Comments?
* tt/receive-pack-oo-namespace-symref-fix:
receive-pack: fix crash on out-of-namespace symref
The help text and the documentation for the "--expire" option of
"git worktree [list|prune]" have been improved.
* sb/doc-worktree-prune-expire-improvement:
worktree: use 'prune' instead of 'expire' in help text
worktree: clarify --expire applies to missing worktrees
Upstream symbolic link support on Windows from Git-for-Windows.
* js/symlink-windows:
mingw: special-case index entries for symlinks with buggy size
mingw: emulate `stat()` a little more faithfully
mingw: try to create symlinks without elevated permissions
mingw: add support for symlinks to directories
mingw: implement basic `symlink()` functionality (file symlinks only)
mingw: implement `readlink()`
mingw: allow `mingw_chdir()` to change to symlink-resolved directories
mingw: support renaming symlinks
mingw: handle symlinks to directories in `mingw_unlink()`
mingw: add symlink-specific error codes
mingw: change default of `core.symlinks` to false
mingw: factor out the retry logic
mingw: compute the correct size for symlinks in `mingw_lstat()`
mingw: teach dirent about symlinks
mingw: let `mingw_lstat()` error early upon problems with reparse points
mingw: drop the separate `do_lstat()` function
mingw: implement `stat()` with symlink support
mingw: don't call `GetFileAttributes()` twice in `mingw_lstat()`
Further preparation to upstream symbolic link support on Windows.
* js/prep-symlink-windows:
trim_last_path_component(): avoid hard-coding the directory separator
strbuf_readlink(): support link targets that exceed PATH_MAX
strbuf_readlink(): avoid calling `readlink()` twice in corner-cases
init: do parse _all_ core.* settings early
mingw: do resolve symlinks in `getcwd()`
The final clean-up phase of the diff output could make the result of
the histogram diff algorithm suboptimal, which has been corrected.
Comments?
* yc/histogram-hunk-shift-fix:
xdiff: re-diff shifted change groups when using histogram algorithm
Introduce a more robust way to parse a decimal integer stored in a
piece of memory that is not necessarily terminated with NUL (which
ASan's strict-string-check complains about even when the use of
strtol() is safe due to the verified existence of whitespace after
the digits). A sketch of the idea follows the list below.
* jk/parse-int:
fsck: use parse_unsigned_from_buf() for parsing timestamp
cache-tree: use parse_int_from_buf()
parse: add functions for parsing from non-string buffers
parse: prefer bool to int for boolean returns
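A minimal sketch of the idea behind such a helper, under an assumed name
and signature (the real functions differ): parse a decimal number bounded
by a length, not by a trailing NUL, so no out-of-bounds read can happen.

    #include <stddef.h>
    #include <stdint.h>

    static int parse_unsigned_from_buf_sketch(const char *buf, size_t len,
                                              uintmax_t *out)
    {
        uintmax_t value = 0;
        size_t i;

        if (!len)
            return -1;
        for (i = 0; i < len; i++) {
            if (buf[i] < '0' || buf[i] > '9')
                return -1;      /* reject non-digits within the bounds */
            if (value > (UINTMAX_MAX - (uintmax_t)(buf[i] - '0')) / 10)
                return -1;      /* would overflow */
            value = value * 10 + (buf[i] - '0');
        }
        *out = value;
        return 0;
    }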
The "-z" and "--max-depth" documentation (and implementation of
"-z") in the "git last-modified" command have been updated.
* tc/last-modified-options-cleanup:
fixup! last-modified: document option --max-depth
last-modified: document how depth is handled better
last-modified: document option --max-depth
last-modified: handle and document NUL termination
A mechanism to specify which reference backend to use and in which
directory to store references is introduced, which is likely to be
useful during ref migration.
Comments?
* kn/ref-location:
refs: add GIT_REF_URI to specify reference backend and directory
refs: support obtaining ref_store for given dir
The command line completion script (in contrib/) learned to
complete "git -<TAB>" to give single-letter options like "-C".
* wm/complete-git-short-opts:
completion: complete "git -<TAB>" with short options
Avoid local submodule repository directory paths overlapping with
each other by encoding submodule names before using them as path
components.
Comments?
* ar/submodule-gitdir-tweak:
submodule: detect conflicts with existing gitdir configs
submodule: hash the submodule name for the gitdir path
submodule: fix case-folding gitdir filesystem collisions
submodule--helper: fix filesystem collisions by encoding gitdir paths
builtin/credential-store: move is_rfc3986_unreserved to url.[ch]
submodule--helper: add gitdir migration command
submodule: allow runtime enabling extensions.submodulePathConfig
submodule: introduce extensions.submodulePathConfig
builtin/submodule--helper: add gitdir command
submodule: always validate gitdirs inside submodule_name_to_gitdir
submodule--helper: use submodule_name_to_gitdir in add_submodule
The packfile_store data structure is moved from object store to odb
source.
* ps/packfile-store-in-odb-source:
packfile: move MIDX into packfile store
packfile: refactor `find_pack_entry()` to work on the packfile store
packfile: inline `find_kept_pack_entry()`
packfile: only prepare owning store in `packfile_store_prepare()`
packfile: only prepare owning store in `packfile_store_get_packs()`
packfile: move packfile store into object source
packfile: refactor misleading code when unusing pack windows
packfile: refactor kept-pack cache to work with packfile stores
packfile: pass source to `prepare_pack()`
packfile: create store via its owning source
Import a newer version of "clar", the unit testing framework.
* ps/clar-integers:
gitattributes: disable blank-at-eof errors for clar test expectations
t/unit-tests: demonstrate use of integer comparison assertions
t/unit-tests: update clar to 39f11fe
Improve the error message when a bad argument is given to the
`--onto` option of "git replay". Test coverage of "git replay" has
been improved.
* kh/replay-invalid-onto-advance:
t3650: add more regression tests for failure conditions
replay: die if we cannot parse object
replay: improve code comment and die message
replay: die descriptively when invalid commit-ish is given
replay: find *onto only after testing for ref name
replay: remove dead code and rearrange
Miscellaneous fixes in the object database layer.
* ps/odb-misc-fixes:
odb: properly close sources before freeing them
builtin/gc: fix condition for whether to write commit graphs
Code clean-up, unifying various hand-rolled "list of commit
objects" implementations to use the commit_stack API; a sketch of
the API's shape follows the list below.
* rs/commit-stack:
commit-reach: use commit_stack
commit-graph: use commit_stack
commit: add commit_stack_grow()
shallow: use commit_stack
pack-bitmap-write: use commit_stack
commit: add commit_stack_init()
test-reach: use commit_stack
remote: use commit_stack for src_commits
remote: use commit_stack for sent_tips
remote: use commit_stack for local_commits
name-rev: use commit_stack
midx: use commit_stack
log: use commit_stack
revision: export commit_stack
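For reference, the general shape of the API being reused looks roughly
like this (a hedged sketch based on the pre-existing commit_stack helpers
in commit.c; commit_stack_init() and commit_stack_grow() are the new
additions named in the list above):

    #include <stddef.h>

    struct commit;

    struct commit_stack {
        struct commit **items;
        size_t nr, alloc;
    };

    void commit_stack_push(struct commit_stack *stack, struct commit *commit);
    struct commit *commit_stack_pop(struct commit_stack *stack);
    void commit_stack_clear(struct commit_stack *stack);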
Diagnose invalid bundle-URIs that lack the URI entry, instead of
crashing.
* sb/bundle-uri-without-uri:
bundle-uri: validate that bundle entries have a uri
More doc style updates.
* ja/doc-synopsis-style-more:
doc: convert git-remote to synopsis style
doc: convert git stage to use synopsis block
doc: convert git-status tables to AsciiDoc format
doc: convert git-status to synopsis style
doc: fix t0450-txt-doc-vs-help to select only first synopsis block
If the changes in a commit being replayed are already in the branch
that the commits are being replayed onto, then "git replay" creates an
empty commit. This is confusing because the commit message no longer
matches the contents of the commit. Drop the commit instead. Commits
that start off empty are not dropped. This matches the behavior of
"git rebase --reapply-cherry-pick --empty=drop" and "git cherry-pick
--empty-drop".
If a branch points to a commit that is dropped it will be updated
to point to the last commit that was not dropped. This can be seen
in the new test where "topic1" is updated to point to the rebased
"C" as "F" is dropped because it is already upstream. While this is
a breaking change, "git replay" is marked as experimental to allow
improvements like this that change the behavior.
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ps/history: (185 commits)
builtin/history: implement "reword" subcommand
builtin: add new "history" command
wt-status: provide function to expose status for trees
replay: yield the object ID of the final rewritten commit
replay: small set of cleanups
builtin/replay: move core logic into "libgit.a"
builtin/replay: extract core logic to replay revisions
The 15th batch
t3650: add more regression tests for failure conditions
replay: die if we cannot parse object
replay: improve code comment and die message
replay: die descriptively when invalid commit-ish is given
replay: find *onto only after testing for ref name
replay: remove dead code and rearrange
The 14th batch
The 13th batch
config: use git_parse_int() in git_config_get_expiry_in_days()
receive-pack: convert receive hooks to hook API
receive-pack: convert update hooks to new API
hooks: allow callers to capture output
...
Implement a new "reword" subcommand for git-history(1). This subcommand
is similar to the user performing an interactive rebase with a single
commit changed to use the "reword" instruction.
The "reword" subcommand is built on top of the replay subsystem
instead of the sequencer. This leads to some major differences compared
to git-rebase(1):
- We do not check out the commit that is to be reworded and instead
perform the operation in-memory. This has the obvious benefit of
being significantly faster compared to git-rebase(1), but even more
importantly it allows the user to rewrite history even if there are
local changes in the working tree or in the index.
- We do not execute any hooks, even though we leave some room for
changing this in the future.
- By default, all local branches that contain the commit will be
rewritten. This especially helps with workflows that use stacked
branches.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When rewriting history via git-rebase(1) there are a few very common use
cases:
- The ordering of two commits should be reversed.
- A commit should be split up into two commits.
- A commit should be dropped from the history completely.
- Multiple commits should be squashed into one.
- Editing an existing commit that is not the tip of the current
branch.
While these operations are all doable, it often feels needlessly kludgey
to do so via an interactive rebase, using the editor to say what
one wants, and then performing the actions. Also, some operations like
splitting up a commit into two are way more involved than that and
require a whole series of commands.
Rebases also do not update dependent branches. The use of stacked
branches has grown quite common with competing version control systems
like Jujutsu though, so it clearly is a need that users have. While
rebases _can_ serve this use case if one always works on the latest
stacked branch, it is somewhat awkward and very easy to get wrong.
Add a new "history" command to plug these gaps. This command will have
several different subcommands to imperatively rewrite history for common
use cases like the above.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "wt-status" subsystem is responsible for printing status information
around the current state of the working tree. This most importantly
includes information around whether the working tree or the index have
any changes.
We're about to introduce a new command where the changes in neither of
them are actually relevant to us. Instead, what we want is to format the
changes between two different trees. While it is a little bit of a
stretch to add this as functionality to _working tree_ status, it
doesn't make any sense to open-code this functionality, either.
Implement a new function `wt_status_collect_changes_trees()` that diffs
two trees and formats the status accordingly. This function is not yet
used, but will be in a subsequent commit.
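A plausible shape for such a helper; the exact signature is an assumption
based on the description above, not the actual interface:

    struct wt_status;
    struct tree;

    /*
     * Collect the changes between two trees into the status structure
     * so that they can be formatted with the usual wt-status helpers.
     */
    void wt_status_collect_changes_trees(struct wt_status *s,
                                         struct tree *old_tree,
                                         struct tree *new_tree);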
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a subsequent commit we'll introduce a new git-history(1) command that
uses the replay machinery to rewrite commits. One of its supported modes
will only want to update the "HEAD" reference, but that is not currently
supported by the replay machinery.
Allow implementing this use case by exposing a `final_oid` field for the
reference updates. This field will be set to the last commit that was
rewritten, which is sufficient information for us to implement this mode
in git-history(1).
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Perform a small set of cleanups so that the "replay" logic compiles with
"-Wsign-compare" and doesn't use `the_repository` anymore. Note that
there are still some implicit dependencies on `the_repository`, e.g.
because we use `get_commit_output_encoding()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the core logic used to replay commits into "libgit.a" so that it
can be easily reused by other commands. It will be used in a subsequent
commit where we're about to introduce a new git-history(1) command.
Note that with this change we have no sign-comparison warnings anymore,
and neither do we depend on `the_repository`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We're about to move the core logic used to replay revisions onto a new
base into the "libgit.a" library. Prepare for this by pulling out the
logic into a new function `replay_revisions()` that:
1. Takes a set of revisions to replay and some options that tell it how
it ought to replay the revisions.
2. Replays the commits.
3. Records any reference updates that would be caused by replaying the
commits in a structure that is owned by the caller.
The logic itself will be moved into a separate file in the next commit.
This change is not expected to cause any user-visible change in behaviour.
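An illustrative shape for the extracted entry point; the names, fields
and types below are assumptions based on the description above, not the
actual interface:

    struct repository;
    struct rev_info;
    struct commit;
    struct string_list;

    struct replay_options_sketch {
        struct commit *onto;    /* new base to replay the revisions onto */
    };

    /*
     * Replay the given revisions; reference updates caused by the
     * replay are recorded into `ref_updates`, which the caller owns.
     * Returns 0 on success, a negative value on error.
     */
    int replay_revisions_sketch(struct repository *repo,
                                struct rev_info *revs,
                                const struct replay_options_sketch *opts,
                                struct string_list *ref_updates);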
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The multi-pack index is still tracked as a member of the object database
source, but ultimately the MIDX is always tied to one specific packfile
store.
Move the structure into `struct packfile_store` accordingly. This
ensures that the packfile store now keeps track of all data related to
packfiles.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `find_pack_entry()` doesn't work on a specific packfile
store, but instead works on the whole repository. This causes a bit of a
conceptual mismatch in its callers:
- `packfile_store_freshen_object()` supposedly acts on a store, and
its callers know to iterate through all sources already.
- `packfile_store_read_object_info()` behaves likewise.
The only exception that doesn't know to handle iteration through sources
is `has_object_pack()`, but that function is trivial to adapt.
Refactor the code so that `find_pack_entry()` works on the packfile
store level instead.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `find_kept_pack_entry()` function is only used in
`has_object_kept_pack()`, which is only a trivial wrapper itself. Inline
the latter into the former.
Furthermore, reorder the code so that we can drop the declaration of the
function in "packfile.h". This allows us to make the function file-local.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When calling `packfile_store_prepare()` we prepare not only the provided
packfile store, but also those of all other sources that are part of
the same object database. This was required when the store was still
sitting on the object database level, but now that it sits on the
source level it no longer is.
Refactor the code so that we only prepare the single packfile store
passed by the caller. Adapt callers accordingly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When calling `packfile_store_get_packs()` we prepare not only the
provided packfile store, but also those of all other sources that are
part of the same object database. This was required when the store was
still sitting on the object database level, but now that it sits on the
source level it no longer is.
Adapt the code so that we only prepare the MIDX of the provided store.
All callers only work in the context of a single store or call the
function in a loop over all sources, so this change shouldn't have any
practical effects.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The packfile store is a member of `struct object_database`, which means
that we have a single store per database. This doesn't really make much
sense though: each source connected to the database has its own set of
packfiles, so there is a conceptual mismatch here. This hasn't really
caused much of a problem in the past, but with the advent of pluggable
object databases this is becoming more of a problem because some of the
sources may not even use packfiles in the first place.
Move the packfile store down by one level from the object database into
the object database source. This ensures that each source now has its
own packfile store, and we can eventually start to abstract it away
entirely so that the caller doesn't even know what kind of store it
uses.
Note that we only need to adjust a relatively small number of callers,
way less than one might expect. This is because most callers are using
`repo_for_each_pack()`, which handles enumeration of all packfiles that
exist in the repository. So for now, none of these callers need to be
adapted. The remaining callers that iterate through the packfiles
directly and that need adjustment are those that are a bit more tangled
with packfiles. These will be adjusted over time.
Note that this patch only moves the packfile store, and there is still a
bunch of functions that seemingly operate on a packfile store but that
end up iterating over all sources. These will be adjusted in subsequent
commits.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `unuse_one_window()` is responsible for unmapping one of
the packfile windows, which is done when we have exceeded the allowed
number of windows.
The function receives a `struct packed_git` as input, which serves as an
additional packfile that should be considered to be closed. If not
given, we seemingly skip that and instead go through all of the
repository's packfiles. The conditional that checks whether we have a
packfile though does not make much sense anymore, as we dereference the
packfile regardless of whether or not it is a `NULL` pointer to derive
the repository's packfile store.
The function was originally introduced via f0e17e86e1 (pack: move
release_pack_memory(), 2017-08-18), and here we indeed had a caller that
passed a `NULL` pointer. That caller was later removed via 9827d4c185
(packfile: drop release_pack_memory(), 2019-08-12), so starting with
that commit we always pass a `struct packed_git`. In 9c5ce06d74
(packfile: use `repository` from `packed_git` directly, 2024-12-03) we
then inadvertently started to rely on the fact that the pointer is never
`NULL` because we use it now to identify the repository.
Arguably, it didn't really make sense in the first place that the caller
provides a packfile, as the selected window would have been overridden
anyway by the subsequent loop over all packfiles if there was an older
window. So the logic is quite misleading overall. The only case
where it _could_ make a difference is when there were two packfiles with
the same `last_used` value, but that case doesn't ever happen because
the `pack_used_ctr` is strictly increasing.
Refactor the code so that we instead pass in the object database to
help make the code less misleading.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The kept pack cache is a cache of packfiles that are marked as kept
either via an accompanying ".kept" file or via an in-memory flag. The
cache can be retrieved via `kept_pack_cache()`, where one needs to pass
in a repository.
Ultimately though the kept-pack cache is a property of the packfile
store, and this causes problems in a subsequent commit where we want to
move down the packfile store to be a per-object-source entity.
Prepare for this and refactor the kept-pack cache to work on top of a
packfile store instead. While at it, rename both the function and flags
specific to the kept-pack cache so that they can be properly attributed
to the respective subsystems.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When preparing a packfile we pass various pieces attached to the pack's
object database source via the `struct prepare_pack_data`. Refactor this
code to instead pass in the source directly. This reduces the number of
variables we need to pass and allows for a subsequent refactoring where
we start to prepare the pack via the source.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In subsequent patches we're about to move the packfile store from the
object database layer into the object database source layer. Once done,
we'll have one packfile store per source, where the source is owning the
store.
Prepare for this future and refactor `packfile_store_new()` to be
initialized via an object database source instead of via the object
database itself.
This refactoring leads to a weird in-between state where the store is
owned by the object database but created via the source. But this makes
subsequent refactorings easier because we can now start to access the
owning source of a given store.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation for the builtin API was moved from the technical
documentation and into a comment in builtin.h by ec14d4ecb5 (builtin.h: take
over documentation from api-builtin.txt, 2017-08-02). This documentation
wasn't updated as part of the major overhaul to include a repository struct
in 9b1cb5070f (builtin: add a repository parameter for builtin functions,
2024-09-13).
There was a brief update regarding the move from *.txt to *.adoc by
e8015223c7 (builtin.h: *.txt -> *.adoc fixes, 2025-03-03).
I noticed that there was quite a bit missing from the old documentation,
which is still visible on git-scm.com [1].
[1] https://github.com/git/git-scm.com/issues/2124
This change updates the documentation in the following ways:
1. Updates the cmd_foo() prototype to include a repository.
2. Adds some newlines to have uniformity in the list of flags.
3. Adds a description of the NO_PARSEOPT flag.
4. Describes the tests that perform checks on all builtins, which may trip
up a contributor working on a new builtin.
I double-checked these instructions against a toy example in my local branch
to be sure that it was complete.
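For reference, the updated prototype described in point 1 follows the
shape introduced by 9b1cb5070f:

    struct repository;

    /* A builtin's entry point now receives the repository explicitly. */
    int cmd_foo(int argc, const char **argv, const char *prefix,
                struct repository *repo);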
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace old-style `test -[df]` and `! test -[df]` assertions with
the modern `test_path_is_file`, `test_path_is_dir`, and
`test_path_is_missing` helpers.
These helpers provide more informative error messages in case of
failure (e.g., "File 'foo' is missing" instead of just exit code 1).
While at it, fix a typo and an incorrect path
reference in one of the test descriptions.
Signed-off-by: K Jayatheerth <jayatheerthkulkarni2005@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git 2.51 learned how to import and export stashes. This is a
secure and robust way to transfer working tree states across machines
and comes with almost none of the pitfalls of rsync or other tools.
Recommend this as an alternative in the FAQ.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit introduced a workaround in utf8.c to deal
with broken iconv implementations.
It is enabled when a macOS version with a buggy iconv library is
used and no external library from MacPorts or Homebrew is provided
(and linked against).
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
macOS 14 (Sonoma) has started to ship an iconv library with bugs.
The same bugs exist even in macOS 15 (Sequoia).
A bug report running the Git test suite says:
three tests of t3900 fail on macOS 26.1 for me:
not ok 17 - ISO-2022-JP should be shown in UTF-8 now
not ok 25 - ISO-2022-JP should be shown in UTF-8 now
not ok 38 - commit --fixup into ISO-2022-JP from UTF-8
Here's the verbose output of the first one:
----- snip! -----
expecting success of 3900.17 'ISO-2022-JP should be shown in UTF-8 now':
compare_with ISO-2022-JP "$TEST_DIRECTORY"/t3900/2-UTF-8.txt
--- /Users/x/src/git/t/t3900/2-UTF-8.txt 2024-10-01 19:43:24.605230684 +0000
+++ current 2025-12-08 21:52:45.786161909 +0000
@@ -1,4 +1,4 @@
はれひほふ
しているのが、い るので。
-濱浜ほれぷりぽれ まびぐりろへ。
+濱浜ほれぷりぽれ まび$0$j$m$X!#
not ok 17 - ISO-2022-JP should be shown in UTF-8 now
1..17
----- snap! -----
compare_with runs git show to display a commit message, which in this
case here was encoded using ISO-2022-JP and is supposed to be reencoded
to UTF-8, but git show only does that half-way -- the "$0$j$m$X!#" part
is from the original ISO-2022-JP representation.
That botched conversion is done by utf8.c::reencode_string_iconv(). It
calls iconv(3) to do the actual work, initially with an output buffer of
the same size as the input. If the output needs more space the function
enlarges the buffer and calls iconv(3) again.
iconv(3) won't tell us how much space it needs, but it will report what
part it already managed to convert, so we can increase the buffer and
continue from there. ISO-2022-JP has escape codes for switching between
character sets, so it's a stateful encoding. I guess the iconv(3) on my
machine forgets the state at the end of part one and then messes up part
two.
[end of citation]
Working around the buggy iconv shipped with the OS can be done in
two ways:
a) Link Git against a different version of iconv
b) Improve the handling when iconv needs a larger output buffer
a) is already done by default when either Fink [1] or MacPorts [2]
or Homebrew [3] is installed.
b) is implemented here, in case no fixed iconv is available:
When the output buffer is too short, increase it (as before)
and start from scratch (this is new).
This workaround needs to be enabled with
'#define ICONV_RESTART_RESET';
a Makefile knob will be added in the next commit.
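A simplified sketch of the strategy (Git's reencode_string_iconv()
handles more cases): on E2BIG, reset the conversion state and redo the
whole input into a larger buffer instead of continuing mid-stream.

    #include <errno.h>
    #include <iconv.h>
    #include <stdlib.h>

    static char *reencode_restart(iconv_t cd, const char *in, size_t inlen)
    {
        size_t outalloc = inlen + 1;

        for (;;) {
            char *out = malloc(outalloc);
            char *inp = (char *)in;
            char *outp = out;
            size_t inleft = inlen, outleft = outalloc - 1;

            if (!out)
                return NULL;
            iconv(cd, NULL, NULL, NULL, NULL);  /* reset shift state */
            if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
                free(out);
                if (errno != E2BIG)
                    return NULL;        /* hard conversion error */
                outalloc *= 2;          /* too small: grow, start over */
                continue;
            }
            *outp = '\0';
            return out;
        }
    }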
Suggested-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
[1] https://www.finkproject.org/
[2] https://www.macports.org/
[3] https://brew.sh/
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- Use _<placeholder>_ instead of <placeholder> in the description
- Use _underscores_ around math associated with <placeholders>
- Use `backticks` for keywords and more complex option
descriptions. The new rendering engine will apply synopsis rules to
these spans.
Signed-off-by: Michael Lyons <git@michael.lyo.nz>
Acked-by: Jean-Noël AVILA <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- Use _<placeholder>_ instead of <placeholder> in the description
- Modify some samples to use <placeholders>
- Use `backticks` for keywords and more complex option
descriptions. The new rendering engine will apply synopsis rules to
these spans.
Signed-off-by: Michael Lyons <git@michael.lyo.nz>
Acked-by: Jean-Noël AVILA <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a user is interactively deciding which hunks to use or skip for
staging, unstaging, stashing etc, there is no way to know the
decision previously chosen for a hunk when navigating through the
previous and next hunks using K/J respectively.
Improve the UI to explicitly show if a user has previously decided to
use a hunk (by pressing 'y') or skip the hunk (by pressing 'n').
This will improve clarity and aid the navigation process for the
user.
Reported-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Abraham Samuel Adekunle <abrahamadekunle50@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The default `--unstable` is a legacy format that predates `--stable`.
That’s why 2871f4d4 (builtin: patch-id: add --verbatim as a command mode,
2022-10-24) made `--verbatim` lock in[1] `--stable`:
Users of --unstable mainly care about compatibility with old git
versions, which unstripping the whitespace would break. Thus there
isn't a usecase for the combination of --verbatim and --unstable,
and we don't expose this so as to not add maintainence burden.
† 1: imply `--stable`, disallow `--unstable`
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
You specifically need `--patch` since the default output is a raw diff.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The “Description” section decided to introduce and use the term “patch
ID” for the ID value itself. Let’s use the same term on the options as
well.
Also make sure to use bare “ID” instead of “id”.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These bullet points are full-fledged paragraphs with sentences. It’s
best to restrict semicolon-termination to the case when the bullet list
amounts to a list of items.[1]
† 1: Like “List: ... • first; ... • second; and ... • third.”
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Credit goes to Emily and Josh for testing and noticing a corner-case
which caused conflicts with existing gitdir configs to silently pass
validation, then fail later in add_submodule() with a cryptic error:
fatal: A git directory for 'nested%2fsub' is found locally with remote(s):
origin /.../trash directory.t7425-submodule-gitdir-path-extension/sub
This change ensures the validation step checks existing gitdirs for
conflicts. We only have to do this for submodules having gitdirs,
because those without submodule.%s.gitdir need to be migrated and
will throw an error earlier in the submodule codepath.
Quoting Josh:
My testing setup has been as follows:
* Using our locally-built Git with our downstream patch of [1] included:
* create a repo "sub"
* create a repo "super"
* In "super":
* mkdir nested
* git submodule add ../sub nested/sub
* Verify that the submodule's gitdir is .git/modules/nested%2fsub
* Using a build of git from upstream `next` plus this series:
* git config set --global extensions.submodulepathconfig true
* git clone --recurse-submodules super super2
* create a repo "nested%2fsub"
* In "super2":
* git submodule add ../nested%2fsub
At this point I'd expect the collision detection / encoding to take
effect, but instead I get the error listed above.
End quote
Suggested-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If none of the previous plain-text / encoding / derivation steps work
and case 2.4 is reached, then try a hash of the submodule name to see
if that can be a valid gitdir before giving up and throwing an error.
This is a "last resort" type of measure to avoid conflicts since it
loses the human readability of the gitdir path. This logic will be
reached in rare cases, as can be seen in the test we added.
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new check when extensions.submodulePathConfig is enabled, to
detect and prevent case-folding filesystem collisions. When this
new check is triggered, a stricter case-folding-aware URI encoding
is used to percent-encode uppercase characters.
By using this check/retry mechanism the uppercase encoding is
only applied when necessary, so case-sensitive filesystems are
not affected.
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix nested filesystem collisions by url-encoding gitdir paths stored
in submodule.%s.gitdir, when extensions.submodulePathConfig is enabled.
Credit goes to Junio and Patrick for coming up with this design: the
encoding is only applied when necessary, to newly added submodules.
Existing modules don't need the encoding because git already errors
out when detecting nested gitdirs before this patch.
This commit adds the basic url-encoding and some tests. Next commits
extend the encode -> validate -> retry loop to fix more conflicts.
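A rough sketch of the encoding step, using an RFC 3986 "unreserved"
predicate; names and details are illustrative, not Git's actual code.

    #include <stdio.h>

    static int is_rfc3986_unreserved_sketch(char c)
    {
        return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
               (c >= '0' && c <= '9') ||
               c == '-' || c == '.' || c == '_' || c == '~';
    }

    static void encode_gitdir_component(const char *name, char *out)
    {
        for (; *name; name++) {
            if (is_rfc3986_unreserved_sketch(*name))
                *out++ = *name;
            else
                out += sprintf(out, "%%%02x",
                               (unsigned)(unsigned char)*name);
        }
        *out = '\0';
    }

    int main(void)
    {
        char buf[64];

        encode_gitdir_component("nested/sub", buf);
        printf("%s\n", buf);    /* prints "nested%2fsub" */
        return 0;
    }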
Suggested-by: Junio C Hamano <gitster@pobox.com>
Suggested-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
is_rfc3986_unreserved() was moved to credential-store.c and was made
static by f89854362c (credential-store: move related functions to
credential-store file, 2023-06-06) under a correct assumption, at the
time, that it was the only place using it.
However now we need it to apply URL-encoding to submodule names when
constructing gitdir paths, to avoid conflicts, so bring it back as a
public function exposed via url.h, instead of the old helper path
(strbuf), which has nothing to do with RFC 3986 encoding/decoding anymore.
This function will be used in subsequent commits which do the encoding.
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Manually running
"git config submodule.<name>.gitdir .git/modules/<name>"
for each submodule can be impractical, so add a migration command to
submodule--helper to automatically create configs for all submodules
as required by extensions.submodulePathConfig.
The command calls create_default_gitdir_config() which validates the
gitdir paths before adding the configs.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Suggested-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new config `init.defaultSubmodulePathConfig` which allows
enabling `extensions.submodulePathConfig` for new submodules by
default (those created via git init or clone).
Important: setting init.defaultSubmodulePathConfig = true does
not globally enable `extensions.submodulePathConfig`. Existing
repositories will still have the extension disabled and will
require migration (for example via git submodule--helper command
added in the next commit).
Suggested-by: Patrick Steinhardt <ps@pks.im>
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The idea of this extension is to abstract away the submodule gitdir
path implementation: everyone is expected to use the config and not
worry about how the path is computed internally, either in git or
other implementations.
With this extension enabled, the submodule.<name>.gitdir repo config
becomes the single source of truth for all submodule gitdir paths.
The submodule.<name>.gitdir config is added automatically for all new
submodules when this extension is enabled.
Git will throw an error if the extension is enabled and a config is
missing, advising users how to migrate. Migration is manual for now.
E.g. to add a missing config entry for an existing "foo" module:
git config submodule.foo.gitdir .git/modules/foo
Suggested-by: Junio C Hamano <gitster@pobox.com>
Suggested-by: Phillip Wood <phillip.wood123@gmail.com>
Suggested-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This exposes the gitdir name computed by submodule_name_to_gitdir()
internally, to make it easier for users and tests to interact with it.
The next commit will add a gitdir configuration, so this helper can
also be used to easily query that config or validate any gitdir path
the user sets (submodule_name_to_gitdir() now runs the validation
logic, since our previous commit).
Based-on-patch-by: Brandon Williams <bwilliams.eng@gmail.com>
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the ad-hoc validation checks sprinkled across the source tree,
after calling submodule_name_to_gitdir() into the function proper,
which now always validates the gitdir before returning it.
This simplifies the API and helps to:
1. Avoid redundant validation calls after submodule_name_to_gitdir().
2. Avoid the risk of callers forgetting to validate.
3. Ensure gitdir paths provided by users via configs are always valid
(config gitdir paths are added in a subsequent commit).
The validation function can still be called as many times as needed
outside submodule_name_to_gitdir(), for example we keep two calls
which are still required, to avoid parallel clone races by re-running
the validation in builtin/submodule--helper.c.
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While testing submodule gitdir path encoding, I noticed submodule--helper
is still using a hardcoded modules gitdir path leading to test failures.
Call the submodule_name_to_gitdir() helper instead, which was invented
exactly for this purpose and is already used by all the other locations
which work on gitdirs.
Also narrow the scope of the submod_gitdir_path variable which is not
used anymore in the updated "else" branch.
Signed-off-by: Adrian Ratiu <adrian.ratiu@collabora.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `packed_object_info()` takes a packfile and offset and
returns the object info for the corresponding object. Despite these two
parameters it also takes a repository pointer. This is redundant
information though, as `struct packed_git` already has a repository
pointer that is always populated.
Drop the redundant parameter.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While most of the object info requests for a packed object require us to
unpack its headers, reading its disk size doesn't. We still unpack the
object header in that case though, which is unnecessary work.
Skip reading the header if only the disk size is requested. This leads
to a small speedup when reading only the disk size. The following
benchmark was done in the Git repository:
Benchmark 1: ./git rev-list --disk-usage HEAD (rev = HEAD~)
Time (mean ± σ): 105.2 ms ± 0.6 ms [User: 91.4 ms, System: 13.3 ms]
Range (min … max): 103.7 ms … 106.0 ms 27 runs
Benchmark 2: ./git rev-list --disk-usage HEAD (rev = HEAD)
Time (mean ± σ): 96.7 ms ± 0.4 ms [User: 86.2 ms, System: 10.0 ms]
Range (min … max): 96.2 ms … 98.1 ms 30 runs
Summary
./git rev-list --disk-usage HEAD (rev = HEAD) ran
1.09 ± 0.01 times faster than ./git rev-list --disk-usage HEAD (rev = HEAD~)
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `packed_object_info()` function returns the type of the packed
object. While we use an `enum object_type` to store the return value,
this type is not to be confused with the actual object type. It _may_
contain the object type, but it may just as well encode that the given
packed object is stored as a delta.
We have removed the only caller that relied on this returned object type
in the preceding commit, so let's simplify semantics and return either 0
on success or a negative error code otherwise.
This unblocks a small optimization where we can skip reading the object
type altogether.
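With the simplified return value, callers only need to check for
success or failure. A minimal sketch of the assumed new calling
convention (the surrounding code and error message are illustrative,
not taken from the patch):

	if (packed_object_info(pack, obj_offset, &oi) < 0)
		return error("unable to read object info from pack");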
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When reading object information via `packed_object_info()` we may not
populate the object info's packfile-specific fields. This leads to
inconsistent object info depending on whether the info was populated via
`packfile_store_read_object_info()` or `packed_object_info()`.
Fix this inconsistency so that we can always assume the pack info to be
populated when reading object info from a pack.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `struct object_info::u::packed::is_delta` field determines whether
or not a specific object is stored as a delta, and so it is treated as
a boolean value.
This boolean is insufficient though: when reading a packed object via
`packfile_store_read_object_info()` we know to skip parsing the actual
object when the user didn't request any object-specific data. In that
case we won't read the object itself, but will only look up its position
in the packfile. Consequently, we do not know whether it is a delta or
not.
This isn't really an issue right now, as the check for an empty request
is broken. But a subsequent commit will fix it, and once we do we will
also need to represent an "unknown" delta state.
Prepare for this change by introducing a new enum that encodes the
delta state. We don't use the "unknown" state just yet, but will start
to do so in a subsequent commit.
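A minimal sketch of what such a tristate might look like; the actual
enum and value names in the patch may differ:

	enum packed_delta_state {
		DELTA_STATE_NO,      /* stored as a full (base) object */
		DELTA_STATE_YES,     /* stored as a delta */
		DELTA_STATE_UNKNOWN, /* header was never parsed */
	};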
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When reading object info via a packfile we yield one of two types:
- The object can either be OI_PACKED, which is what a caller would
typically expect.
- Or it can be OI_DBCACHED if it is stored in the delta base cache.
The latter really is an implementation detail though, and callers
typically don't care at all about the difference. Furthermore, the
information about whether or not it is part of the delta base cache can
already be derived via the `is_delta` field, so the fact that we discern
between OI_PACKED and OI_DBCACHED only further complicates the
interface.
There aren't all that many callers that care about the `whence` field in
the first place. In fact, there are only three:
- `packfile_store_read_object_info()` checks for `whence == OI_PACKED`
and then populates the packfile information of the object info
structure. We now start to do this also for deltified objects, which
gives its callers strictly more information.
- `repack_local_links()` wants to determine whether the object is part
of a promisor pack and checks for `whence == OI_PACKED`. If so, it
verifies that the packfile is a promisor pack. It's arguably wrong
to declare that an object is not part of a promisor pack only
because it is stored in the delta base cache.
- `is_not_in_promisor_pack_obj()` does the same, but checks that a
specific object is _not_ part of a promisor pack. The same reasoning
as above applies.
Drop the OI_DBCACHED enum value completely. None of the callers seem to
care about the distinction.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are some early returns in `odb_source_loose_read_object_info()`
in cases where we don't have to open the loose object. These return
paths do not set `struct object_info::whence` to `OI_LOOSE` though, so
it becomes impossible for the caller to tell the format of such an
object.
The root cause of this really is that we have so many different return
paths in the function. As a consequence, it's harder than necessary to
make sure that all successful exit paths set up the `whence` field as
expected.
Address this by refactoring the function to have a single exit path.
Like this, we can trivially set up the `whence` field when we exit
successfully from the function.
Note that we also:
- Rename `status` to `ret` to match our usual coding style, but also
to show that the old `status` variable is now always getting the
expected value. Furthermore, the value is not initialized anymore,
which has the consequence that most compilers will warn for exit
paths where we forgot to set it.
- Move the setup of scratch pointers closer to `parse_loose_header()`
to show where it's needed.
- Guard a couple of variables on cleanup so that they only get
released in case they have been set up.
- Reset `oi->delta_base_oid` towards the end of the function, together
with all the other object info pointers.
Overall, all these changes result in a diff that is somewhat hard to
read. But the end result is significantly easier to read and reason
about, so I'd argue this one-time churn is worth it.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git allows using .netrc file to supply credentials for HTTP auth.
Three test cases are added in this patch to provide missing coverage
when cloning over HTTP using .netrc file:
- First test case checks that the git clone is successful when credentials
are provided via .netrc file
- Second test case checks that the git clone fails when the .netrc file
provides invalid credentials. The HTTP server is expected to return
401 Unauthorized in such a case. The test checks that the user is
provided with a prompt for username/password on 401 to provide
the valid ones.
- Third test case checks that the git clone fails when the .netrc file
provides credentials that are valid but do not have permission for
this user. For example, one may have multiple tokens in GitHub
and use the one which was not authorized for cloning this repo.
In such a case the HTTP server returns 403 Forbidden.
For this test, the apache.conf is modified to return a 403
on finding a forbidden-user. No prompt for username/password is
expected after the 403 (unlike 401). This is because prompting may wipe
out existing credentials or conflict with custom credential helpers.
Signed-off-by: Ashlesh Gawande <git@ashlesh.me>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* kh/replay-invalid-onto-advance:
t3650: add more regression tests for failure conditions
replay: die if we cannot parse object
replay: improve code comment and die message
replay: die descriptively when invalid commit-ish is given
replay: find *onto only after testing for ref name
replay: remove dead code and rearrange
When a lock file is held, it can be helpful to know which process owns
it, especially when debugging stale locks left behind by crashed
processes. Add an optional feature that creates a companion PID file
alongside each lock file, containing the PID of the lock holder.
For a lock file "foo.lock", the PID file is named "foo~pid.lock". The
tilde character is forbidden in refnames and allowed in Windows
filenames, which guarantees no collision with the refs namespace
(e.g., refs "foo" and "foo~pid" cannot both exist). The file contains
a single line in the format "pid <value>" followed by a newline.
The PID file is created when a lock is acquired (if enabled), and
automatically cleaned up when the lock is released (via commit or
rollback). The file is registered as a tempfile so it gets cleaned up
by signal and atexit handlers if the process terminates abnormally.
When a lock conflict occurs, the code checks for an existing PID file
and, if found, uses kill(pid, 0) to determine if the process is still
running. This allows providing context-aware error messages:
Lock is held by process 12345. Wait for it to finish, or remove
the lock file to continue.
Or for a stale lock:
Lock was held by process 12345, which is no longer running.
Remove the stale lock file to continue.
The feature is controlled via the core.lockfilePid configuration
(boolean), which defaults to false. When enabled, PID files are created
for all lock operations.
Existing PID files are always read when displaying lock errors,
regardless of the core.lockfilePid setting. This ensures helpful
diagnostics even when the feature was previously enabled and later
disabled.
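The liveness probe described above can be sketched as follows (a hedged
illustration with a hypothetical helper name; kill() with signal 0
delivers no signal but still reports whether the PID exists):

	#include <signal.h>
	#include <errno.h>

	static int lock_holder_is_alive(pid_t pid)
	{
		if (!kill(pid, 0))
			return 1; /* process exists and is signalable */
		return errno == EPERM; /* exists, but owned by another user */
	}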
Signed-off-by: Paulo Casaretto <pcasaretto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fsck has a race when operating on live repositories; consider the
following simple script that writes new commits as fsck runs:
#!/bin/bash
git fsck &
PID=$!
while ps -p $PID >/dev/null; do
	sleep 3
	git commit -q --allow-empty -m "Another commit"
done
Since fsck walks objects for connectivity and then reads the refs at the
end to check, this can cause fsck to get confused and think that the new
refs refer to missing commits and that new reflog entries are invalid.
Running the above script in a clone of git.git results in the following
(output ellipsized to remove additional errors of the same type):
$ ./fsck-while-writing.sh
Checking ref database: 100% (1/1), done.
Checking object directories: 100% (256/256), done.
warning in tag d6602ec5194c87b0fc87103ca4d67251c76f233a: missingTaggerEntry: invalid format - expected 'tagger' line
Checking objects: 100% (835091/835091), done.
error: HEAD: invalid reflog entry 2aac9f9286e2164fbf8e4f1d1df53044ace2b310
error: HEAD: invalid reflog entry 2aac9f9286e2164fbf8e4f1d1df53044ace2b310
error: HEAD: invalid reflog entry da0f5b80d61844a6f0ad2ddfd57e4fdfa246ea68
error: HEAD: invalid reflog entry da0f5b80d61844a6f0ad2ddfd57e4fdfa246ea68
[...]
error: HEAD: invalid reflog entry 87c8a5c2f6b79d9afa9e941590b9a097b6f7ac09
error: HEAD: invalid reflog entry d80887a48865e6ad165274b152cbbbed29f8a55a
error: HEAD: invalid reflog entry d80887a48865e6ad165274b152cbbbed29f8a55a
error: HEAD: invalid reflog entry 6724f2dfede88bfa9445a333e06e78536c0c6c0d
error: refs/heads/mybranch invalid reflog entry 2aac9f9286e2164fbf8e4f1d1df53044ace2b310
error: refs/heads/mybranch: invalid reflog entry 2aac9f9286e2164fbf8e4f1d1df53044ace2b310
error: refs/heads/mybranch: invalid reflog entry da0f5b80d61844a6f0ad2ddfd57e4fdfa246ea68
error: refs/heads/mybranch: invalid reflog entry da0f5b80d61844a6f0ad2ddfd57e4fdfa246ea68
[...]
error: refs/heads/mybranch: invalid reflog entry 87c8a5c2f6b79d9afa9e941590b9a097b6f7ac09
error: refs/heads/mybranch: invalid reflog entry d80887a48865e6ad165274b152cbbbed29f8a55a
error: refs/heads/mybranch: invalid reflog entry d80887a48865e6ad165274b152cbbbed29f8a55a
error: refs/heads/mybranch: invalid reflog entry 6724f2dfede88bfa9445a333e06e78536c0c6c0d
Checking connectivity: 833846, done.
missing commit 6724f2dfede88bfa9445a333e06e78536c0c6c0d
Verifying commits in commit graph: 100% (242243/242243), done.
We can minimize the race opportunities by taking a snapshot of refs at
program invocation, doing the connectivity check, and then checking the
snapshotted refs afterward. This avoids races with regular refs between
fsck and adding objects to the database, though it still leaves a race
between a gc and fsck. We are less concerned about folks simultaneously
running gc with fsck; though, if it becomes an issue, we could lock fsck
during gc. We definitely do not want to lock fsck during operations
that may add objects to the object store; that would be problematic for
forges.
Note that refs aren't the only problem, though; reflog entries and index
entries could be problematic as well. For now we punt on index entries,
just leaving a TODO comment, and for reflogs we use a coarse solution of
taking the time at the beginning of the program and ignoring reflog
entries newer than that time. That may be imperfect if dealing with a
network filesystem, so we leave a TODO comment for those that want to
improve that handling as well.
As a high level overview:
* In addition to fsck_handle_ref(), which is now only a few lines long
to process a ref, there's also a snapshot_ref() which is called
early in the program for each ref and contains all the error checking
logic.
* The iterating over refs that used to be in get_default_heads() plus
a loop over the arguments now appears in snapshot_refs().
* There's a new process_refs() as well that kind of looks like the old
get_default_heads() though it is streamlined due to the work done by
snapshot_refs().
This combination of changes modifies the output of running the script
(from the beginning of this commit message) to:
$ ./fsck-while-writing.sh
Checking ref database: 100% (1/1), done.
Checking object directories: 100% (256/256), done.
warning in tag d6602ec5194c87b0fc87103ca4d67251c76f233a: missingTaggerEntry: invalid format - expected 'tagger' line
Checking objects: 100% (835091/835091), done.
Checking connectivity: 833846, done.
Verifying commits in commit graph: 100% (242243/242243), done.
While worries about live updates while running fsck are likely of most
interest to forge operators, this change may also benefit those with
automated jobs (such as git maintenance) or even casual users who want
to do other work in their clone while fsck is running.
Originally-based-on-a-patch-by: Matthew John Cheetham <mjcheetham@outlook.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test indirectly checks that the lost-found folder has 2 files in it
and then checks that the expected two files exist. Make this more
deliberate by removing the old test -f calls and comparing the actual ls
of the lost-found directory with the expected files.
Signed-off-by: Andrew Chitester <andchi@fastmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is possible to hit a memory leak when reading data from a submodule
via git-grep(1):
Direct leak of 192 byte(s) in 1 object(s) allocated from:
#0 0x55555562e726 in calloc (git+0xda726)
#1 0x555555964734 in xcalloc ../wrapper.c:154:8
#2 0x555555835136 in load_multi_pack_index_one ../midx.c:135:2
#3 0x555555834fd6 in load_multi_pack_index ../midx.c:382:6
#4 0x5555558365b6 in prepare_multi_pack_index_one ../midx.c:716:17
#5 0x55555586c605 in packfile_store_prepare ../packfile.c:1103:3
#6 0x55555586c90c in packfile_store_reprepare ../packfile.c:1118:2
#7 0x5555558546b3 in odb_reprepare ../odb.c:1106:2
#8 0x5555558539e4 in do_oid_object_info_extended ../odb.c:715:4
#9 0x5555558533d1 in odb_read_object_info_extended ../odb.c:862:8
#10 0x5555558540bd in odb_read_object ../odb.c:920:6
#11 0x55555580a330 in grep_source_load_oid ../grep.c:1934:12
#12 0x55555580a13a in grep_source_load ../grep.c:1986:10
#13 0x555555809103 in grep_source_is_binary ../grep.c:2014:7
#14 0x555555807574 in grep_source_1 ../grep.c:1625:8
#15 0x555555807322 in grep_source ../grep.c:1837:10
#16 0x5555556a5c58 in run ../builtin/grep.c:208:10
#17 0x55555562bb42 in void* ThreadStartFunc<false>(void*) lsan_interceptors.cpp.o
#18 0x7ffff7a9a979 in start_thread (/nix/store/xx7cm72qy2c0643cm1ipngd87aqwkcdp-glibc-2.40-66/lib/libc.so.6+0x9a979) (BuildId: cddea92d6cba8333be952b5a02fd47d61054c5ab)
#19 0x7ffff7b22d2b in __GI___clone3 (/nix/store/xx7cm72qy2c0643cm1ipngd87aqwkcdp-glibc-2.40-66/lib/libc.so.6+0x122d2b) (BuildId: cddea92d6cba8333be952b5a02fd47d61054c5ab)
The root cause of this leak is the way we set up and release the
submodule:
1. We use `repo_submodule_init()` to initialize a new repository. This
repository is stored in `repos_to_free`.
2. We now read data from the submodule repository.
3. We then call `repo_clear()` on the submodule repositories.
4. `repo_clear()` calls `odb_free()`.
5. `odb_free()` calls `odb_free_sources()` followed by `odb_close()`.
The issue here is the 5th step: we call `odb_free_sources()` _before_ we
call `odb_close()`. But `odb_free_sources()` already frees all sources,
so the logic that closes them in `odb_close()` now becomes a no-op. As a
consequence, we never explicitly close sources at all.
Fix the leak by closing the store before we free the sources.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When performing auto-maintenance we check whether commit graphs need to
be generated by counting the number of commits that are reachable by any
reference, but not covered by a commit graph. This search is performed
by iterating through all references and then doing a depth-first search
until we have found enough commits that are not present in the commit
graph.
This logic has a memory leak though:
Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x55555562e433 in malloc (git+0xda433)
#1 0x555555964322 in do_xmalloc ../wrapper.c:55:8
#2 0x5555559642e6 in xmalloc ../wrapper.c:76:9
#3 0x55555579bf29 in commit_list_append ../commit.c:1872:35
#4 0x55555569f160 in dfs_on_ref ../builtin/gc.c:1165:4
#5 0x5555558c33fd in do_for_each_ref_iterator ../refs/iterator.c:431:12
#6 0x5555558af520 in do_for_each_ref ../refs.c:1828:9
#7 0x5555558ac317 in refs_for_each_ref ../refs.c:1833:9
#8 0x55555569e207 in should_write_commit_graph ../builtin/gc.c:1188:11
#9 0x55555569c915 in maintenance_is_needed ../builtin/gc.c:3492:8
#10 0x55555569b76a in cmd_maintenance ../builtin/gc.c:3542:9
#11 0x55555575166a in run_builtin ../git.c:506:11
#12 0x5555557502f0 in handle_builtin ../git.c:779:9
#13 0x555555751127 in run_argv ../git.c:862:4
#14 0x55555575007b in cmd_main ../git.c:984:19
#15 0x5555557523aa in main ../common-main.c:9:11
#16 0x7ffff7a2a4d7 in __libc_start_call_main (/nix/store/xx7cm72qy2c0643cm1ipngd87aqwkcdp-glibc-2.40-66/lib/libc.so.6+0x2a4d7) (BuildId: cddea92d6cba8333be952b5a02fd47d61054c5ab)
#17 0x7ffff7a2a59a in __libc_start_main@GLIBC_2.2.5 (/nix/store/xx7cm72qy2c0643cm1ipngd87aqwkcdp-glibc-2.40-66/lib/libc.so.6+0x2a59a) (BuildId: cddea92d6cba8333be952b5a02fd47d61054c5ab)
#18 0x5555555f0934 in _start (git+0x9c934)
The root cause of this memory leak is our use of `commit_list_append()`.
This function expects as parameters the item to append and the _tail_ of
the list to append. This tail will then be overwritten with the new tail
of the list so that it can be used in subsequent calls. But we call it
with `commit_list_append(parent->item, &stack)`, so we end up losing
everything but the new item.
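For illustration, correct usage keeps a dedicated tail pointer that is
advanced by each call (a sketch with hypothetical commit variables, not
the actual fix):

	struct commit_list *stack = NULL, **tail = &stack;
	tail = commit_list_append(commit_a, tail);
	tail = commit_list_append(commit_b, tail); /* lands after commit_a */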
This issue only surfaces when counting merge commits. Next to being a
memory leak, it also shows that we're in fact miscounting as we only
respect children of the last parent. All previous parents are discarded,
so their children will be disregarded unless they are hit via another
reference.
While crafting a test case for the issue I was puzzled that I couldn't
establish the proper border at which the auto-condition would be
fulfilled. As it turns out, there's another bug: if an object is at the
tip of any reference we don't mark it as seen. Consequently, if it is
the tip of, or reachable via, another ref, we'd count that object
multiple times.
Fix both of these bugs so that we properly count objects without leaking
any memory.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 8002e8ee18 (builtin/cat-file: use bitmaps to efficiently filter
by object type, 2025-04-02) introduced a performance regression when we
are not filtering objects: it uses bitmaps even when they won't help,
incurring extra costs. For example, running the new perf tests from this
commit, which check the performance of listing objects by oid:
$ export GIT_PERF_LARGE_REPO=/path/to/linux.git
$ git -C "$GIT_PERF_LARGE_REPO" repack -adb
$ GIT_SKIP_TESTS=p1006.1 ./run 8002e8ee18^ 8002e8ee18 p1006-cat-file.sh
[...]
Test 8002e8ee18^ 8002e8ee18
-------------------------------------------------------------------------------
1006.2: list all objects (sorted) 1.48(1.44+0.04) 6.39(6.35+0.04) +331.8%
1006.3: list all objects (unsorted) 3.01(2.97+0.04) 3.40(3.29+0.10) +13.0%
1006.4: list blobs 4.85(4.67+0.17) 1.68(1.58+0.10) -65.4%
An invocation that filters, like listing all blobs (1006.4), does
benefit from using the bitmaps; it now doesn't have to check the type of
each object from the pack data, so the tradeoff is worth it.
But for listing all objects in sorted idx order (1006.2), we otherwise
would never open the bitmap nor the revindex file. Worse, our sorting
step gets much worse. Normally we append into an array in pack .idx
order, and the sort step is trivial. But with bitmaps, we get the
objects in pack order, which is apparently random with respect to oid,
and have to sort the whole thing. (Note that this freshly-packed state
represents the best case for .idx sorting; if we had two packs, then
we'd have their objects one after the other and qsort would have to
interleave them).
The unsorted test in 1006.3 is interesting: there we are going in pack
order, so we load the revindex for the pack anyway. And though we don't
sort the result, we do use an oidset to check for duplicates. So we can
see in the 8002e8ee18^ timings that those two things cost ~1.5s over the
sorted case (mostly the oidset hash cost). We also incur the extra cost
to open the bitmap file as of 8002e8ee18, which seems to be ~400ms.
(This would probably be faster with a bitmap lookup table, but writing
that out is not yet the default).
So we know that bitmaps help when there's filtering to be done, but
otherwise make things worse. Let's only use them when there's a filter.
The perf script shows that we've fixed the regressions without hurting
the bitmap case:
Test 8002e8ee18^ 8002e8ee18 HEAD
--------------------------------------------------------------------------------------------------------
1006.2: list all objects (sorted) 1.56(1.53+0.03) 6.44(6.37+0.06) +312.8% 1.62(1.54+0.06) +3.8%
1006.3: list all objects (unsorted) 3.04(2.98+0.06) 3.45(3.38+0.07) +13.5% 3.04(2.99+0.04) +0.0%
1006.4: list blobs 5.14(4.98+0.15) 1.76(1.68+0.06) -65.8% 1.73(1.64+0.09) -66.3%
Note that there's another related case: we might have a filter that
cannot be used with bitmaps. That check is handled already for us in
for_each_bitmapped_object(), though we'd still load the bitmap and
revindex files pointlessly in that case. I don't think it can happen in
practice for cat-file, though, since it allows only blob:none,
blob:limit, and object:type filters, all of which work with bitmaps.
It would be easy-ish to insert an extra check like:
can_filter_bitmap(&opt->objects_filter);
into the conditional, but I didn't bother here. It would be redundant
with the call in for_each_bitmapped_object(), and the can_filter helper
function is local (static) to the bitmap code (so we'd have to make it
public).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you run:
GIT_PERF_LARGE_REPO=/some/path ./p1006-cat-file.sh
it will use the repo in /some/path. But if you use the "run" helper
script to aggregate and compare results, like this:
GIT_PERF_LARGE_REPO=/some/path ./run HEAD^ HEAD p1006-cat-file.sh
it will ignore that variable. This is because the presence of the
LARGE_REPO variable in GIT-BUILD-OPTIONS overrides what's in the
environment. This started with 4638e8806e (Makefile: use common template
for GIT-BUILD-OPTIONS, 2024-12-06), which now writes even empty
variables (though arguably it was wrong even before with a non-empty
value, as we generally prefer the environment to take precedence over
on-disk config).
We had the same problem in perf-lib.sh itself, and we hacked around it
with 32b74b9809 (perf: do allow `GIT_PERF_*` to be overridden again,
2025-04-04). That's what lets the direct invocation of "./p1006" work
above.
And in fact that was sufficient for "./run", too, until it started
loading GIT-BUILD-OPTIONS itself in 5756ccd181 (t/perf: fix benchmarks
with out-of-tree builds, 2025-04-28). Now it has the same problem: it
clobbers any incoming GIT_PERF options from the environment.
We can use the same hack here in the "run" script. It's quite ugly, but
it's just short enough that I don't think it's worth trying to factor it
out into a common shell library.
In the long run, we might consider teaching GIT-BUILD-OPTIONS to be more
gentle in overwriting existing entries. There are probably other
GIT_TEST_* variables which would need the same treatment. And if and
when we come up with a more complete solution, we can use it in both
spots.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using the perf suite's "run" helper in a vanilla build fails like this:
$ make && (cd t/perf && ./run p0000-perf-lib-sanity.sh)
=== Running 1 tests in this tree ===
perf 1 - test_perf_default_repo works: 1 2 3 ok
perf 2 - test_checkout_worktree works: 1 2 3 ok
ok 3 - test_export works
perf 4 - export a weird var: 1 2 3 ok
perf 5 - éḿíẗ ńöń-ÁŚĆÍÍ ćḧáŕáćẗéŕś: 1 2 3 ok
ok 6 - test_export works with weird vars
perf 7 - important variables available in subshells: 1 2 3 ok
perf 8 - test-lib-functions correctly loaded in subshells: 1 2 3 ok
# passed all 8 test(s)
1..8
cannot open test-results/p0000-perf-lib-sanity.subtests: No such file or directory at ./aggregate.perl line 159.
It is trying to aggregate results written into t/perf/test-results, but
the p0000 script did not write anything there.
The "run" script looks in $TEST_OUTPUT_DIRECTORY/test-results, or if
that variable is not set, in test-results in the current working
directory (which should be t/perf itself). It pulls the value of
$TEST_OUTPUT_DIRECTORY from the GIT-BUILD-OPTIONS file.
But that doesn't quite match the setup in perf-lib.sh (which is what
scripts like p0000 use). There we do this at the top of the script:
TEST_OUTPUT_DIRECTORY=$(pwd)
and then let test-lib.sh append "/test-results" to that. Historically,
that made the vanilla case work: we'd always use t/perf/test-results.
But when $TEST_OUTPUT_DIRECTORY was set, it would break.
Commit 5756ccd181 (t/perf: fix benchmarks with out-of-tree builds,
2025-04-28) fixed that second case by loading GIT-BUILD-OPTIONS
ourselves. But that broke the vanilla case!
Now our setting of $TEST_OUTPUT_DIRECTORY in perf-lib.sh is ignored,
because it is overwritten by GIT-BUILD-OPTIONS. And when test-lib.sh
sees that the output directory is empty, it defaults to t/test-results,
rather than t/perf/test-results.
Nobody seems to have noticed, probably for two reasons:
1. It only matters if you're trying to aggregate results (like the
"run" script does). Just running "./p0000-perf-lib-sanity.sh"
manually still produces useful output; the stored result files are
just in an unexpected place.
2. There might be leftover files in t/perf/test-results from previous
runs (before 5756ccd181). In particular, the ".subtests" files
don't tend to change, and the lack of that file is what causes it
to barf completely. So it's possible that the aggregation could
have been showing stale results that did not match the run that
just happened.
We can fix it by setting TEST_OUTPUT_DIRECTORY only after we've loaded
GIT-BUILD-OPTIONS, so that we override its value and not the other way
around. And we'll do so only when the variable is not set, which should
retain the fix for that case from 5756ccd181.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
From user feedback:
- Continued confusion about the terms "tree-ish" and "pathspec"
- The word "hunks" is confusing folks, use "changes" instead.
- On the part about `git restore`, there were a few comments to the
effect of "wait, this doesn't actually update any files? What? Why?"
Be more direct that `git reset` does not update files: there's no
obvious reason to suggest that folks use `git reset` followed by `git
restore`; instead, suggest just using `git restore`.
Continue avoiding the use of the word "reset" to
describe what "git reset" does.
Signed-off-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
From user feedback, there was some confusion about the differences
between the modes, including:
1. Sometimes it says "index" and sometimes "index file".
Fix by replacing "index file" with "index".
2. Many comments about not being able to understand what `--merge` does.
Fix by mentioning obscure situations, since that seems to be what
it's for. Most folks will use `git <cmd> --abort`.
3. Issues telling the difference between --soft and --mixed, as well as
--keep. Leave --keep alone because I couldn't understand its use case,
but change `--soft` / `--mixed` / `--hard` as follows:
--mixed is the default, so put it first.
Describe --soft/--mixed/--hard with the following structure:
* Start by saying what happens to the files in the working directory,
because the thing users want to avoid most is irretrievably losing
changes to their working directory files.
* Then describe what happens to the staging area. Right now it seems to
frame leaving the index alone as being a sort of neutral action.
I think this is part of what's confusing users, because in Git when
you update HEAD, Git almost always updates the index to match HEAD.
So leaving the index unchanged while updating HEAD is actually quite
unusual, and it deserves to be flagged.
* Finally, give an example for --soft to explain a common use case.
Signed-off-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
From user feedback, there were several points of confusion:
- What "tree-ish", "entries", "working tree", "HEAD", and "index" mean
("I have no clue what the index is", "I've been using git for 20 years
and still don't know what a tree-ish is"). Avoid using these terms
where it makes sense.
- What "optionally modifying index and working tree to match" means
("to match what?" "optionally based on what?")
Remove this from the intro, we can say it later when giving more
details.
- One user suggested that "The <tree-ish>/<commit> defaults to HEAD
in all forms." should be repeated later on, since it's easy to miss.
Instead say that HEAD is the default in each case later.
Another issue is that `git reset` consistently describes the action
it performs as "Reset ..." (commands should not use their own name to
describe themselves), and that the word "mode" is used to mean several
different things on this page.
Address these by being more clear about two use cases for `git reset`
("to undo operations" and "to update staged files"), and explaining what
the conditions are for each case instead of forcing the user to figure
out what the pattern is in the first form vs. the other three forms.
Signed-off-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
From user feedback: three users commented that the `git reset [mode]`
form is the one that they primarily use, and that they were surprised to
see it listed last
("I've never used git reset in any mode other than --hard").
Move it to be first, since the `git reset [mode]` form is what
"Reset current HEAD to the specified state" at the beginning refers
to, and because the `git reset [mode]` form is the only thing that
`git reset` uniquely does, the others could also be done with
`git restore`.
Signed-off-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There isn’t much test coverage for basic failure conditions. Let’s add
a few more since these are simple to write and remove if they become
obsolete.
Helped-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`parse_object` can return `NULL`. That will in turn make
`repo_peel_to_type` return the same.
Let’s die fast and descriptively with the `*_or_die` variant.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Giving an invalid commit-ish to `--onto` makes git-replay(1) fail with:
fatal: Replaying down to root commit is not supported yet!
Going backwards from this point:
1. `onto` is `NULL` from `set_up_replay_mode`;
2. that function in turn calls `peel_committish`; and
3. here we return `NULL` if `repo_get_oid` fails.
Let’s die immediately with a descriptive error message instead.
Doing this also provides us with a descriptive error if we “forget” to
provide an argument to `--onto` (but really do provide one
unintentionally):[1]
$ git replay --onto ^main topic1
fatal: '^main' is not a valid commit-ish
Note that the `--advance` case won’t be triggered in practice because
of the “argument to --advance must be a reference” check (see the
previous test, and commit).
† 1: The argument to `--onto` is mandatory and the option parser accepts
both `--onto=<name>` (stuck form) and `--onto name`. The latter
form makes it easy to unintentionally pass something to the option
when you really meant to pass a positional argument.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We are about to make `peel_committish` die when it cannot find
a commit-ish instead of returning `NULL`. But that would make e.g.
`git replay --advance=refs/non-existent` die with a less descriptive
error message; the highest-level error message is that the name does
not exist as a ref, not that we cannot find a commit-ish based on
the name.
Let’s try to find the ref and only after that try to peel the name to
a commit-ish.
Also add a regression test to protect this error order from future
modifications.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
22d99f01 (replay: add --advance or 'cherry-pick' mode, 2023-11-24) both
added `--advance` and made one of `--onto` or `--advance` mandatory.
But `determine_replay_mode` claims that there is a third alternative,
where neither `--onto` nor `--advance` was given:
	if (onto_name) {
		...
	} else if (*advance_name) {
		...
	} else {
		...
	}
But this is false—the fallthrough else-block is dead code.
Commit 22d99f01 was iterated upon by several people.[1] The initial
author wrote code for a sort of *guess mode*, allowing for shorter
commands when that was possible. But the next person instead made one
of the aforementioned options mandatory. In turn this code was dead on
arrival in git.git.
[1]: https://lore.kernel.org/git/CABPp-BEcJqjD4ztsZo2FTZgWT5ZOADKYEyiZtda+d0mSd1quPQ@mail.gmail.com/
Let’s remove this code. We can also join the if-block with the
condition `!*advance_name` into the `*onto` block since we do not set
`*advance_name` in this function. It only looked like we might set it
since the dead code has this line:
*advance_name = xstrdup_or_null(last_key);
Let’s also rename the function since we do not determine the
replay mode here. We just set up `*onto` and refs to update.
Note that there might be more dead code caused by this *guess mode*.
We only concern ourselves with this function for now.
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git status" on a branch that follows a remote branch compares
commits on the current branch and the remote-tracking branch it
builds upon, to show "ahead", "behind", or "diverged" status.
When working on a feature branch that tracks a remote feature branch
while also wanting to track progress relative to the push destination
tracking branch (which may differ from the upstream branch), git status
now shows an additional comparison.
When the upstream tracking branch differs from the push destination
tracking branch, git status shows both the comparison with the upstream
tracking branch (as before) and an additional comparison with the push
destination tracking branch. The push branch comparison appears on a
separate line after the upstream branch status, using the same format.
Example output when tracking origin/main but push destination is
origin/feature:
On branch feature
Your branch and 'origin/main' have diverged,
and have 3 and 1 different commits each, respectively.
(use "git pull" if you want to integrate the remote branch with yours)
Your branch is ahead of 'origin/feature' by 1 commit.
(use "git push" to publish your local commits)
The comparison is only shown when the push destination tracking branch
differs from the upstream tracking branch, even if they are on the same
remote.
Signed-off-by: Harald Nordgren <haraldnordgren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor format_branch_comparison function in preparation for showing
comparison with push remote tracking branch.
Signed-off-by: Harald Nordgren <haraldnordgren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace `test -f` and `test -h` checks with `test_path_is_file` and
`test_path_is_symlink`. Using the test framework helpers provides
clearer diagnostics and keeps tests consistent across the suite.
Signed-off-by: Pushkar Singh <pushkarkumarsingh1970@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When handling multiple repositories within the same process, relying on
global state for accessing the "core.attributesFile" configuration can
lead to incorrect values being used. It also makes it harder to isolate
repositories and hinders the libification of git.
The functions `bootstrap_attr_stack()` and `git_attr_val_system()`
retrieve "core.attributesFile" via `git_attr_global_file()`
which reads from global state `git_attributes_file`.
Move the "core.attributesFile" configuration into the
`struct repo_settings` instead of relying on the global state.
A new function `repo_settings_get_attributesfile_path()` is added
and used to retrieve this setting in a repository-scoped manner; the
existing lookups of "core.attributesFile" are replaced with this new
accessor. This improves multi-repository behaviour and aligns with the
goal of libifying Git.
Note that in `bootstrap_attr_stack()`, the `index_state` is used only
if it exists, else we default to `the_repository`.
Based-on-patch-by: Ayush Chandekar <ayu.chandekar@gmail.com>
Mentored-by: Christian Couder <christian.couder@gmail.com>
Mentored-by: Usman Akinyemi <usmanakinyemi202@gmail.com>
Signed-off-by: Olamide Caleb Bello <belkid98@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Only the classic diff uses xdl_cleanup_records(). Move it,
xdl_clean_mmatch(), and the macros to xdiffi.c and call
xdl_cleanup_records() inside of xdl_do_classic_diff(). This better
organizes the code related to the classic diff.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Disentangle xdl_cleanup_records() from the classifier so that it can be
moved from xprepare.c into xdiffi.c.
The classic diff is the only algorithm that needs to count the number
of times each line occurs in each file. Make xdl_cleanup_records()
count the number of lines instead of the classifier so it won't slow
down patience or histogram.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
View with --color-words. Same argument as delta_start.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Placing delta_start in xdfenv_t instead of xdfile_t provides a more
appropriate context since this variable only makes sense with a pair
of files. View with --color-words.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch is best viewed with a before and after of the whole
function.
Rather than using two pointers and walking them, use direct indexing
with local variables for what is being compared, to make it easier to
follow along.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
View with --color-words. Prepare these functions to use the fields:
delta_start, delta_end. A future patch will add these fields to
xdfenv_t.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The patience diff is set up the exact same way as histogram; see
xdl_do_histogram_diff() in xhistogram.c. xdl_optimize_ctxs() is
redundant now, so delete it.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All lines must be read anyway, so classify them after they're read in.
Also move the memset() into xdl_init_classifier().
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Later patches will prepare xdl_cleanup_records() to be moved into xdiffi.c
since only the classic diff uses that function.
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Trying to use Rust's Vec in C, or git's ALLOC_GROW() macros (via
wrapper functions) in Rust, is painful because:
* C doesn't define its own vector type, and even though Rust does
have Vec, it's painful to use on the C side (more on that below).
It is also not viable to simply use Rust's Vec type because Git
needs to be able to compile without Rust. So ivec was created
expressly to be interoperable between C and Rust without needing
Rust.
* C doing vector things the Rust way would require wrapper functions,
and Rust doing vector things the C way would require wrapper
functions, so ivec was created to ensure a consistent contract
between the two languages for how to manipulate a vector.
* Currently, Rust defines its own 'Vec' type that is generic, but its
memory allocator and struct layout weren't designed for
interoperability with C (or any language for that matter), meaning
that the C side cannot push to or expand a 'Vec' without defining
wrapper functions in Rust that C can call. Without special care,
the two languages might use different allocators (malloc/free on
the C side, and possibly something else in Rust), which would make
it difficult for a function in one language to free elements
allocated by a call from a function in the other language.
* Similarly, git defines ALLOC_GROW() and related macros in
git-compat-util.h. While we could add functions allowing Rust to
invoke something similar to those macros, passing three variables
(pointer, length, allocated_size) instead of a single variable
(vector) across the language boundary requires more cognitive
overhead for readers to keep track of and makes it easier to make
mistakes. Further, for low-level components that we want to
eventually convert to pure Rust, such triplets would feel very out
of place.
To address these issues, introduce a new type, ivec -- short for
interoperable vector. (We refer to it as 'ivec' generally, though on
the Rust side the struct is called IVec to match Rust style.) This new
type is specifically designed for FFI purposes, so that both languages
handle the vector in the same way, though it could be used on either
side independently. This type is designed such that it can easily be
replaced by a Rust 'Vec' once interoperability is no longer a concern.
One particular item to note is that Git's macros to handle vec
operations infer the amount that a vec needs to grow from the type of
the pointer (via sizeof), but that makes it somewhat specific to the
macros used in C.
To avoid defining every ivec function as a macro I opted to also
include an element_size field that allows concrete functions like
push() to know how much to grow the memory. This element_size also
helps in verifying that the ivec is correct when passing from C to
Rust.
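Conceptually, an ivec bundles the usual triplet together with the
element size. A rough sketch; the actual field names in the patch may
differ:

	struct ivec {
		void *ptr;           /* element storage */
		size_t length;       /* number of elements in use */
		size_t capacity;     /* number of elements allocated */
		size_t element_size; /* bytes per element, so that push()
					knows how far to grow */
	};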
Signed-off-by: Ezekiel Newren <ezekielnewren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "difftool --dir-diff syncs worktree without unstaged change" test
fails intermittently on Windows CI, as seen at:
https://github.com/git/git/actions/runs/20624095002/job/59231745784#step:5:416
The root cause is that the original file content and the replacement
content have identical sizes:
- Original: "main\ntest\na\n" = 12 bytes
- New: "new content\n" = 12 bytes
When difftool's sync-back mechanism checks for changes, it compares
stat data between the temporary index and the modified files. If the
modification happens within the same timestamp granularity window and
file size stays the same, the change goes undetected.
On Windows, this is more likely to manifest because Git relies on
inode changes as a fallback when other stat fields match, but Windows
filesystems lack inodes. This is a real bug that could affect users
scripting difftool similarly, as seen at:
https://github.com/git-for-windows/git/issues/5132
Fix the test by changing the replacement content to "modified content"
(17 bytes), ensuring the size difference is detected regardless of
timestamp resolution or platform-specific stat behavior.
Note: This fixes the test flakiness but not the underlying issue in
difftool's change detection. Other tests with same-size file patterns
(t0010-racy-git.sh, t2200-add-update.sh) are not affected because they
use normal index operations with proper racy-git detection.
Signed-off-by: Paul Tarjan <github@paulisageek.com>
Reviewed-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fsmonitor event tests (edit, create, delete, rename, etc.) were
flaky because there can be a race between the daemon writing events
to the trace file and the test's grep commands checking for them.
Add a retry_grep() helper function (similar to retry_until_success
in lib-git-p4.sh) that retries grep with a timeout, and use it in
all event-checking tests to wait for one expected event before
checking the rest.
Signed-off-by: Paul Tarjan <github@paulisageek.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Implement fsmonitor for Linux using the inotify API, bringing it to
feature parity with existing Windows and macOS implementations.
The Linux implementation uses inotify to monitor filesystem events.
Unlike macOS's FSEvents which can watch a single root directory,
inotify requires registering watches on every directory of interest.
The implementation carefully handles directory renames and moves
using inotify's cookie mechanism to track IN_MOVED_FROM/IN_MOVED_TO
event pairs.
Key implementation details:
- Uses inotify_init1(O_NONBLOCK) for non-blocking event monitoring
- Maintains bidirectional hashmaps between watch descriptors and paths
for efficient event processing
- Handles directory creation, deletion, and renames dynamically
- Detects remote filesystems (NFS, CIFS, SMB, etc.) via statfs()
- Falls back to $HOME/.git-fsmonitor-* for socket when .git is remote
- Creates batches lazily (only for actual file events, not cookies)
to avoid spurious sequence number increments
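The registration step described above boils down to the standard
inotify calls, roughly as in this illustrative fragment (not the actual
daemon code):

	int fd = inotify_init1(IN_NONBLOCK);
	if (fd < 0)
		die_errno("inotify_init1 failed");
	/* Unlike FSEvents, every directory of interest needs its own watch. */
	int wd = inotify_add_watch(fd, path,
				   IN_CREATE | IN_DELETE | IN_MODIFY |
				   IN_MOVED_FROM | IN_MOVED_TO);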
Build configuration:
- Enabled via FSMONITOR_DAEMON_BACKEND=linux and FSMONITOR_OS_SETTINGS=linux
- Requires NO_PTHREADS and NO_UNIX_SOCKETS to be unset
- Adds HAVE_LINUX_MAGIC_H for filesystem type detection
Documentation updated to note that fsmonitor.socketDir is now supported
on both Mac OS and Linux, and adds a section about inotify watch limits.
Testing performed:
- Build succeeds with standard flags and SANITIZE=address
- All t7527-builtin-fsmonitor.sh tests pass on local filesystems
- Remote filesystem detection correctly rejects network mounts
Issues addressed from PR #1352 (git/git) review comments:
- GPLv3 ME_REMOTE macro: Rewrote remote filesystem detection from
scratch using statfs() and linux/magic.h constants (no GPLv3 code)
- Memory leak on inotify_init1 failure: Added FREE_AND_NULL cleanup
- Unsafe hashmap iteration in dtor: Collect entries first, then modify
- Missing null checks in stop_async: Added proper guard conditions
- dirname() modifying argument: Create copy with xstrdup() first
- Non-portable f_fsid.__val: Use memcmp() for fsid comparison
- Missing worktree null check: Added BUG() for null worktree
- Header updates: Use git-compat-util.h, hash_to_hex_algop()
- Code style: Use xstrdup() not xmemdupz(), proper pointer style
Issues addressed from PR #1667 (git/git) review comments:
- EINTR handling: read() now handles both EAGAIN and EINTR
- Trailing pipe in log_mask_set: Added strbuf_strip_suffix()
- Unchecked add_watch return: Now logs failure in rename_dir()
- String building: Consolidated strbuf operations with strbuf_addf()
- Translation markers: Added _() to all error_errno() messages
Based on work from https://github.com/git/git/pull/1352 by Eric DeCosta,
and https://github.com/git/git/pull/1667 by Marziyeh Esipreh, updated
to work with the current codebase and address all review feedback.
Signed-off-by: Paul Tarjan <github@paulisageek.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`check_aliased_update_internal()` detects when a symbolic ref and its
target are being updated in the same push. It does this by building a
list of ref names without the optional namespace prefix. When a symbolic
ref within a namespace points to a ref outside the namespace,
`strip_namespace()` returns NULL which leads to a segfault.
A NULL check preventing this particular issue was repurposed in
ded8393610. Rather than reintroducing it, we can instead build a list of
fully qualified ref names. This prevents the crash, preserves the
consistency check from da3efdb17b, and allows updates to all symbolic
refs.
Signed-off-by: Troels Thomsen <troels@thomsen.io>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function for increasing the capacity of a commit_stack. It is
useful for reducing reallocations when the target size is known in
advance.
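Such a helper could be a thin wrapper around the existing ALLOC_GROW()
machinery, along these lines (the function name here is an assumption,
not taken from the patch):

	void commit_stack_reserve(struct commit_stack *stack, size_t extra)
	{
		ALLOC_GROW(stack->items, stack->nr + extra, stack->alloc);
	}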
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function for initializing a struct commit_stack, for when static
initialization is not possible or impractical.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Call commit_stack functions instead of effectively open-coding them.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simplify the code by using commit_stack instead of open-coding it.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simplify collecting commits in a callback function by passing it a
commit_stack pointer all the way from the caller, instead of using
separate variables for array and item count and a bunch of intermediate
members in struct bitmap_commit_cb.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Calling commit_stack_push() to add commits is simpler and more efficient
than using REALLOC_ARRAY. Calling commit_stack_pop() to consume them in
LIFO order is also a tad simpler than calculating the array index from
the end.
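For illustration, the resulting pattern looks roughly like this sketch
of typical commit_stack usage (not the converted call site itself):

	struct commit_stack stack = { 0 };
	commit_stack_push(&stack, commit);
	while ((commit = commit_stack_pop(&stack)) != NULL)
		; /* process commits in LIFO order */
	commit_stack_clear(&stack);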
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Dynamic arrays of commit pointers are used in several places. Some of
them use a custom struct to hold array, item count and capacity, others
have them as separate variables linked by a common name part.
Pick one succinct, clean implementation -- commit_stack -- and convert
the different variants to it to reduce code duplication.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previous commits have set up an infrastructure for `--filter=auto` to
automatically prepare a partial clone filter based on what the server
advertised and the client accepted.
Using that infrastructure, let's now enable the `--filter=auto` option
in `git clone` and `git fetch` by setting `allow_auto_filter` to 1.
Note that these small changes mean that when `git clone --filter=auto`
or `git fetch --filter=auto` are used, "auto" is automatically saved
as the partial clone filter for the server on the client. Therefore
subsequent calls to `git fetch` on the client will automatically use
this "auto" mode even without `--filter=auto`.
Let's also set `allow_auto_filter` to 1 in `transport.c`, as the
transport layer must be able to accept the "auto" filter spec even if
the invoking command hasn't fully parsed it yet.
When an "auto" filter is requested, let's have the "fetch-pack.c" code
in `do_fetch_pack_v2()` compute a filter and send it to the server.
In `do_fetch_pack_v2()` the logic also needs to check for the
"promisor-remote" capability and call `promisor_remote_reply()` to
parse advertised remotes and populate the list of those accepted (and
their filters).
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, advertised filters are only kept in memory temporarily
during parsing, or persisted to disk if `promisor.storeFields`
contains 'partialCloneFilter'.
In a following commit though, we will add a `--filter=auto` option.
This option will enable the client to use the filters that the server
is suggesting for the promisor remotes the client accepts.
To use them even if `promisor.storeFields` is not configured, these
filters should be stored somewhere for the current session.
Let's add an `advertised_filter` field to `struct promisor_remote`
for that purpose.
To ensure that the filters are available in all cases,
filter_promisor_remote() captures them into a temporary list and
applies them to the `promisor_remote` structs after the potential
configuration reload.
Then the accepted remotes are marked as `accepted` in the repository
state. This ensures that subsequent calls to look up accepted remotes
(like in the filter construction below) actually find them.
In a following commit, we will add a `--filter=auto` option that will
enable a client to use the filters suggested by the server for the
promisor remotes the client accepted.
To enable the client to construct a filter spec based on these filters,
let's add a `promisor_remote_construct_filter(repo)` function.
This function:
- iterates over all accepted promisor remotes in the repository,
- collects the filters advertised for them (using `advertised_filter`
which a previous commit added to `struct promisor_remote`), and
- generates a single filter spec for them (using the
`list_objects_filter_combine()` function added by a previous commit).
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a following commit, we will need to aggregate filters from multiple
accepted promisor remotes into a single filter.
For that purpose, let's add a `list_objects_filter_combine()` helper
function that takes a list of filter specifications and combines them
into a single string. If multiple filters are provided, it constructs a
"combine:..." filter, ensuring that sub-filters are properly
URL-encoded using the existing `allow_unencoded` logic.
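For example (illustrative only), combining the "blob:none" and "tree:0"
filters would produce a spec along the lines of:

	combine:blob:none+tree:0

with any characters that are special in this syntax (such as '+' or '%')
URL-encoded inside the sub-filters.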
In a following commit, we will add a `--filter=auto` option that will
enable a client to use the filters suggested by the server for the
promisor remotes the client accepted.
To simplify the filter processing related to this new feature, let's
also add a small `list_objects_filter_resolve_auto()` function.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a following commit, we are going to allow passing "auto" as a
<filterspec> to the `--filter=<filterspec>` option, but only for some
commands. Other commands that support the `--filter=<filterspec>`
option should still die() when 'auto' is passed.
Let's set up the "list-objects-filter-options.{c,h}" infrastructure to
support that:
- Add a new `unsigned int allow_auto_filter : 1;` flag to
`struct list_objects_filter_options` which specifies if "auto" is
accepted or not.
- Change gently_parse_list_objects_filter() to parse "auto" if it's
accepted.
- Make sure we die() if "auto" is combined with another filter.
- Update list_objects_filter_release() to preserve the
allow_auto_filter flag, as this function is often called (via
opt_parse_list_objects_filter) to reset the struct before parsing a
new value.
Let's also update `list-objects-filter.c` to recognize the new
`LOFC_AUTO` choice. Since "auto" must be resolved to a concrete filter
before filtering actually begins, initializing a filter with
`LOFC_AUTO` is invalid and will trigger a BUG().
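Conceptually, the parsing change in gently_parse_list_objects_filter()
amounts to something like this sketch (the error message wording is an
assumption):

    /* Sketch: accept "auto" only if the caller opted in. */
    if (!strcmp(arg, "auto")) {
        if (!filter_options->allow_auto_filter) {
            strbuf_addf(errbuf, _("invalid filter-spec '%s'"), arg);
            return 1;
        }
        filter_options->choice = LOFC_AUTO;
        return 0;
    }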
Note that ideally combining "auto" with "auto" could be allowed, but in
practice, it's probably not worth the added code complexity. And if we
really want it, nothing prevents us from allowing it in future work.
If we ever want to give a meaning to combining "auto" with a different
filter too, nothing prevents us from doing that in future work either.
While at it, let's add a new "u-list-objects-filter-options.c" file for
`struct list_objects_filter_options` related unit tests. For now it
only tests gently_parse_list_objects_filter() though.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `--filter=<filter-spec>` option is documented in most commands that
support it except `git fetch`.
Let's fix that and document that option properly in the same way as it
is already documented for `git clone`.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `struct list_objects_filter_options filter_options` variable used
in "builtin/fetch.c" to store the parsed filters specified by
`--filter=<filterspec>` is currently a static variable global to the
file.
As we are going to use it more in a following commit, it could become
harder to understand how it is managed.
To avoid that, let's make it clear that it's owned by cmd_fetch() by
moving its definition into that function and making it non-static.
This requires passing a pointer to it through the prepare_transport(),
do_fetch(), backfill_tags(), fetch_one_setup_partial(), and fetch_one()
functions, but it's quite straightforward.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `struct list_objects_filter_options filter_options` variable used
in "builtin/clone.c" to store the parsed filters specified by
`--filter=<filterspec>` is currently a static variable global to the
file.
As we are going to use it more in a following commit, it could become
harder to understand how it is managed.
To avoid that, let's make it clear that it's owned by cmd_clone() by
moving its definition into that function and making it non-static.
The only additional change to make this work is to pass it as an
argument to checkout(). So it's a small and quite cheap cleanup anyway.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A previous commit allowed a server to pass additional fields through
the "promisor-remote" protocol capability after the "name" and "url"
fields, specifically the "partialCloneFilter" and "token" fields.
Another previous commit, c213820c51 (promisor-remote: allow a client
to check fields, 2025-09-08), has made it possible for a client to
decide if it accepts a promisor remote advertised by a server based
on these additional fields.
Often, though, it would be useful for the client to simply store these
additional fields passed by the server in its configuration files, so
that it can use them when needed.
For example if a token is necessary to access a promisor remote, that
token could be updated frequently only on the server side and then
passed to all the clients through the "promisor-remote" capability,
avoiding the need to update it on all the clients manually.
Storing the token on the client side makes sure that the token is
available when the client needs to access the promisor remotes for a
lazy fetch.
In the same way, if it appears that it's better to use a different
filter to access a promisor remote, it could be helpful if the client
could automatically use it.
To allow this, let's introduce a new "promisor.storeFields"
configuration variable.
Like "promisor.checkFields" and "promisor.sendFields", it should
contain a comma or space separated list of field names. Only the
"partialCloneFilter" and "token" field names are supported for now.
When a server advertises a promisor remote, for example "foo", along
with for example "token=XXXXX" to a client, and on the client side
"promisor.storeFields" contains "token", then the client will store
XXXXX for the "remote.foo.token" variable in its configuration file
and reload its configuration so it can immediately use this new
configuration variable.
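For illustration (all values hypothetical), a client configured like
this:

    [promisor]
        acceptFromServer = all
        storeFields = token

would, after fetching from such a server, end up with the advertised
token persisted in its configuration:

    [remote "foo"]
        token = XXXXX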
A message is emitted on stderr to warn users when the config is
changed.
Note that even if "promisor.acceptFromServer" is set to "all", a
promisor remote has to be already configured on the client side for
some of its config to be changed. In any case no new remote is
configured and no new URL is stored.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In "promisor-remote.c", the fields_sent() and fields_checked()
functions serve similar purposes and contain a small amount of
duplicated code.
As we are going to add a similar function in a following commit,
let's refactor this common code into a new initialize_fields_list()
function.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- Switch the synopsis to a synopsis block which will automatically
format placeholders in italics and keywords in monospace
- Use _<placeholder>_ instead of <placeholder> in the description
- Use `backticks` for keywords and more complex option
descriptions. The new rendering engine will apply synopsis rules to
these spans.
- also convert first sentences to imperative mood where applicable
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of plain text tables with hand formatting, take advantage of
asciidoc's table syntax to let the renderer do the heavy lifting and
make the tables more maintainable and translatable.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Also convert unformatted lists to proper AsciiDoc lists.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In case there are multiple synopsis blocks (declared with [synopsis]
or [verse] style) in the same file, the previous implementation was
incorrectly picking up text from all the blocks until the first empty
line. This commit modifies the sed command to stop processing upon
encountering the first empty line after the first block declaration,
thereby ensuring that only the intended block is captured.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use 'prune' instead of 'expire' when describing the --expire option's
effect on missing worktrees, since the terminology is clearer.
Signed-off-by: Sam Bostock <sam@sambostock.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `--expire` option for `git worktree list` and `git worktree prune`
only affects worktrees whose working directory path no longer exists.
The help text did not make this clear, and the documentation
inconsistently used "unused" for prune but "missing" for list.
This updates the help text and documentation to consistently describe
these as "missing worktrees".
Signed-off-by: Sam Bostock <sam@sambostock.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a bundle list config file has a typo like 'url' instead of 'uri',
or simply omits the uri field, the bundle entry is created but
bundle->uri remains NULL. This causes a segfault when copy_uri_to_file()
passes the NULL to starts_with().
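The fix boils down to a defensive check along these lines (a sketch, not
the exact patch):

    /* Sketch: reject bundle list entries whose uri was never set. */
    if (!bundle->uri) {
        warning(_("bundle list entry '%s' lacks a uri"), bundle->id);
        return -1; /* skip instead of dereferencing NULL */
    }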
Signed-off-by: Sam Bostock <sam@sambostock.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Even though control sequences that erase characters are quite juicy for
attack scenarios, where attackers are eager to hide traces of suspicious
activities, during the review of the side band sanitizing patch series
concerns were raised that there might be some legitimate scenarios where
Git server's `pre-receive` hooks use those sequences in a benign way.
Control sequences to move the cursor can likewise be used to cover tracks
by overwriting characters, and have been equally pointed out as having
legitimate uses.
Let's add options to let users opt into passing through those ANSI
Escape sequences: `sideband.allowControlCharacters` now supports also
`cursor` and `erase`, and it parses the value as a comma-separated list.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The preceding two commits introduced special handling of the sideband
channel to neutralize ANSI escape sequences before sending the payload
to the terminal, and `sideband.allowControlCharacters` to override that
behavior.
However, as reported by brian m. carlson, some `pre-receive` hooks that
are actively used in practice want to color their messages and therefore
rely on the fact that Git passes them through to the terminal, even
though they have no way to determine whether the receiving side can
actually handle Escape sequences (think e.g. about the practice
recommended by Git that third-party applications wishing to use Git
functionality parse the output of Git commands).
In contrast to other ANSI escape sequences, it is highly unlikely that
coloring sequences can be essential tools in attack vectors that mislead
Git users e.g. by hiding crucial information.
Therefore we can have both: Continue to allow ANSI coloring sequences to
be passed to the terminal by default, and neutralize all other ANSI
Escape sequences.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The preceding commit fixed the vulnerability whereby sideband messages
(that are under the control of the remote server) could contain ANSI
escape sequences that would be sent to the terminal verbatim.
However, this fix may not be desirable under all circumstances, e.g.
when remote servers deliberately add coloring to their messages to
increase their urgency.
To help with those use cases, give users a way to opt-out of the
protections: `sideband.allowControlCharacters`.
Suggested-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The output of `git clone` is a vital component for understanding what
has happened when things go wrong. However, these logs are partially
under the control of the remote server (via the "sideband", which
typically contains what the remote `git pack-objects` process sends to
`stderr`), and is currently not sanitized by Git.
This makes Git susceptible to ANSI escape sequence injection (see
CWE-150, https://cwe.mitre.org/data/definitions/150.html), which allows
attackers to corrupt terminal state, to hide information, and even to
insert characters into the input buffer (i.e. as if the user had typed
those characters).
To plug this vulnerability, disallow any control character in the
sideband, replacing them instead with the common `^<letter/symbol>`
(e.g. `^[` for `\x1b`, `^A` for `\x01`).
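The replacement follows the familiar caret notation: a control byte maps
to '^' followed by the byte offset by 0x40. A sketch of the idea, where
the exact set of exempted characters is an assumption:

    /* Sketch: caret-escape one byte of sideband payload. */
    static void emit_sanitized_char(struct strbuf *out, unsigned char ch)
    {
        if (ch == 0x7f)
            strbuf_addstr(out, "^?");           /* DEL */
        else if (ch < 0x20 && ch != '\t' && ch != '\n' && ch != '\r')
            strbuf_addf(out, "^%c", ch + 0x40); /* e.g. ESC -> ^[ */
        else
            strbuf_addch(out, ch);
    }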
There is likely a need for more fine-grained controls than a "heavy
hammer" like this; those controls will be introduced subsequently.
Helped-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In https://github.com/git-for-windows/git/pull/2637, we fixed a bug
where symbolic links' target path sizes were recorded incorrectly in the
index. The downside of this fix was that every user with tracked
symbolic links in their checkouts would see them as modified in `git
status`, but not in `git diff`, and only a `git add <path>` (or `git add
-u`) would "fix" this.
Let's do better than that: we can detect that situation and simply
pretend that a symbolic link with a known bad size (or a size that just
happens to be that bad size, a _very_ unlikely scenario because it would
overflow our buffers due to the trailing NUL byte) means that it needs
to be re-checked as if we had just checked it out.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When creating directories via `safe_create_leading_directories()`, we
might encounter an already-existing directory which is not
readable by the current user. To handle that situation, Git's code calls
`stat()` to determine whether we're looking at a directory.
In such a case, `CreateFile()` will fail, though, no matter what, and
consequently `mingw_stat()` will fail, too. But POSIX semantics seem to
still allow `stat()` to go forward.
So let's call `mingw_lstat()` to the rescue if we fail to get a file
handle due to denied permission in `mingw_stat()`, and fill the stat
info that way.
We need to be careful to not allow this to go forward in case we're
looking at a symbolic link: to resolve the link, we would still have to
create a file handle, and we just found out that we cannot. Therefore,
`stat()` still needs to fail with `EACCES` in that case.
This fixes https://github.com/git-for-windows/git/issues/2531.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As of Windows 10 Build 14972 in Developer Mode, a new flag is supported
by `CreateSymbolicLink()` to create symbolic links even when running
outside of an elevated session (which was previously required).
This new flag is called `SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE`
and has the numeric value 0x02.
Previous Windows 10 versions will not understand that flag and return
an `ERROR_INVALID_PARAMETER`, therefore we have to be careful to try
passing that flag only when the build number indicates that it is
supported.
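In practice this amounts to something like the following sketch
(`windows_build_number()` is an assumed helper; how the build number is
actually obtained is left out):

    #ifndef SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE
    #define SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE (0x2)
    #endif

    /* Sketch: pass the new flag only on builds that know it. */
    DWORD flags = is_dir_link ? SYMBOLIC_LINK_FLAG_DIRECTORY : 0;
    if (windows_build_number() >= 14972)
        flags |= SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE;
    if (!CreateSymbolicLinkW(wlink, wtarget, flags))
        errno = err_win_to_posix(GetLastError());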
For more information about the new flag, see this blog post:
https://blogs.windows.com/buildingapps/2016/12/02/symlinks-windows-10/
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Symlinks on Windows have a flag that indicates whether the target is a
file or a directory. Symlinks of the wrong type simply don't work. This even
affects core Win32 APIs (e.g. `DeleteFile()` refuses to delete directory
symlinks).
However, `CreateFile()` with FILE_FLAG_BACKUP_SEMANTICS does work. Check
the target type by first creating a tentative file symlink, opening it,
and checking the type of the resulting handle. If it is a directory,
recreate the symlink with the directory flag set.
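The probe can be sketched like this (error handling trimmed):

    /*
     * Open the tentative symlink, following it, and check whether
     * the target is a directory; FILE_FLAG_BACKUP_SEMANTICS is
     * needed to open directories at all.
     */
    HANDLE h = CreateFileW(wlink, FILE_READ_ATTRIBUTES,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        BY_HANDLE_FILE_INFORMATION info;
        int is_dir = GetFileInformationByHandle(h, &info) &&
                (info.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY);
        CloseHandle(h);
        if (is_dir)
            ; /* delete and recreate with SYMBOLIC_LINK_FLAG_DIRECTORY */
    }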
It is possible to create symlinks before the target exists (or in case
of symlinks to symlinks: before the target type is known). If this
happens, create a tentative file symlink and postpone the directory
decision: keep a list of phantom symlinks to be processed whenever a new
directory is created in `mingw_mkdir()`.
Limitations: This algorithm may fail if a link target changes from file
to directory or vice versa, or if the target directory is created in
another process. It's the best Git can do, though.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Implement `symlink()`. This implementation always creates _file_
symlinks (remember: Windows discerns between symlinks pointing to
directories and those pointing to files). Support for directory symlinks
will be added in a subsequent commit.
This implementation fails with `ENOSYS` if symlinks are disabled or
unsupported.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Implement `readlink()` by reading NTFS reparse points via the
`read_reparse_point()` function that was introduced earlier to determine
the length of symlink targets. Works for symlinks and directory
junctions. If symlinks are disabled, fail with `ENOSYS`.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If symlinks are enabled, resolve all symlinks when changing directories,
as required by POSIX.
Note: Git's `real_path()` function bases its link resolution algorithm
on this property of `chdir()`. Unfortunately, the current directory on
Windows is limited to only MAX_PATH (260) characters. Therefore using
symlinks and long paths in combination may be problematic.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Older MSVCRT's `_wrename()` function cannot rename symlinks over
existing files: it returns success without doing anything. Newer
MSVCR*.dll versions probably do not share this problem: according to CRT
sources, they just call `MoveFileEx()` with the `MOVEFILE_COPY_ALLOWED`
flag.
Avoid the `_wrename()` call, and go with directly calling
`MoveFileEx()`, with proper error handling of course.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `_wunlink()` and `DeleteFileW()` functions refuse to delete symlinks
to directories on Windows; the error code would be `ERROR_ACCESS_DENIED`
in that case. Take that error code as an indicator that we need to try
`_wrmdir()` as well. In the best case, it will remove a symlink. In the
worst case, it will fail with the same error code again.
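Sketched, the fallback looks roughly like this (error handling
simplified):

    /* Sketch: if deletion is refused, it may be a directory symlink. */
    if (DeleteFileW(wpath))
        return 0;
    if (GetLastError() == ERROR_ACCESS_DENIED && !_wrmdir(wpath))
        return 0; /* it was a directory symlink after all */
    errno = err_win_to_posix(GetLastError());
    return -1;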
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Win32 API calls do not set `errno`; instead, error codes for failed
operations must be obtained via the `GetLastError()` function. Git would
not know what to do with those error values, though, which is why Git's
Windows compatibility layer translates them to `errno` values.
Let's handle a couple of symlink-related error codes that will become
relevant with the upcoming support for symlinks on Windows.
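The translation gains entries along these lines; which error codes map
to which `errno` values here is an assumption for illustration:

    /* Sketch: extend err_win_to_posix() with symlink-related codes. */
    switch (winerr) {
    case ERROR_PRIVILEGE_NOT_HELD:
        error = EPERM;  /* e.g. creating a symlink without privileges */
        break;
    case ERROR_NOT_A_REPARSE_POINT:
        error = EINVAL; /* e.g. readlink() on a non-symlink */
        break;
    /* ... */
    }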
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Symlinks on Windows don't work the same way as on Unix systems. For
example, there are different types of symlinks for directories and
files, and unless using a recent-ish Windows version in Developer Mode,
creating symlinks requires administrative privileges.
By default, disable symlink support on Windows. That is, users
explicitly have to enable it with `git config [--system|--global]
core.symlinks true`; for convenience, `git init` (and `git clone`)
will perform a test whether the current setup allows creating symlinks
and will configure that setting in the repository config.
The test suite ignores system / global config files. Allow
testing *with* symlink support by checking if native symlinks are
enabled in MSYS2 (via setting the special environment variable
`MSYS=winsymlinks:nativestrict` to ask the MSYS2 runtime to enable
creating symlinks).
Note: This assumes that Git's test suite is run in MSYS2's Bash, which
is true for the time being (an experiment to switch to BusyBox-w32
failed due to the experimental nature of BusyBox-w32).
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In several places, Git's Windows-specific code follows the pattern where
it tries to perform an operation, and retries several times when that
operation fails, sleeping an increasing amount of time, before finally
giving up and asking the user whether to retry (after, say, closing an
editor that held a handle to a file, preventing the operation from
succeeding).
This logic is a bit hard to use, and inconsistent:
`mingw_unlink()` and `mingw_rmdir()` duplicate the code to retry,
and both of them do so incompletely. They also do not restore `errno` if the
user answers 'no'.
Introduce a `retry_ask_yes_no()` helper function that handles retry with
small delay, asking the user, and restoring `errno`.
Note that in `mingw_unlink()`, we include the `_wchmod()` call in the
retry loop (which may fail if the file is locked exclusively).
In `mingw_rmdir()`, we include special error handling in the retry loop.
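The helper's shape might be sketched as follows (signature and delay
values are assumptions):

    /* Sketch: retry with growing delays, then ask; restore errno. */
    static int retry_ask_yes_no(int *tries, const char *question)
    {
        static const int delay[] = { 0, 1, 10, 20, 40 };
        int saved_errno = errno;

        if (*tries < ARRAY_SIZE(delay)) {
            /* grant the other process some time to let go */
            Sleep(delay[(*tries)++]);
            errno = saved_errno;
            return 1; /* retry */
        }
        if (ask_yes_no_if_possible("%s", question))
            return 1; /* the user wants us to try again */
        errno = saved_errno; /* report the original failure */
        return 0;
    }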
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
POSIX specifies that upon successful return from `lstat()`: "the
value of the st_size member shall be set to the length of the pathname
contained in the symbolic link not including any terminating null byte".
Git typically doesn't trust the `stat.st_size` member of symlinks (e.g.
see `strbuf_readlink()`). Therefore, it is tempting to save on the extra
overhead of opening and reading the reparse point merely to calculate
the exact size of the link target.
This is, in fact, what Git for Windows did, from May 2015 to May 2020.
At least almost: some functions take shortcuts if `st_size` is 0 (e.g.
`diff_populate_filespec()`), hence Git for Windows hard-coded the length
of all symlinks to MAX_PATH.
This did cause problems, though, specifically in Git repositories
that were also accessed by Git for Cygwin or Git for WSL. For example,
doing `git reset --hard` using Git for Windows would update the size of
symlinks in the index to be MAX_PATH; at a later time Git for Cygwin
or Git for WSL would find that symlinks have changed size during `git
status` and update the index. And then Git for Windows would think that
the index needs to be updated, even though the symlinks did not, in
fact, change. To avoid that, the correct size must be determined.
Signed-off-by: Bill Zissimopoulos <billziss@navimatics.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the `S_IFLNK` detection to `file_attr_to_st_mode()`.
Implement `DT_LNK` detection in dirent.c's `readdir()` function.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When obtaining lstat information for reparse points, we need to call
`FindFirstFile()` in addition to `GetFileAttributesEx()` to obtain
the type of the reparse point (symlink, mount point etc.). However,
currently there is no error handling whatsoever if `FindFirstFile()`
fails.
Call `FindFirstFile()` before modifying the `stat *buf` output parameter
and error out if the call fails.
Note: The `FindFirstFile()` return value includes all the data
that we get from `GetFileAttributesEx()`, so we could replace
`GetFileAttributesEx()` with `FindFirstFile()`. We don't do that because
`GetFileAttributesEx()` is about twice as fast for single files. I.e.
we only pay the extra cost of calling `FindFirstFile()` in the rare case
that we encounter a reparse point.
Please also note that the indentation of the remaining reparse point
code changed, and hence the best way to look at this diff is with
`--color-moved -w`. That code was _not_ moved because a subsequent
commit will move it to an altogether different function, anyway.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the new `mingw_stat()` implementation, `do_lstat()` is only called
from `mingw_lstat()` (with the function parameter `follow == 0`). Remove
the extra function and the old `mingw_stat()`-specific (`follow == 1`)
logic.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With respect to symlinks, the current `mingw_stat()` implementation is
almost identical to `mingw_lstat()`: except for the file type (`st_mode
& S_IFMT`), it returns information about the link rather than the target.
Implement `mingw_stat()` by opening the file handle requesting minimal
permissions, and then calling `GetFileInformationByHandle()` on it. This
way, all links are resolved by the Windows file system layer.
If symlinks are disabled, use `mingw_lstat()` as before, but fail with
`ELOOP` if a symlink would have to be resolved.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Win32 API function `GetFileAttributes()` cannot handle paths with
trailing dir separators. The current `mingw_stat()`/`mingw_lstat()`
implementation calls `GetFileAttributes()` twice if the path has
trailing slashes (first with the original path that was passed as
function parameter, and a second time with a path copy with trailing
'/' removed).
With the conversion to wide Unicode, we get the length of the path for
free, and also have a (wide char) buffer that can be modified. This
makes it easy to avoid that extraneous Win32 API call.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/prep-symlink-windows:
trim_last_path_component(): avoid hard-coding the directory separator
strbuf_readlink(): support link targets that exceed PATH_MAX
strbuf_readlink(): avoid calling `readlink()` twice in corner-cases
init: do parse _all_ core.* settings early
mingw: do resolve symlinks in `getcwd()`
t7800: work around the MSYS path conversion on Windows
t6423: introduce Windows-specific handling for symlinking to /dev/null
t1305: skip symlink tests that do not apply to Windows
t1006: accommodate for symlink support in MSYS2
t0600: fix incomplete prerequisite for a test case
t0301: another fix for Windows compatibility
t0001: handle `diff --no-index` gracefully
mingw: special-case `open(symlink, O_CREAT | O_EXCL)`
apply: symbolic links lack a "trustable executable bit"
t9700: accommodate for Windows paths
Currently, this function hard-codes the directory separator as the
forward slash.
However, on Windows the backslash character is valid, too. And we want
to call this function in the upcoming support for symlinks on Windows
with the symlink targets (which naturally use Windows's canonical
directory separator, i.e. the backslash, not the forward slash).
Prepare that function to be useful also in that context.
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `strbuf_readlink()` function refuses to read link targets that
exceed PATH_MAX (even if a sufficient size was specified by the caller).
As some platforms (*cough* Windows *cough*) support longer paths, remove
this restriction (similar to `strbuf_getcwd()`).
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `strbuf_readlink()` function calls `readlink()` twice if the hint
argument specifies the exact size of the link target (e.g. by passing
stat.st_size as returned by `lstat()`). This is necessary because
`readlink(..., hint) == hint` could mean that the buffer was too small.
Use `hint + 1` as buffer size to prevent this.
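The fix is essentially the following (a sketch of the relevant part of
`strbuf_readlink()`):

    /*
     * Ask for one byte more than the expected size: a result of
     * exactly `hint` bytes then proves the buffer was big enough,
     * avoiding a second readlink() call.
     */
    strbuf_grow(sb, hint + 1);
    len = readlink(path, sb->buf, hint + 1);
    if (len > 0 && len <= hint) {
        strbuf_setlen(sb, len);
        return 0;
    }
    /* len == hint + 1: the target is longer, grow and retry */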
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In Git for Windows, `has_symlinks` is set to 0 by default. Therefore, we
need to parse the config setting `core.symlinks` to know if it has been
set to `true`. In `git init`, we must do that before copying the
templates because they might contain symbolic links.
Even if the support for symbolic links on Windows has not made it to
upstream Git yet, we really should make sure that all the `core.*`
settings are parsed before proceeding, as they might very well change
the behavior of `git init` in a way the user intended.
This fixes https://github.com/git-for-windows/git/issues/3414
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As pointed out in https://github.com/git-for-windows/git/issues/1676,
the `git rev-parse --is-inside-work-tree` command currently fails when
the current directory's path contains symbolic links.
The underlying reason for this bug is that `getcwd()` is supposed to
resolve symbolic links, but our `mingw_getcwd()` implementation did not.
We do have all the building blocks for that, though: the
`GetFinalPathNameByHandleW()` function will resolve symbolic links. However,
we only called that function if `GetLongPathNameW()` failed, for
historical reasons: the latter function was supported for a long time,
but the former API function was introduced only with Windows Vista, and
we used to support also Windows XP. With that support having been
dropped, we are free to call the symbolic link-resolving function right
away.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/test-symlink-windows:
t7800: work around the MSYS path conversion on Windows
t6423: introduce Windows-specific handling for symlinking to /dev/null
t1305: skip symlink tests that do not apply to Windows
t1006: accommodate for symlink support in MSYS2
t0600: fix incomplete prerequisite for a test case
t0301: another fix for Windows compatibility
t0001: handle `diff --no-index` gracefully
mingw: special-case `open(symlink, O_CREAT | O_EXCL)`
apply: symbolic links lack a "trustable executable bit"
t9700: accommodate for Windows paths
* ps/object-read-stream:
streaming: drop redundant type and size pointers
streaming: move into object database subsystem
streaming: refactor interface to be object-database-centric
streaming: move logic to read packed objects streams into backend
streaming: move logic to read loose objects streams into backend
streaming: make the `odb_read_stream` definition public
streaming: get rid of `the_repository`
streaming: rely on object sources to create object stream
packfile: introduce function to read object info from a store
streaming: move zlib stream into backends
streaming: create structure for filtered object streams
streaming: create structure for packed object streams
streaming: create structure for loose object streams
streaming: create structure for in-core object streams
streaming: allocate stream inside the backend-specific logic
streaming: explicitly pass packfile info when streaming a packed object
streaming: propagate final object type via the stream
streaming: drop the `open()` callback function
streaming: rename `git_istream` into `odb_read_stream`
If users want to find out which keys are available, they need to
either check the documentation or ask for all the key-value pairs
using --all.
Add a new flag --keys for listing only the available keys without
listing the values.
Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a `FORMAT_DEFAULT` value to `enum output_format`. Change the initial
value of `format` to `FORMAT_DEFAULT` in `cmd_repo_info()`, indicating
that the user hasn't explicitly chosen a format. Also map the string
"default" to this new value in `parse_format_cb`, allowing future
patches to add support for --format=default.
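Sketched, with the other enumerators being assumptions:

    enum output_format {
        FORMAT_DEFAULT,        /* no --format option seen (yet) */
        FORMAT_KEYVALUE,
        FORMAT_NUL_TERMINATED,
    };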
Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* lo/repo-struct-z:
repo: add -z as an alias for --format=nul to git-repo-structure
repo: use [--format=... | -z] instead of [-z] in git-repo-info synopsis
repo: remove blank line from Documentation/git-repo.adoc
`symref-create` should be followed by `::`, not `:`. The lack of a second
colon causes it to appear as regular text (`<p>`) instead of as a
description list term (`<dt>`) in the HTML documentation.
Signed-off-by: Sam Bostock <sam@sambostock.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After a diff algorithm has been run, the compaction phase
(xdl_change_compact()) shifts and merges change groups to produce a
cleaner output. However, this shifting could create a new matched group
where both sides now have matching lines. This results in a
wrong-looking diff output which contains redundant lines that are the
same in both files.
Fix this by detecting this situation and re-diffing the texts on each
side to find similar lines, using the fall-back Myers diff. Only do this
for histogram diff, as it's the only algorithm where this is relevant.
An example and more details follow below.
For an example, consider two files below:
file1:
A
A
A
A
A
A
A
file2:
A
A
x
A
A
A
A
When using Myers diff, the algorithm finds that only the "x" has been
changed, and produces a final diff result (these are line diffs, but
using word-diff syntax for ease of presentation):
A A[-A-]{+x+}A AAA
When using histogram diff, the algorithm first discovers the LCS "A
AAA", which it uses as an anchor, then produces an intermediate diff:
{+A Ax+}A AAA[- AAA-].
This is a longer diff than the Myers one, but it's still self-consistent.
However, the compaction phase attempts to shift the first file's diff
group upwards (note that this shift crosses the anchor that histogram
had used), leading to the final results for histogram diff:
[-A AA-]{+A Ax+}A AAA
This is a technically correct patch but looks clearly redundant to a
human as the first 3 lines should not be in the diff.
The fix detects that a shift has created a new matched group, and
re-diffs the "A AA" and "A Ax" parts, which results in "A A" being
correctly re-marked as unchanged. This creates the now correct histogram
diff:
A A[-A-]{+x+}A AAA
This issue is not applicable to the Myers diff algorithm as it already
generates a minimal diff, which means a shift cannot result in a smaller
diff output (the default Myers diff in xdiff is not guaranteed to be
minimal for performance reasons, but it typically does a good enough
job).
It's also not applicable to patience diff, because it uses only unique
lines as anchors for its splits, and falls back to Myers diff within
each split. Shifting requires both ends having the same lines, and
therefore cannot cross the unique line boundaries established by the
patience algorithm. In contrast histogram diff uses non-unique lines as
anchors, and therefore shifting can cross over them.
This issue is rare in a normal repository. Below is a table of
repositories (`git log --no-merges -p --histogram -1000`), showing how
many times a re-diff was done and how many times it resulted in finding
matching lines (therefore addressing this issue) with the fix. In
general, fewer than 1% of diffs exhibit this offending behavior:
| Repo (1k commits) | Re-diff | Found matching lines |
|--------------------|---------|----------------------|
| llvm-project | 45 | 11 |
| vim | 110 | 9 |
| git | 18 | 2 |
| WebKit | 168 | 1 |
| ripgrep | 22 | 1 |
| cpython | 32 | 0 |
| vscode | 13 | 0 |
Signed-off-by: Yee Cheng Chin <ychin.git@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Enable callers to generate reachability bitmaps when performing MIDX
layer compaction by combining all existing bitmaps from the compacted
layers.
Note that because of the object/pack ordering described by the
previous commit, the pseudo-pack order for the compacted MIDX is the
same as concatenating the individual pseudo-pack orderings for each
layer in the compaction range.
As a result, the only non-test or documentation change necessary is to
treat all objects as non-preferred during compaction so as not to
disturb the object ordering.
In the future, we may want to adjust which commit(s) receive
reachability bitmaps when compacting multiple .bitmap files into one, or
even generate new bitmaps (e.g., if the references have moved
significantly since the .bitmap was generated). This commit only
implements combining all existing bitmaps in range together in order to
demonstrate and lay the groundwork for more exotic strategies.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When managing a MIDX chain with many layers, it is convenient to combine
a sequence of adjacent layers into a single layer to prevent the chain
from growing too long.
While it is conceptually possible to "compact" a sequence of MIDX layers
together by running "git multi-pack-index write --stdin-packs", there
are a few drawbacks that make this less than desirable:
- Preserving the MIDX chain is impossible, since there is no way to
write a MIDX layer that contains objects or packs found in an earlier
MIDX layer already part of the chain. So callers would have to write
an entirely new (non-incremental) MIDX containing only the compacted
layers, discarding all other objects/packs from the MIDX.
- There is (currently) no way to write a MIDX layer outside of the MIDX
chain to work around the above, such that the MIDX chain could be
reassembled substituting the compacted layers with the MIDX that was
written.
- The `--stdin-packs` command-line option does not allow us to specify
the order of packs as they appear in the MIDX. Therefore, even if
there were workarounds for the previous two challenges, any bitmaps
belonging to layers which come after the compacted layer(s) would no
longer be valid.
This commit introduces a way to compact a sequence of adjacent MIDX
layers into a single layer while preserving the MIDX chain, as well as
any bitmap(s) in layers which are newer than the compacted ones.
Implementing MIDX compaction does not require a significant number of
changes to how MIDX layers are written. The main changes are as follows:
- Instead of calling `fill_packs_from_midx()`, we call a new function
`fill_packs_from_midx_range()`, which walks backwards along the
portion of the MIDX chain which we are compacting, and adds packs one
layer at a time.
In order to preserve the pseudo-pack order, the concatenated pack
order is preserved, with the exception of preferred packs which are
always added first.
- After adding entries from the set of packs in the compaction range,
`compute_sorted_entries()` must adjust the `pack_int_id`'s for all
objects added in each fanout layer to match their original
`pack_int_id`'s (as opposed to the index at which each pack appears
in `ctx.info`).
- When writing out the new 'multi-pack-index-chain' file, discard any
layers in the compaction range, replacing them with the newly written
layer, instead of keeping them and placing the new layer at the end
of the chain.
This ends up being sufficient to implement MIDX compaction in a way
that preserves bitmaps corresponding to more recent layers in the MIDX
chain.
The tests for MIDX compaction are so far fairly spartan, since the main
interesting behavior here is ensuring that the right packs/objects are
selected from each layer, and that the pack order is preserved
regardless of whether or not the packs are sorted in lexicographic order
in the original MIDX chain.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Though our 'read-midx' test tool is capable of printing information
about a single MIDX layer identified by its checksum, no caller in our
test suite exercises this path.
Unfortunately, there is a memory leak lurking in this (currently) unused
path that would otherwise be exposed by the following commit.
This occurs when providing a MIDX layer checksum other than the tip. As
we walk over the MIDX chain trying to find the matching layer, we drop
our reference to the top-most MIDX layer. Thus, our call to
'close_midx()' later on leaks memory between the top-most MIDX layer and
the MIDX layer immediately following the specified one.
Plug this leak by holding a reference to the tip of the MIDX chain, and
ensure that we call `close_midx()` before terminating the test tool.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When computing the set of objects to appear in a MIDX, we use
compute_sorted_entries(), which handles objects from various existing
sources one fanout layer at a time.
The process for computing this set is slightly different during MIDX
compaction, so factor out the existing functionality into its own
routine to prevent `compute_sorted_entries()` from becoming too
difficult to read.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our `midx-write.c::fill_packs_from_midx()` function currently enumerates
the range [0, m->num_packs), and then shifts its index variable up by
`m->num_packs_in_base` to produce a valid `pack_int_id`.
Instead, directly enumerate the range:
[m->num_packs_in_base, m->num_packs_in_base + m->num_packs)
, which are the original pack_int_ids themselves as opposed to the
indexes of those packs relative to the MIDX layer they are contained
within.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When filling packs from an existing MIDX, `fill_packs_from_midx()`
handles preparing a MIDX'd pack, and reading out its pack name from the
existing MIDX.
MIDX compaction will want to perform an identical operation, though the
caller will look quite different than `fill_packs_from_midx()`. To
reduce any future code duplication, extract `fill_pack_from_midx()`
from `fill_packs_from_midx()` to prepare to call our new helper function
in a future change.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `ctx->pack_perm` array can be considered a permutation mapping the
original `pack_int_id` of a given pack to its position in the
`ctx->info` array containing all packs.
Today we can always index into this array with any known `pack_int_id`,
since there is never a `pack_int_id` which is greater than or equal to
the value `ctx->nr`.
That is not necessarily the case with MIDX compaction. For example,
suppose we have a MIDX chain with three layers, each containing three
packs. The base of the MIDX chain will have packs with IDs 0, 1, and 2,
the next layer 3, 4, and 5, and so on. If we are compacting the topmost
two layers, we'll have input `pack_int_id` values between [3, 8], but
`ctx->nr` will only be 6.
In that example, if we want to know the position of the pack whose
original `pack_int_id` was, say, 7, we would compute `ctx->pack_perm[7]`,
leading to an uninitialized read, since there are only 6 entries
allocated in that array.
To address this, there are a couple of options:
- We could allocate enough entries in `ctx->pack_perm` to accommodate
the largest `orig_pack_int_id` value.
- Or, we could internally shift the input values by the number of packs
in the base layer of the lower end of the MIDX compaction range.
This patch prepares us to take the latter approach, since it does not
allocate more memory than strictly necessary. (In our above example, the
base of the lower end of the compaction range is the first MIDX layer
(having three packs), so we would end up indexing `ctx->pack_perm[7-3]`,
which is a valid read.)
Note that this patch does not actually implement that approach yet, but
merely performs a behavior-preserving refactoring which will make the
change easier to carry out in the future.
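The eventual accessor could be as small as the following sketch; per the
note above it is not part of this patch, and the field holding the
number of packs below the compaction range is an assumed name:

    /* Sketch: map an original pack_int_id to a ctx->pack_perm index. */
    static uint32_t midx_pack_perm(struct write_midx_context *ctx,
                                   uint32_t orig_pack_int_id)
    {
        return ctx->pack_perm[orig_pack_int_id - ctx->num_packs_in_base];
    }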
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A future commit will want to add two 32-bit unsigned values together
while checking for overflow. Introduce a variant of the u64_add()
function for operating on 32-bit inputs.
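A sketch of such a helper, mirroring the existing 64-bit one (the exact
name and error message are assumptions):

    /* Sketch: add two uint32_t values, dying on overflow. */
    static inline uint32_t u32_add(uint32_t a, uint32_t b)
    {
        if (a > UINT32_MAX - b)
            die("uint32_t overflow: %"PRIu32" + %"PRIu32, a, b);
        return a + b;
    }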
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The MIDX file format currently requires that pack files be identified by
the lexicographic ordering of their names (that is, a pack having a
checksum beginning with "abc" would have a numeric pack_int_id which is
smaller than the same value for a pack beginning with "bcd").
As a result, it is impossible to combine adjacent MIDX layers together
without permuting bits from bitmaps that are in more recent layer(s).
To see why, consider the following example:
| packs | preferred pack
--------+-------------+---------------
MIDX #0 | { X, Y, Z } | Y
MIDX #1 | { A, B, C } | B
MIDX #2 | { D, E, F } | D
, where MIDX #2's base MIDX is MIDX #1, and so on. Suppose that we want
to combine MIDX layers #0 and #1, to create a new layer #0' containing
the packs from both layers. With the original three MIDX layers, objects
are laid out in the bitmap in the order they appear in their source
pack, and the packs themselves are arranged according to the pseudo-pack
order. In this case, that ordering is Y, X, Z, B, A, C.
But recall that the pseudo-pack ordering is defined by the order that
packs appear in the MIDX, with the exception of the preferred pack,
which sorts ahead of all other packs regardless of its position within
the MIDX. In the above example, that means that pack 'Y' could be placed
anywhere (so long as it is designated as preferred), however, all other
packs must be placed in the location listed above.
Because that ordering isn't sorted lexicographically, it is impossible
to compact MIDX layers in the above configuration without permuting the
object-to-bit-position mapping. Changing this mapping would affect all
bitmaps belonging to newer layers, rendering the bitmaps associated with
MIDX #2 unreadable.
One of the goals of MIDX compaction is that we are able to shrink the
length of the MIDX chain *without* invalidating bitmaps that belong to
newer layers, and the lexicographic ordering constraint is at odds with
this goal.
However, packs do not *need* to be lexicographically ordered within the
MIDX. As far as I can gather, the only reason they are sorted lexically
is to make it possible to perform a binary search over the pack names in
a MIDX, necessary to make `midx_contains_pack()`'s performance
logarithmic in the number of packs rather than linear.
Relax this constraint by allowing MIDX writes to proceed with packs that
are not arranged in lexicographic order. `midx_contains_pack()` will
lazily instantiate a `pack_names_sorted` array on the MIDX, which will
be used to implement the binary search over pack names.
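The lazy index can be sketched as sorting positions rather than the
names themselves, so the on-disk pack order stays untouched (details
assumed):

    static int pack_name_cmp(const void *va, const void *vb, void *data)
    {
        struct multi_pack_index *m = data;
        uint32_t a = *(const uint32_t *)va;
        uint32_t b = *(const uint32_t *)vb;
        return strcmp(m->pack_names[a], m->pack_names[b]);
    }

    /* ... lazily, inside midx_contains_pack(): */
    if (!m->pack_names_sorted) {
        uint32_t i;
        ALLOC_ARRAY(m->pack_names_sorted, m->num_packs);
        for (i = 0; i < m->num_packs; i++)
            m->pack_names_sorted[i] = i;
        QSORT_S(m->pack_names_sorted, m->num_packs, pack_name_cmp, m);
    }
    /* the binary search then compares against
     * m->pack_names[m->pack_names_sorted[mid]] */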
Note that this produces MIDXs which may be incompatible with earlier
versions of Git that have stricter requirements on the layout of packs
within a MIDX. This patch does *not* modify the version number of the
MIDX format, since existing versions of Git already know to gracefully
ignore a MIDX with packs that appear out-of-order.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the MIDX writing code, there are four functions which perform some
sort of MIDX write operation. They are:
- write_midx_file()
- write_midx_file_only()
- expire_midx_packs()
- midx_repack()
All of these functions are thin wrappers over `write_midx_internal()`,
which implements the bulk of these routines. As a result, the
`write_midx_internal()` function takes six arguments.
Future commits in this series will want to add additional arguments, and
in general this function's signature will be the union of parameters
among *all* possible ways to write a MIDX.
Instead of adding yet more arguments to this function to support MIDX
compaction, introduce a `struct write_midx_opts`, which has the same
struct members as `write_midx_internal()`'s arguments.
Adding additional fields to the `write_midx_opts` struct is preferable
to adding additional arguments to `write_midx_internal()`. This is
because the callers below all zero-initialize the struct, so each time
we add a new piece of information, we do not have to pass the zero value
for it in all other call-sites that do not care about it.
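The struct might look roughly like this, with the members being
assumptions derived from the arguments described above:

    struct write_midx_opts {
        struct odb_source *source;
        const char *preferred_pack_name;
        const char *refs_snapshot;
        struct string_list *packs_to_include;
        struct string_list *packs_to_drop;
        unsigned flags;
    };

Callers can then zero-initialize it and set only the fields they care
about, e.g. `struct write_midx_opts opts = { .flags = flags };`.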
For now, no functional changes are included in this patch.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In midx_pack_order(), we compute for each bitmapped pack the first bit
corresponding to an object in that pack, along with how many bits were
assigned to object(s) in that pack.
Initially, each bitmap_nr value is set to zero, and each bitmap_pos
value is set to the sentinel BITMAP_POS_UNKNOWN. This is done to ensure
that there are no packs who have an unknown bit position but a somehow
non-zero number of objects (cf. `write_midx_bitmapped_packs()` in
midx-write.c).
Once the pack order is fully determined, midx_pack_order() sets the
bitmap_pos field for any bitmapped packs to zero if they are still
listed as BITMAP_POS_UNKNOWN.
However, we enumerate the bitmapped packs in order of `ctx->pack_perm`.
This is fine for existing cases, since the only time the
`ctx->pack_perm` array holds a value outside of the addressable range of
`ctx->info` is when there are expired packs, which only occurs via 'git
multi-pack-index expire', which does not support writing MIDX bitmaps.
As a result, the range of `ctx->pack_perm` covers all values in [0,
`ctx->nr`), so enumerating in this order isn't an issue.
A future change necessary for compaction will complicate this further by
introducing a wrapper around the `ctx->pack_perm` array, which turns the
given `pack_int_id` into one that is relative to the lower end of the
compaction range. As a result, indexing into `ctx->pack_perm` through
this helper, say, with "0" will produce a crash when the lower end of
the compaction range has >0 pack(s) in its base layer, since the
subtraction will wrap around the 32-bit unsigned range, resulting in an
uninitialized read.
But the process is completely unnecessary in the first place: we are
enumerating all values of `ctx->info`, and there is no reason to process
them in a different order than they appear in memory. Index `ctx->info`
directly to reflect that.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit d4bf1d88b90 (multi-pack-index: verify missing pack, 2018-09-13)
adds a new test to the MIDX test script to test how we handle missing
packs.
While the commit itself describes the test as "verify missing pack[s]",
the test itself is actually called "verify packnames out of order",
despite that not being what it tests.
Likely this was a copy-and-paste of the test immediately above it of the
same name. Correct this by renaming the test to match the commit
message.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since c39fffc1c90 (tests: start asserting that *.txt SYNOPSIS matches -h
output, 2022-10-13), the manual page for 'git multi-pack-index' has a
SYNOPSIS section which differs from 'git multi-pack-index -h'.
Correct this while also documenting additional options accepted by the
'write' sub-command.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since fcb2205b774 (midx: implement support for writing incremental MIDX
chains, 2024-08-06), the command-line options '--incremental' and
'--bitmap' were declared to be incompatible with one another when
running 'git multi-pack-index write'.
However, since 27afc272c49 (midx: implement writing incremental MIDX
bitmaps, 2025-03-20), that incompatibility no longer exists, despite the
documentation saying so. Correct this by removing the stale reference to
their incompatibility.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All multi-pack-index sub-commands (write, verify, repack, and expire)
support a '--progress' command-line option, despite not listing it as
one of the common options in `common_opts`.
As a result each sub-command declares its own `OPT_BIT()` for a
"--progress" command-line option. Centralize this within the
`common_opts` to avoid re-declaring it in each sub-command.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When trying to print out, say, the hexadecimal representation of a
MIDX's hash, our code will do something like:
    hash_to_hex_algop(get_midx_checksum(m),
                      m->source->odb->repo->hash_algo);
, which is both cumbersome and repetitive. In fact, all but a handful of
callers to `get_midx_checksum()` do exactly the above. Reduce the
repetitive nature of calling `get_midx_checksum()` by having it return a
pointer into a static buffer containing the above result.
For the handful of callers that do need to compare the raw bytes and
don't want to deal with an encoded copy (e.g., because they are passing
it to hasheq() or similar), introduce `get_midx_hash()` which returns
the raw bytes.
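Sketched, the pair could look like this (the checksum living in the
trailing bytes of the mapped file, and the exact field accesses, are
assumptions):

    /* Sketch: raw checksum bytes from the MIDX trailer. */
    const unsigned char *get_midx_hash(const struct multi_pack_index *m)
    {
        const struct git_hash_algo *algo = m->source->odb->repo->hash_algo;
        return m->data + m->data_len - algo->rawsz;
    }

    /* Sketch: hex rendering via hash_to_hex_algop()'s static buffer. */
    const char *get_midx_checksum(const struct multi_pack_index *m)
    {
        return hash_to_hex_algop(get_midx_hash(m),
                                 m->source->odb->repo->hash_algo);
    }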
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To make clear that the function `get_midx_checksum()` does not do
anything to modify its argument, mark the MIDX pointer as const.
The following commit will rename this function altogether to make clear
that it returns the raw bytes of the checksum, not a hex-encoded copy of
it.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The clar unit testing framework carries a couple of files that contain
expected output for its self-tests. Some of these files expectedly end
with a blank line at the end of the file, which Git would consider to be
a whitespace error by default.
Teach our gitattributes to ignore those errors.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The clar project has introduced a couple of new assertions that perform
relative integer comparisons, like "greater than" or "less or equal".
Adapt the reftable-record unit tests to demonstrate their usage.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update clar to commit 39f11fe (Merge pull request #131 from
pks-gitlab/pks-integer-double-evaluation, 2025-12-05). This commit
includes the following changes relevant to Git:
- There are now typesafe integer comparison functions. Furthermore,
the range of comparison functions has been included to also have
relative comparisons, like "greater than".
- There is a new `cl_failf()` macro that allows the caller to specify
an error message with formatting directives.
- The TAP format has been fixed to correctly terminate YAML blocks
with "...\n" instead of "---\n".
Note that we already had a `cl_failf()` function declared in our own
sources. This function is equivalent to the upstreamed function, so we
can simply drop it now.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ps/object-source-management:
odb: handle recreation of quarantine directories
odb: handle changing a repository's commondir
chdir-notify: add function to unregister listeners
odb: handle initialization of sources in `odb_new()`
http-push: stop setting up `the_repository` for each reference
t/helper: stop setting up `the_repository` repeatedly
builtin/index-pack: fix deferred fsck outside repos
oidset: introduce `oidset_equal()`
odb: move logic to disable ref updates into repo
odb: refactor `odb_clear()` to `odb_free()`
odb: adopt logic to close object databases
setup: convert `set_git_dir()` to have file scope
path: move `enter_repo()` into "setup.c"
Git allows setting a different object directory via
'GIT_OBJECT_DIRECTORY', but provides no equivalent for references.
This asymmetry makes it difficult to test different reference backends
or use alternative reference storage locations without modifying the
repository structure.
Add a new environment variable 'GIT_REF_URI' that specifies both the
reference backend and directory path using a URI format:
<ref_backend>://<URI-for-resource>
When set, this variable is used to obtain the main reference store for
all Git commands. The variable is checked in `get_main_ref_store()`
when lazily assigning `repo->refs_private`. We cannot initialize this
earlier in `repo_set_gitdir()` because the repository's hash algorithm
isn't known at that point, and the reftable backend requires this
information during initialization.
When used with worktrees, the specified directory is treated as the
reference directory for all worktree operations.
Add a new test file 't1423-ref-backend.sh' to test this environment
variable.
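For example (illustrative; "reftable" is one of the existing reference
backends, and the path shown here is made up):
    $ GIT_REF_URI=reftable:///tmp/test-refs git for-each-ref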
Helped-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The refs subsystem uses `get_main_ref_store()` to obtain the main
ref_store for a given repository. In the upcoming patches we also want
to create a ref_store for any given reference directory, which may exist
in arbitrary paths. For the files backend and the reftable backend, the
reference directory is generally the $GIT_DIR.
To support such behavior, extract the core logic for creating the
ref_store out of `get_main_ref_store()` into a new function,
`get_ref_store_for_dir()`, which can provide the ref_store for a
given (repository, directory, reference format) combination.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 5a993593b2 (fsck: avoid parse_timestamp() on buffer that isn't
NUL-terminated, 2025-11-18), we added a wrapper that copies the
timestamp into a buffer before calling parse_timestamp().
Now that we have a more robust helper for parsing from a buffer, we can
drop our wrapper and switch to that. We could just do so inline, but the
choice of "unsigned" vs "signed" depends on the typedef of timestamp_t.
So we'll wrap that in a macro that is defined alongside the rest of the
timestamp abstraction.
The resulting function is almost a drop-in replacement, but the new
interface means we need to hold the result in a separate timestamp_t,
rather than returning it directly from one function into the parameter
of another. The old one did still detect overflow errors by returning
TIME_MAX, since date_overflows() checks for that, but now we'll see it
more directly from the return of parse_timestamp_from_buf(). The
behavior should be the same.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In c4c9089584 (cache-tree: avoid strtol() on non-string buffer,
2025-11-18) we wrote an ad-hoc integer parser which did not detect
overflow. This wasn't too big a problem, since the original use of
strtol() did not do so either. But now that we have a more robust
parsing function, let's use that. It reduces the amount of code and
should catch more cases of malformed entries.
I kept our local parse_int() wrapper here, since it handles management
of our ptr/len pair (rather than doing it inline in the entry parser of
read_one()).
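Roughly, the kept wrapper has this shape (a hedged sketch; the
signature of parse_int_from_buf() is paraphrased from the commit
below, and the body in the actual patch may differ):

    /*
     * Illustrative: parse an int and advance the caller's ptr/len
     * pair past the consumed digits, so read_one() can keep scanning.
     */
    static int parse_int(const char **ptr, size_t *len, int *result)
    {
            const char *end;

            if (!parse_int_from_buf(*ptr, *len, result, &end))
                    return 0;
            *len -= end - *ptr;
            *ptr = end;
            return 1;
    }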
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you have a buffer that is not NUL-terminated but want to parse an
integer, there aren't many good options. If you use strtol() and
friends, you risk running off the end of the buffer if there is no
non-digit terminating character. And even if you carefully make sure
that there is such a character, ASan's strict-string-check mode will
still complain.
You can copy bytes into a temporary buffer, terminate it, and then call
strtol(), but doing so adds some pitfalls (like making sure you soak up
whitespace and leading +/- signs, and reporting overflow for overly long
input). Or you can hand-parse the digits, but then you need to take some
care to handle overflow (and again, whitespace and +/- signs).
These things aren't impossible to do right, but it's error-prone to have
to do them in every spot that wants to do such parsing. So let's add
some functions which can be used across the code base.
There are a few choices regarding the interface and the implementation.
First, the implementation:
- I went with parsing the digits (rather than buffering and
passing to libc functions). It ends up being a similar amount of
code because we have to do some parsing either way. And likewise
overflow detection depends on the exact type the caller wants, so we
either have to do it by hand or write a separate wrapper for
strtol(), strtoumax(), and so on.
- Unsigned overflow detection is done using the same techniques as in
unsigned_add_overflows(), etc. We can't use those macros directly
because our core function is type-agnostic (so the caller passes in
the max value, rather than us deriving it on the fly). This is
similar to how git_parse_int(), etc, work; see the sketch after
this list.
- Signed overflow detection assumes that we can express a negative
value with magnitude one larger than our maximum positive value
(e.g., -128..127 for a signed 8-bit value). I doubt this is
guaranteed by the standard, but it should hold in practice, and we
make the same assumption in git_parse_int(), etc. The nice thing
about this is that we can derive the range from the number of bits
in the type. For ints, you obviously could use INT_MIN..INT_MAX, but
for an arbitrary type, we can use maximum_signed_value_of_type().
- I didn't bother with handling bases other than 10. It would
complicate the code, and I suspect it won't be needed. We could
probably retro-fit it later without too much work, if need be.
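To make the overflow checks above concrete, here is a minimal sketch
of the unsigned accumulation step with a caller-supplied maximum, as
mentioned in the list above (this is not the literal code from the
patch):

    /*
     * Add one decimal digit to "value" without exceeding "max".
     * Since integer division truncates, value <= (max - digit) / 10
     * holds exactly when value * 10 + digit <= max.
     */
    static int accumulate_digit(uintmax_t *value, uintmax_t max,
                                unsigned digit)
    {
            if (*value > (max - digit) / 10)
                    return 0; /* would overflow the caller's type */
            *value = *value * 10 + digit;
            return 1;
    }

For a signed type, the same idea applies with
maximum_signed_value_of_type() as the bound.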
For the interface:
- What do we call it? We have git_parse_int() and friends, which aim
to make parsing less error-prone. And in some ways, these are just
buffer (rather than string) versions of those functions. But not
entirely. Those functions are aimed at parsing a single user-facing
value. So they accept a unit prefix (e.g., "10k"), which we won't
always want. And they insist that the whole string is consumed
(rather than passing back an "end" pointer).
We also have strtol_i() and strtoul_ui() wrappers, which try to make
error handling simpler (especially around overflow), but mostly
behave like their libc counterparts. These also don't pass out an
end pointer, though.
So I started a new namespace, "parse_<type>_from_buf".
- Like those other functions above, we use an out-parameter to store
the result, which lets us return an error code directly. This avoids
the complicated errno dance for detecting overflow that you get with
strtol().
What should the error code look like? git_parse_int() uses a bool
for success/failure. But strtoul_ui() uses the syscall-like "0 is
success, -1 is error" convention.
I went with the bool approach here. Since the names are closest to
those functions, I thought it would cause the least confusion.
- Unlike git_parse_signed() and friends, we do not insist that the
entire buffer be consumed. For parsing a specific standalone string
that makes sense, but within an unterminated buffer you are much
more likely to be parsing multiple fields from a larger data set.
We pass out an "end" pointer the same way strtol() does. Another
option is to accept the input as an in-out parameter and advance the
pointer ourselves (and likewise shrink the length pointer). That
would let you do something like:
if (!parse_int_from_buf(&p, &len, &out))
return error(...);
/* "p" and "len" were adjusted automatically */
if (!len || *p++ != ' ')
return error(...);
That saves a few lines of code in some spots, but requires a few
more in others (depending on whether the caller has a length in the
first place or is using an end pointer). Of the two callers I intend
to immediately convert, we have one of each type!
I went with the strtol() approach as flexible and time-tested.
- We could likewise take the input buffer as two pointers (start and
end) rather than a pointer and a length. That again makes life
easier for some callers and harder for others. I stuck with pointer
and length as the more usual interface.
- What happens when a caller passes in a NULL end pointer? This is
allowed by strtol(). But I think it's often a sign of a lurking bug,
because there's no way to know how much was consumed (and even if a
caller wants to assume everything is consumed, you have no way to
verify it). So it is simply an error in this interface (you'd get a
segfault).
I am tempted to say that if the end pointer is NULL the functions
could confirm that the entire buffer was consumed, as a convenience.
But that felt a bit magical and surprising.
Like git_parse_*(), there is a generic signed/unsigned helper, and then
we can add type-specific helpers on top. I've added an int helper here
to start, and we'll add more as we convert callers.
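As a concrete illustration of the chosen interface, here is a hedged
usage sketch (the signature is inferred from the description above:
pointer/length input, an out-parameter for the value, a mandatory end
pointer, and a bool-style return; none of the names below are lifted
from the patch):

    /* Parse "<decimal> <rest>" out of an unterminated buffer. */
    static int read_count(const char *p, size_t len, int *count)
    {
            const char *end;

            if (!parse_int_from_buf(p, len, count, &end))
                    return -1; /* not a number, or out of range */
            len -= end - p;
            p = end;
            if (!len || *p != ' ')
                    return -1; /* field must be followed by a space */
            return 0;
    }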
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of the integer parsing functions in parse.[ch] return an int that is
"0" for failure or "1" for success. Since most of the other functions in
Git use "0" for success and "-1" for failure, this can be confusing.
Let's switch the return types to bool to make it clear that we are using
this other convention. Callers should not need to update at all.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/asan-bonanza:
t: enable ASan's strict_string_checks option
fsck: avoid parse_timestamp() on buffer that isn't NUL-terminated
fsck: remove redundant date timestamp check
fsck: avoid strcspn() in fsck_ident()
fsck: assert newline presence in fsck_ident()
cache-tree: avoid strtol() on non-string buffer
Makefile: turn on NO_MMAP when building with ASan
pack-bitmap: handle name-hash lookups in incremental bitmaps
compat/mmap: mark unused argument in git_munmap()
By default git-last-modified(1) only shows information about paths at
the root level. This can be confusing. Clarify the command's behavior in
the documentation.
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git" itself has completion for its long options and subcommands,
but not for its short options. Add support for them.
Signed-off-by: Wiktor Mis <mwiktor023@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Option --max-depth is supported by git-last-modified(1), because it was
added to the diff machinery in a1dfa5448d (diff: teach tree-diff a
max-depth parameter, 2025-08-07).
This option is useful for everyday use of the git-last-modified(1)
command, so document its existence in the man page and `-h` output.
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When option `-z` is provided to git-last-modified(1), each line is
separated with a NUL instead of a newline. Document this properly and
handle parsing of the option in the builtin itself.
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When shallow cloning based on a date, it can happen that not all
shallow border commits are reachable.
The original implementation of the generic shallow boundary finder
based on rev-list marks a commit (from the initial list of border
commit candidates) as a border commit as soon as it finds one of its
parents that wasn't on the list of initial candidates. This results
in a successful shallow clone where some of the declared border
commits may not be reachable and would not actually exist in the
cloned repository. The result may thus contradict the existing
comment in the code, which correctly states that such a commit should
not be considered a border.
One can inspect such a case by running the added test scenario:
- 'clone shallow since all borders reachable'
The modified implementation of the generic shallow boundary finder
based on rev-list ensures that all shallow border commits remain
reachable after being grafted. This is achieved by inspecting all
parents of each initial border commit candidate. The candidate is
marked as a border only when none of its parents were on the initial
list of candidates. Otherwise the candidate itself is not marked as a
border; instead, those of its parents that weren't on the list of
candidates are marked as borders.
Signed-off-by: Samo Pogačnik <samo_pogacnik@t-2.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The expected behaviour for `git config list` is:
A. Without `--global`, it should not bail on unreadable/non-existent
global config files.
B. With `--global`, it should bail when both `$HOME/.gitconfig` and
`$XDG_CONFIG_HOME/git/config` are unreadable. It should not bail
when one or more of them is readable.
The previous patch, config: read global scope via config_sequence,
introduced a regression in scenario B. When both global config files are
unreadable, running `git config list --global` would not fail. For
example, `GIT_CONFIG_GLOBAL=does-not-exist git config list --global`
exits with status code 0.
Assuming that `config_source->scope == CONFIG_SCOPE_GLOBAL` iff the
`--global` argument is specified, use this to determine whether to bail.
When reading only the global scope and both config files are unreadable,
then adjust the return code to be non-zero.
Note: When bailing, the exit code is not determined by the sum of the
return codes of the underlying operations. Instead, the exit code is
modified via a single decrement. If this is undesirable, we can change
it to sum the return codes of the underlying operations instead.
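In code, the bail-out might look like this (a hedged sketch; all names
are illustrative rather than taken from the patch):

    /*
     * Turn success into failure with a single decrement when only
     * the global scope was requested and neither global config file
     * could be read; we do not sum per-file error codes.
     */
    static int adjust_global_scope_result(enum config_scope scope,
                                          int files_read, int ret)
    {
            if (scope == CONFIG_SCOPE_GLOBAL && !files_read)
                    ret--;
            return ret;
    }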
Lastly, modify the tests to remove the known breakage/regression. The
tests for scenario B will now pass.
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Delilah Ashley Wu <delilahwu@microsoft.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The output of `git config list --global` should include both the home
(`$HOME/.gitconfig`) and XDG (`$XDG_CONFIG_HOME/git/config`) configs,
but it only reads from the former.
We assumed each config scope corresponds to a single config file. Under
this assumption, `git config list --global` reads the global config by
calling `git_config_from_file_with_options(...,"~/.gitconfig", ...)`.
This function usage restricts us to a single config file. Because the
global scope includes two files, we should read the configs via another
method.
The output of `git config list --show-scope --show-origin` (without
`--global`) correctly includes both the home and XDG config files. So
there's existing code that respects both locations, namely the
`do_git_config_sequence()` function which reads from all scopes.
Introduce flags to make it possible to ignore all but the global scope
(i.e. ignore system, local, worktree, and cmdline). Then, reuse the
function to read only the global scope when `--global` is specified.
This was the suggested solution in the bug report:
https://lore.kernel.org/git/kl6ly1oze7wb.fsf@chooglen-macbookpro.roam.corp.google.com.
Then, modify the tests to check that `git config list --global` includes
both home and XDG configs.
This patch introduces a regression. If both global config files are
unreadable, then `git config list --global` should exit non-zero. This
is no longer the case, so mark the corresponding test as a "TODO known
breakage" and address the issue in the next patch, config: keep bailing
on unreadable global files.
Implementation notes:
1. The `ignore_global` flag is not set anywhere, so the
`if (!opts->ignore_global)` condition is always met. We can remove
this flag if desired.
2. I've assumed that `config_source->scope == CONFIG_SCOPE_GLOBAL` iff
`--global` is specified. This comparison determines whether to call
`do_git_config_sequence()` for the global scope, or to keep calling
`git_config_from_file_with_options()` for other scopes.
3. Keep populating `opts->source.file` in `builtin/config.c` because
it is used as the destination config file for write operations.
The proposed changes could convolute the code because there is no
single source of truth for the config file locations in the global
scope. Add a comment to help clarify this. Please let me know if
it's unclear.
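A hedged sketch of the idea (the flag names are assumed for
illustration and may not match the patch exactly):

    /* Read only the global scope via the existing config sequence. */
    static int read_global_scope_only(config_fn_t fn, void *data,
                                      struct repository *repo)
    {
            struct config_options opts = { 0 };

            opts.ignore_system = 1;
            opts.ignore_repo = 1;
            opts.ignore_worktree = 1;
            opts.ignore_cmdline = 1;
            return config_with_options(fn, data, NULL, repo, &opts);
    }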
Reported-by: Jade Lovelace <lists@jade.fyi>
Suggested-by: Glen Choo <glencbz@gmail.com>
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Delilah Ashley Wu <delilahwu@microsoft.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `git config list --global` output includes `$HOME/.gitconfig` (home
config), but ignores `$XDG_CONFIG_HOME/git/config` (XDG config). It
should include both files.
Modify tests to check the following and expect a failure:
- `git config list --global` should include contents from both the
home and XDG config locations (assuming they are readable), not
just the former.
- `--show-origin` should print correct paths to both config files,
assuming they exist.
Also, add tests to ensure subsequent patches do not introduce
regressions to `git config list`. Specifically, check that:
- The home config should take precedence over the XDG config.
- Without `--global`, it should not bail on unreadable/non-existent
global config files.
- With `--global`, it should bail when both `$HOME/.gitconfig` and
`$XDG_CONFIG_HOME/git/config` are unreadable. It should not bail if
at least one of them is readable.
The next patch, config: read global scope via config_sequence, will
implement a fix to include both config files when `--global` is
specified.
Reported-by: Jade Lovelace <lists@jade.fyi>
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Delilah Ashley Wu <delilahwu@microsoft.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git prefers forward slashes as directory separators across all
platforms. On Windows, the backslash is the native directory separator,
but all Windows versions supported by Git also accept the forward slash
in all but rare circumstances. Our tests expect forward slashes. Git
generates relative paths with forward slashes. Forward slashes are more
convenient to use in shell scripts.
For these reasons, we enforced forward slashes in `interpolate_path()`
in 5ca6b7bb47b (config --show-origin: report paths with forward slashes,
2016-03-23). However, other code paths may generate paths containing
backslashes. For example, `config --show-origin` prints the XDG config
path with mixed slashes on Windows:
$ git config --list --show-origin
file:C:/Program Files/Git/etc/gitconfig system.foo=bar
file:"C:\\Users\\delilah/.config/git/config" xdg.foo=bar
file:C:/Users/delilah/.gitconfig home.foo=bar
file:.git/config local.foo=bar
Let's enforce forward slashes in all code paths that directly or
indirectly call `cleanup_path()` by modifying it to use
`convert_slashes()` on Windows. Since `convert_slashes()` modifies the
path in-place, change the argument and return type of `cleanup_path()`
from `const char *` to `char *`. All existing callers of
`cleanup_path()` pass `char *` anyways, so this change is compatible.
The next patch, config: test home and xdg files in `list --global`, will
assert that the XDG config path uses forward slashes.
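The shape of the change, as a hedged sketch (the "./"-stripping body
is paraphrased; the Windows hunk is the point of the patch):

    static char *cleanup_path(char *path)
    {
            /* Strip a leading "./", as before (paraphrased). */
            if (path[0] == '.' && path[1] == '/')
                    path += 2;
    #ifdef GIT_WINDOWS_NATIVE
            convert_slashes(path); /* rewrite each '\' to '/' in place */
    #endif
            return path;
    }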
Suggested-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Delilah Ashley Wu <delilahwu@microsoft.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In some cases, we zero-initialize our object IDs, which sets the algo
member to zero as well, which is not a valid algorithm number. This is
a bad practice, but we typically paper over it in many cases by simply
substituting the repository's hash algorithm.
However, our new Rust loose object map code doesn't handle this
gracefully and can't find object IDs when the algorithm is zero because
they don't compare equal to those with the correct algo field. In
addition, the comparison code doesn't have any knowledge of what the
main algorithm is because that's global state, so we can't adjust the
comparison.
To make our code function properly and to avoid propagating these bad
entries, if we get a source object ID with a zero algo, just make a copy
of it with the fixed algorithm. This has the benefit of also fixing the
object IDs if we're in single-algorithm mode.
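A hedged sketch of the approach (function and variable names are
illustrative, not taken from the patch):

    /* Return "src", or a copy with a valid algo if src->algo is 0. */
    static const struct object_id *fix_missing_algo(
            const struct object_id *src, struct repository *repo,
            struct object_id *scratch)
    {
            if (src->algo)
                    return src;
            oidcpy(scratch, src);
            scratch->algo = hash_algo_by_ptr(repo->hash_algo);
            return scratch;
    }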
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our new binary object map code avoids needing to be intimately involved
with file handling by simply writing data to any object implementing
Write. This makes it very easy to test by writing to a Cursor wrapping
a Vec, and thus decouples it from intimate knowledge about how we
handle files.
However, we will actually want to write our data to an actual file,
since that's the most practical way to persist data. Implement a
wrapper around the hashfile code that implements the Write trait so that
we can write our object map into a file.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our current loose object format has a few problems. First, it is not
efficient: the list of object IDs is not sorted and even if it were,
there would not be an efficient way to look up objects in both
algorithms.
Second, we need to store mappings for things which are not technically
loose objects but are not packed objects, either, and so cannot be
stored in a pack index. These kinds of things include shallows, their
parents, and their trees, as well as submodules. Yet we also need to
implement a sensible way to store the kind of object so that we can
prune unneeded entries. For instance, if the user has updated the
shallows, we can remove the old values.
For these reasons, introduce a new binary object map format. The
careful reader will notice that it resembles very closely the pack index
v3 format. Add an in-memory object map as well, and allow writing to a
batched map, which can then be written later as one of the binary object
maps. Include several tests for round tripping and data lookup across
algorithms.
Note that the use of this code elsewhere in Git will involve some C code
and some C-compatible code in Rust that will be introduced in a future
commit. Thus, for example, we ignore the fact that if there is no
current batch and the caller asks for data to be written, this code does
nothing, mostly because this code also does not involve itself with
opening or manipulating files. The C code that we will add later will
implement this functionality at a higher level and take care of this,
since the code which is necessary for writing to the object store is
deeply involved with our C abstractions and it would require extensive
work (which would not be especially valuable at this point) to port
those to Rust.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a future commit, we'll want to hash some data when dealing with an
object map. Let's make this easy by creating a structure to hash
objects and calling into the C functions as necessary to perform the
hashing. For now, we only implement safe hashing, but in the future we
could add unsafe hashing if we want. Implement Clone and Drop to
appropriately manage our memory. Additionally implement Write to make
it easy to use with other formats that implement this trait.
While we're at it, add some tests for the various hashing cases.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Cargo uses the build.rs script to determine how to compile and link a
binary. The only binary we're generating, however, is for our tests,
but in a future commit, we're going to link against libgit.a for some
functionality and we'll need to make sure the test binaries are
complete.
Add a build.rs file for this case and specify the files we're going to
be linking against. Because we cannot specify different dependencies
when building our static library versus our tests, update the Makefile
to specify these dependencies for our static library to avoid race
conditions during build.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'd like to be able to hash our data in Rust using the same contexts as
in C. However, we need our helper functions to not be inline so they
can be linked into the binary appropriately. In addition, to avoid
managing memory manually and since we don't know the size of the hash
context structure, we want to have simple alloc and free functions we
can use to make sure a context can be easily dynamically created.
Expose the helper functions and create alloc, free, and init functions
we can call.
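The C side might look roughly like this (a hedged sketch; the actual
function names in the patch may differ):

    /* Allocate a context so Rust never needs sizeof(struct git_hash_ctx). */
    struct git_hash_ctx *git_hash_ctx_alloc(void)
    {
            return xmalloc(sizeof(struct git_hash_ctx));
    }

    void git_hash_ctx_free(struct git_hash_ctx *ctx)
    {
            free(ctx);
    }

    /* Initialize the context for the given algorithm. */
    void git_hash_ctx_init(struct git_hash_ctx *ctx,
                           const struct git_hash_algo *algop)
    {
            algop->init_fn(ctx);
    }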
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'll soon be writing out an object map using the hashfile code. Add an
fsync component to allow us to handle fsyncing it correctly.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We want to call this code from Rust and ensure that the types are the
same for compatibility, which is easiest to do if the type is a fixed
size. Since unsigned int is 32 bits on all the platforms we care about,
define it as a uint32_t instead.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Right now, users can internally access the contents of the ObjectID
struct, which can lead to data that is not valid, such as invalid
algorithms or non-zero-padded hash values. These can cause problems
down the line as we use them more.
Add a constructor for ObjectID that allows us to set these values and
also provide an accessor for the algorithm so that we can access it. In
addition, provide useful Display and Debug implementations that can
format our data in a useful way.
Now that we have the ability to work with these various components in a
nice way, add some tests as well to make sure that ObjectID and
HashAlgorithm work together as expected.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In C, it's easy for us to look up a hash algorithm structure by its
offset by simply indexing the hash_algos array. However, in Rust, we
sometimes need a pointer to pass to a C function, but we have our own
hash algorithm abstraction.
To get one from the other, let's provide a simple function that looks up
the C structure from the offset and expose it in Rust.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This works very similarly to the existing one in C except that it
doesn't provide any functionality to hash an object. We don't need
that right now, but the use of those function pointers does make it
substantially more difficult to write a bit-for-bit identical structure
across the C/Rust interface, so omit them for now.
Instead of the more customary "&self", use "self", because the former is
the size of a pointer and the latter is the size of an integer on most
systems. Don't define an unknown value but use an Option for that
instead.
Update the object ID structure to allow slicing the data appropriately
for the algorithm.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'd like to be able to write some Rust code that can work with object
IDs. Add a structure here that's identical to struct object_id in C,
for easy use in sharing across the FFI boundary. We will use this
structure in several places in hot paths, such as index-pack or
pack-objects when converting between algorithms, so prioritize efficient
interchange over a more idiomatic Rust approach.
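For reference, the C declaration being mirrored (after the uint32_t
change from this series; GIT_MAX_RAWSZ is large enough for SHA-256):

    struct object_id {
            unsigned char hash[GIT_MAX_RAWSZ];
            uint32_t algo; /* fixed width, so Rust can match it exactly */
    };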
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently use an int for this value, but we'll define this structure
from Rust in a future commit and we want to ensure that our data types
are exactly identical. To make that possible, use a uint32_t for the
hash algorithm.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we set up a repository that doesn't have a compatibility hash
algorithm, we set the destination algorithm object to NULL. In such a
case, we want to silently do nothing instead of crashing, so simply
treat the operation as a no-op and copy the object ID.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We'll be implementing some of our interoperability code, like the loose
object map, in Rust. While the code currently compiles with the old
loose object map format, which is written entirely in C, we'll soon
replace that with the Rust-based implementation.
Require the use of Rust for compatibility mode and die if it is not
supported. Because the repo argument is not used when Rust is missing,
cast it to void to silence the compiler warning, which we do not care
about.
Add a prerequisite in our tests, RUST, that checks if Rust functionality
is available and use it in the tests that handle interoperability.
This is technically a regression in functionality compared to our
existing state, but pack index v3 is not yet implemented and thus the
functionality is mostly broken anyway, which is why we've recently
marked it as experimental. We don't believe anyone is getting useful
use out of the interoperability code in its current state, so no actual
users should be negatively impacted by this change.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Implement a new `--trailer <text>` option for `git rebase`
(currently supported by the merge backend only), which appends
arbitrary trailer lines to each rebased commit message.
Reject the option if the user passes an option that requires the
apply backend (git am), since that backend lacks a
message-filter/trailer hook; otherwise we can just use the merge
backend.
Automatically set REBASE_FORCE when any trailer is supplied, and
reject invalid input before the user edits the interactive file.
Signed-off-by: Li Chen <chenl311@chinatelecom.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Route all trailer insertion through trailer_process() and make
builtin/interpret-trailers just do file I/O before calling into it.
amend_file_with_trailers() now shares the same code path.
This removes the fork/exec and tempfile juggling, cutting overhead and
simplifying error handling. No functional change. It also
centralizes the logic to prepare for the follow-up rebase --trailer
patch.
Signed-off-by: Li Chen <chenl311@chinatelecom.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function will be used by trailer_process() in the following
commits.
Signed-off-by: Li Chen <chenl311@chinatelecom.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract trailer processing into a helper that accumulates output in
a strbuf before writing.
Update interpret_trailers() to reuse the helper, buffer output, and
clean up both input and output buffers after writing.
Signed-off-by: Li Chen <chenl311@chinatelecom.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When exclude_matches_pathspec() tries to determine if an otherwise
excluded item matches the given pathspec, it goes through each
pathspec element and declares a hit without checking if the element
is a negative ":(exclude)" element. Fix it by applying the usual "a
path matches if it matches any one of the positive pathspec elements,
and matches none of the negative pathspec elements" rule in the
function.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Also add advice to put new worktrees outside of existing ones.
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>