fetchTree: Allow fetching plain files #6548
Changes from all commits. First, the manual's description of the `type` attribute is updated:

```diff
@@ -181,9 +181,17 @@ Currently the `type` attribute can be one of the following:

 * `tarball`: Tarballs. The location of the tarball is specified by the
   attribute `url`.

-  In URL form, the schema must be `http://`, `https://` or `file://`
-  URLs and the extension must be `.zip`, `.tar`, `.tgz`, `.tar.gz`,
-  `.tar.xz`, `.tar.bz2` or `.tar.zst`.
+  In URL form, the schema must be `tarball+http://`, `tarball+https://` or `tarball+file://`.
+  If the extension corresponds to a known archive format (`.zip`, `.tar`,
+  `.tgz`, `.tar.gz`, `.tar.xz`, `.tar.bz2` or `.tar.zst`), then the `tarball+`
+  can be dropped.
```
Comment on lines +184 to +187:

* Isn't that change breaking backwards compatibility?
* Nope, on the contrary: currently http(s)/file URLs are required to have such an extension and are treated as tarballs. The new behavior makes it so that what was already accepted still is, and with the same semantics, and the rest is treated as file URLs.
* The problematic bit, though, is that adding a new tarball extension would break backwards compatibility a bit: with this change, […]
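To make the compatibility point concrete, here is a small sketch of the behaviour described above. It is not part of the PR: the archive and directory names are placeholders, and the `nix-command`/`flakes` experimental features are assumed to be enabled (as they are in the test suite).

```bash
# Placeholder archive, for illustration only.
mkdir -p some_directory && echo foo > some_directory/file
tar cfz archive.tar.gz some_directory

# Recognised extension, no prefix: fetched and unpacked as a tarball,
# exactly as before this change.
nix flake prefetch "file://$PWD/archive.tar.gz"

# The `tarball+` prefix spells the same behaviour out explicitly, so an
# input keeps its semantics even if the list of recognised archive
# extensions ever grows.
nix flake prefetch "tarball+file://$PWD/archive.tar.gz"
```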
The diff then adds the new `file` fetcher:

```diff
+* `file`: Plain files or directory tarballs, either over http(s) or from the local
+  disk.
```
A review thread on the new `file` entry:

* Might as well rename the function to […]
* Might be an idea, indeed :) I’d leave that for later, though, if you’re OK with that.
The new entry continues, followed by the unchanged `github` entry:

```diff
+  In URL form, the schema must be `file+http://`, `file+https://` or `file+file://`.
+  If the extension doesn’t correspond to a known archive format (as defined by the
+  `tarball` fetcher), then the `file+` prefix can be dropped.
+
 * `github`: A more efficient way to fetch repositories from
   GitHub. The following attributes are required:
```
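For the new `file` fetcher itself, a minimal sketch along the same lines (placeholder paths, experimental features assumed enabled). The attribute form mirrors the test added below, which asserts that the resulting `narHash` is the NAR hash of the file (`nix hash path`).

```bash
echo foo > plain_input                       # placeholder file
tar cfz plain_archive.tar.gz plain_input     # placeholder archive

# Attribute form: fetch an opaque file instead of unpacking an archive.
nix eval --impure --file - <<EOF
builtins.fetchTree { type = "file"; url = "file://$PWD/plain_input"; }
EOF

# URL form: the `file+` prefix keeps even a recognised archive from being
# unpacked; it is fetched as an opaque file instead.
nix flake prefetch "file+file://$PWD/plain_archive.tar.gz"
```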
A new test file (105 added lines) exercises the new behavior:

```bash
source common.sh

clearStore

cd "$TEST_ROOT"

test_fetch_file () {
    echo foo > test_input

    input_hash="$(nix hash path test_input)"

    nix eval --impure --file - <<EOF
let
  tree = builtins.fetchTree { type = "file"; url = "file://$PWD/test_input"; };
in
assert (tree.narHash == "$input_hash");
tree
EOF
}

# Make sure that `http(s)` and `file` flake inputs are properly extracted when
# they should be, and treated as opaque files when they should be
test_file_flake_input () {
    rm -fr "$TEST_ROOT/testFlake";
    mkdir "$TEST_ROOT/testFlake";
    pushd testFlake

    mkdir inputs
    echo foo > inputs/test_input_file
    tar cfa test_input.tar.gz inputs
    cp test_input.tar.gz test_input_no_ext
    input_tarball_hash="$(nix hash path test_input.tar.gz)"
    input_directory_hash="$(nix hash path inputs)"

    cat <<EOF > flake.nix
{
    inputs.no_ext_default_no_unpack = {
        url = "file://$PWD/test_input_no_ext";
        flake = false;
    };
    inputs.no_ext_explicit_unpack = {
        url = "tarball+file://$PWD/test_input_no_ext";
        flake = false;
    };
    inputs.tarball_default_unpack = {
        url = "file://$PWD/test_input.tar.gz";
        flake = false;
    };
    inputs.tarball_explicit_no_unpack = {
        url = "file+file://$PWD/test_input.tar.gz";
        flake = false;
    };
    outputs = { ... }: {};
}
EOF

    nix flake update
    nix eval --file - <<EOF
with (builtins.fromJSON (builtins.readFile ./flake.lock));

# URL inputs whose extension doesn’t match a known archive format should
# not be unpacked by default
assert (nodes.no_ext_default_no_unpack.locked.type == "file");
assert (nodes.no_ext_default_no_unpack.locked.unpack or false == false);
assert (nodes.no_ext_default_no_unpack.locked.narHash == "$input_tarball_hash");

# For backwards compatibility, flake inputs that correspond to the
# old 'tarball' fetcher should still have their type set to 'tarball'
assert (nodes.tarball_default_unpack.locked.type == "tarball");
# Unless explicitly specified, the 'unpack' parameter shouldn’t appear here
# because that would break older Nix versions
assert (!nodes.tarball_default_unpack.locked ? unpack);
assert (nodes.tarball_default_unpack.locked.narHash == "$input_directory_hash");

# Explicitly passing the unpack parameter should enforce the desired behavior
assert (nodes.no_ext_explicit_unpack.locked.narHash == nodes.tarball_default_unpack.locked.narHash);
assert (nodes.tarball_explicit_no_unpack.locked.narHash == nodes.no_ext_default_no_unpack.locked.narHash);
true
EOF
    popd

    [[ -z "${NIX_DAEMON_PACKAGE}" ]] && return 0

    # Ensure that a lockfile generated by the current Nix for tarball inputs
    # can still be read by an older Nix

    cat <<EOF > flake.nix
{
    inputs.tarball = {
        url = "file://$PWD/test_input.tar.gz";
        flake = false;
    };
    outputs = { self, tarball }: {
        foo = builtins.readFile "\${tarball}/test_input_file";
    };
}
EOF
    nix flake update

    clearStore
    "$NIX_DAEMON_PACKAGE/bin/nix" eval .#foo
}

test_fetch_file
test_file_flake_input
```
A final review thread on the fetching implementation:

* A problem with the use of `downloadFile()` is that it uses `FileIngestionMethod::Flat` instead of `FileIngestionMethod::Recursive`. Currently it's assumed that all flake inputs use recursive+sha256 with a name of "source". This allows inputs to be substituted using the `narHash` attribute in the lock file (see `Input::computeStorePath()`). However, the lazy trees branch will probably remove the ability to substitute inputs anyway...

* Oh indeed, I didn’t notice the `Flat` here. Would it be an issue to make `downloadFile` use `Recursive`? AFAIK none of the other use sites really care about how the result is hashed, since it’s only used internally by the fetchers and doesn’t leak outside. (Alternatively, I could just wrap that by adding an extra step that would copy the output to a `Recursive` CA location, but if that can be avoided it’s even better.)
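As a rough command-line illustration of the distinction being discussed (the observable difference between the two ingestion modes, not the internal code path; the file name is a placeholder):

```bash
echo foo > example_file   # throwaway file, for illustration

# Flat ingestion corresponds to hashing the raw file contents...
nix hash file example_file

# ...whereas recursive ingestion hashes the NAR serialisation of the path,
# which is what a lock file's `narHash` records and what substitution via
# Input::computeStorePath() relies on.
nix hash path example_file
```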