kpcyrd / rebuilderd

Independent verification of binary packages - reproducible builds

Home Page: https://rebuilderd.com/

License: GNU General Public License v3.0

Languages: Rust 97.06%, Shell 2.03%, Makefile 0.58%, Dockerfile 0.33%
Topics: reproducible-builds, rebuilder, supply-chain-security, security-tools, rust, supply-chain

rebuilderd's Introduction

rebuilderd(1) crates.io cncf slack irc.libera.chat:6697/#archlinux-reproducible

Independent verification system of binary packages.

[Screenshot: rebuildctl pkgs ls example output]

rebuilderd monitors the package repository of a Linux distribution and uses rebuilder backends like archlinux-repro to verify that the provided binary packages can be reproduced from the given source code.

It tracks the state of successfully verified packages and can optionally generate a report of differences with diffoscope for debugging. Note that, due to the early state of this technology, a failed rebuild is more likely caused by a non-deterministic build process than by a supply-chain compromise. But if multiple rebuilders you trust report 100% reproducibility for the set of packages you use, you can be confident that the binaries on your system haven't been tampered with. People are encouraged to run their own rebuilders if they can afford to.

Status

|            | Status          | Docker | Doesn't need --privileged | Doesn't need /dev/kvm | Backend         |
|------------|-----------------|--------|---------------------------|-----------------------|-----------------|
| Arch Linux | ✔️ supported    |        | -                         | ✔️                    | archlinux-repro |
| Debian     | 🚀 experimental | ✔️     | ✔️                        |                       | debrebuild.py   |
| Tails      | 🚀 experimental | -      |                           |                       | docs (script)   |
| Alpine     | ✨ planned      | -      | -                         | -                     | -               |

Docker: There's a docker-compose example setup in this repository, but not all rebuilder backends support running inside a Docker container (for example because the backend itself creates containers).

Doesn't need --privileged: Some rebuilder backends create containers in a way that also works inside a Docker container, provided they're granted the required kernel capabilities. This may have security implications for other containers running on that system, and the code running inside the container may be able to reconfigure the system outside of the Docker container.

Doesn't need /dev/kvm: Some build tools need to start a virtual machine and depend on /dev/kvm being available. This is a special requirement for the hosting environment: you either need a VPS with nested KVM or dedicated, non-virtualized hardware.
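
To check whether the hosting environment provides KVM, a quick sanity check (plain shell, nothing rebuilderd-specific) is:

test -c /dev/kvm && echo "/dev/kvm is available" || echo "no /dev/kvm on this host"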

Accessing a rebuilderd instance in your browser

Many instances run a web frontend to display their results. rebuilderd-website is a very good choice and the software powering the Arch Linux rebuilderd instance:

https://reproducible.archlinux.org/

Loading the index of all packages may take a short time.

Scripting access to a rebuilderd instance

Packaging status

It's also possible to query and manage a rebuilderd instance in a scriptable way. It's recommended to install the rebuildctl command-line utility to do this (instructions for your system may vary; see the packaging status above):

pacman -S rebuilderd-tools

You can then query a rebuilderd instance for the status of a specific package:

rebuildctl -H https://reproducible.archlinux.org pkgs ls --name rebuilderd

You have to specify which instance you want to query because there's no definite truth™. You could ask multiple instances though, including one you operate yourself.

If the rebuilder seems to have outdated data or lists a package as unknown, the update may still be in the build queue. You can query the build queue of an instance like this:

rebuildctl -H https://reproducible.archlinux.org queue ls --head

If there's no output, the build queue is empty.
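
You can also talk to the HTTP API directly if you prefer; for example, the dashboard endpoint mentioned in the development section below can be fetched with curl (assuming the public instance exposes the same endpoint; jq is only used for pretty-printing and is optional):

curl -s https://reproducible.archlinux.org/api/v0/dashboard | jq .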

If you're the administrator of this instance you can also run commands like:

rebuildctl status

Or immediately retry all failed rebuild attempts (automatic retries are enabled by default):

rebuildctl pkgs requeue --status BAD --reset

Running a rebuilderd instance yourself

[Screenshot: journalctl output of a rebuilderd-worker]

"I compile everything from source" - a significant amount of real world binary packages can already be reproduced today. The more people run rebuilders, the harder it is to compromise all of them.

At the current stage of the project we're interested in every rebuilder there is! Most rebuilderd discussion currently happens in #archlinux-reproducible on Libera Chat, so feel free to drop by if you're running an instance or considering setting one up. Having a few unreproducible packages is normal (even if it's slightly more than the official rebuilder reports), but having additional people confirm successful rebuilds is very helpful.

Rebuilding Arch Linux

Please see the setup instructions in the Arch Linux Wiki.

Development with docker

There is a docker-compose setup in the repo. To start a basic stack, simply clone the repository and run:

DOCKER_BUILDKIT=1 docker-compose up

The initial build is going to take some time.

To recompile your changes (you can optionally specify a specific image to build):

DOCKER_BUILDKIT=1 docker-compose build

The auth cookie has strict permissions; for development, simply change them with:

sudo chmod 0644 secret/auth

Check that you can successfully run administrative tasks; use this command to compile and run the rebuildctl binary:

REBUILDERD_COOKIE_PATH=secret/auth cargo run -p rebuildctl -- -v status

There are no packages in the database yet, but there's an example profile that we can load. It only contains one lightweight package and should rebuild successfully out of the box in our docker-compose setup.

REBUILDERD_COOKIE_PATH=secret/auth cargo run -p rebuildctl -- pkgs sync-profile --sync-config contrib/confs/rebuilderd-sync.conf debian-anarchism

Check that the package was successfully added to the database with status UNKWN:

REBUILDERD_COOKIE_PATH=secret/auth cargo run -p rebuildctl -- pkgs ls

You can display the build queue with this command; it also shows a timer for jobs that are currently in progress:

REBUILDERD_COOKIE_PATH=secret/auth cargo run -p rebuildctl -- queue ls --head

You can use a combination of the commands mentioned above to monitor your rebuilder. The packages should eventually show up as GOOD in rebuildctl pkgs ls.

Development

If you want to build from source, or run rebuilderd built from a specific commit, this section contains instructions for that.

A rebuilder consists of the rebuilderd daemon and one or more workers:

First we switch into the daemon/ folder and run our rebuilderd daemon:

cd daemon; cargo run

This takes a moment, but the API should now be available at http://127.0.0.1:8484/api/v0/dashboard.

This daemon needs to keep running in the background, so we start a new terminal to continue with the next steps.

Next we're going to build the rebuildctl binary and confirm it's able to connect to the API. If we don't get an error message, it's working.

cd tools; cargo run -- status

We haven't connected any workers yet, so this output is empty.

Next we want to connect a rebuilder. rebuilderd only does the scheduling for you, so you need to install additional software here (called a rebuilder backend):

  • Arch Linux: pacman -S archlinux-repro, or git clone https://github.com/archlinux/archlinux-repro && cd archlinux-repro && make && sudo make install. Note that on Debian buster you need to install systemd from buster-backports.

With a rebuilder backend installed we're now going to run our first worker:

cd worker; cargo run -- connect http://127.0.0.1:8484

This rebuilder should now show up in our rebuildctl status output:

cd tools; cargo run -- status

Next we're going to import some packages:

cd tools; cargo run -- pkgs sync archlinux community \
    'https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch' \
    --architecture x86_64 --maintainer kpcyrd

The --maintainer option is optional and allows you to rebuild packages by a specific maintainer only.

To show the current status of our imported packages run:

cd tools; cargo run -- pkgs ls

To monitor that your workers are picking up tasks:

cd tools; cargo build && CLICOLOR_FORCE=1 watch -c ../target/debug/rebuildctl status

To inspect the queue run:

cd tools; cargo run -- queue ls

An easy way to test the package import is using a command like this:

cargo watch -- cargo run --bin rebuildctl -- pkgs sync-profile --print-json --sync-config contrib/confs/rebuilderd-sync.conf tails

Build a package directly:

cargo run --bin rebuilderd-worker -- \
	build debian 'http://deb.debian.org/debian/pool/main/a/anarchism/anarchism_15.3-3_all.deb' \
	--input-url 'https://buildinfos.debian.net/buildinfo-pool/a/anarchism/anarchism_15.3-3_all.buildinfo' \
	--backend 'debian=./rebuilder-debian.sh'

Dependencies

Debian: pkg-config liblzma-dev libssl-dev libsqlite3-dev libzstd-dev
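
On Debian these can be installed with apt, for example:

sudo apt install pkg-config liblzma-dev libssl-dev libsqlite3-dev libzstd-dev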

Funding

Rebuilderd development is currently funded by:

  • kpcyrd's savings account
  • Google and The Linux Foundation
  • People like you and me on GitHub Sponsors

License

GPLv3+

rebuilderd's People

Contributors

adityasaky, grawlinson, inglor, jelly, joyliu-q, jvoisin, kpcyrd, santiagotorres, vekhir


rebuilderd's Issues

Packages with epoch in their filename are not marked as GOOD

Seeing the following error on the rebuilderd worker; the package is shown as BAD on https://reproducible.archlinux.org/:

[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Comparing "/tmp/rebuilderd7RU43S/inputs/re2-1:20211101-1-x86_64.pkg.tar.zst" with "/tmp/rebuilderd7RU43S/out/re2-1:20211101-1-x86_64.pkg.tar.zst"
[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Files are identical, marking as GOOD
[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Generating signed link
[2021-11-25T00:30:18Z ERROR rebuilderd_worker] Unexpected error while rebuilding package package: Failed to generate in-toto attestation: illegal argument: Path cannot contain ":"
[2021-11-25T00:30:18Z INFO  rebuilderd_worker] Sending build report to rebuilderd...

rebuildctl: smarter endpoint detection?

rebuildctl should look at /etc/rebuilderd.conf to find an endpoint before falling back to the hardcoded http://127.0.0.1:8080. Currently it only checks the -H argument and a config file in the user's config directory.

It would also be nice to have endpoint under [http] in the default/example config for discoverability; a sketch of what that could look like is shown below.
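
A rough sketch, illustrative only (it mirrors the suggestion above and may not match rebuilderd's actual config schema):

## /etc/rebuilderd.conf
[http]
## endpoint rebuildctl should connect to by default
endpoint = "http://127.0.0.1:8080"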

Debian support

This is the tracking issue for Debian support. The list of current issues:

  • Collect packages currently in debian: The rebuilderd scheduler expects a list of all packages that it needs to issue rebuilds for
  • Map packages to buildinfo files: Debian has detached buildinfo files so there's some code needed to locate the right buildinfo file before we can attempt to rebuild a package
  • Integrate debrebuild.py with rebuilderd
  • binNMUs: binary non-maintainer uploads are used to build the package again in a more recent build environment. The buildinfo file uses the "binNMU version", but the files published to the apt repository only contain the version of the initial source upload, which means we're unable to locate the correct buildinfo file.

Do not capture absolute paths in in-toto link metadata

Currently, when rebuilderd records an in-toto link after a successful rebuild, the absolute paths of both the input and output packages are recorded. Typically, this looks something like /tmp/rebuilderd<build string>/{inputs,out}/<package file>. This should be replaced with just <package file>, enabling more straightforward artifact rules in in-toto layouts. Since rebuilderd is aware of the build location, it can pass this path to in-toto as a string to be left-stripped; a sketch follows the related link below.

Related: in-toto/in-toto-rs#12
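
A minimal sketch of the left-stripping described above, assuming rebuilderd knows the build directory (the function name is illustrative, not rebuilderd's actual code):

use std::path::Path;

// Record only the artifact file name relative to the build directory,
// instead of the absolute /tmp/rebuilderd<build string>/... path.
fn material_name(build_dir: &Path, artifact: &Path) -> String {
    artifact
        .strip_prefix(build_dir)   // drop the /tmp/rebuilderd<...>/{inputs,out}/ prefix
        .unwrap_or(artifact)       // fall back to the full path if it doesn't match
        .to_string_lossy()
        .into_owned()
}

fn main() {
    let name = material_name(
        Path::new("/tmp/rebuilderdXXXXXX/out"),
        Path::new("/tmp/rebuilderdXXXXXX/out/anarchism_15.3-3_all.deb"),
    );
    assert_eq!(name, "anarchism_15.3-3_all.deb");
}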

sync-profile doesn't skip missing maintainers field

with the default config

## rebuild all of core
[profile."archlinux-core"]
distro = "archlinux"
suite = "core"
architecture = "x86_64"
source = "https://ftp.halifax.rwth-aachen.de/archlinux/core/os/x86_64/core.db"

running rebuildctl pkgs sync-profile archlinux-core (the same as the sync timer/service) results in

Error: Failed to load config
Because: missing field `maintainer` for key `profile.archlinux-core` at line 2 column 1

this does not appear to be a problem with rebuildctl pkgs sync ...

Test failure when building rebuilderd

Hi,
when I'm building rebuilderd 0.19.0 in a clean chroot on Arch Linux, I get the following two test failures during check():

failures:

---- decompress::tests::decompress_bzip2_compression stdout ----
thread 'decompress::tests::decompress_bzip2_compression' panicked at tools/src/decompress.rs:91:9:
assertion `left == right` failed
  left: Unknown
 right: Bzip2
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- decompress::tests::detect_bzip2_compression stdout ----
thread 'decompress::tests::detect_bzip2_compression' panicked at tools/src/decompress.rs:84:9:
assertion `left == right` failed
  left: Unknown
 right: Bzip2


failures:
    decompress::tests::decompress_bzip2_compression
    decompress::tests::detect_bzip2_compression

test result: FAILED. 32 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s

error: test failed, to rerun pass `-p rebuildctl --bin rebuildctl`

This seems rather odd to me, as the tests are - from my limited understanding - correct and should pass. The issue is clearly in detect_compression (source), so either tree_magic_mini somehow doesn't recognize the magic bytes or the mapping from mime to CompressedWith fails.

This quick test

touch hello.txt
bzip2 hello.txt
mv hello.txt.bz2 hello
xdg-mime query filetype hello

prints out application/x-bzip2 for me. Maybe it's related.

Are you able to reproduce the issue? It is reliably reproducible on my side.

System information

OS: Arch Linux
Kernel: Linux 6.5.6-arch2-1
Rebuilderd: 0.19.0-2
Rust: 1:1.73.0-1
tree_magic_mini: v3.0.3 (via cargo)
bzip2: v0.4.3 (via cargo)
bzip2-sys: v0.1.11+1.0.8 (via cargo)

Add API documentation

To easily create integrations with rebuilderd it would be nice if the API endpoints would be documented and available somewhere.

[email protected]: all builds fail

Due to the introduction of setsid -c in repro (cf. archlinux/archlinux-repro#82) and the fact that systemd services don't run in a terminal by default, when running [email protected] all builds will eventually fail with the following message:

setsid: failed to set the controlling terminal: Inappropriate ioctl for device

I am not sure if this issue can be resolved by simply telling the service to use a tty somehow. As far as I've tested it, this results in the referenced bug (i.e. the first build succeeding and subsequent ones failing until the service is restarted).

rebuildctl pkgs sync: Storing a different distro value than the sync method used

This command:

% rebuildctl pkgs sync --print-json archlinux cachyos-v3 'https://mirror.cachyos.org/repo/$arch/$repo' --architecture x86_64_v3

Produces entries like this:

  {
    "name": "cachy-browser",
    "version": "94.0.2-3",
    "distro": "archlinux",
    "suite": "cachyos-v3",
    "architecture": "x86_64_v3",
    "input_url": null,
    "artifacts": [
      {
        "name": "cachy-browser",
        "version": "94.0.2-3",
        "url": "https://mirror.cachyos.org/repo/x86_64_v3/cachyos-v3/cachy-browser-94.0.2-3-x86_64_v3.pkg.tar.zst"
      }
    ]
  },

Since distro is set to archlinux, it'd pick the Arch Linux rebuilder backend. Changing this to cachyos fails like this:

% rebuildctl pkgs sync --print-json cachyos cachyos-v3 'https://mirror.cachyos.org/repo/$arch/$repo' --architecture x86_64_v3
Error: No integrated sync for "cachyos", use sync-stdin instead

There should be a way to use the built-in sync code for Arch Linux while setting distro to a different value.

Related to #107

Store build logs or diffoscope html output

Currently it's only possible to investigate why a build was unreproducible by rebuilding it locally and running, for example, diffoscope. It would be nice to be able to view the build log, results, or diffoscope HTML output via an API call.

Possibility to provide at archlinux packages as x86_64_v3

Hello,

First, thanks for providing such a nice project.

I have a question: I saw that you've implemented support for different -march values, but I don't understand the code so far. My goal is to provide a repo which rebuilds several Arch packages.

For this, makepkg.conf needs CARCH="x86_64_v3" set, the PKGBUILDs need arch=(x86_64) changed to arch=(x86_64_v3), and of course the CFLAGS and CXXFLAGS need to be changed to -v3 as well.

Personally I think this is the correct way to provide optimized v3 packages. To use these packages the user needs to change their pacman.conf to Architectures = x86_64 x86_64_v3, until Arch gets a solution to provide this natively with pacman; right now pacman uses uname -m, which only reports x86_64.

I've read the code so far but haven't found a way to keep such packages synced and compiled with this march.

Regards.

Failing Tails build

Dec 10 12:49:19 rebuilderd rebuilderd-worker[580]: [2021-12-10T11:49:19Z INFO  rebuilderd_worker] Requesting work from rebuilderd...
Dec 10 12:49:19 rebuilderd rebuilderd-worker[580]: [2021-12-10T11:49:19Z INFO  rebuilderd_worker] Starting rebuild of "tails-amd64-4.25.img" "4.25"
Dec 10 12:49:19 rebuilderd rebuilderd-worker[580]: [2021-12-10T11:49:19Z INFO  rebuilderd_worker::download] Downloading "https://mirrors.wikimedia.org/tails/stable/tails-amd64-4.25/tails-amd64-4.25.img" to "/tmp/rebuilderd6BqMb6/inputs/tails-amd64-4.25.img"
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:06:51Z INFO  rebuilderd_worker::download] Downloaded 1194328064 bytes
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:06:51Z INFO  rebuilderd_worker::proc] Running "/usr/libexec/rebuilderd/rebuilder-tails.sh" ["/tmp/rebuilderd6BqMb6/inputs/tails-amd64-4.25.img"]
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + IMG_PATH=/tmp/rebuilderd6BqMb6/inputs/tails-amd64-4.25.img
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + basename /tmp/rebuilderd6BqMb6/inputs/tails-amd64-4.25.img
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + sed -nr s/tails-amd64-([0-9a-z~\.]+)\.[^\]+/\1/p
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + sed s/~/-/g
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + TAG=4.25
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + REPO_URL=https://gitlab.tails.boum.org/tails/tails.git
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + export TAILS_BUILD_OPTIONS=nomergebasebranch forcecleanup
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + virsh vol-list default
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + awk {print $1}
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + grep ^tails-builder-
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + xargs -rL1 virsh vol-delete --pool default
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: Vol tails-builder-amd64-buster-20210818-e240e31abf_vagrant_box_image_0.img deleted
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + mktemp -d -t tails.XXXXXX
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + WORK_DIR=/tmp/tails.TZV5be
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + trap { rm -rf -- "$WORK_DIR"; } EXIT
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + chmod 0711 /tmp/tails.TZV5be
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + export GNUPGHOME=/tmp/tails.TZV5be/gpg
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + mkdir -m 0700 -- /tmp/tails.TZV5be/gpg
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + gpg --import
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: gpg: keybox '/tmp/tails.TZV5be/gpg/pubring.kbx' created
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: gpg: /tmp/tails.TZV5be/gpg/trustdb.gpg: trustdb created
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: gpg: key DBB802B258ACD84F: public key "Tails developers (offline long-term identity key) <[email protected]>" imported
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: gpg: Total number processed: 1
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: gpg:               imported: 1
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + REPO_DEST=/tmp/tails.TZV5be/tails
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: + git clone --branch 4.25 -- https://gitlab.tails.boum.org/tails/tails.git /tmp/tails.TZV5be/tails
Dec 10 13:06:51 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails'...
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: Note: switching to 'c1c5a1acac2ddf48dcbe8fdba8063c84acf10303'.
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: You are in 'detached HEAD' state. You can look around, make experimental
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: changes and commit them, and you can discard any commits you make in this
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: state without impacting any branches by switching back to a branch.
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: If you want to create a new branch to retain commits you create, you may
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: do so (now or later) by using -c with the switch command. Example:
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]:   git switch -c <new-branch-name>
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: Or undo this operation with:
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]:   git switch -
Dec 10 13:07:57 rebuilderd rebuilderd-worker[580]: Turn off this advice by setting config variable advice.detachedHead to false
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: + cd /tmp/tails.TZV5be/tails
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: + git verify-tag -v -- 4.25
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: object c1c5a1acac2ddf48dcbe8fdba8063c84acf10303
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: type commit
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: tag 4.25
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: tagger anonym <[email protected]> 1638803095 +0100
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: tagging version 4.25
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg: Signature made Mon 06 Dec 2021 04:04:55 PM CET
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg:                using RSA key 05469FB85EAD6589B43D41D3D21DAD38AF281C0B
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg: Good signature from "Tails developers (offline long-term identity key) <[email protected]>" [unknown]
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg:                 aka "Tails developers <[email protected]>" [unknown]
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg: WARNING: This key is not certified with a trusted signature!
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: gpg:          There is no indication that the signature belongs to the owner.
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Primary key fingerprint: A490 D0F4 D311 A415 3E2B  B7CA DBB8 02B2 58AC D84F
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]:      Subkey fingerprint: 0546 9FB8 5EAD 6589 B43D  41D3 D21D AD38 AF28 1C0B
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: + git submodule update --init
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/chutney' (https://gitlab.tails.boum.org/tails/chutney.git) registered for path 'submodules/chutney'
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/jenkins-tools' (https://gitlab.tails.boum.org/tails/jenkins-tools.git) registered for path 'submodules/jenkins-tools'
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/mirror-pool-dispatcher' (https://gitlab.tails.boum.org/tails/mirror-pool-dispatcher.git) registered for path 'submodules/mirror-pool-dispatcher'
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/tails-workarounds' (https://gitlab.tails.boum.org/tails/workarounds.git) registered for path 'submodules/tails-workarounds'
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/torbrowser-launcher' (https://gitlab.tails.boum.org/tails/torbrowser-launcher.git) registered for path 'submodules/torbrowser-launcher'
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails/submodules/chutney'...
Dec 10 13:07:58 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails/submodules/jenkins-tools'...
Dec 10 13:07:59 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails/submodules/mirror-pool-dispatcher'...
Dec 10 13:08:00 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails/submodules/tails-workarounds'...
Dec 10 13:08:00 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.TZV5be/tails/submodules/torbrowser-launcher'...
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/chutney': checked out '29dfa133af65a081e9861c12a862f5f3368c6640'
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/jenkins-tools': checked out '9f7cb532e717a5b181f1b933675d75b17df0a4d9'
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/mirror-pool-dispatcher': checked out '573ca423708b9fdcc35abd17605e71c1c6ef794c'
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/tails-workarounds': checked out 'e81b1006a802bca117faa9b295a4b90332c7c47f'
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/torbrowser-launcher': checked out 'd1252566af4440ae3a17e3920f5e92d56cf5af67'
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: + dpkg-parsechangelog --show-field=Date
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: + date --utc --date=Mon, 06 Dec 2021 16:03:22 +0100 +%s
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: + SOURCE_DATE_EPOCH=1638803002
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: + export SOURCE_DATE_EPOCH
Dec 10 13:08:01 rebuilderd rebuilderd-worker[580]: + ARTIFACTS=/tmp/rebuilderd6BqMb6/out rake build
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: You have uncommitted changes in the Git repository. Due to limitations
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: of the build system, you need to commit them before building Tails:
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: ?? vagrant/.vagrant.d/
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: If you don't care about those changes and want to build Tails nonetheless,
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: please add `ignorechanges` to the TAILS_BUILD_OPTIONS environment
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: variable.
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: Uncommitted changes. Aborting.
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: + rm -rf -- /tmp/tails.TZV5be
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:02Z INFO  rebuilderd_worker::proc] "/usr/libexec/rebuilderd/rebuilder-tails.sh" exited with exit=exit status: 1, captured 4772 bytes
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:02Z INFO  rebuilderd_worker::rebuild] Build failed, no output artifact found at "/tmp/rebuilderd6BqMb6/out/tails-amd64-4.25.img"
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:02Z WARN  rebuilderd_worker] Failed to verify package
Dec 10 13:08:02 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:02Z INFO  rebuilderd_worker] Sending build report to rebuilderd...
Dec 10 13:08:05 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:05Z INFO  rebuilderd_worker] Requesting work from rebuilderd...
Dec 10 13:08:05 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:05Z INFO  rebuilderd_worker] Starting rebuild of "tails-amd64-4.25.iso" "4.25"
Dec 10 13:08:05 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:08:05Z INFO  rebuilderd_worker::download] Downloading "https://mirrors.wikimedia.org/tails/stable/tails-amd64-4.25/tails-amd64-4.25.iso" to "/tmp/rebuilderdIpOZql/inputs/tails-amd64-4.25.iso"
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:28:48Z INFO  rebuilderd_worker::download] Downloaded 1184440320 bytes
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:28:48Z INFO  rebuilderd_worker::proc] Running "/usr/libexec/rebuilderd/rebuilder-tails.sh" ["/tmp/rebuilderdIpOZql/inputs/tails-amd64-4.25.iso"]
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + IMG_PATH=/tmp/rebuilderdIpOZql/inputs/tails-amd64-4.25.iso
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + basename /tmp/rebuilderdIpOZql/inputs/tails-amd64-4.25.iso
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + sed -nr s/tails-amd64-([0-9a-z~\.]+)\.[^\]+/\1/p
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + sed s/~/-/g
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + TAG=4.25
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + REPO_URL=https://gitlab.tails.boum.org/tails/tails.git
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + export TAILS_BUILD_OPTIONS=nomergebasebranch forcecleanup
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + virsh vol-list default
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + awk {print $1}
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + grep ^tails-builder-
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + xargs -rL1 virsh vol-delete --pool default
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + mktemp -d -t tails.XXXXXX
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + WORK_DIR=/tmp/tails.koOWGj
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + trap { rm -rf -- "$WORK_DIR"; } EXIT
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + chmod 0711 /tmp/tails.koOWGj
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + export GNUPGHOME=/tmp/tails.koOWGj/gpg
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + mkdir -m 0700 -- /tmp/tails.koOWGj/gpg
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + gpg --import
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: gpg: keybox '/tmp/tails.koOWGj/gpg/pubring.kbx' created
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: gpg: /tmp/tails.koOWGj/gpg/trustdb.gpg: trustdb created
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: gpg: key DBB802B258ACD84F: public key "Tails developers (offline long-term identity key) <[email protected]>" imported
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: gpg: Total number processed: 1
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: gpg:               imported: 1
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + REPO_DEST=/tmp/tails.koOWGj/tails
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: + git clone --branch 4.25 -- https://gitlab.tails.boum.org/tails/tails.git /tmp/tails.koOWGj/tails
Dec 10 13:28:48 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails'...
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: Note: switching to 'c1c5a1acac2ddf48dcbe8fdba8063c84acf10303'.
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: You are in 'detached HEAD' state. You can look around, make experimental
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: changes and commit them, and you can discard any commits you make in this
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: state without impacting any branches by switching back to a branch.
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: If you want to create a new branch to retain commits you create, you may
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: do so (now or later) by using -c with the switch command. Example:
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]:   git switch -c <new-branch-name>
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: Or undo this operation with:
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]:   git switch -
Dec 10 13:29:58 rebuilderd rebuilderd-worker[580]: Turn off this advice by setting config variable advice.detachedHead to false
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: + cd /tmp/tails.koOWGj/tails
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: + git verify-tag -v -- 4.25
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg: Signature made Mon 06 Dec 2021 04:04:55 PM CET
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg:                using RSA key 05469FB85EAD6589B43D41D3D21DAD38AF281C0B
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg: Good signature from "Tails developers (offline long-term identity key) <[email protected]>" [unknown]
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg:                 aka "Tails developers <[email protected]>" [unknown]
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg: WARNING: This key is not certified with a trusted signature!
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: gpg:          There is no indication that the signature belongs to the owner.
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Primary key fingerprint: A490 D0F4 D311 A415 3E2B  B7CA DBB8 02B2 58AC D84F
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]:      Subkey fingerprint: 0546 9FB8 5EAD 6589 B43D  41D3 D21D AD38 AF28 1C0B
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: + git submodule update --init
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: object c1c5a1acac2ddf48dcbe8fdba8063c84acf10303
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: type commit
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: tag 4.25
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: tagger anonym <[email protected]> 1638803095 +0100
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: tagging version 4.25
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/chutney' (https://gitlab.tails.boum.org/tails/chutney.git) registered for path 'submodules/chutney'
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/jenkins-tools' (https://gitlab.tails.boum.org/tails/jenkins-tools.git) registered for path 'submodules/jenkins-tools'
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/mirror-pool-dispatcher' (https://gitlab.tails.boum.org/tails/mirror-pool-dispatcher.git) registered for path 'submodules/mirror-pool-dispatcher'
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/tails-workarounds' (https://gitlab.tails.boum.org/tails/workarounds.git) registered for path 'submodules/tails-workarounds'
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Submodule 'submodules/torbrowser-launcher' (https://gitlab.tails.boum.org/tails/torbrowser-launcher.git) registered for path 'submodules/torbrowser-launcher'
Dec 10 13:29:59 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails/submodules/chutney'...
Dec 10 13:30:02 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails/submodules/jenkins-tools'...
Dec 10 13:30:02 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails/submodules/mirror-pool-dispatcher'...
Dec 10 13:30:03 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails/submodules/tails-workarounds'...
Dec 10 13:30:03 rebuilderd rebuilderd-worker[580]: Cloning into '/tmp/tails.koOWGj/tails/submodules/torbrowser-launcher'...
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/chutney': checked out '29dfa133af65a081e9861c12a862f5f3368c6640'
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/jenkins-tools': checked out '9f7cb532e717a5b181f1b933675d75b17df0a4d9'
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/mirror-pool-dispatcher': checked out '573ca423708b9fdcc35abd17605e71c1c6ef794c'
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/tails-workarounds': checked out 'e81b1006a802bca117faa9b295a4b90332c7c47f'
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: Submodule path 'submodules/torbrowser-launcher': checked out 'd1252566af4440ae3a17e3920f5e92d56cf5af67'
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: + dpkg-parsechangelog --show-field=Date
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: + date --utc --date=Mon, 06 Dec 2021 16:03:22 +0100 +%s
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: + SOURCE_DATE_EPOCH=1638803002
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: + export SOURCE_DATE_EPOCH
Dec 10 13:30:05 rebuilderd rebuilderd-worker[580]: + ARTIFACTS=/tmp/rebuilderdIpOZql/out rake build
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: You have uncommitted changes in the Git repository. Due to limitations
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: of the build system, you need to commit them before building Tails:
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: ?? vagrant/.vagrant.d/
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: If you don't care about those changes and want to build Tails nonetheless,
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: please add `ignorechanges` to the TAILS_BUILD_OPTIONS environment
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: variable.
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: Uncommitted changes. Aborting.
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: + rm -rf -- /tmp/tails.koOWGj
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:30:06Z INFO  rebuilderd_worker::proc] "/usr/libexec/rebuilderd/rebuilder-tails.sh" exited with exit=exit status: 1, captured 4688 bytes
Dec 10 13:30:06 rebuilderd rebuilderd-worker[580]: [2021-12-10T12:30:06Z INFO  rebuilderd_worker::rebuild] Build failed, no output artifact found at "/tmp/rebuilderdIpOZql/out/tails-amd64-4.25.iso"

It looks like a git clean -xdf is missing somewhere, or something like this.

Show queue filtered by distro

It's currently not easy to use rebuildctl queue ls --head if there are multiple distros in the rebuilderd instance, because it shows the whole queue even if you only want to inspect the one for a specific distro (e.g. because you've been maintaining those specific workers).

$ rebuildctl queue ls --head --distro debian
error: Found argument '--distro' which wasn't expected, or isn't valid in this context

USAGE:
    rebuildctl queue ls --head

For more information try --help

Ensure all output has been read before calling .wait()

I suspect there's a race with leftover data not being read from a child process if the .wait() future resolves before the last .read(). The parent process should read until the streams are closed before awaiting .wait(); a minimal sketch of that ordering follows.
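
A minimal sketch of the suggested ordering with tokio (assuming the child was spawned with a piped stdout; this is not the actual proc.rs code):

use std::process::Stdio;
use tokio::io::AsyncReadExt;
use tokio::process::Command;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut child = Command::new("sh")
        .arg("-c")
        .arg("echo hello")
        .stdout(Stdio::piped())
        .spawn()?;

    let mut stdout = child.stdout.take().expect("stdout was piped");
    let mut log = Vec::new();
    // drain the pipe until EOF *before* awaiting the exit status,
    // so no trailing output can be lost
    stdout.read_to_end(&mut log).await?;

    let status = child.wait().await?;
    println!("exit: {status}, captured {} bytes", log.len());
    Ok(())
}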

Tails doesn't build anymore :'(

$ rebuildctl pkgs sync-profile --sync-config /etc/rebuilderd-sync.conf tails

[…]

Aug 08 16:03:53 rebuilderd rebuilderd-worker[575]: ==> box: Box file was not detected as metadata. Adding it directly...
Aug 08 16:03:53 rebuilderd rebuilderd-worker[575]: ==> box: Adding box 'tails-builder-amd64-bullseye-20220712-fb05d75887' (v0) for provider:
Aug 08 16:03:53 rebuilderd rebuilderd-worker[575]:     box: Unpacking necessary files from: file:///tmp/tails.uOtKYL/tails/tails-builder-amd64-bullseye-20220712-fb05d75887.box
Aug 08 16:03:56 rebuilderd rebuilderd-worker[575]: [239B blob data]
Aug 08 16:03:57 rebuilderd rebuilderd-worker[575]: Bringing machine 'default' up with 'libvirt' provider...
Aug 08 16:03:57 rebuilderd rebuilderd-worker[575]: ==> default: Uploading base box image as volume into Libvirt storage...
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: [47.9K blob data]
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: [24.4K blob data]
Aug 08 16:03:59 rebuilderd libvirtd[32076]: libvirt version: 8.5.0, package: 1 (Andrea Bolognani <[email protected]> Sun, 17 Jul 2022 17:12:07 +0200)
Aug 08 16:03:59 rebuilderd libvirtd[32076]: hostname: rebuilderd
Aug 08 16:03:59 rebuilderd libvirtd[32076]: storage volume 'apt-cacher-ng-data.qcow2' exists already
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: Error while creating volume for domain: Call to virStorageVolCreateXML failed: storage volume 'apt-cacher-ng-data.qcow2' exists already
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: rake aborted!
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: VagrantCommandError: 'vagrant ["up", "--provision"]' command failed with exit status 1
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:113:in `rescue in run_vagrant'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:110:in `run_vagrant'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:702:in `block (2 levels) in <top (required)>'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /usr/share/rubygems-integration/all/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: Caused by:
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: CommandError: command ["vagrant", "up", "--provision"], {:chdir=>"./vagrant"} failed with exit status 1
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:78:in `run_command'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:111:in `run_vagrant'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /tmp/tails.uOtKYL/tails/Rakefile:702:in `block (2 levels) in <top (required)>'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: /usr/share/rubygems-integration/all/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: Tasks: TOP => build => vm:up
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: (See full trace by running task with --trace)
Aug 08 16:03:59 rebuilderd rebuilderd-worker[575]: + rm -rf -- /tmp/tails.uOtKYL
Aug 08 16:04:00 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:00Z INFO  rebuilderd_worker::proc] "/usr/libexec/rebuilderd/rebuilder-tails.sh" exited with exit=exit status: 1, captured 96909 bytes
Aug 08 16:04:00 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:00Z INFO  rebuilderd_worker::rebuild] No output artifact found, marking as BAD: "/tmp/rebuilderdER21h5/out/tails-amd64-5.3.1.img"
Aug 08 16:04:00 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:00Z INFO  rebuilderd_worker::rebuild] No output artifact found, marking as BAD: "/tmp/rebuilderdER21h5/out/tails-amd64-5.3.1.iso"
Aug 08 16:04:00 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:00Z INFO  rebuilderd_worker] Sending build report to rebuilderd...
Aug 08 16:04:00 rebuilderd rebuilderd[578]: [2022-08-08T14:04:00Z INFO  actix_web::middleware::logger] 127.0.0.1:58292 "POST /api/v0/build/report HTTP/1.1" 200 4 "-" "-" 0.050013
Aug 08 16:04:03 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:03Z INFO  rebuilderd_worker] Requesting work from rebuilderd...
Aug 08 16:04:03 rebuilderd rebuilderd[578]: [2022-08-08T14:04:03Z INFO  actix_web::middleware::logger] 127.0.0.1:58292 "POST /api/v0/queue/pop HTTP/1.1" 200 9 "-" "-" 0.000366
Aug 08 16:04:03 rebuilderd rebuilderd-worker[575]: [2022-08-08T14:04:03Z INFO  rebuilderd_worker] No pending tasks, sleeping for 180s...

Archlinux split packages are rebuilt once for each split

Some PKGBUILD files produce multiple packages. Rebuilderd is not currently aware of this relationship between the split packages and schedules each one for an independent rebuild, each of which builds all the packages only to ignore all but one of them. Instead a single build should be run, followed by checking all the packages produced.

This is especially an issue for gcc, which takes a long time to build and produces 8 different packages: gcc, gcc-libs, gcc-fortran, gcc-objc, gcc-ada, gcc-go, lib32-gcc-libs, and gcc-d. The whole of gcc is therefore built 8 times when rebuilding archlinux core and any time gcc is updated.

Colon in filename causes attestation signing to fail

@foutrelis discovered reproducible packages with a colon : in their filename are flagged as BAD because the attestation can't be generated:

[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Comparing "/tmp/rebuilderd7RU43S/inputs/re2-1:20211101-1-x86_64.pkg.tar.zst" with "/tmp/rebuilderd7RU43S/out/re2-1:20211101-1-x86_64.pkg.tar.zst"
[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Files are identical, marking as GOOD
[2021-11-25T00:30:18Z INFO  rebuilderd_worker::rebuild] Generating signed link
[2021-11-25T00:30:18Z ERROR rebuilderd_worker] Unexpected error while rebuilding package package: Failed to generate in-toto attestation: illegal argument: Path cannot contain ":"
[2021-11-25T00:30:18Z INFO  rebuilderd_worker] Sending build report to rebuilderd...

This causes rebuilderd to return a Result::Err for the rebuild, which is reported to the rebuilderd daemon as a failed rebuild (the log is set to an empty string in this case). This seems to affect all Arch Linux packages with an epoch= set.

Add support for other compression formats for the repository database

rebuilderd currently supports only gzip compression for the Arch Linux repo.db file. We ran into this issue on our mirror because we use .tar.xz files rather than .tar.gz. We've switched to .tar.gz now. However, since pacman's repo-add supports a variety of formats, I imagine this can be a problem going forward for other folks.
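
For context, pacman's repo-add picks the database compression from the file extension, so a mirror can end up with any of the supported formats (the package name below is illustrative):

repo-add custom.db.tar.xz mypackage-1.0-1-x86_64.pkg.tar.zst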

Killing builds after timeout doesn't work reliably

Especially on the Debian rebuilder, builds frequently get stuck during tests, and the kill-after-timeout doesn't seem to work reliably.

The code that configures the timeout:

let opts = proc::Options {
    timeout: Duration::from_secs(timeout),
    size_limit: ctx.build.max_bytes,
    kill_at_size_limit: false,
    passthrough: !ctx.build.silent,
    envs,
};
let (_success, log) = proc::run(bin.as_ref(), &[input_path], opts).await?;

The code that's supposed to kill the process:

pub async fn next_wakeup(&mut self, child: &mut Child) -> Result<Duration> {
    // check if we need to SIGKILL due to SIGTERM timeout
    if let Some(sigterm_sent) = self.sigterm_sent {
        if sigterm_sent.elapsed() > Duration::from_secs(SIGKILL_DELAY) {
            if let Some(pid) = child.id() {
                warn!("child(pid={}) didn't terminate {}s after SIGTERM, sending SIGKILL", pid, SIGKILL_DELAY);
                // child.id is going to return None after this
                child.kill().await?;
            }
        }
    }
    // check if the process timed out and we need to SIGTERM
    if let Some(remaining) = self.timeout.checked_sub(self.start.elapsed()) {
        return Ok(remaining);
    } else if self.sigterm_sent.is_none() {
        // the process has timed out, sending SIGTERM
        warn!("child timed out, killing...");
        self.truncate(child, "TRUNCATED DUE TO TIMEOUT", true).await?;
    }
    // if we don't need any timeouts anymore we just return any value
    Ok(Duration::from_secs(SIGKILL_DELAY))
}

This needs investigation.

Support a build timeout for "stuck" builds

Some rebuilds get stuck due to build issues which seem to hang the build forever (GCC, for example). We should implement a build timeout which kills the build and marks it as bad; otherwise a whole worker is blocked, and it seems that restarting the worker will just pick up the same task again. It's hard to find a good kill timeout since builds may take an arbitrary amount of time due to hardware or network limitations, so it would be good to be able to configure the timeout in rebuilderd-worker.conf (a possible config sketch follows the log excerpt below).

Apr 24 14:25:45 repro2.pkgbuild.com rebuilderd-worker[972844]: make[4]: Entering directory '/build/gcc/src/gcc-build/build-x86_64-pc-linux-gnu/libiberty/testsuite'
Apr 24 14:25:45 repro2.pkgbuild.com rebuilderd-worker[972844]: make[4]: Nothing to be done for 'all'.
Apr 24 14:25:45 repro2.pkgbuild.com rebuilderd-worker[972844]: make[4]: Leaving directory '/build/gcc/src/gcc-build/build-x86_64-pc-linux-gnu/libiberty/testsuite'
Apr 24 14:26:01 repro2.pkgbuild.com rebuilderd-worker[978109]: checking whether objcopy supports debuglink...
Apr 24 14:26:01 repro2.pkgbuild.com rebuilderd-worker[978109]: bjcopy: /tmp/ls33612: cannot fill deb
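
A hypothetical sketch of what a configurable timeout in rebuilderd-worker.conf could look like (the section and key names are made up for illustration and may not match the actual config format):

## rebuilderd-worker.conf (hypothetical)
[build]
## give up on a rebuild after 24 hours
timeout = 86400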

store last modification in JSON output

I'm trying to store the generated files in a database, and for that it would be important to know the latest modification time. Please offer the date of the last change as a unix timestamp, e.g. a build_date field; an illustrative sketch is shown below.
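
For illustration, a package entry could carry the field like this (build_date as suggested above; the value and surrounding fields are only examples):

{
  "name": "rebuilderd",
  "version": "0.19.0-1",
  "distro": "archlinux",
  "build_date": 1700000000
}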

Tails signature verification failure

The tails.sh script hardcodes the Tails gpg key. Unfortunately, it needs to be refreshed, but since the signing key weighs around ~1.5M, it doesn't seem practical to embed it in the script.

Maybe download it with wget https://tails.boum.org/tails-signing.key, import it into gpg, and hack something with git verify-tag? A sketch of that flow is shown below.
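
A sketch of that workaround (the commands exist as written; the exact integration into tails.sh is up for discussion):

## fetch the current Tails signing key instead of embedding it
wget -O tails-signing.key https://tails.boum.org/tails-signing.key
gpg --import tails-signing.key
## then verify the release tag as the script already does
git verify-tag -v -- "$TAG"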

Delete orphaned builds in batches

There's a background job for cleanup that likely locks up the database for extended periods of time:

let query = diesel::sql_query("delete from builds as b where not exists (select 1 from packages as p where p.build_id = b.id);");

This could be split up so the select is executed on its own and the deletes are executed in batches, so that each delete locks the database only briefly instead of locking it once for a long time. A batched variant is sketched below.
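
A sketch of a batched variant (SQLite syntax; the batch size is arbitrary and the query would be repeated until no rows are affected):

-- delete at most 1000 orphaned builds per pass
delete from builds
where id in (
    select b.id from builds as b
    where not exists (select 1 from packages as p where p.build_id = b.id)
    limit 1000
);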

rebuilderd-sync with local repo?

Hi,

I'm trying to configure rebuilderd to fetch packages from a local repo I keep with paru.

This is my /etc/rebuilderd-sync.conf

## rebuild all of archlinux core
[profile."archlinux-core"]
distro = "archlinux"
suite = "core"
architectures = ["x86_64"]
source = "https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch"

## packages I maintain in AUR (and their dependencies)
[profile."iyanmv-aur"]
distro = "archlinux"
suite = "iyanmv"
architectures = ["x86_64"]
source = "/home/iyan/ArchLinux/repos/iyanmv/os/x86_64"

And this is the error I get when trying to start the service

× [email protected] - rebuilderctl sync: periodically import packages
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: failed (Result: exit-code) since Sat 2023-11-11 21:05:21 CET; 4min 58s ago
   Duration: 16ms
TriggeredBy: ● [email protected]
    Process: 1536333 ExecStart=/usr/bin/rebuildctl pkgs sync-profile iyanmv-aur (code=exited, status=1/FAILURE)
   Main PID: 1536333 (code=exited, status=1/FAILURE)
        CPU: 18ms

Nov 11 21:05:21 ilum systemd[1]: Started rebuilderctl sync: periodically import packages.
Nov 11 21:05:21 ilum rebuildctl[1536333]: [2023-11-11T20:05:21Z INFO  rebuildctl::schedule] Reading "/home/iyan/ArchLinux/repos/iyanmv/os/x86_64/iyanmv.db"...
Nov 11 21:05:21 ilum rebuildctl[1536333]: Error: Permission denied (os error 13)
Nov 11 21:05:21 ilum systemd[1]: [email protected]: Main process exited, code=exited, status=1/FAILURE
Nov 11 21:05:21 ilum systemd[1]: [email protected]: Failed with result 'exit-code'.

Since I couldn't find in the documentation how to configure sources to use local repos, I also tried following the /etc/pacman.conf syntax, i.e.,

source = "file:///home/iyan/ArchLinux/repos/iyanmv/os/x86_64"

But that also didn't work. The error I get in that case is:

× [email protected] - rebuilderctl sync: periodically import packages
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: failed (Result: exit-code) since Sat 2023-11-11 21:13:10 CET; 2s ago
   Duration: 17ms
TriggeredBy: ● [email protected]
    Process: 1538385 ExecStart=/usr/bin/rebuildctl pkgs sync-profile iyanmv-aur (code=exited, status=1/FAILURE)
   Main PID: 1538385 (code=exited, status=1/FAILURE)
        CPU: 19ms

Nov 11 21:13:10 ilum systemd[1]: Started rebuilderctl sync: periodically import packages.
Nov 11 21:13:10 ilum rebuildctl[1538385]: [2023-11-11T20:13:10Z INFO  rebuildctl::schedule] Reading "file:///home/iyan/ArchLinux/repos/iyanmv/os/x86_64/iyanmv.db"...
Nov 11 21:13:10 ilum rebuildctl[1538385]: Error: No such file or directory (os error 2)
Nov 11 21:13:10 ilum systemd[1]: [email protected]: Main process exited, code=exited, status=1/FAILURE
Nov 11 21:13:10 ilum systemd[1]: [email protected]: Failed with result 'exit-code'.

So I guess the first option is the correct one, but there seems to be some permissions issue. Am I doing something wrong or is this an issue to be fixed here?

Flaky proc tests: size_limit_no_kill, hello_world

These tests seem to be racy:

---- proc::tests::size_limit_no_kill stdout ----
thread 'proc::tests::size_limit_no_kill' panicked at 'assertion failed: `(left == right)`
  left: `"AAAAAAAAAAAAAAAAAAAAAAAA\n\n\nTRUNCATED DUE TO SIZE LIMIT\n\n"`,
 right: `"AAAAAAAAAAAAAAAAAAAAAAAA\nAAAAAAAAAAAAAAAAAAAAAAAA\n\n\nTRUNCATED DUE TO SIZE LIMIT\n\n"`', worker/src/proc.rs:206:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
    proc::tests::size_limit_no_kill
test result: FAILED. 9 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.53s
error: test failed, to rerun pass '-p rebuilderd-worker --bin rebuilderd-worker'
---- proc::tests::hello_world stdout ----
thread 'proc::tests::hello_world' panicked at 'assertion failed: `(left == right)`
  left: `""`,
 right: `"hello world\n"`', worker/src/proc.rs:189:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
    proc::tests::hello_world
test result: FAILED. 9 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.60s
error: test failed, to rerun pass '-p rebuilderd-worker --bin rebuilderd-worker'

archlinux sync: no MIME magic files found

Trying out a rebuilderd worker for arch, the sync service fails with:

Nov 24 21:19:31 localhost systemd[1]: Started rebuilderctl sync: periodically import packages.
Nov 24 21:19:31 localhost docker[15248]: [2021-11-24T21:19:31Z INFO  rebuildctl::schedule] Downloading "https://ftp.halifax.rwth-aachen.de/archlinux/core/os/x86_64/core.db"...
Nov 24 21:19:34 localhost docker[15248]: [2021-11-24T21:19:34Z INFO  rebuildctl::schedule::archlinux] Parsing index (139803 bytes)...
Nov 24 21:19:34 localhost docker[15248]: thread 'main' panicked at 'No MIME magic files found in the XDG default paths', /var/cache/buildkit/cargo/registry/src/github.com-1ecc6299db9ec823/tree_magic_mini-3.0.2/src/fdo_magic/builtin/runtim>
Nov 24 21:19:34 localhost docker[15248]: note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Nov 24 21:19:34 localhost systemd[1]: [email protected]: Main process exited, code=exited, status=101/n/a
Nov 24 21:19:34 localhost systemd[1]: [email protected]: Failed with result 'exit-code'.

This makes me wonder whether tree_magic_mini is looking for some sort of xdg-mime database that's not populated...

Automatically retry failed builds with low prio

A failed build should periodically be retried automatically if there's nothing more important to do. This could be done by having a priority column in the queue that we can sort by (see the sketch below).

Right now we are manually triggering rebuilds if we assume something might have failed.
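
Purely illustrative (the table and column names are hypothetical, not rebuilderd's current schema):

-- pop the most important job first; retried failures would be queued with a low priority
select * from queue order by priority asc limit 1;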

Error when following instructions in rebuilderd readme

Hello! I'm Joy, a student working on the GSoC for in-toto-rs.

I will be continuing Santiago's PR #22 on adding in-toto link attestations to rebuilderd.

Issue

While replicating the instructions in the readme, I came across an error in this step, which prevents me from continuing.

Afterwards it's time to import some packages:

cd tools; cargo run pkgs sync archlinux community \
    'https://ftp.halifax.rwth-aachen.de/archlinux/$repo/os/$arch' \
    --architecture x86_64 --maintainer kpcyrd

I included more context and the logs of the error below. Happy to provide more information! Thanks again :D

When was this issue encountered?

I came across this issue last week, prior to 0.13.0's release. However, 0.13.0 also has this issue on my end.

  • Branch: main branch, directly from rebuilderd github.

Logs

thread 'main' panicked at 'Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context.', /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/blocking/shutdown.rs:51:21
stack backtrace:
   0:     0x55a9e93e0ae0 - std::backtrace_rs::backtrace::libunwind::trace::h63b7a90188ab5fb3
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
   1:     0x55a9e93e0ae0 - std::backtrace_rs::backtrace::trace_unsynchronized::h80aefbf9b851eca7
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x55a9e93e0ae0 - std::sys_common::backtrace::_print_fmt::hbef05ae4237a4d72
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:67:5
   3:     0x55a9e93e0ae0 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h28abce2fdb9884c2
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:46:22
   4:     0x55a9e940256f - core::fmt::write::h3b84512577ca38a8
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/fmt/mod.rs:1092:17
   5:     0x55a9e93daba2 - std::io::Write::write_fmt::h465f8feea02e2aa1
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/io/mod.rs:1572:15
   6:     0x55a9e93e2c35 - std::sys_common::backtrace::_print::h525280ee0d29bdde
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:49:5
   7:     0x55a9e93e2c35 - std::sys_common::backtrace::print::h1f0f5b9f3ef8fb78
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/sys_common/backtrace.rs:36:9
   8:     0x55a9e93e2c35 - std::panicking::default_hook::{{closure}}::ha5838f6faa4a5a8f
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:208:50
   9:     0x55a9e93e26e3 - std::panicking::default_hook::hfb9fe98acb0dcb3b
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:225:9
  10:     0x55a9e93e323d - std::panicking::rust_panic_with_hook::hb89f5f19036e6af8
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:591:17
  11:     0x55a9e90fef63 - std::panicking::begin_panic::{{closure}}::h2881e26e65559ec2
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:520:9
  12:     0x55a9e91029a9 - std::sys_common::backtrace::__rust_end_short_backtrace::hf72755a2a94f34f6
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:141:18
  13:     0x55a9e90fee99 - std::panicking::begin_panic::h15c2be904e611030
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panicking.rs:519:12
  14:     0x55a9e908ad79 - tokio::runtime::blocking::shutdown::Receiver::wait::h073c770868e90f61
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/blocking/shutdown.rs:51:21
  15:     0x55a9e906ac26 - tokio::runtime::blocking::pool::BlockingPool::shutdown::h48ac0f5088b38575
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/blocking/pool.rs:145:12
  16:     0x55a9e906b25c - <tokio::runtime::blocking::pool::BlockingPool as core::ops::drop::Drop>::drop::hcd39ceb02f241ef0
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/blocking/pool.rs:162:9
  17:     0x55a9e9085647 - core::ptr::drop_in_place<tokio::runtime::blocking::pool::BlockingPool>::h2e4b52bdd94c6483
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:187:1
  18:     0x55a9e90842f4 - core::ptr::drop_in_place<tokio::runtime::Runtime>::hf9896070b08b0dfa
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:187:1
  19:     0x55a9e8eeebbf - reqwest::blocking::wait::enter::hc01fa312e78d1873
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.4/src/blocking/wait.rs:76:21
  20:     0x55a9e8eeced0 - reqwest::blocking::wait::timeout::hace97bf1d9b62b1a
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.4/src/blocking/wait.rs:13:5
  21:     0x55a9e8ed5878 - reqwest::blocking::client::ClientHandle::new::hf73bfebd75d060c6
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.4/src/blocking/client.rs:920:15
  22:     0x55a9e8ed4faf - reqwest::blocking::client::ClientBuilder::build::h1e141b4c2549f278
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.4/src/blocking/client.rs:100:9
  23:     0x55a9e8ed504c - reqwest::blocking::client::Client::new::h33cc0a8e5a4b7a59
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/reqwest-0.11.4/src/blocking/client.rs:722:9
  24:     0x55a9e8a94abc - rebuildctl::schedule::archlinux::sync::hc8ddf85f9c22439c
                               at /home/joy/Projects/GSoC/rebuilderd/tools/src/schedule/archlinux.rs:134:18
  25:     0x55a9e8b04620 - rebuildctl::sync::{{closure}}::hf729556701093bb0
                               at /home/joy/Projects/GSoC/rebuilderd/tools/src/main.rs:40:30
  26:     0x55a9e8b1ef19 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::ha81b80f8cb797f02
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
  27:     0x55a9e8b07088 - rebuildctl::main::{{closure}}::h6fc79775d6d12415
                               at /home/joy/Projects/GSoC/rebuilderd/tools/src/main.rs:102:47
  28:     0x55a9e8b1f419 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hec27939bc90dd308
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/future/mod.rs:80:19
  29:     0x55a9e8aada60 - tokio::park::thread::CachedParkThread::block_on::{{closure}}::h40ff13b5e13a9663
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/park/thread.rs:263:54
  30:     0x55a9e8aed442 - tokio::coop::with_budget::{{closure}}::h2465c661d042b242
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/coop.rs:106:9
  31:     0x55a9e8b4c4f8 - std::thread::local::LocalKey<T>::try_with::hc54abe921f7fb865
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:272:16
  32:     0x55a9e8b4bf9d - std::thread::local::LocalKey<T>::with::h2e983d3c9957a159
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/local.rs:248:9
  33:     0x55a9e8aad7ae - tokio::coop::with_budget::h4d46bd605f376470
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/coop.rs:99:5
  34:     0x55a9e8aad7ae - tokio::coop::budget::hd7d3e51c1c453581
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/coop.rs:76:5
  35:     0x55a9e8aad7ae - tokio::park::thread::CachedParkThread::block_on::hf6d327a8a6864dbf
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/park/thread.rs:263:31
  36:     0x55a9e8aed931 - tokio::runtime::enter::Enter::block_on::h6bdb29a1bea19c71
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/enter.rs:151:13
  37:     0x55a9e8abbb48 - tokio::runtime::thread_pool::ThreadPool::block_on::h600a773b584e89e9
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/thread_pool/mod.rs:71:9
  38:     0x55a9e8abbcb9 - tokio::runtime::Runtime::block_on::h66ac4d89edf27f73
                               at /home/joy/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.8.2/src/runtime/mod.rs:452:43
  39:     0x55a9e8b05acb - rebuildctl::main::h0553f01748004dcc
                               at /home/joy/Projects/GSoC/rebuilderd/tools/src/main.rs:293:5
  40:     0x55a9e8ab275b - core::ops::function::FnOnce::call_once::he718fecb5defa377
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
  41:     0x55a9e8b4bd7e - std::sys_common::backtrace::__rust_begin_short_backtrace::h88b3e56f0aae8de1
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/sys_common/backtrace.rs:125:18
  42:     0x55a9e8ad0601 - std::rt::lang_start::{{closure}}::h757fc01a566eefa4
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:66:18
  43:     0x55a9e93e373a - core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once::h44574effd2120c86
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/core/src/ops/function.rs:259:13
  44:     0x55a9e93e373a - std::panicking::try::do_call::h10b0bd4879c8dfb0
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:379:40
  45:     0x55a9e93e373a - std::panicking::try::h60c6780d33419e92
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panicking.rs:343:19
  46:     0x55a9e93e373a - std::panic::catch_unwind::h111f33e08c52e2ce
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/panic.rs:431:14
  47:     0x55a9e93e373a - std::rt::lang_start_internal::h126f2e09345dbfda
                               at /rustc/9bc8c42bb2f19e745a63f3445f1ac248fb015e53/library/std/src/rt.rs:51:25
  48:     0x55a9e8ad05e0 - std::rt::lang_start::hc9f9e1df0ded33aa
                               at /home/joy/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/rt.rs:65:5
  49:     0x55a9e8b0fe8c - main
  50:     0x7f0ffcbf10b3 - __libc_start_main
  51:     0x55a9e8a240ee - _start
  52:                0x0 - <unknown>
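
For context, the backtrace shows reqwest::blocking::Client::new being called from tools/src/schedule/archlinux.rs while rebuildctl's async main is already running on a tokio runtime; constructing the blocking client there drops its internal runtime inside an async context, which is exactly what the panic message complains about. A minimal sketch of one way around it (not the actual fix in rebuilderd), assuming the sync path stays async and switches to reqwest's non-blocking client:

    // Sketch only: use the async reqwest client from async code instead of
    // constructing a blocking client inside the running tokio runtime.
    async fn fetch_index(url: &str) -> Result<Vec<u8>, reqwest::Error> {
        let response = reqwest::Client::new().get(url).send().await?;
        let bytes = response.error_for_status()?.bytes().await?;
        Ok(bytes.to_vec())
    }

Alternatively, the blocking client construction could be moved onto a dedicated thread with tokio::task::spawn_blocking so it never runs inside the async context.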

Pull database and packages from the same mirror

Right now the user is supposed to provide a database URL, but the packages are always downloaded from mirrors.kernel.org. This is a leftover from an early prototype and becomes a problem for automatic monitoring if the database is downloaded from a mirror that syncs faster than mirrors.kernel.org: a package might be queued before it's available there, resulting in BAD.

This is more urgent due to the lack of #14.
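
A minimal sketch of the intended behaviour, assuming the same $repo/$arch mirror template that's already passed to pkgs sync is reused for the package download (the function and parameter names are illustrative, not existing rebuilderd code):

    // Derive the package URL from the same mirror template as the database,
    // instead of hardcoding mirrors.kernel.org. The placeholder syntax follows
    // the `$repo`/`$arch` convention used by `rebuildctl pkgs sync`.
    fn pkg_url(template: &str, repo: &str, arch: &str, filename: &str) -> String {
        let base = template.replace("$repo", repo).replace("$arch", arch);
        format!("{}/{}", base.trim_end_matches('/'), filename)
    }

With the Halifax mirror template from the readme, pkg_url(template, "core", "x86_64", filename) would then point at the same mirror the database was fetched from.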

Signed rebuild attestations

Right now rebuilders are very informal: they report their status, but a GOOD attestation is not used for anything serious yet, so the basic security that HTTPS provides is sufficient for now.

I'm opening this as a tracking issue to collect ideas and thoughts on how this should look in the future.

Writes can be silently truncated during package download

I recently set up a rebuilderd instance and noticed around 20 packages failing to reproduce, even though the results in build/ are exactly identical to what I can download from the mirror. After examining my logs closely, I noticed that the workers were reporting download sizes smaller than the size of the files on the mirror, usually only by a few percent. I think I triggered this by having a large number of workers relative to the size of the machine, which for one reason or another (thread contention?) caused some writes to return partial results, so the rest of the write buffer was silently discarded.

Looking at worker/src/download.rs, I see that we write from the request stream using this loop:

    let mut bytes: u64 = 0;
    while let Some(item) = stream.next().compat().await {
        let item = item?;
        bytes += f.write(&item).await? as u64;
    }
    info!("Downloaded {} bytes", bytes);

where f is a tokio::fs::File and write is from the trait tokio::io::AsyncWriteExt. This loop assumes that a successful write always writes the whole buffer, but the documentation for this method explicitly says otherwise:

This function will attempt to write the entire contents of buf, but the entire write may not succeed, or the write may also generate an error. A call to write represents at most one attempt to write to any wrapped object.

Instead there should be a loop like this:

    let mut bytes: u64 = 0;
    while let Some(item) = stream.next().compat().await {
        let item = item?;
        // Keep writing until the whole chunk is on disk; a short write only
        // advances the buffer instead of discarding the remainder.
        let mut buf = &item[..];
        while !buf.is_empty() {
            let written = f.write(buf).await?;
            bytes += written as u64;
            buf = &buf[written..];
        }
    }
    info!("Downloaded {} bytes", bytes);

(tokio's AsyncWriteExt::write_all already implements exactly this retry loop, so f.write_all(&item).await? may be the simpler fix.)

More generally, there should also be some integrity checking on the download, just to ensure that the downloaded package hasn't been corrupted. Otherwise we risk wasting a bunch of time trying to reproduce a package that can't be reproduced.
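
A minimal sketch of such a check, assuming the sha2 crate is available and that the repository index exposes an expected SHA-256 digest for each package (how rebuilderd would obtain that digest is left open):

    use sha2::{Digest, Sha256};

    // Verify the downloaded package against the digest the repository
    // advertises before spending any time on a rebuild attempt.
    fn verify_download(data: &[u8], expected_sha256_hex: &str) -> bool {
        let digest = Sha256::digest(data);
        let actual: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
        actual.eq_ignore_ascii_case(expected_sha256_hex)
    }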

Option to schedule with delay

Somebody might want to run a rebuilder that follows the package index with a delay. An entity or organization could decide to run both a regular and a delayed rebuilder, each with its own queue and scheduler. The second, delayed rebuilder would act as a check that an external reviewer would still be able to replicate our verification work.

An operator is encouraged to also run a regular, non-delayed rebuilder since those results are going to be more useful if the build artifact is intended to be rolled out to users immediately.

A failed delayed rebuild does not imply bad faith or a compromise of the earlier rebuilder, but it would work as a "nothing up my sleeve" mechanism that the earlier rebuilder runs in addition, providing transparency about which of its results we could still replicate in the future and which we couldn't.

Future replication might still break for other reasons though, like network resources disappearing.

Make workers profile aware or profiles worker aware

I'd like to be able to use some workers exclusively to rebuild selected profiles, for example Arch Linux's community repository, which has significantly more packages than core and extra. This would also allow more granular control over which underlying host builds which packages: heavier packages like linux* and gcc could go into a separate profile that is always assigned to a host with the necessary resources.

SQLite tracking issue

To avoid difficult-to-configure external dependencies we've decided to use SQLite to track the rebuilderd state. We had some initial issues with SQLite locking up; it got better but still isn't fully resolved. The Python ecosystem seems to have this figured out.

The error shows up as:

[2020-04-16T02:42:53Z ERROR r2d2] database is locked

or

[2020-04-20T11:34:08Z ERROR actix_http::response] Internal Server Error: DatabaseError(__Unknown, "database is locked")
[2020-04-20T11:34:08Z INFO  actix_web::middleware::logger] 127.0.0.1:45710 "POST /api/v0/build/ping HTTP/1.1" 500 18 "-" "-" 0.251571

There has been some prior discussion in diesel-rs/diesel#2365
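
A minimal sketch of one commonly suggested mitigation, assuming the server keeps using diesel's r2d2 pool (as in the linked discussion) and that enabling WAL mode plus a busy timeout on every pooled connection is acceptable; this is an idea to evaluate, not the fix rebuilderd ships:

    use diesel::connection::SimpleConnection;
    use diesel::r2d2::{ConnectionManager, CustomizeConnection, Pool};
    use diesel::sqlite::SqliteConnection;

    // Applied to every connection handed out by the pool: WAL lets readers and
    // a writer coexist, and the busy timeout makes writers wait for the lock
    // instead of failing immediately with "database is locked".
    #[derive(Debug)]
    struct SqliteTuning;

    impl CustomizeConnection<SqliteConnection, diesel::r2d2::Error> for SqliteTuning {
        fn on_acquire(&self, conn: &mut SqliteConnection) -> Result<(), diesel::r2d2::Error> {
            conn.batch_execute("PRAGMA journal_mode = WAL; PRAGMA busy_timeout = 10000;")
                .map_err(diesel::r2d2::Error::QueryError)
        }
    }

    fn build_pool(url: &str) -> Pool<ConnectionManager<SqliteConnection>> {
        let manager = ConnectionManager::<SqliteConnection>::new(url);
        Pool::builder()
            .connection_customizer(Box::new(SqliteTuning))
            .build(manager)
            .expect("failed to set up sqlite pool")
    }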
