sfttech / kevin
A simple-stupid self-hostable continuous integration service. :see_no_evil:
License: GNU Affero General Public License v3.0
build <kevin.build.Build object at 0x7f9064176208> finished multiple times, wtf?
Happened quite a lot when redelivering failed webhooks this morning.
It would be nice if the job order would be deterministic, i.e. a job listed first in the config should run first.
Line 38 in 3175df8
Please use a function that performs a constant-time comparison, such as hmac.compare_digest.
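A minimal sketch of what such a constant-time check could look like for a GitHub-style webhook signature (the helper name and header format here are illustrative assumptions, not kevin's actual code):

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, signature: str) -> bool:
    """Check a GitHub-style X-Hub-Signature header value (hypothetical helper)."""
    expected = "sha1=" + hmac.new(secret, payload, hashlib.sha1).hexdigest()
    # hmac.compare_digest avoids the timing side channel that a plain
    # `expected == signature` string comparison would leak.
    return hmac.compare_digest(expected, signature)
```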
The scroll-wheel event stops the autoscroll, but not the manual movement of the scrollbar.
Could you please generate and publish the pydoc of this project making it easier to program for it? For example it would be nice to see the documentation of Watcher while extending it.
It would be useful to have the Kevin frontend turn locations of source code in the build output into links pointing at the PR repo. To do this, it would have to find filenames in the output, check whether they correspond to sources (and not build artifacts, include locations, etc.) and then turn them into links. If there turn out to be too many, degrading browser performance, we could turn only locations in errors or warnings into links.
[2016-04-18 11:31:33,111] [253] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/debian-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-24 12:01:01,180] [255] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/debian-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-24 12:05:43,258] [257] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/arch-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-24 12:17:53,350] [259] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/debian-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-24 12:22:26,723] [261] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/arch-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-29 13:20:54,734] [263] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/debian-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-29 13:24:10,015] [265] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/arch-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2016-04-30 12:45:41,212] [267] new client connected
Formatting '/tmp/kevin-tmp.img_00', fmt=qcow2 size=10737418240 backing_file=/home/kevin/vm/debian-openage.img encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
Somehow it increments the connection number by 2.
Mandy uses ES6 and new shit that is only supported in the "latest" browsers (firefox >= 52, ...).
There should be some warning displayed to notify users their browser is too old.
Currently, Kevin reaches out to all configured Falk instances (the container spawners).
It would be better the other way round, i.e. Falk connecting to Kevin.
One just has to reverse the possible connection transports: Unix socket and SSH-forced-command-tunnel.
We should have a matrix bot that reports build results. There is already a Travis bot for matrix. Kevin must not be inferior!
Kevin needs an interactive web frontend. Ideally, it would be served statically as a single-page-thingy, and communicate with the kevin service via websockets.
Here is a list of used / suggested names with their usage if applicable:
All builds are stored on disk anyway, but we should drop old builds from RAM.
This can prevent denial-of-service attacks where all builds of all projects are walked through and requested, thereby loaded from disk into RAM. For many builds, this can exhaust the memory of the process.
-> Keep a maximum of $n builds in memory, evict the rest.
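A rough sketch of such an eviction policy, as a least-recently-used cache (class and method names are made up, not kevin's actual API):

```python
from collections import OrderedDict

class BuildCache:
    """Keep at most `max_builds` builds in RAM; older ones are evicted
    and have to be re-loaded from disk on demand (hypothetical sketch)."""

    def __init__(self, max_builds=128):
        self.max_builds = max_builds
        self._builds = OrderedDict()  # build_id -> build object

    def get(self, build_id):
        build = self._builds.get(build_id)
        if build is not None:
            self._builds.move_to_end(build_id)  # mark as recently used
        return build

    def put(self, build_id, build):
        self._builds[build_id] = build
        self._builds.move_to_end(build_id)
        while len(self._builds) > self.max_builds:
            self._builds.popitem(last=False)    # evict least recently used
```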
Currently, the whole repo is cloned for a job. If the repo has many branches and unnecessary stuff, the clone takes quite some time and traffic, even though the needed branch may be much smaller.
-> Update the git clone call to only fetch the needed commit (and its ancestors).
We should test kevin with kevin. For now, as we have no tests (of course), we could just do pylint verification.
Currently (yes, I'm ashamed) the status update for github blocks the event loop, as it is carried out by requests, without asyncio.
This should be changed by using either aiohttp or wrapping the requests call in an executor.
cmake and the compilers don't recognize stdout as a tty, thus they do not emit color escape sequences. iirc this worked previously.
to avoid crap like apt --version > /dev/null || make checkall
I'm trying to get Kevin, Falk and Chantal to work together to login to the guest vm, but I haven't been able to get this to work yet.
Setup:
~/.ssh/kevin-keys
and ssh is configured to use them. The console output of running falk, kevin, and kevin.simulator is here.
I can login to the guest vm by running ssh [email protected]
without providing a password, so I'm not sure why it's not working. Any help is appreciated.
The management shell via ssh kills the container as soon as the ssh connection dies.
When restarting/powering off the VM, the ssh connection is closed before the machine is off.
-> The machine is killed before it is off.
I experienced data corruption as stuff was not yet synced to disk.
When github notifies kevin of a push to some branch (e.g. master), Kevin already performs the build.
Missing is the notification back to github, so that the shiny green arrow will also pop up there.
Currently: only pull request results are reported back.
Goal: also regular branch builds shall be sent to github.
Implementation is in kevin/service/github.py.
Currently, when a machine is being managed, it is still used as template for started machines.
Possible improvements:
Those ideas can be combined, but each one would be an improvement already. The easiest one is the last one, I guess.
Github Actions supports self-hosted runners:
https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners
Kevin could mimic such a runner, and report to the github-actions UI.
GitHub's runner code is here: https://github.com/actions/runner
Currently, there are only triggers for some events (e.g. github webhook).
But one might want to create nightly builds daily (heh). For that, a trigger that fires up a build at specific times must be implemented.
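A sketch of such a time-based trigger with asyncio (function names are made up, not an existing kevin Trigger):

```python
import asyncio
import datetime

def seconds_until(hour, minute, now=None):
    """Seconds until the next occurrence of hour:minute, local time."""
    now = now or datetime.datetime.now()
    nxt = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if nxt <= now:
        nxt += datetime.timedelta(days=1)  # already past today -> tomorrow
    return (nxt - now).total_seconds()

async def nightly_trigger(start_build, hour=3, minute=0):
    """Fire start_build() once per day at the given time (sketch)."""
    while True:
        await asyncio.sleep(seconds_until(hour, minute))
        await start_build()  # enqueue the nightly build
```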
I'm seeing several failures in job processing:
Sep 12 15:23:16 cyberkischt env[4331]: [2018-09-12 15:23:16,957] exception in Job.run() openage.arch-clang [ecaf496cde4fc12333d383875783c367cddbbbb6]
Sep 12 22:18:47 cyberkischt env[4331]: Traceback (most recent call last):
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/job.py", line 350, in run
Sep 12 22:18:47 cyberkischt env[4331]: await control_handler.asend(data)
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/job.py", line 563, in control_handler
Sep 12 22:18:47 cyberkischt env[4331]: await self.control_message(msg)
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/job.py", line 581, in control_message
Sep 12 22:18:47 cyberkischt env[4331]: await self.set_step_state(msg["step"], msg["state"], msg["text"])
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/job.py", line 287, in set_step_state
Sep 12 22:18:47 cyberkischt env[4331]: time=time))
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/watchable.py", line 55, in send_update
Sep 12 22:18:47 cyberkischt env[4331]: await watcher.on_update(update)
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/build.py", line 387, in on_update
Sep 12 22:18:47 cyberkischt env[4331]: lambda subscriber: isinstance(subscriber, Job)
Sep 12 22:18:47 cyberkischt env[4331]: File "/usr/lib64/python3.6/site-packages/kevin/watchable.py", line 51, in send_update
Sep 12 22:18:47 cyberkischt env[4331]: for watcher in self.watchers:
Sep 12 22:18:47 cyberkischt env[4331]: RuntimeError: Set changed size during iteration
Pls investigate and fix.
GitHub has added new actions to the webhook api. We currently raise a ValueError when we get unknown actions, which results in a traceback in the log and errors in the GitHub webhook overview:
Traceback (most recent call last):
File "/home/kevin/kevin/kevin/service/github.py", line 265, in post
self.handle_pull_request(project, json_data)
File "/home/kevin/kevin/kevin/service/github.py", line 331, in handle_pull_request
raise ValueError("unknown pull_request action '%s'" % action)
ValueError: unknown pull_request action 'edited'
We probably should handle this more gracefully.
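One possible way to log-and-ignore instead of raising (the function and the action set here are illustrative, not kevin's actual list of handled actions):

```python
import logging

# actions kevin actually reacts to; anything else is acknowledged and skipped
KNOWN_PULL_REQUEST_ACTIONS = {"opened", "synchronize", "closed", "reopened"}

def should_handle_pull_request(action, logger=logging.getLogger("github")):
    """Return True if the action should be processed; warn-and-ignore
    unknown ones instead of raising ValueError (sketch)."""
    if action not in KNOWN_PULL_REQUEST_ACTIONS:
        logger.warning("ignoring unknown pull_request action %r", action)
        return False
    return True
```

That way GitHub gets a 200 back and new webhook actions degrade to a log line instead of a traceback.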
Once a Job is done, its updates can be baked and simplified. The result state is then known and all the output can be just one blob.
This would speed up the retrieval in Mandy, as not all updates have to be processed again on every view.
In order to trigger a rebuild, kevin could use the labels of pull request.
If the user adds a "kevin pls rebuild" label, the build status will be deleted and built again.
After this was done, kevin removes the label automatically.
I think I observed that a forcepush, which aborts a build, kills the running job, but not others associated with the build.
I conclude this problem from the following error message of an aborted build:
debian
Job cancelled (it could successfully clone and partially run the job)
arch
Chantal failed; stdout:
Traceback (most recent call last):
  File "/home/openage/chantal/__main__.py", line 32, in main
    build_job(args)
  File "/home/openage/chantal/build.py", line 40, in build_job
    run_command("git checkout -q " + args.commit_sha, base_env)
  File "/home/openage/chantal/util.py", line 46, in run_command
    raise RuntimeError("command failed: %s [%d]" % (cmd, retval))
RuntimeError: command failed: git checkout -q 54024bf525232a4426ffb46c5d48a0304624fc41 [128]
git checkout -q 54024bf525232a4426ffb46c5d48a0304624fc41
fatal: reference is not a tree: 54024bf525232a4426ffb46c5d48a0304624fc41
command returned 128
internal error
-> It tried cloning a non-existing git reference which was force-pushed away.
We could add a new label that, when added to a pull request, instructs kevin to merge that pull request, after the CI-run passes.
It would be useful to show line numbers in the console output and let us link to them, like in a review of a PR. I guess this could get most useful once we activate clang-tidy checks, to directly link to the problems, or even build a kind of todo list in kevin and output it directly in the PR.
Here is an example.
We set up ccache for openage (SFTtech/openage#1000), which requires a persistent storage device for each VM.
This implies that this storage device must not be mounted twice at once, which is only possible if at most one instance of such a VM runs at any given moment.
Enabling such a "limit" is a missing feature in falk and kevin. It should be configured in the falk.conf
for each machine.
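A sketch of how such a limit could be enforced with a per-machine semaphore (the class and the falk.conf key feeding the limits dict are hypothetical, since this feature does not exist yet):

```python
import asyncio

class MachinePool:
    """Limit how many instances of a VM template may run at once,
    e.g. 1 for machines with a persistent ccache disk (sketch)."""

    def __init__(self, limits):
        # machine name -> max parallel instances (would come from falk.conf)
        self._sems = {name: asyncio.Semaphore(n) for name, n in limits.items()}

    async def run(self, name, job):
        async with self._sems[name]:  # waits here if the limit is reached
            return await job()
```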
We should automatically create and publish nightly packages when a PR is merged. The same could be done for commits that are tagged with "release".
Currently, the build is performed in a throwaway qemu-vm, which some admin has to set up.
As an extension, falk should have a docker backend:
That way, the container configuration can even be done by external contributors, not only by the VM admin.
Vagrant is a very convenient tool to create VMs from a configuration file (very much like docker with a dockerfile). Would a vagrant plugin be something useful for kevin? This would allow developers to define VMs via a config file, which would then be spun up by vagrant (probably via falk). Vagrant could then use the builtin provisioner to run chantal, or simply ssh into the VM after it has been launched via vagrant ssh. Vagrant also supports multiple virtualization backends (libvirt, virtualbox, vmware, but also docker & lxc) and provides a unified interface to all of these. I think it can even run windows in a VM, but I have never tried it.
Would this be a useful addon/plugin for kevin?
X-GitHub-Delivery header
build <kevin.build.Build object at 0x7f9066647110> finished multiple times, wtf?
needs a better message