moved to concourse/concourse
This repository has been incorporated into the concourse/concourse
repo as part of The Great Project
Restructuring of 2018.
old - now lives in https://github.com/concourse/concourse
License: BSD 2-Clause "Simplified" License
We have been trying to use the Concourse S3 resource to hold some state and
have been running into some issues that proved hard to diagnose. We think that when
a resource returns an empty array of versions, atc gets confused and the job is
shown as pending.
We have a pipeline containing:

resources:
- name: s3-bucket
  type: s3
  source:
    access_key_id: {{aws_key}}
    bucket: piotr-state
    region_name: eu-west-1
    secret_access_key: {{secret_key}}
    versioned_file: vpc.tfstate

jobs:
- name: test-s3
  serial: true
  plan:
  - get: s3-bucket
  - task: echo
    config:
      image: docker:///governmentpaas/bosh-init
      inputs:
      - name: s3-bucket
        path: ""
      run:
        path: date
This is a cut-down version of our pipeline, which runs on concourse-lite and is
used to bootstrap our real concourse installation.
When we run this for the first time, the S3 bucket contains no files matching
the versioned_file parameter passed to the S3 resource, i.e. the resource is
"empty" and has no versions.
We did a bit of debugging and found that the call to check passed in a null
version, as might be expected from reading
http://concourse.ci/implementing-resources.html.
The output from the S3 resource check executable contained an empty array of versions.
When a resource is empty, how should it respond to a check? With an empty array
of versions or a single null version?
If a resource does respond in a way that's unexpected or not allowed by
atc/concourse, we think it would be helpful to make the job show as failed
along with a reason. We spent a fair amount of time thinking that the job
itself was hanging.
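For reference, the check contract described at http://concourse.ci/implementing-resources.html takes the source and current version on stdin and emits an array of versions on stdout. Below is a minimal sketch of the ambiguity, with a hypothetical in-memory store standing in for the S3 bucket (the helper is illustrative, not the actual S3 resource code):

```python
import json

def check(payload, stored_versions):
    """Sketch of a resource's /opt/resource/check logic. `payload` is the
    JSON ATC sends on stdin; its "version" is null on the very first check.
    `stored_versions` stands in for what the S3 bucket actually contains."""
    current = payload.get("version")
    if not stored_versions:
        # The ambiguous case from this issue: the resource is "empty".
        # The S3 resource emits [] here; the question is whether ATC
        # instead expects a single null version.
        return []
    if current is None:
        # First check against a non-empty store: report only the latest.
        return [stored_versions[-1]]
    # Otherwise report everything from the current version onwards.
    return stored_versions[stored_versions.index(current):]

# First check against an empty bucket, as in this report:
print(json.dumps(check({"source": {"bucket": "piotr-state"}, "version": None}, [])))  # prints []
```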
It would be nice to have icons associated with resources. This way it is easy to see at a glance which type of resource is being used as an input or output. It also avoids resource names that have to include the type.
I am using the 0.68 concourse/lite vagrant release.
On the web UI, my "src" resource somehow shows the following error:
checking failed
resource script '/opt/resource/check []' failed: exit status 255
stderr:
start process: container_daemon: connect to socket: unix_socket: connect to server socket: dial unix /var/vcap/data/garden/depot/6fav943qhns/run/wshd.sock: connection refused
If in that condition I do fly hijack, I get the following error in atc.error.log:
2015/11/25 16:51:23 http: panic serving 192.168.100.1:60867: runtime error: invalid memory address or nil pointer dereference
goroutine 163213 [running]:
net/http.(*conn).serve.func1(0xc8205400b0, 0x7f06fd07e4c0, 0xc8200285e0)
/usr/local/go/src/net/http/server.go:1287 +0xb5
github.com/concourse/atc/api/containerserver.(*Server).hijack(0xc8202b6990, 0x7f06fd0ba548, 0xc820540160, 0xc8201ca986, 0xb, 0xc8204b7ef0, 0x4, 0x0, 0x0, 0x0, ...)
/var/vcap/packages/atc/src/github.com/concourse/atc/api/containerserver/hijack.go:75 +0x180f
github.com/concourse/atc/api/containerserver.(*Server).HijackContainer(0xc8202b6990, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/concourse/atc/api/containerserver/hijack.go:49 +0x618
github.com/concourse/atc/api/containerserver.(*Server).HijackContainer-fm(0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/concourse/atc/api/handler.go:160 +0x3e
net/http.HandlerFunc.ServeHTTP(0xc8201b13d0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/usr/local/go/src/net/http/server.go:1422 +0x3a
github.com/concourse/atc/auth.checkAuthHandler.ServeHTTP(0x7f06fd07a000, 0xc8201b13d0, 0x7f06fff0e640, 0xfd1130, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/concourse/atc/auth/check_auth_handler.go:22 +0x6b
github.com/concourse/atc/auth.(*checkAuthHandler).ServeHTTP(0xc8202bade0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
<autogenerated>:4 +0xb4
github.com/concourse/atc/auth.authHandler.ServeHTTP(0x7f06fff0e668, 0xc8202bade0, 0x7f06fff0e408, 0xfd1130, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/concourse/atc/auth/wrap_handler.go:28 +0x101
github.com/concourse/atc/auth.(*authHandler).ServeHTTP(0xc8202bae00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
<autogenerated>:16 +0xb4
github.com/bmizerany/pat.(*PatternServeMux).ServeHTTP(0xc8200280f0, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/bmizerany/pat/mux.go:109 +0x244
net/http.(*ServeMux).ServeHTTP(0xc820346c00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/usr/local/go/src/net/http/server.go:1699 +0x17d
github.com/concourse/atc/auth.CookieSetHandler.ServeHTTP(0x7f06fff0ed90, 0xc820346c00, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/concourse/atc/auth/cookie_set_handler.go:35 +0x2be
github.com/concourse/atc/auth.(*CookieSetHandler).ServeHTTP(0xc82032d440, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
<autogenerated>:5 +0xb6
github.com/gorilla/context.ClearHandler.func1(0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/var/vcap/packages/atc/src/github.com/gorilla/context/context.go:141 +0x85
net/http.HandlerFunc.ServeHTTP(0xc820344c40, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/usr/local/go/src/net/http/server.go:1422 +0x3a
net/http.serverHandler.ServeHTTP(0xc8202f3380, 0x7f06fd0ba548, 0xc820540160, 0xc82037d0a0)
/usr/local/go/src/net/http/server.go:1862 +0x19e
net/http.(*conn).serve(0xc8205400b0)
/usr/local/go/src/net/http/server.go:1361 +0xbee
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:1910 +0x3f6
If you are logged out and press (+), you are redirected to log back in, but the resulting redirect back loses the method (POST or PUT) of the request and does a GET instead. This results in landing on a page that says "method not allowed". This can be confusing, as it may make users think they can't log in.
Seems the header is interfering with the ability to see the bottom n pixels of the pipeline sidebar in scrolling mode.
To reproduce: scroll the pipeline sidebar; the bottom entries, the queue-once and area51 pipelines (last in the list), are no longer visible.
The 0.73.0
release has broken font downloads for Safari only. You can see the behavior on https://java-experience.ci.springapps.io/pipelines/cf-java-client. It appears that 404s are being returned for the font artifacts:
The downloads appear to work properly in Chrome.
When my build view has a long line that makes the output scroll horizontally, I can't scroll to the right. The JavaScript on the page seems to keep pushing it back to the left every time I hit the right arrow key.
https://main.bosh-ci.cf-app.com/pipelines/bosh-agent-windows/jobs/test-vsphere-stemcell/builds/38 is an example of this. You'll have to narrow the window to see this bug. Only tried Chrome 49.0.2623.87.
We located a task in another pipeline that we knew had the same container requirements. If the Docker image had been displayed in the UI, it would have saved us digging through the task definitions of the other pipeline.
This would also be useful for public Concourses where you might not be able to easily find and access the task.yml.
There should be a better 404 page, including a link back to home.
Currently the oauth endpoint is hardcoded to github.com: https://github.com/concourse/atc/blob/master/auth/github/provider.go#L35
Would be nice if it could be set to an internal GitHub Enterprise instance for authentication.
When an out resource returns an empty object {}, the currently open view of the build status renders it just fine. But after reloading the build view, the build status overview just shows a bar with 'loading' and a spinner.
The JavaScript console throws this error:
failed to fetch plan: UnexpectedPayload ("expecting an object but got null"): ({ build = { id = 106, name = "35", job = Just { name = "build", pipelineName = "pylib" }, status = Succeeded, duration = { startedAt = Just {}, finishedAt = Just {} } }, steps = Nothing, errors = Nothing, state = StepsLoading, context = { events = Address <function>, buildStatus = Address <function> }, eventSource = Nothing, eventSourceOpened = False },None)
When the resource returns an object with an empty 'version' ({"version": {}}), this problem does not happen.
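The difference between the two responses can be reproduced outside the UI. A hedged sketch (the function and its coercion are mine, not Concourse's) of why a bare {} trips up a decoder that expects a version object, and a defensive normalization that would make both cases equivalent:

```python
import json

def normalize_out_response(raw):
    """Hypothetical defensive parsing of an out script's JSON response.
    A bare {} has no "version" key, which decodes to null and is what the
    build view rejects on reload; {"version": {}} decodes fine. Coercing
    a missing/null version to {} makes the two responses equivalent."""
    doc = json.loads(raw)
    if doc.get("version") is None:
        doc["version"] = {}
    doc.setdefault("metadata", [])
    return doc

print(normalize_out_response("{}"))
print(normalize_out_response('{"version": {}}'))
```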
This is repeatable every time I run ginkgo -r -race on the resource directory
[1430594457] Resource Suite - 64/64 specs •••••••••••••••••••••••==================
WARNING: DATA RACE
Read by goroutine 47:
github.com/concourse/atc/resource_test.func·085()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out_test.go:57 +0x4c
github.com/cloudfoundry-incubator/garden/fakes.(*FakeProcess).Wait()
~/workspace/concourse/src/github.com/cloudfoundry-incubator/garden/fakes/fake_process.go:71 +0x220
github.com/concourse/atc/resource.func·003()
~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:134 +0x6b
Previous write by goroutine 7:
github.com/concourse/atc/resource_test.func·086()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out_test.go:51 +0x352
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:104 +0x11b
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0xd3
github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:14 +0x78
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:149 +0x362
github.com/onsi/ginkgo/internal/spec.(*Spec).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:118 +0x1a9
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:144 +0x2fa
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:61 +0xb8
github.com/onsi/ginkgo/internal/suite.(*Suite).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/suite/suite.go:59 +0x35b
github.com/onsi/ginkgo.RunSpecsWithCustomReporters()
~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:203 +0x38f
github.com/onsi/ginkgo.RunSpecs()
~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:184 +0x100
github.com/concourse/atc/resource_test.TestResource()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_suite_test.go:30 +0xa8
testing.tRunner()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:447 +0x133
Goroutine 47 (running) created at:
github.com/concourse/atc/resource.func·004()
~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:140 +0xa66
github.com/tedsuo/ifrit.RunFunc.Run()
~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
github.com/concourse/atc/resource.func·002()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_out.go:33 +0x35c
github.com/tedsuo/ifrit.RunFunc.Run()
~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
github.com/concourse/atc/resource.(*versionedSource).Run()
<autogenerated>:14 +0x97
github.com/tedsuo/ifrit.(*process).run()
~/workspace/concourse/src/github.com/tedsuo/ifrit/process.go:71 +0x97
Goroutine 7 (running) created at:
testing.RunTests()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:555 +0xd4e
testing.(*M).Run()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:485 +0xe0
main.main()
github.com/concourse/atc/resource/_test/_testmain.go:54 +0x28c
==================
•••••••••••••••••••••••••••••••••==================
WARNING: DATA RACE
Read by goroutine 30:
github.com/concourse/atc/resource_test.func·024()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_in_test.go:56 +0x4c
github.com/cloudfoundry-incubator/garden/fakes.(*FakeProcess).Wait()
~/workspace/concourse/src/github.com/cloudfoundry-incubator/garden/fakes/fake_process.go:71 +0x220
github.com/concourse/atc/resource.func·003()
~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:134 +0x6b
Previous write by goroutine 7:
github.com/concourse/atc/resource_test.func·025()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_in_test.go:50 +0x3fc
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:104 +0x11b
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/runner.go:63 +0xd3
github.com/onsi/ginkgo/internal/leafnodes.(*SetupNode).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/leafnodes/setup_nodes.go:14 +0x78
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:149 +0x362
github.com/onsi/ginkgo/internal/spec.(*Spec).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/spec/spec.go:118 +0x1a9
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:144 +0x2fa
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:61 +0xb8
github.com/onsi/ginkgo/internal/suite.(*Suite).Run()
~/workspace/concourse/src/github.com/onsi/ginkgo/internal/suite/suite.go:59 +0x35b
github.com/onsi/ginkgo.RunSpecsWithCustomReporters()
~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:203 +0x38f
github.com/onsi/ginkgo.RunSpecs()
~/workspace/concourse/src/github.com/onsi/ginkgo/ginkgo_dsl.go:184 +0x100
github.com/concourse/atc/resource_test.TestResource()
~/workspace/concourse/src/github.com/concourse/atc/resource/resource_suite_test.go:30 +0xa8
testing.tRunner()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:447 +0x133
Goroutine 30 (running) created at:
github.com/concourse/atc/resource.func·004()
~/workspace/concourse/src/github.com/concourse/atc/resource/run_script.go:140 +0xa66
github.com/tedsuo/ifrit.RunFunc.Run()
~/workspace/concourse/src/github.com/tedsuo/ifrit/runner.go:36 +0x56
github.com/concourse/atc/resource.(*versionedSource).Run()
<autogenerated>:14 +0x97
github.com/tedsuo/ifrit.(*process).run()
~/workspace/concourse/src/github.com/tedsuo/ifrit/process.go:71 +0x97
Goroutine 7 (running) created at:
testing.RunTests()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:555 +0xd4e
testing.(*M).Run()
/usr/local/Cellar/go/1.4.2/libexec/src/testing/testing.go:485 +0xe0
main.main()
github.com/concourse/atc/resource/_test/_testmain.go:54 +0x28c
==================
BOSH team ran into an issue where their builds started erroring immediately with "multiple containers found: handle-a, handle-b". This happened when running the checks for all input resources. If we find multiple containers when searching for what should be uniquely identifying information (in this case, roughly pipeline name + resource name + resource config), we intentionally blow up, as something weird must have happened.
In this case, it ultimately came down to the worker taking too long to heartbeat:
Jul 21 13:09:27 worker-3 groundcrew: 2015/07/21 20:09:26 heartbeat took 1.111563ms
Jul 21 13:09:57 worker-3 groundcrew: 2015/07/21 20:09:56 heartbeat took 20.089841ms
Jul 21 13:10:27 worker-3 groundcrew: 2015/07/21 20:10:26 heartbeat took 6.510488ms
Jul 21 13:10:57 worker-3 groundcrew: 2015/07/21 20:10:56 heartbeat took 6.690761ms
Jul 21 13:12:33 worker-3 groundcrew: 2015/07/21 20:12:32 heartbeat took 1m5.971438976s
Jul 21 13:13:03 worker-3 groundcrew: 2015/07/21 20:13:02 heartbeat took 1.260327ms
Jul 21 13:13:32 worker-3 groundcrew: 2015/07/21 20:13:32 heartbeat took 118.188899ms
Jul 21 13:14:02 worker-3 groundcrew: 2015/07/21 20:14:02 heartbeat took 1.113901ms
Jul 21 13:14:32 worker-3 groundcrew: 2015/07/21 20:14:32 heartbeat took 1.112643ms
The botched heartbeat happened at 13:12:33. During that window the worker will have dropped out of the pool (its TTL having expired), so if a check runs then, we won't find the containers on the worker and will create another. When the worker comes back, we have two containers.
How should we handle this? Nuke one of the containers? Prevent action if all workers aren't in the pool? (Which we can't know ahead of time.)
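The timeline can be sketched with a little arithmetic. The 2× multiplier below is an assumption for illustration only; the exact TTL policy is whatever ATC uses, but any TTL shorter than the 1m5.97s heartbeat gap produces the same outcome:

```python
from datetime import datetime, timedelta

HEARTBEAT_INTERVAL = timedelta(seconds=30)
WORKER_TTL = 2 * HEARTBEAT_INTERVAL  # assumed policy, for illustration only

def worker_in_pool(last_heartbeat, now):
    """A worker is visible to container lookups only while its
    registration TTL has not lapsed since the last good heartbeat."""
    return now - last_heartbeat <= WORKER_TTL

last_good = datetime(2015, 7, 21, 13, 10, 57)  # last on-time heartbeat
slow_beat = datetime(2015, 7, 21, 13, 12, 33)  # the 1m5.97s heartbeat lands

# During the gap the worker drops out, so a check started then creates a
# fresh container; once the worker re-registers, two containers match.
assert worker_in_pool(last_good, last_good + timedelta(seconds=59))
assert not worker_in_pool(last_good, slow_beat)
```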
Currently the only way to refresh the page is the Cmd-R keybinding on a Mac, or clicking a build number at the top of the screen.
But if a job has finished, and there are new builds running/finished, they don't show up automatically and cannot be selected. So you still need to Cmd-R first. Then click on a new build number.
Instead, could we have a keybinding to "refresh to current active or last finished build" please?
Bonus: make the keybinding redirect to a URL endpoint, e.g. jobs/foobar/latest, that could be placed in a dashboard (the dashboard could go to this URL every minute to ensure it's showing the latest build).
The pattern of concentric circles is there in Safari, but doesn't animate.
Nothing at all shows up in Firefox. The box is grey.
It would be nice to be able to authenticate to the Concourse API in scripts without using basic auth. The current OAuth workflow can only happen in a browser that is logged into GitHub.
It was mentioned in Slack that there could be another auth type like personal access tokens. There are also other OAuth flows that might make sense.
When I'm on build 120/163 the top bar shows up like this:
When I move to build 140 it changes to this:
Seems like in the first one it's broken, since it makes it look like 121 is the most recent build. I don't know what the best solution would be, since you won't be able to fit all those builds on the screen, but it should make clear what build you are on, and where it falls in the history of builds.
Hi,
I've recently deployed the newest garden-linux-release and found that atc won't specify a user, resulting in:
error":"linux_container: Run: A User for the process to run as must be specified. ...."
https://github.com/concourse/atc/blob/master/exec/task_step.go#L144 does not specify any user, so that part of the error message is right.
Anything I am missing here?
cya
M.Schwarz
Would be helpful for debugging. Maybe down by the cli list in the bottom right?
If you are using Concourse on a wall display and the server dies, it will continue to show the same view. Build animations continue so there is no way to tell that it will never update.
I suggest opening a modal that says "Communication with the server has been interrupted" when the server connection is down.
https://concourse.diego-ci.cf-app.com/pipelines/main/resources/diego-release currently takes 1.2 minutes to load. Maybe some lazy loading is in order?
When you get to having a lot of pipelines, grouping them becomes a way to draw your eye directly to the section of the side panel your pipeline resides in.
We've done this in a hacky way and it looks like this:
Would be nice to officially support this so it doesn't have to be a paused blank pipeline.
It would be nice to be able to see on the check page for a resource:
A common pattern is a simple task command which builds artifacts in the source directory. These artifacts then need to be copied/moved out of this directory into a directory which is an output, because inputs/outputs can't be nested. This directory is outside of the source directory, adding the need to modify/wrap the simple command to account for this and making these commands less CI-agnostic. This also applies to inputs, where the command needs to be (made) aware of the CI directory structure to find the input artifact.
An anti-pattern I now apply is building tasks like this:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'
inputs:
- name: my-repo
- name: version
outputs:
- name: dist
run:
  path: /bin/sh
  args:
  - -c
  - cp -v version/version dist/version; /usr/bin/make -C src test build; mv -v src/dist/*.tar.gz dist/
My suggestion is to add actions to the inputs/outputs which override the default behaviour of the directory-nesting error and allow files to be copied or moved out.
Example:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'
inputs:
- name: my-repo
outputs:
- name: artifact
  copy: my-repo/dist/
run:
  path: my-repo/scripts/test
my-repo/scripts/test would generate an artifact my-repo/dist/build.tar.gz which would then be copied to artifact/build.tar.gz.
Or possibly support globbing to allow artifact extraction without subdirectories:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: ubuntu
    tag: '14.04'
inputs:
- name: my-repo
outputs:
- name: artifact
  copy: my-repo/*.tar.gz
run:
  path: my-repo/scripts/test
where my-repo/scripts/test would generate an artifact my-repo/build.tar.gz which would then be copied to artifact/build.tar.gz.
We had an issue today where we had misconfigured our pipelines in such a way that they were checking ok (the git repository they were looking at existed) but were failing to find refs / versions (the paths we told concourse to watch were bogus). The only way to find these half-broken inputs was to click on each input (and there were about a dozen total) and see which ones had a "checked successfully", but did not list any found versions.
It would be helpful in cases like these if the web UI behaved as follows:
These endpoints:
https://github.com/concourse/atc/blob/master/wrappa/api_auth_wrappa.go#L62-L80
...should be pared down to just the ones required to log in (ListAuthMethods, etc.), and the rest should require publicly_viewable to be true.
Enqueueing builds can be slow because it's one big expensive query to determine candidate inputs. Unfortunately, when it's slow it also prevents pending builds from being scheduled, because we run one after the other on one interval (currently 10sec).
We should ultimately fix the slowness, but until then a cheap improvement is to split these two operations into independently running loops with their own intervals (still 10sec). That way if one loop is blocked, the other one at least works.
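The split can be sketched as two self-timed loops (Python threads standing in for the real goroutines; the callables are hypothetical stand-ins for the two operations):

```python
import threading
import time

def run_loop(work, interval, stop):
    """Run `work` every `interval` seconds until `stop` is set. Each loop
    owns its timer, so a slow pass through one loop (e.g. the expensive
    candidate-input query) no longer delays the other."""
    while not stop.wait(interval):
        work()

input_passes, schedule_passes = [], []
stop = threading.Event()
loops = [
    threading.Thread(target=run_loop, args=(lambda: input_passes.append(1), 0.05, stop)),
    threading.Thread(target=run_loop, args=(lambda: schedule_passes.append(1), 0.05, stop)),
]
for t in loops:
    t.start()
time.sleep(0.5)  # let both loops tick a few times
stop.set()
for t in loops:
    t.join()
assert input_passes and schedule_passes  # both kept running independently
```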
Hi,
Is it possible or planned that GitHub webhooks can be used instead of polling for new commits on git repos? Our pipelines' polling creates way too much traffic on certain GitHub repos whose content seldom changes.
Best regards, Stephan Weber
If you have too many groups and they wrap, they overlay the menu at a higher z-index, and items on the menu are unclickable.
I've created a bookmarklet with some styling hacks that helps some. Before and after screenshots are attached.
Styling probably needs a holistic look, but perhaps at least the z-index/menu positioning fixes can be incorporated sooner than later to fix menu usability issues.
javascript:(function() {
$('.nav-item, nav .groups li a').css('font-size', '12px', 'important');
$('.nav-item, nav .groups li a').css('font-family', 'sans-serif', 'important');
$('.nav-item, nav .groups li a').css('max-height', '40px', 'important');
$('.nav-item, nav .groups li a').css('max-width', '150px', 'important');
$('.nav-item, nav .groups li a').css('line-height', '40px', 'important');
$('.nav-item, nav .groups li a').css('overflow', 'hidden', 'important');
$('.nav-item, nav .groups li a').css('white-space', 'nowrap', 'important');
$('.nav-item, nav .groups li a').css('text-overflow', 'ellipsis', 'important');
$('.nav-item, nav .groups li a').css('padding', '0 2px', 'important');
$('div .nav-container').css('z-index', '4', 'important');
$('div .nav-container').css('top', '40px', 'important');
$('.pipelines li').css('margin-top', '-34px', 'important');
})();
No timestamps when looking at atc.stderr.log, e.g.:
failed to set keepalive count: setsockopt: bad file descriptor
failed to set keepalive interval: setsockopt: bad file descriptor
Timestamps are in atc.stdout.log, e.g.:
{"timestamp":"1442945883.213434696","source":"atc","message":"atc.p-redis-deployments:radar.scanner-failed","log_level":2,"data":{"error":"multiple containers found, expected one: 0o59l29o642, 0o546uc66si","member":"p-redis-deployments:bosh-lite-site-changes","session":"1493318"}}
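A fix could be as simple as wrapping the stderr stream. The class below is a hypothetical sketch (not ATC code) that prefixes each written line with the same epoch-seconds format the stdout JSON logs already carry:

```python
import io
import time

class TimestampedWriter:
    """Hypothetical sketch: wrap a stream so every line written to it
    (e.g. what lands in atc.stderr.log) gains a timestamp prefix."""
    def __init__(self, stream, clock=time.time):
        self.stream = stream
        self.clock = clock

    def write(self, text):
        for line in text.splitlines(True):
            self.stream.write("{:.9f} {}".format(self.clock(), line))

buf = io.StringIO()
writer = TimestampedWriter(buf, clock=lambda: 1442945883.213434696)
writer.write("failed to set keepalive count: setsockopt: bad file descriptor\n")
print(buf.getvalue(), end="")
```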
It would be nice if Concourse could run tests outside containers (which seem the main focus right now).
As an example, we have a Jenkins server that connects to a slave and starts a custom-built Vagrant VM that runs tests on GNOME 3 and other desktop environments (e.g. Windows). Sometimes these tests must be able to perform certain things that only exist in a Virtual Machine or even a proper Desktop computer (e.g. audio).
Since Concourse relies on Garden heavily, and Garden seems to work exclusively with workers, it's not clear how to migrate our CI pipeline to Concourse.
Currently the collapsed pipeline 'menu' somewhat hides other pipelines from newcomers to the CI until they discover it. Would be nice to be able to have the menu shown by default (keeping the hidden/shown toggle state in browser storage), or to be able to set a pipeline overview as the starting page instead of the main pipeline.
I set up concourse on AWS using the standalone binaries (http://concourse.ci/binaries.html) and I am getting the error 'could not read file from tar' when executing my pipeline.
If I run the same pipeline against a concourse setup within a local VM with vagrant it runs.
This is my configured image resource:
image_resource:
type: docker-image
source: {repository: ubuntu}
There is no further error message in the logs, and from the source code it seems that the original error is hidden.
What is the cause of the error, so that I can use concourse?
It'd be great to have the equivalent of this Buildkite feature.
This would let us specify logical stages of our build output that could be collapsed. For example:
echo '--- bundling'
bundle check || bundle install
<lots of bundler output>
.
.
.
echo '--- preparing DB'
service mysql start
bundle exec rake db:migrate
<lots of migration output>
.
.
.
echo '--- running tests'
bundle exec rspec
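Server- or client-side, the grouping itself is a simple scan for marker lines. A sketch (the marker syntax is Buildkite's; the function is mine):

```python
def split_sections(log_lines, marker="--- "):
    """Group build output into named, collapsible sections by scanning
    for Buildkite-style '--- name' marker lines. Output preceding the
    first marker falls into a default section."""
    sections = []
    current_name, current_body = "output", []
    for line in log_lines:
        if line.startswith(marker):
            if current_body:
                sections.append((current_name, current_body))
            current_name, current_body = line[len(marker):].strip(), []
        else:
            current_body.append(line)
    sections.append((current_name, current_body))
    return sections

log = ["--- bundling", "bundle ok", "--- running tests", "12 examples, 0 failures"]
print(split_sections(log))
```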
There are times where you want to specify runtime configuration during the out phase of a resource. This configuration typically happens in the params, rather than the resource's source. Due to the way resources perform in directly after out, this runtime configuration from the params is lost.
One example in particular is the bosh-deployment resource, where the BOSH target can be specified with target_file in the params. Because this runtime configuration is scoped only to the out, a decision was made to include the target from the target_file in the version output so it is available to the impending in. This seems counter to the intent of what makes up a version. Instead, I believe it makes sense to pass along the params (or a new bit called runtime) as well as the version between these actions. This would allow the configuration to be passed separately from the version, for less confusion.
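The proposal can be sketched as a change to what flows between the two actions (all names here are illustrative, not the actual resource interface):

```python
def out_today(params):
    """Status quo: anything the implicit 'in' needs must be smuggled
    into the version, conflating identity with runtime configuration."""
    return {"manifest_sha1": "abc123", "target": params["target"]}

def out_proposed(params):
    """Proposal: return the version and a separate runtime payload,
    leaving the version purely about identity."""
    return {"manifest_sha1": "abc123"}, {"target": params["target"]}

def in_step(version, runtime=None):
    # With the proposal the target arrives out-of-band from the version.
    return (runtime or version).get("target")

params = {"target": "https://bosh.example.com"}
assert in_step(out_today(params)) == params["target"]
version, runtime = out_proposed(params)
assert in_step(version, runtime) == params["target"]
assert "target" not in version  # the version stays purely identifying
```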
Currently the build start and finish times are shown relatively. Would be nice if they could be shown as absolute times as well (extra field, tooltip).
Would be nice to set a (multiline) description on a pipeline which would appear on the pipeline page. This would make it possible to provide context about the pipeline, as well as links to its GitHub page, documentation, etc.
Hi,
we are running Concourse 0.74 and our atc jobs on the "web" nodes are crashing often with:
failed to enable connection keepalive: file tcp 10.1.6.0:33296->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33297->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33299->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33303->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:33314->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35453->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35455->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35456->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35457->10.1.6.12:7777: fcntl: too many open files
failed to enable connection keepalive: file tcp 10.1.6.0:35458->10.1.6.12:7777: fcntl: too many open files
As a workaround, we call "monit restart atc". Do you know the root cause of this problem? Please tell us if you need more system information for analysis.
Thanks and Best Regards,
Jochen.
{"timestamp":"1444779382.015635014","source":"atc","message":"atc.build-tracker.track.failed-to-get-lease","log_level":2,"data":{"build":8302,"error":"pq: deadlock detected","session":"5.139138"}}
possibly related to memory contention on our vSphere cluster - @vito
plenty of other weirdness:
Today we execute a pretty expensive query to determine the candidate versions for a build's get steps, based on their passed constraints. If the constraints are complicated enough (either passed: [a0, a1, ... aN] or many inputs with correlated passed constraints), this kills Postgres, as more and more versions and more and more builds have to be scanned through.
The related database logic is (PipelineDB).GetLatestInputVersions. It already attempts some optimization (by splitting up the query into one per input, adding constraints as it goes along), but it's still very expensive.
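To see why this is inherently expensive, here is a naive in-memory model of the same problem (the data shapes are simplified and `resolve_inputs` is illustrative, not the actual ATC algorithm). Correlated passed constraints mean candidate assignments must be checked jointly across jobs, which is combinatorial in the worst case:

```python
from itertools import product

def resolve_inputs(passed, builds):
    """Pick one version per input such that, for every job in an input's
    'passed' list, some single build of that job produced exactly the
    chosen versions for all inputs constrained on it.

    passed: {input_name: [job, ...]}
    builds: {job: [{input_name: version, ...}, ...]}, newest first."""
    names = sorted(passed)
    jobs = {j for js in passed.values() for j in js}
    options = {n: [b[n] for j in passed[n] for b in builds[j] if n in b]
               for n in names}
    for combo in product(*(options[n] for n in names)):
        chosen = dict(zip(names, combo))
        if all(any(all(b.get(n) == chosen[n]
                       for n in names if j in passed[n])
                   for b in builds[j])
               for j in jobs):
            return chosen
    return None

# "repo" must have passed both jobs via the same build-visible version;
# only v1 satisfies both unit and integration here.
print(resolve_inputs(
    {"repo": ["unit", "integration"]},
    {"unit": [{"repo": "v2"}, {"repo": "v1"}],
     "integration": [{"repo": "v1"}]},
))
```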
We're testing out the ensure feature and found some UI inconsistencies when using it with a do block. It works fine and looks like we'd expect when both tasks in the do block fail, or when only the first task fails, but the case where the first passes and the last fails looks odd in the UI.
Here's the YAML for the pipeline:
---
resources:

jobs:
- name: make-it-happen
  plan:
  - do:
    - task: beep
      config:
        platform: linux
        image: docker:///ubuntu
        run:
          path: /bin/bash
          args: ["-c", "exit 0"]
    - task: boop
      privileged: true
      config:
        platform: linux
        image: docker:///ubuntu
        run:
          path: /bin/bash
          args: ["-c", "exit 1"]
    ensure:
      task: moop
      config:
        platform: linux
        image: docker:///ubuntu
        run:
          path: /bin/bash
          args: ["-c", "ls"]
I pin a Concourse tab in all my browsers because CONCOURSE IS LIFE.
But a tab for an app without a favicon looks a little sad.
A favicon would be really useful. I would offer to submit a PR but we all know what happens to my PRs.