🚨 NOTE: Knative Build is deprecated in favor of Tekton Pipelines. There are no plans to produce future releases of this component. 🚨
The original README can be found here for historical purposes.
A Kubernetes-native Build resource.
License: Apache License 2.0
GCB allows users to specify a timeout for their build, with a default of 10 minutes. If the build takes longer than this, it's killed with status `TIMEOUT`. We've also added support for per-step timeouts (default of infinity).
It would be nice to support these in Build CRD as well.
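One possible shape this could take in the Build CRD (a sketch only; the field names and their placement are my assumptions, not an agreed design):

```yaml
spec:
  timeout: 10m          # hypothetical build-level timeout
  steps:
  - name: build
    image: ubuntu
    args: ['make']
    timeout: 5m         # hypothetical per-step timeout; default would be unbounded
```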
This issue is intended to track documenting (and if necessary designing / implementing) facilities for inter-build caching.
I'm opening this issue to track ideas for additional built-in "source" types for build, where today we only support Git / Custom.
These repositories were bootstrapped with placeholder events (e.g.)
This issue covers adding events for relevant pieces (e.g. step completion?) similar to knative/serving#358
The `zz_generated.deepcopy.go` file isn't being regenerated.
Additional context: https://elafros.slack.com/archives/C93EQRQ4A/p1520444564000373
Ideally, we wouldn't need to create a custom BuildTemplate for every buildpack app that's built via the Build CRD. However, many buildpack apps depend on custom environment variables that affect the build process, and setting those environment variables can only be achieved by modifying the BuildTemplate.
Could we introduce an `env` object to the Configuration that is applied to all steps in the BuildTemplate?
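As a sketch of the idea (the `env` placement and semantics here are assumptions, not an agreed API), the build could carry an `env` block that the controller merges into every step of the instantiated template:

```yaml
spec:
  template:
    name: buildpack
    env:                       # hypothetical: merged into all template steps
    - name: BUILDPACK_OPTION   # hypothetical variable name
      value: "some-value"
```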
In particular, the use of hostPath volumes is not currently something we support bridging to/from GCB's yamls.
Today GCB mounts some of these things into every container, so this is at least something that we should support in `FromCRD`, but given this lossy translation it isn't clear how we'd implement `ToCRD` and test round-tripping.
In particular, this should have:
```go
package validate

// Implements error
type Error struct {...}

func NewError(...) error {...}
```
I'm not 100% sure whether I think the common validation should move here, but that should be considered as well.
I have some thoughts on this, will follow up with that.
Off the top of my head:
Currently accessing Build logs is a poor experience in (at least) two ways.
Accessing any logs currently requires me to break encapsulation and peel through the `Job` to the running `Pod`, and then access the logs of the relevant init container.
For failed steps, this is less than ideal, but works.
For successful steps, you get a container not-found error (I guess it's aggressively cleaned up).
We should use a validating webhook (beta in 1.9) to perform validation logic synchronously.
We should also add logic to ensure that the "Spec" of builds doesn't change post-create.
We should be able to restrict the network access of steps so that users can enforce the hermeticity of their builds.
In terms of mechanism something to consider is that service meshes like Istio inject egress proxies in addition to handling ingress. Perhaps we could leverage this mechanic to block (or perhaps filter) outbound traffic.
We would need the capacity for Source to run outside of this. We would likely also need a way to opt steps out, so that they may publish artifacts from the build.
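A sketch of how opting steps out might look (the field names are invented for illustration, not a proposed spec):

```yaml
spec:
  hermetic: true              # hypothetical: egress blocked for all steps by default
  steps:
  - name: compile
    image: ubuntu
  - name: publish
    image: ubuntu
    hermetic: false           # hypothetical opt-out so this step can push artifacts
```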
GCB supports an API endpoint to cancel an ongoing build. When a build is cancelled, its status is set to `CANCELLED` and the worker VM is terminated immediately. Cancelling a finished build returns an HTTP `400`.
It would be useful if there were a model for launching sidecar containers for Build steps.
Google has a need to support a new type of source for users who don't have their source in a Git repository, but instead may have source staged in Google Cloud Storage.
The simplest form is to specify a `tar.gz` file in GCS, which some client would upload before initiating the build/deployment, and which we'd fetch and untar during the build.
A slightly more complex form is to specify a "source manifest" file in GCS, which lists other individual objects in GCS, named after the file's SHA, and mapped to a location in the source context where that file should be downloaded to during the build.
For instance, a source manifest file might look like:
```json
{
  "path/to/main.go": {
    "sourceUrl": "gs://bucket/foo/bar",
    "sha1Sum": "deadbeef"
  },
  "path/to/Dockerfile": {
    "sourceUrl": "gs://bucket/foo/bar",
    "sha1Sum": "facadecafe"
  },
  ...
}
```
This indicates that two objects, at `gs://bucket/foo/bar/deadbeef` and `gs://bucket/foo/bar/facadecafe`, should be downloaded and placed at `path/to/main.go` and `path/to/Dockerfile` respectively. We'd also verify that each object's SHA matches during download.
The main benefit of source manifests is that they enable incremental source upload from the client: since objects are named by their hash, files with the same SHA may already have been uploaded by previous deployments. This is the basic mechanism that App Engine source uploads have used for many years, with no serious modifications or problems.
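A minimal sketch of how a fetcher could turn manifest entries into the objects to download (the type and helper names here are mine, not from any real fetcher implementation):

```go
// Parse a source manifest (format as described above) and derive, per file
// path, the GCS object that holds its content, named after the file's SHA.
package main

import (
	"encoding/json"
	"fmt"
)

type manifestEntry struct {
	SourceURL string `json:"sourceUrl"`
	SHA1Sum   string `json:"sha1Sum"`
}

// objectURL maps an entry to the object to download: the bucket prefix
// joined with the file's SHA.
func objectURL(e manifestEntry) string {
	return e.SourceURL + "/" + e.SHA1Sum
}

func main() {
	raw := `{
	  "path/to/main.go":    {"sourceUrl": "gs://bucket/foo/bar", "sha1Sum": "deadbeef"},
	  "path/to/Dockerfile": {"sourceUrl": "gs://bucket/foo/bar", "sha1Sum": "facadecafe"}
	}`
	var m map[string]manifestEntry
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		panic(err)
	}
	fmt.Println(objectURL(m["path/to/main.go"]))    // gs://bucket/foo/bar/deadbeef
	fmt.Println(objectURL(m["path/to/Dockerfile"])) // gs://bucket/foo/bar/facadecafe
}
```

The real fetcher would additionally verify each downloaded object's SHA before writing it to the workspace path.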
Here's a simple strawman of how to specify GCS source in the Build CRD (very open for discussion):
```yaml
spec:
  source:
    gcs:
      type: Archive | Manifest
      location: 'gs://bucket/path/to/manifest.json' # or '...archive.tar.gz'
  steps:
  ...
```
If a `gcs` source is specified, the Build controller would prepend a step that specifies an image with code to fetch archives or manifests from GCS. We'll open-source the code for that fetcher image, and we'll document and open-source the spec for the source manifest file format.
If there are no concerns, we'd like to implement this as soon as this week. Please let me know if you have questions/concerns/ideas.
/cc @squee1945 @mattmoor
See the autoscaler roadmap for example.
We should be able to use this to drive our selection of issues for a particular milestone.
Now that we have visibility into the steps we can better surface errors about which step failed, and how. We should leverage this to expose better errors in the Build status.
`./hack/update-*.sh` is no-diff.

This should walk through various scenarios (e.g. Private GitHub => DockerHub).
Where possible, these should remain agnostic of the template they are using.
I see:
```yaml
apiVersion: cloudbuild.googleapis.com/v1alpha1
kind: Build
metadata:
  clusterName: ""
  creationTimestamp: 2018-02-08T05:36:00Z
  generation: 0
  labels:
    expect: complete
  name: test-template-args
  namespace: default
  resourceVersion: "2488570"
  selfLink: /apis/cloudbuild.googleapis.com/v1alpha1/namespaces/default/builds/test-template-args
  uid: f250bbc2-0c91-11e8-af08-42010af00074
spec:
  template:
    arguments:
    - name: FOO
      value: foo
    - name: BAZ
      value: bazzzzz
    name: template-args
status:
  builder: Google
  completionTime: null
  conditions:
  - lastTransitionTime: 2018-02-08T05:36:01Z
    message: 'googleapi: Error 400: invalid build: key in the template "FOO" is not a valid built-in substitution, badRequest'
    reason: BuildExecuteFailed
    state: Failed
    status: "True"
  startTime: null
```
In GCB, `$GIT_COMMIT` is a fairly widely used substitution; e.g. it can be used to give Docker images unique tags (with a hint of provenance).
I'd posit that this class of substitution is most useful in the context of a trigger (specifically a Git trigger), which is something we haven't meaningfully discussed and may want to keep out of scope for now. I'm illustrating it here more to show my thinking than anything else:
```yaml
apiVersion: cloudbuild.dev/v1alpha1
kind: GithubTrigger
spec:
  repo: # overload of GitSourceSpec
    url: https://github.com/foo/bar
    branch: <regexp, e.g. master>
    tag: <regexp, e.g. v[0-9.]+>
    refs: <regexp, e.g. refs/pulls/.*>
  # TODO(mattmoor): Additional conditions, e.g. PR from admin, comment from admin on PR.
  template:
    name: dockerfile-build-push
    namespace: cloud-builders
    arguments:
    - name: IMAGE
      value: gcr.io/foo/bar:${GIT_COMMIT}
```
/cc @imjasonh
Whenever I run `./hack/update-codegen.sh`:

```
Pruning is now performed automatically by dep ensure.
Set prune settings in Gopkg.toml and it will be applied when running ensure.
This command currently still prunes as it always has, to ease the transition.
However, it will be removed in a future version of dep.
Now is the time to update your Gopkg.toml and remove `dep prune` from any scripts.
```
When a BuildTemplate is missing, the controller just quietly does nothing. We should surface a validation error.
At present, we use `cloudbuild.googleapis.com/v1alpha1`, which is confusing since these Kubernetes-style resources aren't 1:1 with the GCB specification. We should acquire a domain (ideally vendor-agnostic) and replace this with that.
When specifying a `git` `source`, e.g.,

```yaml
source:
  git:
    url: https://github.com/mchmarny/rester-tester.git
    branch: master
```

It seems that we will need to allow the user to (optionally) identify the root of the source to be built, e.g.,

```yaml
source:
  git:
    url: https://github.com/mchmarny/rester-tester.git
    branch: master
    src_dir: webapp/src
```
A git repo may contain any number of supporting folders, and the required build key file (e.g., `pom.xml`, `package.json`, `build.gradle`) may not be housed at the very top level. When fetching the contents of this repo, only the `src_dir` contents should be placed in the build workspace. (Perhaps there is some git optimization that allows only this sub-folder to be fetched in the first place.)
You can see an example use case here.
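The git optimization hinted at above exists as sparse checkout (git >= 2.25), which keeps only the requested directory in the working tree. A sketch against a throwaway local repository, so it runs without network access (for reduced fetch traffic as well, this could be combined with partial clone, i.e. `--filter=blob:none`):

```shell
set -e
tmp=$(mktemp -d)

# Build a fake upstream repo with content inside and outside webapp/src.
git init -q "$tmp/upstream"
cd "$tmp/upstream"
mkdir -p webapp/src docs
echo 'package main' > webapp/src/main.go
echo 'notes' > docs/notes.md
git add .
git -c user.email=dev@example.com -c user.name=dev commit -q -m 'initial import'

# Clone, then restrict the working tree to webapp/src (cone mode).
git clone -q "$tmp/upstream" "$tmp/workspace"
cd "$tmp/workspace"
git sparse-checkout init --cone
git sparse-checkout set webapp/src

ls webapp/src        # main.go is present
test ! -e docs       # docs/ is no longer materialized in the working tree
```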
Guards against concurrent map access panics in certain tests, and it's just good hygiene.
Elafros repo PR: knative/serving#365
Something like:
```go
Annotations: map[string]string{
	"sidecar.istio.io/inject": "false",
},
```
To test true isolation from Google, it should be possible to run a build publishing to an on-cluster registry (most testing currently works against GCR), so that subsequent deployments may pull the image from there.
In the simplest case, testing this in an unauthenticated (e.g. cluster-visible) setup is probably a good starting point. Beyond that we get into non-GCR authentication questions, which is a separate issue.
What is the motivating goal of a vendor-agnostic Build CRD that runs on Kubernetes?
There are still two things that require `go get` while setting up, which should be moved into `dep` and `vendor/`.
See: knative/serving#365
I had a crazy idea derived from the thinking on how we'd materialize secrets for cluster builds.
In a nutshell: I can read a K8s ConfigMap or Secret via the K8s API, given K8s auth.
What if the first step we ran in GCB populated volumes with data read directly from the API?
This assumes:
Coupled with this it should mean that essentially anything we do for secrets for cluster builds could also work for things running on GCB.
/cc @imjasonh WDYT?
We should incorporate a story for tracking provenance into the Build CRD's model.
Within the current model (w/o further restriction), any step can fetch additional inputs or publish outputs.
We need a format in which steps can surface this information, a mechanism by which they do (volume?, termination message?), and a mechanism by which this information is published somewhere (Grafeas?).
This seems to capture a superset of the existing GCB Git functionality (except provenance, for which I hope we soon have a model that this could embrace).
Prow doesn't seem to be working properly in Build.
cc @adrcunha
Successfully creating the builds doesn't necessarily tell the whole story; having a scripted test for that would be valuable. E2E tests should run `watch -n 2 kubectl get builds ...` as described in https://github.com/elafros/build/blob/master/DEVELOPMENT.md#running-integration-tests, until builds stabilize, and check the expected status.
E2E creates the builds, but doesn't check that they succeeded.
Manually inspecting the output as currently described in the `README.md` is error-prone and highly undesirable. These should be harnessed into some level of testing that can be orchestrated by Travis or some other CI setup (potentially through Bazel, e.g. `sh_test`).
We should run the appropriate configuration against each "builder" (currently Google vs. Cluster).
@imjasonh FYI as I know this is a sore spot.
We need the capacity to perform builder-specific validation, so that certain builders can opt to reject certain builds.
For example, the on-cluster builder may be configured to reject privileged builds.
For example, the Google builder may reject builds that require access to on-cluster resources (secrets, configmaps, volume plugins) or simply don't translate completely.
Came up in this issue, which effectively discusses a customized pre-step.
In principle, with generalized pre-step support, the entire solution in the issue above could be implemented as a mutating webhook that simply injects a pre-step with the appropriate volume mounts. This kind of separation feels like a nice way to split the core model from the convention(s) that make it more approachable.
This had come up in code review previously, and it seemed like a more minor thing at the time, but I'm learning the hard way why Go style prefers `error` and type checks over returning concrete types: https://golang.org/doc/faq#nil_error
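A minimal self-contained demonstration of the gotcha the FAQ describes:

```go
// Returning a concrete *Error through the error interface makes a nil
// pointer compare non-nil, because the interface value carries the type.
package main

import "fmt"

type Error struct{ msg string }

func (e *Error) Error() string { return e.msg }

// bad returns the concrete type; its nil result becomes a non-nil error.
func bad() *Error { return nil }

// good returns the error interface; nil stays nil.
func good() error { return nil }

func main() {
	var err error = bad()
	fmt.Println(err == nil)    // false: interface holds (*Error)(nil)
	fmt.Println(good() == nil) // true
}
```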
/cc @imjasonh I'm sorry for ever doubting you :)