openshift / ci-tools
DPTP Tooling

License: Apache License 2.0

Makefile 0.38% Go 92.97% Dockerfile 0.23% Shell 2.65% Python 0.56% Awk 0.02% HTML 0.15% TypeScript 2.69% CSS 0.03% JavaScript 0.32%

ci-tools's Introduction

CI-Tools

This repository contains tooling used in OpenShift CI. Please refer to the documentation for details.

ci-tools's People

Contributors

alexnpavel, alvaroaleman, bbguimaraes, bear-redhat, danilo-gemoli, deads2k, dgoodwin, droslean, eggfoobar, emilvberglind, geobk, hongkailiu, jmguzik, jupierce, neisw, nesty156, openshift-bot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, petr-muller, prucek, psalajova, smarterclayton, smg247, stbenjam, stevekuznetsov, vrutkovs, wking, xueqzhan


ci-tools's Issues

always_run does not get merged from config/ to jobs/

In #3068, @zaneb added support for specifying always_run: false in configs/, but it only works correctly when creating new job files; the logic for merging changes into existing jobs was not updated, so once a job has been created, changing the value of always_run in its config has no effect; you have to update the job manually. (FTR, there are 80 jobs where the config specifies always_run: false but the job itself is currently always_run: true.)

We could change it to always use the value of always_run from the config, but that would require fixing all of the existing configs first; most of them never got updated to specify always_run: false. (There are currently 2804 jobs where the job specifies always_run: false but the config does not.)

Alternatively, we could change the merge to do merged.AlwaysRun = old.AlwaysRun && new.AlwaysRun (so that if either the config or the pre-existing job specifies always_run: false, the merged job becomes always_run: false); see the sketch below. That would fix the 80 currently-broken jobs without affecting the 2804 "secretly always-run-false" jobs. Though it doesn't really fix the problem, because it would mean you can convert existing always-run jobs to not-always-run, but you'd still need manual jobs-file editing when converting a not-always-run job back to always-run. So, meh.
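A minimal sketch of the proposed AND-merge rule (the job type below is a hypothetical stand-in for the generated Prow job structs):

package main

import "fmt"

// job is a hypothetical stand-in for a generated Prow presubmit.
type job struct {
	Name      string
	AlwaysRun bool
}

// mergeAlwaysRun implements the proposed rule: the merged job is
// always_run: true only if both the pre-existing job file and the
// freshly generated job say so.
func mergeAlwaysRun(oldJob, newJob job) job {
	merged := oldJob
	merged.AlwaysRun = oldJob.AlwaysRun && newJob.AlwaysRun
	return merged
}

func main() {
	existing := job{Name: "pull-ci-example", AlwaysRun: true}
	generated := job{Name: "pull-ci-example", AlwaysRun: false} // config now says always_run: false
	fmt.Println(mergeAlwaysRun(existing, generated).AlwaysRun)  // prints: false
}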

junit: ci-operator no longer records containers in jUnit test suites

Compare the last good and first bad jUnit files:

Broken, at 2019-12-03 21:22:13 +0000 UTC:

<testsuites>
  <testsuite name="operator" tests="4" skipped="0" failures="0" time="4961.711833966">
    <testcase name="Find all of the input images from ocp/4.3:${component} and tag them into the output image stream" time="16.180708508"></testcase>
    <testcase name="All images are built and tagged into stable" time="2.219e-06"></testcase>
    <testcase name="Create the release image containing all images built by this job" time="61.258032765"></testcase>
    <testcase name="Run template e2e-aws" time="4884.272640237"></testcase>
  </testsuite>
</testsuites>

Good, at 2019-12-03 09:51:52 +0000 UTC:

<testsuites>
  <testsuite name="operator" tests="7" skipped="0" failures="0" time="4247.361443821">
    <testcase name="Find all of the input images from ocp/4.3:${component} and tag them into the output image stream" time="12.935432193"></testcase>
    <testcase name="All images are built and tagged into stable" time="1.464e-06"></testcase>
    <testcase name="Create the release image containing all images built by this job" time="64.139096316"></testcase>
    <testcase name="Run template e2e-aws - e2e-aws container lease" time="57"></testcase>
    <testcase name="Run template e2e-aws - e2e-aws container setup" time="1874"></testcase>
    <testcase name="Run template e2e-aws - e2e-aws container teardown" time="523"></testcase>
    <testcase name="Run template e2e-aws - e2e-aws container test" time="1694"></testcase>
  </testsuite>
</testsuites>

cc @wking @abhinavdahiya

Support multiple secrets with multiple mount points

As a user, I'd like jobs to have access to multiple secrets so that individual jobs can have both common and individual secrets. This is to support an addon testing effort that will involve users supplying us tokens that supplant the values we use for most of our job runs.

pr-reminder: detect new commits

We on the OCP storage team use cmd/pr-reminder, and I'd appreciate it if PR age were counted not from when the PR was created, but from its last activity: a new comment, label, title change, push / force-push, or similar. It happens that I'm waiting for review comments to be addressed in several PRs; they turn orange / red and I tend to ignore them, and then I miss a new comment or a new commit. It doesn't need to turn green; any other form of visible emphasis would be enough.

@psalajova, what do you think?

[Enhancement?] Allow for provisioning clusters on a given machine via CRC

Is there any way to enable support for spinning up an OpenShift cluster on a specified machine via CRC (or any mechanism that yields a small, single-node cluster)?

I have a fairly small set of tests that are not particularly resource-intensive, and I'd rather not have to go through AWS/Google Cloud/whatever.

Thanks for the consideration

Template change didn't auto-trigger rehearsal of annotated job

So Clayton and I were looking at openshift/release#8154. After making a few small edits, the only code left in the changeset belonged to the template:
ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml

At this point, the job defined here below no longer triggered for rehearsals.
https://github.com/openshift/release/blob/master/ci-operator/jobs/openshift/release/openshift-release-release-4.2-periodics.yaml#L324
Clayton suggested I make sure the rehearsal annotation was present, and as you can see in the job definition linked directly above, it is.

I have run into this behavior before, and thought this was expected.
To force a run of the job, I left a dummy variable in an interim commit:
openshift/release@c9ef6a2

This produced job log:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/8154/rehearse-8154-release-openshift-origin-installer-e2e-remote-libvirt-s390x-4.2/7

However, the following commit (with the dummy var removed) (openshift/release@16c4dbf) did not retrigger this job.

I asked Steve Kuznetsov about this as well, and he linked me to:
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/8154/pull-ci-openshift-release-master-pj-rehearse/18031#1:build-log.txt%3A15

pkg/api: Secret names should be checked for DNS 1123 subdomain, not label, format

According to the Kubernetes naming docs, object names can contain .; I believe this implies that names are valid DNS 1123 subdomains. The subdomain regexp is here. The regexp used by api/config, however, is for DNS 1123 labels. Secret names that contain ., such as quay.io secrets, will fail prowgen.

I can submit a PR to use k8s.io/apimachinery/pkg/util/validation.IsDNS1123Subdomain(), or simply copy the regexp if there's a dependency issue.
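A quick illustration of the difference using the apimachinery validators (both return a slice of error strings, empty when the name is valid):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	name := "quay.io" // a Secret name containing a dot
	// As a DNS 1123 label, the dot is rejected...
	fmt.Printf("label errors: %v\n", validation.IsDNS1123Label(name))
	// ...but as a DNS 1123 subdomain, the name is valid.
	fmt.Printf("subdomain errors: %v\n", validation.IsDNS1123Subdomain(name))
}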

Logs are not collected when the job is canceled because of a timeout

ci-operator regression:

 2020/05/01 15:35:47 error: unable to signal to artifacts container to terminate in pod format, triggering deletion: could not run remote command: unable to upgrade connection: container not found ("artifacts")
2020/05/01 15:35:47 error: unable to retrieve artifacts from pod format: could not read gzipped artifacts: unable to upgrade connection: container not found ("artifacts") 

https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_ci-tools/744/pull-ci-openshift-ci-tools-master-format/2348#1:build-log.txt%3A21
https://search.apps.build01.ci.devcluster.openshift.com/chart?search=error%3A+unable+to+retrieve+artifacts+from+pod+.*%3A+could+not+read+gzipped+artifacts%3A+unable+to+upgrade+connection%3A+container+not+found&maxAge=48h&context=1&type=build-log&name=.*&maxMatches=5&maxBytes=20971520&groupBy=job

Getting clonerefs for non-amd64 archs

Hi,

The clonerefs binary is required to run the test cases in Prow and is added to the environment in the ci-tools source here:

ADD ./app.binary /clonerefs

Now, this binary comes from an amd64 image and doesn't work for non-amd64 archs. To run it in a multi-arch environment, one workaround is to fetch the clonerefs binary using go get; I have been able to run clonerefs obtained via go get k8s.io/test-infra/prow/cmd/clonerefs on a ppc64le system.

Would enabling ci-tools to run on multi-arch systems be the right solution, or should non-amd64 archs be supported some other way? Thanks.

pj-rehearse did not create CM for new template

openshift/release#4938 added a new job together with a new template used by this new job. The rehearsal created for this job got stuck on failing to mount the template CM:

MountVolume.SetUp failed for volume "job-definition" : configmaps "rehearse-template-cluster-launch-installer-ipi-e2e-dd409aae" not found

The pj-rehearse output suggests that the CM was not even created:

time="2019-09-12T16:19:57Z" level=info msg="Rehearsing Prow jobs for a configuration PR" pr=4938
time="2019-09-12T16:20:00Z" level=info msg="templates changed" pr=4938 templates="[{ci-operator/templates/openshift/installer/cluster-launch-installer-ipi-e2e.yaml dd409aae952175ffec1a2c6f38822f919a85e1ab}]"
time="2019-09-12T16:20:00Z" level=info msg="Job has been chosen for rehearsal" diffs=" .Agent:   a: ''   b: 'kubernetes'" job-name=pull-ci-openshift-installer-master-e2e-ipi pr=4938 repo=openshift/installer
time="2019-09-12T16:20:00Z" level=info msg="Created a rehearsal job to be submitted" pr=4938 rehearsal-job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi target-job=pull-ci-openshift-installer-master-e2e-ipi target-repo=openshift/installer
time="2019-09-12T16:20:00Z" level=info msg="Submitting a new prowjob." job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi name=2b7aacd1-d579-11e9-9688-0a58ac100856 org=openshift pr=4938 repo=release type=presubmit
time="2019-09-12T16:20:00Z" level=info msg="Submitted rehearsal prowjob" job=rehearse-4938-pull-ci-openshift-installer-master-e2e-ipi name=2b7aacd1-d579-11e9-9688-0a58ac100856 org=openshift pr=4938 repo=release type=presubmit 

I have copied the commit from the problematic PR here for reference:
petr-muller/release@d69407a

system requirements validation

Which Git release is required? Is 2.x needed, does 1.8.3 suffice, or does any version work?
Which Go version should be used? The build didn't work with Go 1.11, but after upgrading to 1.13.5, it did.

Build failure: unrecognized import path "vbom.ml/util"

$ make build 
hack/build-go.sh  
go: downloading vbom.ml/util v0.0.0-20180919145318-efcd4e0f9787
../../../go/pkg/mod/github.com/!google!cloud!platform/[email protected]/util/gcs/read.go:32:2: unrecognized import path "vbom.ml/util": https fetch: Get "https://vbom.ml/util?go-get=1": dial tcp: lookup vbom.ml on 127.0.0.1:53: no such host
[ERROR] PID 2449932: hack/build-go.sh:13: `go build ./cmd/...` exited with status 1.
[INFO]          Stack Trace: 
[INFO]            1: hack/build-go.sh:13: `go build ./cmd/...`
[INFO]   Exiting with code 1.
[ERROR] hack/build-go.sh exited with code 1 after 00h 00m 01s
make: *** [Makefile:33: build] Error 1

See fvbommel/util#7
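Until the dependency is bumped, one possible local workaround is a go.mod replace directive pointing the dead vanity import path at its GitHub home (a hedged sketch; it reuses the pseudo-version from the log above and assumes the GitHub repository still serves that commit):

// in go.mod: redirect the unreachable vanity import to GitHub
replace vbom.ml/util => github.com/fvbommel/util v0.0.0-20180919145318-efcd4e0f9787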

step registry integration: nil pointer

Seen in an integration job run:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x13cec21]
goroutine 118 [running]:
github.com/openshift/ci-tools/pkg/load.Registry.func1(0xc00039e2c0, 0x3f, 0x0, 0x0, 0x19840e0, 0xc0003cee40, 0x0, 0x0)
	/go/src/github.com/openshift/ci-tools/pkg/load/load.go:93 +0x81
path/filepath.walk(0xc000402270, 0x28, 0x19cdf00, 0xc0002049c0, 0xc000373cf8, 0x0, 0x0)
	/usr/local/go/src/path/filepath/path.go:378 +0x20c
path/filepath.walk(0xc000043440, 0x20, 0x19cdf00, 0xc000204680, 0xc000373cf8, 0x0, 0x0)
	/usr/local/go/src/path/filepath/path.go:382 +0x2ff
path/filepath.walk(0x7ffcfddbc352, 0x1c, 0x19cdf00, 0xc0002045b0, 0xc000373cf8, 0x0, 0xc000373cc0)
	/usr/local/go/src/path/filepath/path.go:382 +0x2ff
path/filepath.Walk(0x7ffcfddbc352, 0x1c, 0xc000373cf8, 0x153df64b408bb, 0xc021590a06)
	/usr/local/go/src/path/filepath/path.go:404 +0xff
github.com/openshift/ci-tools/pkg/load.Registry(0x7ffcfddbc352, 0x1c, 0x24a9800, 0xc0003cecf0, 0xc0003ced20, 0xc0003ced50, 0x0, 0x0)
	/go/src/github.com/openshift/ci-tools/pkg/load/load.go:92 +0xfe
github.com/openshift/ci-tools/pkg/load.(*registryAgent).loadRegistry(0xc000151220, 0x0, 0x0)
	/go/src/github.com/openshift/ci-tools/pkg/load/registryAgent.go:118 +0xdc
github.com/openshift/ci-tools/pkg/coalescer.(*coalescer).Run.func1()
	/go/src/github.com/openshift/ci-tools/pkg/coalescer/coalescer.go:40 +0x6c
sync.(*Once).doSlow(0xc000442850, 0xc0004ef760)
	/usr/local/go/src/sync/once.go:66 +0xe3
sync.(*Once).Do(...)
	/usr/local/go/src/sync/once.go:57
github.com/openshift/ci-tools/pkg/coalescer.(*coalescer).Run(0xc0002e4ca0, 0x0, 0x0)
	/go/src/github.com/openshift/ci-tools/pkg/coalescer/coalescer.go:34 +0xb8
github.com/openshift/ci-tools/pkg/load.reloadWatcher.func1(0x1982a40, 0xc0002e4ca0)
	/go/src/github.com/openshift/ci-tools/pkg/load/agent_utils.go:42 +0x35
created by github.com/openshift/ci-tools/pkg/load.reloadWatcher
	/go/src/github.com/openshift/ci-tools/pkg/load/agent_utils.go:41 +0x32e 

/assign @AlexNPavel

Error while running tests locally

When trying to run tests locally, I get the following error:

[irosenzw@IdoWork ci-tools]$ ./ci-operator --config openshift-console-master.yaml --git-ref openshift/console@master
2020/06/02 11:54:42 unset version 0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xa8 pc=0x16b590c]

goroutine 1 [running]:
main.(*options).Complete(0xc00046a000, 0xc00061ff20, 0x4)
	/home/irosenzw/openshift-ci/ci-tools/cmd/ci-operator/main.go:374 +0x11c
main.main()
	/home/irosenzw/openshift-ci/ci-tools/cmd/ci-operator/main.go:196 +0x1f8

Multi-secrets leads to a duplicate secret name error

Example: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/6519/rehearse-6519-periodic-ci-openshift-osde2e-master-addon-prow-operator-test/2

When using multi-secrets, at least in a rehearsal, the job fails with:

failed to create or restart test pod: unable to create pod: Pod "addon-prow-operator-test" is invalid: spec.volumes[2].name: Duplicate value: "test-secret"

This happens even though the job YAML doesn't reference that volume. For the moment, it makes the multi-secret support unusable. One way to avoid the name collision is sketched below.
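For illustration, a hedged sketch of one way to give each mounted secret a unique volume name instead of reusing "test-secret" (a hypothetical helper, not the actual ci-operator code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// buildSecretVolumes gives each mounted secret its own volume name,
// avoiding the duplicate "test-secret" value seen in the error above.
func buildSecretVolumes(secretNames []string) []corev1.Volume {
	volumes := make([]corev1.Volume, 0, len(secretNames))
	for i, name := range secretNames {
		volumes = append(volumes, corev1.Volume{
			Name: fmt.Sprintf("test-secret-%d", i),
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: name},
			},
		})
	}
	return volumes
}

func main() {
	for _, v := range buildSecretVolumes([]string{"common-secret", "job-secret"}) {
		fmt.Println(v.Name) // test-secret-0, test-secret-1
	}
}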

ci-secret-generator does not work against 4.12 cluster

Description

oc sa create-kubeconfig no longer works against a 4.12 cluster.
https://bugzilla.redhat.com/show_bug.cgi?id=2108241

This impacts our tools because:

https://docs.openshift.com/container-platform/4.10/authentication/bound-service-account-tokens.html

3. The application that uses the bound token must handle reloading the token when it rotates.

The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours.

So we rely on ci-secret-generator more than ever.

The workaround until the bug is fixed (it might never be fixed):

https://github.com/openshift/ci-tools/blob/master/images/ci-secret-generator/oc_sa_create_kubeconfig.sh

which needs 4.11:cli.

However, it turns out that the generator cannot use 4.11:cli, because that version prints a deprecation message in the output of oc sa create-kubeconfig.

Blocker before bumping to 4.11:cli: @bear-redhat is working on it.

Next steps:
Option 1:

  • wait for #2917
  • update to 4.11:cli

Option 2:

  • save 2 versions of the oc CLI in the image: 4.10 and 4.11
  • use the 4.11 CLI only for build02's items: this will verify the script in the tool. We expect to fix the job after this.
  • use the 4.11 CLI and the script for the other clusters: the deprecation message should then no longer occur, because we no longer use oc sa create-kubeconfig.
  • remove the 4.10 oc CLI from the image.

I bet pull-ci-openshift-release-master-build-clusters will block us along the way, and we'll have to figure out how to fix that too. (Update: checked the code again; found no blocker there.)

"Please help us cut down on flakes by" dead link

Use simpler container names and conventions in step registry

When launching a pod whose name already embeds the step name (so <JOB_PREFIX>-<STEP_NAME>), we can avoid duplicating <STEP_NAME> in the container name, which reduces the annoyance of getting into those steps to debug them (and puts some consistency in place). We should probably have conventions for the container names, and setup, teardown, and test are all good ones (i.e. what we use right now in the template): pre and post steps are setup and teardown, and test is anything in the middle.

We should also make sure that when the step registry launches a pod, the "user-facing" container is first in the containers list, which makes rsh, rsync, exec, logs, and other commands happier (it simplifies debugging). A sketch of the proposed shape follows.
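A hedged sketch of the proposed shape, reusing the corev1 types used elsewhere in this repo (the names and helper are illustrative, not the actual step-registry code):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// stepPod sketches the convention: the step name lives in the pod name
// (<JOB_PREFIX>-<STEP_NAME>), so containers can use short conventional
// names, and the user-facing container comes first so that rsh, logs,
// exec, etc. default to it.
func stepPod(jobPrefix, stepName string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: jobPrefix + "-" + stepName},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "test"},    // user-facing container first
				{Name: "sidecar"}, // helpers follow
			},
		},
	}
}

func main() {
	_ = stepPod("e2e-aws", "setup")
}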

We're going to have to annotate some of the pods/secrets so that cluster bot can get auth info out, but that can wait (bot basically needs to get at the right container to get the kubeadmin).

IMAGE_FORMAT env var not set when testing against stable OCP release

When using the following configuration instead of tag_specification, the IMAGE_FORMAT variable is empty in the test environment:

releases:
  latest:
    release:
      channel: stable
      version: "4.5"

This is a problem because the images that are built by ci-operator for the current run are not easily accessible.

Link to the test run that has the empty env variable: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/11443/rehearse-11443-pull-ci-openshift-knative-serverless-operator-master-4.5-upstream-e2e-aws-ocp-45/1300444362731163648/build-log.txt

Why are those 3 jobs not rehearsed?

openshift/release#6331

Those 3 jobs have the rehearse label:

  • pull-ci-openshift-origin-master-e2e-aws-fips
  • pull-ci-openshift-origin-master-e2e-aws-multitenant
  • pull-ci-openshift-origin-master-e2e-aws-ovn

But they are not rehearsed.

They should be, since we changed the config file with the cluster params.

missing volume definition in pkg/prowgen/podspec.go?

In podspec.go, the default PodSpec defines three VolumeMounts, but only two Volumes; the Volume definition for gcs-credentials is missing. When I run this against ci-operator configs, the jobs fail because of the missing Volume.

In RH Prow, this Volume definition is apparently being injected somehow. I thought it might be done with Prow presets, but that doesn't seem to be the case.

var defaultPodSpec = corev1.PodSpec{
        ServiceAccountName: "ci-operator",
        Containers: []corev1.Container{
                {
                        Args: []string{
                                "--image-import-pull-secret=/etc/pull-secret/.dockerconfigjson",
                                "--gcs-upload-secret=/secrets/gcs/service-account.json",
                                "--report-credentials-file=/etc/report/credentials",
                        },
                        Command:         []string{"ci-operator"},
                        Image:           "ci-operator:latest",
                        ImagePullPolicy: corev1.PullAlways,
                        Resources: corev1.ResourceRequirements{
                                Requests: corev1.ResourceList{"cpu": *resource.NewMilliQuantity(10, resource.DecimalSI)},
                        },
                        VolumeMounts: []corev1.VolumeMount{
                                {
                                        Name:      "pull-secret",
                                        MountPath: "/etc/pull-secret",
                                        ReadOnly:  true,
                                },
                                {
                                        Name:      "result-aggregator",
                                        MountPath: "/etc/report",
                                        ReadOnly:  true,
                                },
                                {
                                        Name:      "gcs-credentials",
                                        MountPath: cioperatorapi.GCSUploadCredentialsSecretMountPath,
                                        ReadOnly:  true,
                                },
                        },
                },
        },
        Volumes: []corev1.Volume{
                {
                        Name: "pull-secret",
                        VolumeSource: corev1.VolumeSource{
                                Secret: &corev1.SecretVolumeSource{SecretName: "registry-pull-credentials"},
                        },
                },
                {
                        Name: "result-aggregator",
                        VolumeSource: corev1.VolumeSource{
                                Secret: &corev1.SecretVolumeSource{SecretName: "result-aggregator"},
                        },
                },
        },
}
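For reference, a hedged sketch of the Volume entry that would pair with the gcs-credentials VolumeMount above (the backing SecretName is an assumption; in RH Prow this volume may well be injected by Prow's pod decoration rather than defined in this PodSpec):

                // hypothetical third entry for the Volumes slice above;
                // SecretName "gcs-credentials" is an assumption
                {
                        Name: "gcs-credentials",
                        VolumeSource: corev1.VolumeSource{
                                Secret: &corev1.SecretVolumeSource{SecretName: "gcs-credentials"},
                        },
                },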

Default artifact PR broke test-specific secrets

We needed to revert the "Default artifacts directory, set env" PR because of reported breakage of some Prow jobs.

Example: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/periodic-ci-openshift-osde2e-master-e2e-prod-4.2/78

The reason is likely the interaction with the test-specific secret: stanza in ci-operator config.

I'm attaching different outputs from local executions of ci-operator (revision a7bff05, before the revert):

ci-operator --git-ref openshift/osde2e@master --config $OSRELEASE/ci-operator/config/openshift/osde2e/openshift-osde2e-master.yaml --target=e2e-prod-4.2 --artifact-dir /tmp --dry-run

/assign @stevekuznetsov

panic in ci-tools/pkg/results/report.go

Observed the following panic in a job kicked off by cluster-bot - https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-gcp-upgrade/673/build-log.txt

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x1704aee]

goroutine 1 [running]:
github.com/openshift/ci-tools/pkg/results.(*reporter).Report(0xc000e815c0, 0x1e85480, 0xc000620690)
	/go/src/github.com/openshift/ci-tools/pkg/results/report.go:144 +0x2fe
main.(*options).Report(0xc0002062c0, 0x1e83ec0, 0xc000620690)
	/go/src/github.com/openshift/ci-tools/cmd/ci-operator/main.go:504 +0xf7
main.main()
	/go/src/github.com/openshift/ci-tools/cmd/ci-operator/main.go:201 +0x30e

ci-operator failures when using 4.3-art-latest in tag_specification

TL;DR: ci-operator fails to operate when 4.3-art-latest is used in the tag_specification stanza. This is because it iterates over .Spec.Tags of that imagestream, and Spec is empty/changing in 4.3-art-latest. It looks like ci-operator should iterate over .Status.Tags, but it is not clear to me whether the current behavior is a bug or a feature, and whether such a change would be a risk.


Gory details:

This PR openshift/release#6254 was attempting to add a ci-operator config using 4.3-art-latest imagestream in its tag_specification stanza (advised by Steve):

tag_specification:
  name: "4.3-art-latest"
  namespace: ocp

Rehearsals of test jobs generated from such configs failed with the following error (see the full error in [1]):

parameter RELEASE_IMAGE_LATEST is required and must be specified

This was reported on Slack. We discovered that ci-operator was not building a release payload as usual, accompanied by the following log output:

No latest release image necessary, stable image stream does not include a cluster-version-operator image

(This is likely a first issue to fix: if we need a release, we should either have RELEASE_IMAGE_* set, or error out on ...stable image stream does not include... instead of continuing)

Manual inspection of the ci-operator namespaces showed that their stable imagestreams contained no images; we expected images from 4.3-art-latest to be tagged there. This varied a lot: in other runs we saw e.g. only the machine-os-content image tagged in stable, and once we even saw all expected images tagged there, and ci-operator even attempted to assemble a release payload.

It looks like ci-operator iterates over .Spec.Tags of the source imagestream, followed by some filtering/validation based on the image's presence in its .Status:

for _, tag := range is.Spec.Tags {
	if valid, image := findStatusTag(is, tag.Name); valid != nil {
		if len(s.config.Cluster) > 0 {
			if len(image) > 0 {
				valid = &coreapi.ObjectReference{Kind: "DockerImage", Name: fmt.Sprintf("%s@%s", repo, image)}
			} else {
				valid = &coreapi.ObjectReference{Kind: "DockerImage", Name: fmt.Sprintf("%s:%s", repo, tag.Name)}
			}
		}
		newIS.Spec.Tags = append(newIS.Spec.Tags, imageapi.TagReference{
			Name: tag.Name,
			From: valid,
		})
	}
}

We asked ART on Slack; Justin Pierce suggested ci-operator should use .Status.Tags instead of .Spec.Tags, but given that ci-operator now basically uses both, it's not clear to me whether the change is really that simple or what risk it implies. A sketch of the .Status.Tags iteration follows.
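For illustration only, a hedged sketch of what the .Status.Tags-based iteration might look like (assuming the same imageapi/coreapi types as the snippet above; whether the findStatusTag filtering can simply be dropped is exactly the open question):

// Iterate over what the imagestream actually contains (.Status.Tags)
// rather than what it was asked to contain (.Spec.Tags).
for _, tag := range is.Status.Tags {
	if len(tag.Items) == 0 {
		continue // no resolved image for this tag yet
	}
	latest := tag.Items[0] // the most recent tag event comes first
	newIS.Spec.Tags = append(newIS.Spec.Tags, imageapi.TagReference{
		Name: tag.Tag,
		From: &coreapi.ObjectReference{
			Kind: "DockerImage",
			Name: fmt.Sprintf("%s@%s", repo, latest.Image),
		},
	})
}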

/cc @stevekuznetsov @lioramilbaum @droslean


[1] linebreaks mine

could not wait for template instance to be ready: 
  could not determine if template instance was ready: 
  failed to create objects: 
  Template.template.openshift.io "e2e-test" is invalid: 
  template.parameters[10]: Required value: 
  template.parameters[10]: 
  parameter RELEASE_IMAGE_LATEST is required and must be specified

Cannot use repo-brancher to fast forward our code

We want to use the repo-brancher tool to fast-forward our code from master to our release branch (our job can be found here), but because our promotion namespace is not ocp, repo-brancher fails the check at https://github.com/openshift/ci-tools/blob/master/pkg/promotion/promotion.go#L39; from the repo-brancher code, I found the promotion namespace is a const (https://github.com/openshift/ci-tools/blob/master/pkg/promotion/promotion.go#L15).

Can we add a PromotionNamespace option to the repo-brancher Options? If PromotionNamespace is not specified, repo-brancher runs the current logic; if it is specified, we check whether it matches configuration.PromotionConfiguration.Namespace. The code might be:

func (o *Options) matches(configuration *cioperatorapi.ReleaseBuildConfiguration) bool {
	// configs with promotion disabled never match
	if isDisabled(configuration) {
		return false
	}
	promotionNamespace := extractPromotionNamespace(configuration)
	if o.PromotionNamespace != "" {
		return promotionNamespace == o.PromotionNamespace && configuration.PromotionConfiguration.Name == o.CurrentRelease
	}
	promotionName := extractPromotionName(configuration)
	return RefersToOfficialImage(promotionName, promotionNamespace) && configuration.PromotionConfiguration.Name == o.CurrentRelease
}

any comments?

Prow CI should support sending job status to multiple Slack channels

Hi CI Operator team,

The OpenShift Quay QE and OpenShift InterOp teams now have a requirement to send test reports to multiple Slack channels; please add support for this.

Prow CI docs: https://docs.ci.openshift.org/docs/how-tos/notification/

Example:

reporter_config:
  slack:
    channel:
    - '#quay-qe1'
    - '#quay-qe2'
    - '#quay-qe3'
    job_states_to_report:
    - success
    - failure
    - error
    report_template: '{{if eq .Status.State "success"}} :rainbow: Job *{{.Spec.Job}}*
      ended with *{{.Status.State}}*. <{{.Status.URL}}|View logs> :rainbow: {{else}}
      :volcano: Job *{{.Spec.Job}}* ended with *{{.Status.State}}*. <{{.Status.URL}}|View
      logs> :volcano: {{end}}'
