
publishing-bot's Introduction

Kubernetes Publishing Bot


Overview

The publishing bot publishes each staging directory in k8s.io/kubernetes/staging to its own repository. It guarantees that the master branches of the published repositories are compatible, i.e., if a user runs go get on a published repository in a clean GOPATH, the repository is guaranteed to work.

It pulls the latest k8s.io/kubernetes changes and runs git filter-branch to distill the commits that affect a staging repo. Then it cherry-picks the merged PRs with their feature-branch commits onto the target repo. It records the SHA1 of the last cherry-picked commits in Kubernetes-sha: <sha> lines in the commit messages.

The robot is also responsible for updating go.mod and the vendor/ directory for the target repos.

Playbook

Publishing a new repo or a new branch

Updating rules

Adapting rules for a new branch

If you're creating a new branch, you need to update the publishing-bot rules to reflect that. For Kubernetes, this means that you need to update the rules.yaml file on the master branch.

For each repository, add a new branch to the branches stanza. If the branch uses the default Go version, you don't need to specify a Go version for the branch; otherwise, set it explicitly. A minimal sketch is shown below.
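
As an illustration, a hedged sketch of such a stanza (destination, branch names, and paths are placeholders; the real format is the one used by rules.yaml in kubernetes/kubernetes):

rules:
- destination: sample-repo
  branches:
  - name: master
    source:
      branch: master
      dir: staging/src/k8s.io/sample-repo
  # the newly added branch; no go: field, so it uses the default Go version
  - name: release-1.28
    source:
      branch: release-1.28
      dir: staging/src/k8s.io/sample-repo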

Adapting rules for a Go update

If you're updating the Go version for the master or release branches, you need to adapt the rules.yaml file in kubernetes/kubernetes on the master branch.

  • If you're updating the Go version for the master branch, change the default Go version to the new version.
    • If release branches that rely on the default Go version should keep using a different (e.g. older) Go version, you need to explicitly set the Go version for those branches (e.g. like here).
  • If you're updating the Go version for a previous release branch, explicitly set the Go version on that branch's entry in the rules, as in the sketch below.
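
A hedged sketch of such an explicit per-branch Go version (the repository name, branch name, and version are placeholders; the go field itself is the one used in rules.yaml):

  - name: release-1.25
    # pin this branch to an older Go version instead of the default
    go: 1.19.1
    source:
      branch: release-1.25
      dir: staging/src/k8s.io/sample-repo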

Testing and deploying the robot

Currently we don't have automated tests for the bot; it relies on manual testing:

  • Fork the repos you are going to publish.

  • Run hack/fetch-all-latest-and-push.sh from the bot root directory to update the branches of your repos. This will sync your forks with upstream. CAUTION: this might delete data in your forks.

  • Use hack/create-repos.sh from the bot root directory to create any missing repos in the destination github org.

  • Create a config and a corresponding ConfigMap in configs,

  • Create a rule config and a corresponding ConfigMap in configs,

    • by copying configs/example-rules-configmap.yaml,
    • by changing the Makefile constants in configs/<yourconfig>,
    • and the ConfigMap values in configs/<yourconfig>-rules-configmap.yaml.
  • Deploy the publishing bot by running make from the bot root directory, e.g.

$ make build-image push-image CONFIG=configs/<yourconfig>
$ make run CONFIG=configs/<yourconfig> TOKEN=<github-token>

for a fire-and-forget pod. Or use

$ make deploy CONFIG=configs/<yourconfig> TOKEN=<github-token>

to run a ReplicaSet that publishes every 24h (you can change the INTERVAL config value for different intervals).

This will not push to your org, but runs in dry-run mode. To run with a push, add DRYRUN=false to your make command line.
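
For example, a non-dry-run deploy looks like this (same placeholders as above):

$ make deploy CONFIG=configs/<yourconfig> TOKEN=<github-token> DRYRUN=false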

Running in Production

  • Use one of the existing configs and
  • launch make deploy CONFIG=configs/kubernetes-nightly

Caution: Make sure that the bot GitHub user CANNOT close arbitrary issues in the upstream repo. Otherwise, GitHub will close them, triggered by Fixes kubernetes/kubernetes#123 patterns in published commits.

Note: Details about running the publishing-bot for the Kubernetes project can be found in production.md.

Update rules

To add new branch rules or update the Go version for configured destination repos, check update-branch-rules.

Contributing

Please see CONTRIBUTING.md for instructions on how to contribute.

Known issues

  1. Testing: currently we rely on manual testing. We should set up CI for it.
  2. Automate the release process (tracked at kubernetes/kubernetes#49011): when Kubernetes releases, automatically update the configuration of the publishing robot. This probably means that the config must move into the Kubernetes repo, e.g. as a .publishing.yaml file.


publishing-bot's Issues

Bot does not autodelete old comments on failure issue

The publishing-bot comments on kubernetes/kubernetes#56876 when the bot is broken; it is supposed to clean up old comments when it reopens the issue (and add a new comment).

Right now, this behaviour seems to be broken since it is adding new comments without cleaning up the old ones.

Relevant logs:

I1120 18:28:04.646145       1 github.go:81] Skipping comment 376925524 not by me, but <nil>
I1120 18:28:04.646285       1 github.go:81] Skipping comment 376925527 not by me, but <nil>
I1120 18:28:04.646291       1 github.go:81] Skipping comment 417815662 not by me, but <nil>
I1120 18:28:04.646295       1 github.go:81] Skipping comment 417929512 not by me, but <nil>
I1120 18:28:04.646299       1 github.go:81] Skipping comment 418821608 not by me, but <nil>
I1120 18:28:04.646302       1 github.go:81] Skipping comment 419399741 not by me, but <nil>
....

godep restore does not succeed with godeps-gen

Running godep restore on client-go kubernetes-1.15.0 gives:

+ godep restore
godep: Dep (github.com/Azure/go-autorest) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/Azure/go-autorest
godep: Dep (github.com/davecgh/go-spew) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/davecgh/go-spew
godep: Dep (github.com/gogo/protobuf) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/gogo/protobuf
godep: Dep (github.com/golang/protobuf) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/golang/protobuf
godep: Dep (github.com/google/go-cmp) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/google/go-cmp
godep: Dep (github.com/mxk/go-flowrate) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/mxk/go-flowrate
godep: Dep (github.com/pmezard/go-difflib) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/github.com/pmezard/go-difflib
godep: Dep (golang.org/x/crypto) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/crypto
godep: Dep (golang.org/x/net) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/net
godep: Dep (golang.org/x/sync) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/sync
godep: Dep (golang.org/x/sys) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/sys
godep: Dep (golang.org/x/time) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/time
godep: Dep (golang.org/x/tools) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/golang.org/x/tools
godep: Dep (k8s.io/apimachinery) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/k8s.io/apimachinery
godep: Dep (k8s.io/kube-openapi) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/k8s.io/kube-openapi
godep: Dep (k8s.io/utils) restored, but was unable to load it with error:
    no buildable Go source files in /home/nikhita/go/src/k8s.io/utils
godep: Error checking some deps.

It looks like it's hitting tools/godep#535.

I think the import paths in Godeps.json should have been github.com/Azure/go-autorest/autorest, github.com/Azure/go-autorest/autorest/adal, github.com/Azure/go-autorest/autorest/azure, etc., instead of just github.com/Azure/go-autorest.

Compare https://github.com/kubernetes/client-go/blob/v11.0.0/Godeps/Godeps.json ("original" Godeps.json on v11.0.0) and https://github.com/kubernetes/client-go/blob/kubernetes-1.15.0/Godeps/Godeps.json ("minimal" Godeps.json generated by godeps-gen).

I think this occurs because go list -m -json all gives only github.com/Azure/go-autorest - because that's the only "module". It does not give packages, just modules.

I think we'll need to use go list -json all to list all the packages and use them in Godeps.json. Thoughts?
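
To illustrate the difference between the two commands (run from the published repo; the go-autorest example is the one from above):

$ go list -m -json all   # lists modules only, e.g. github.com/Azure/go-autorest
$ go list -json all      # lists individual packages, e.g. github.com/Azure/go-autorest/autorest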

Also, we never caught this because:

Support running publishing bot for containerd

Currently the publishing-bot is centred on running only against the kubernetes/kubernetes repo. We want to run the publishing bot on the containerd/containerd repo to publish the api/ directory from containerd to a new containerd/api repository.

The following is the list of changes that must be made so that this can be achieved without breaking the existing publishing-bot workflow for k8s:

  • add support for branch names containing "/" as containerd uses release/1.6 #319
  • workflow to test the changes in publishing bot #334 kubernetes/test-infra#29118
  • support configurable main branch name #324
  • support for multiple subdirectories in rules #337
  • updating debian base image and backport to support git-filter-repo #336 #351
  • change filter-branch to filter-repo #369
  • modify the handling of go.mod-related changes so that containerd-specific changes can be made to go.mod
  • modify the tagging logic so that tagging can also be done according to containerd's needs
  • add checks to ensure only newly created containerd tags are pushed and old tags/refs etc are not pushed to the containerd/api repo.

Consider forcing a diff in go.mod when syncing semver tags

As part of sync-tags, we update go.mod files to refer to tagged releases of dependencies. This results in a tag-specific commit/SHA for most repos. This is desired, so that go resolves that sha to that specific semver tag.

If a repo has no peer dependencies (like k8s.io/apimachinery), no change is required to the go.mod file, so it is possible for multiple tags to be created pointing to the same commit/sha (like v0.17.0 and v0.17.1-beta.0).

While this is functionally correct (nothing breaks), it is confusing if go resolves a sha or non-semver tag (like kubernetes-1.17.0) to an unexpected semver tag (like v0.17.1-beta.0), even if it is equivalent content.

In sync-tags, when syncing a semver tag, if we also added a comment like "# Corresponds to Kubernetes v1.17.0", this would force a diff in all go.mod files, force a tag-specific commit, and reduce confusion in sha/non-semver resolution.
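
A hedged sketch of what the resulting go.mod fragment could look like (module path, Go version, and the exact comment wording are illustrative):

// Corresponds to Kubernetes v1.17.0
module k8s.io/apimachinery

go 1.13

Because the comment changes with every release, sync-tags would always produce a tag-specific commit, even for repos like apimachinery that have no peer dependencies.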

xref kubernetes/kubernetes#84372 (comment)

publishing-bot needs too much memory

  • sync-tags gets oom killed occasionally
  • pod gets evicted with the following
  Type     Reason   Age   From                                                 Message
  ----     ------   ----  ----                                                 -------
  Warning  Evicted  37m   kubelet, gke-development-default-pool-9fede538-zrl7  The node was low on resource: memory. Container publisher was using 2560344Ki, which exceeds its request of 2Gi.
  Normal   Killing  37m   kubelet, gke-development-default-pool-9fede538-zrl7  Killing container with id docker://publisher:Need to kill Pod

make build-image broken

It seems like something is wrong with gce_debian_mirror.

$ make build-image push-image CONFIG=configs/xxx
mkdir -p _output && GOOS=linux go build -o _output/collapsed-kube-commit-mapper ./cmd/collapsed-kube-commit-mapper
mkdir -p _output && GOOS=linux go build -o _output/publishing-bot ./cmd/publishing-bot
mkdir -p _output && GOOS=linux go build -o _output/sync-tags ./cmd/sync-tags
mkdir -p _output && GOOS=linux go build -o _output/init-repo ./cmd/init-repo
docker build -t xxx/haibzhou/tess-publishing-bot .
Sending build context to Docker daemon 51.18 MB
Step 1 : FROM google/debian:jessie
 ---> ab02b1698e6d
Step 2 : MAINTAINER Chao Xu <[email protected]>
 ---> Using cache
 ---> 7fe4490a0937
Step 3 : RUN apt-get update  && apt-get install -y -qq git=1:2.1.4-2.1+deb8u5  && apt-get install -y -qq mercurial  && apt-get install -y -qq ca-certificates wget jq vim tmux bsdmainutils tig  && wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz  && tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz  && rm -rf /var/lib/apt/lists/*
 ---> Running in 68ef5b37e925
Get:1 http://security.debian.org jessie/updates InRelease [44.9 kB]
Ign http://gce_debian_mirror.storage.googleapis.com jessie InRelease
Get:2 http://gce_debian_mirror.storage.googleapis.com jessie-updates InRelease [145 kB]
Get:3 http://gce_debian_mirror.storage.googleapis.com jessie Release.gpg [2434 B]
Get:4 http://gce_debian_mirror.storage.googleapis.com jessie Release [148 kB]
Get:5 http://security.debian.org jessie/updates/main amd64 Packages [507 kB]
E: Release file for http://gce_debian_mirror.storage.googleapis.com/dists/jessie-updates/InRelease is expired (invalid since 34d 5h 21min 5s). Updates for this repository will not be applied.
The command '/bin/sh -c apt-get update  && apt-get install -y -qq git=1:2.1.4-2.1+deb8u5  && apt-get install -y -qq mercurial  && apt-get install -y -qq ca-certificates wget jq vim tmux bsdmainutils tig  && wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz  && tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz  && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
make: *** [build-image] Error 1

go test fails with "panic: test timed out after 10m0s"

Example : kubernetes/kubernetes#56876 (comment)

	?   	k8s.io/client-go/scale/scheme/appsv1beta1	[no test files]
	?   	k8s.io/client-go/scale/scheme/appsv1beta2	[no test files]
	?   	k8s.io/client-go/scale/scheme/autoscalingv1	[no test files]
	?   	k8s.io/client-go/scale/scheme/extensionsint	[no test files]
	?   	k8s.io/client-go/scale/scheme/extensionsv1beta1	[no test files]
	ok  	k8s.io/client-go/testing	(cached)
	?   	k8s.io/client-go/third_party/forked/golang/template	[no test files]
	ok  	k8s.io/client-go/tools/auth	(cached)
	ok  	k8s.io/client-go/tools/cache	16.299s
	ok  	k8s.io/client-go/tools/cache/testing	0.057s
	ok  	k8s.io/client-go/tools/clientcmd	(cached)
	ok  	k8s.io/client-go/tools/clientcmd/api	(cached)
	?   	k8s.io/client-go/tools/clientcmd/api/latest	[no test files]
	?   	k8s.io/client-go/tools/clientcmd/api/v1	[no test files]
	panic: test timed out after 10m0s

	goroutine 50 [running]:
	testing.(*M).startAlarm.func1()
		/go-workspace/go-1.13.4/src/testing/testing.go:1377 +0xdf
	created by time.goFunc
		/go-workspace/go-1.13.4/src/time/sleep.go:168 +0x44

	goroutine 1 [chan receive]:
	testing.(*T).Run(0xc0001d0200, 0x12257f1, 0x10, 0x12b20e8, 0x488b86)
		/go-workspace/go-1.13.4/src/testing/testing.go:961 +0x377
	testing.runTests.func1(0xc0001d0100)
		/go-workspace/go-1.13.4/src/testing/testing.go:1202 +0x78
	testing.tRunner(0xc0001d0100, 0xc0000d1dc0)
		/go-workspace/go-1.13.4/src/testing/testing.go:909 +0xc9
	testing.runTests(0xc0001b8580, 0x1c08040, 0x3, 0x3, 0x0)
		/go-workspace/go-1.13.4/src/testing/testing.go:1200 +0x2a7
	testing.(*M).Run(0xc0000e7e00, 0x0)
		/go-workspace/go-1.13.4/src/testing/testing.go:1117 +0x176
	main.main()
		_testmain.go:48 +0x135

	goroutine 6 [chan receive]:
	k8s.io/klog.(*loggingT).flushDaemon(0x1c18ea0)
		/go-workspace/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x8b
	created by k8s.io/klog.init.0
		/go-workspace/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xd6

	goroutine 8 [chan receive]:
	k8s.io/client-go/tools/events.TestEventSeriesf(0xc0001d0200)
		/go-workspace/src/k8s.io/client-go/tools/events/eventseries_test.go:174 +0xd95
	testing.tRunner(0xc0001d0200, 0x12b20e8)
		/go-workspace/go-1.13.4/src/testing/testing.go:909 +0xc9
	created by testing.(*T).Run
		/go-workspace/go-1.13.4/src/testing/testing.go:960 +0x350

	goroutine 9 [chan receive]:
	k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000210d80)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/watch/mux.go:207 +0x66
	created by k8s.io/apimachinery/pkg/watch.NewBroadcaster
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/watch/mux.go:75 +0xcc

	goroutine 10 [chan send]:
	k8s.io/client-go/tools/events.TestEventSeriesf.func3(0xc000165900, 0xc0002d0000, 0x52, 0x60, 0x0, 0x0, 0x0)
		/go-workspace/src/k8s.io/client-go/tools/events/eventseries_test.go:155 +0x3e
	k8s.io/client-go/tools/events.(*testEventSeriesSink).Patch(0xc0001b85a0, 0xc000165900, 0xc0002d0000, 0x52, 0x60, 0x0, 0x0, 0x0)
		/go-workspace/src/k8s.io/client-go/tools/events/eventseries_test.go:62 +0x5b
	k8s.io/client-go/tools/events.recordEvent(0x13c12e0, 0xc0001b85a0, 0xc000165900, 0x0, 0x0)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:216 +0x622
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).refreshExistingEventSeries(0xc000230390)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:116 +0x13c
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink.func1()
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:298 +0x2a
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000167140)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5e
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000167140, 0x1396e00, 0xc000276000, 0xc0000fa001, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000167140, 0x1a3185c5000, 0x0, 0x1, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0xe2
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000167140, 0x1a3185c5000, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:297 +0x8c

	goroutine 11 [semacquire]:
	sync.runtime_SemacquireMutex(0xc00023039c, 0xc00006eb00, 0x1)
		/go-workspace/go-1.13.4/src/runtime/sema.go:71 +0x47
	sync.(*Mutex).lockSlow(0xc000230398)
		/go-workspace/go-1.13.4/src/sync/mutex.go:138 +0xfc
	sync.(*Mutex).Lock(...)
		/go-workspace/go-1.13.4/src/sync/mutex.go:81
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).finishSeries(0xc000230390)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:130 +0x32f
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink.func2()
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:301 +0x2a
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000167150)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5e
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000167150, 0x1396e00, 0xc0002303c0, 0xc000010601, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000167150, 0x53d1ac1000, 0x0, 0x1, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0xe2
	k8s.io/apimachinery/pkg/util/wait.Until(0xc000167150, 0x53d1ac1000, 0xc000090720)
		/go-workspace/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:300 +0xf7

	goroutine 12 [chan receive]:
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartEventWatcher.func1(0x13a0280, 0xc0002303f0, 0xc000167160)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:285 +0xce
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartEventWatcher
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:282 +0x72

	goroutine 13 [chan receive]:
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink.func4(0xc000090720, 0xc0001b8620)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:313 +0x34
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).StartRecordingToSink
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:312 +0x162

	goroutine 34 [semacquire]:
	sync.runtime_SemacquireMutex(0xc00023039c, 0x121d400, 0x1)
		/go-workspace/go-1.13.4/src/runtime/sema.go:71 +0x47
	sync.(*Mutex).lockSlow(0xc000230398)
		/go-workspace/go-1.13.4/src/sync/mutex.go:138 +0xfc
	sync.(*Mutex).Lock(...)
		/go-workspace/go-1.13.4/src/sync/mutex.go:81
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink.func1(0xc000230390, 0xc000274000, 0x13cf680, 0x1c35f30)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:181 +0x27d
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:156 +0x68

	goroutine 19 [semacquire]:
	sync.runtime_SemacquireMutex(0xc00023039c, 0x0, 0x1)
		/go-workspace/go-1.13.4/src/runtime/sema.go:71 +0x47
	sync.(*Mutex).lockSlow(0xc000230398)
		/go-workspace/go-1.13.4/src/sync/mutex.go:138 +0xfc
	sync.(*Mutex).Lock(...)
		/go-workspace/go-1.13.4/src/sync/mutex.go:81
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink.func1.1(0xc000230390, 0xc000278280, 0x13cf680, 0x1c35f30, 0x0)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:158 +0x402
	k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink.func1(0xc000230390, 0xc000278280, 0x13cf680, 0x1c35f30)
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:176 +0x6a
	created by k8s.io/client-go/tools/events.(*eventBroadcasterImpl).recordToSink
		/go-workspace/src/k8s.io/client-go/tools/events/event_broadcaster.go:156 +0x68
	FAIL	k8s.io/client-go/tools/events	600.020s
	ok  	k8s.io/client-go/tools/leaderelection	0.073s
	?   	k8s.io/client-go/tools/leaderelection/resourcelock	[no test files]
	?   	k8s.io/client-go/tools/metrics	[no test files]
	ok  	k8s.io/client-go/tools/pager	(cached)
	ok  	k8s.io/client-go/tools/portforward	0.013s
	ok  	k8s.io/client-go/tools/record	0.148s
	?   	k8s.io/client-go/tools/record/util	[no test files]
	ok  	k8s.io/client-go/tools/reference	0.041s
	ok  	k8s.io/client-go/tools/remotecommand	0.013s
	ok  	k8s.io/client-go/tools/watch	12.281s
	ok  	k8s.io/client-go/transport	0.017s
	?   	k8s.io/client-go/transport/spdy	[no test files]
	ok  	k8s.io/client-go/util/cert	0.022s
	ok  	k8s.io/client-go/util/certificate	0.054s
	ok  	k8s.io/client-go/util/certificate/csr	0.123s
	ok  	k8s.io/client-go/util/connrotation	(cached)
	?   	k8s.io/client-go/util/exec	[no test files]
	ok  	k8s.io/client-go/util/flowcontrol	(cached)
	?   	k8s.io/client-go/util/homedir	[no test files]
	ok  	k8s.io/client-go/util/jsonpath	(cached)
	ok  	k8s.io/client-go/util/keyutil	(cached)
	ok  	k8s.io/client-go/util/retry	(cached)
	ok  	k8s.io/client-go/util/testing	(cached)
	ok  	k8s.io/client-go/util/workqueue	(cached)
	FAIL
[05 Mar 20 20:43 UTC]: exit status 1
    	+ go build ./...
    	+ go test ./...

[05 Mar 20 20:43 UTC]: exit status 1

sync-tags adds an empty commit when pseudoversion doesn't change

Example: kubernetes/api@7cf5895

The pseudoversion for apimachinery doesn't change in the above case, but sync-tags still adds an empty commit. This isn't harmful, but is redundant.

We should be able to avoid adding that by comparing the version reported by go list -m -json k8s.io/apimachinery with the expected pseudoversion.
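
A minimal sketch of that check, assuming it runs inside the target repo checkout and that jq is available; expected_pseudoversion is a hypothetical variable holding the newly computed pseudoversion:

current=$(go list -m -json k8s.io/apimachinery | jq -r .Version)
if [ "${current}" = "${expected_pseudoversion}" ]; then
    echo "pseudoversion unchanged, skipping the go.mod bump commit"
fi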

/assign
will look over the weekend, don't have time to work on this + test it before then

Upgrade module to go1.17

Is there any reason that we cannot bump the bot to use go1.17?

If there is nothing against it, we can update it.

update-deps-in-gomod assumes no slash in base package

While running this in a downstream setup, there is a small issue with sed, which breaks when the base package contains slashes.

        + read -a deps
        + local base_package=tess.io/ebay
        + local dep_count=1
        + local dep_packages=
        + '[' 1 '!=' 0 ']'
        ++ echo api:master
        ++ tr , '\n'
        ++ sed -e 's/:.*//' -e 's/^/tess.io/ebay\//'
        ++ paste -sd , -
        sed: -e expression #2, char 14: unknown option to `s'
        + dep_packages=
E0601 19:23:26.812345      12 publisher.go:413] exit status 1
I0601 19:23:26.815109      12 main.go:202] Failed to run publisher: exit status 1

I'd like to propose a fix like this:

     if [ "$dep_count" != 0 ]; then
-      dep_packages="$(echo ${1} | tr "," "\n" | sed -e 's/:.*//' -e s/^/"${base_package}\/"/ | paste -sd "," -)"
+      dep_packages="$(echo ${1} | tr "," "\n" | sed -e 's/:.*//' -e s~^~"${base_package}/"~ | paste -sd "," -)"
     fi

References

Warning for memory limit of init container when deploying publishing bot

The default configuration for the init container of the publishing bot uses a fractional value (1.6Gi) for the memory limit. This causes a warning when the publishing-bot is deployed to a cluster:

spec.template.spec.initContainers[0].resources.limits[memory]: fractional byte value "1717986918400m" is invalid, must be an integer

A valid, integer memory limit value for the init container should be identified and used in the default configuration.
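
A hedged sketch of the relevant fragment of the pod template (the init container name and the exact value are placeholders; the point is only that the limit must be an integer quantity):

initContainers:
- name: init-repo            # hypothetical container name
  resources:
    limits:
      memory: 1717986919     # integer bytes; 1.6Gi resolves to a fractional value
      # or a binary-suffix integer such as:
      # memory: 1638Mi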

Identify / Fix causes of corruption in .git directory

There are cases of corruption in the .git directory that occur randomly while running the publishing-bot and cause the bot to fail. Sometimes a PVC cleanup and restart fixes the issue.

This will be an umbrella issue to track such corruptions and identify possible solutions:

A few of the recurring corruption issues are listed below

The two issues above get fixed after cleaning up the PVC.

Update-rules modifying dir to dirs when updating existing rules

The command below

update-rules -branch release-1.25 -go 1.19.1 -o ./new-rules.yaml -rules https://github.com/kubernetes/kubernetes/raw/master/staging/publishing/rules.yaml

changes existing rules from

rules:
- destination: code-generator
  branches:
  - name: master
    source:
      branch: master
      dir: staging/src/k8s.io/code-generator
  - name: release-1.25
    go: 1.20.8
    source:
      branch: release-1.25
      dir: staging/src/k8s.io/code-generator

to

rules:
- destination: code-generator
  branches:
  - name: master
    source:
      branch: master
      dirs:
      - staging/src/k8s.io/code-generator
  - name: release-1.25
    go: 1.19.1
    source:
      branch: release-1.25
      dirs:
      - staging/src/k8s.io/code-generator

This might be caused by PR

CC : @akhilerm

Ambiguous imports with cloud.google.com/go/compute/metadata

Ref: kubernetes/kubernetes#113366, kubernetes/kubernetes#114829

On merging kubernetes/kubernetes#114822, the publishing-bot failed on running go mod tidy - kubernetes/kubernetes#56876 (comment):

+ go mod tidy
    	k8s.io/kube-aggregator/pkg/cmd/server imports
    		k8s.io/apiserver/pkg/server/options imports
    		k8s.io/apiserver/pkg/storage/storagebackend/factory imports
    		go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc tested by
    		go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.test imports
    		google.golang.org/grpc/interop imports
    		golang.org/x/oauth2/google imports
    		cloud.google.com/go/compute/metadata: ambiguous import: found package cloud.google.com/go/compute/metadata in multiple modules:
    		cloud.google.com/go v0.97.0 (/go-workspace/pkg/mod/cloud.google.com/[email protected]/compute/metadata)
    		cloud.google.com/go/compute/metadata v0.2.0 (/go-workspace/pkg/mod/cloud.google.com/go/compute/[email protected])

googleapis/google-cloud-go#6311 made compute/metadata its own module. This commit also exists on:

  • v0.105.0 of the root module (cloud.google.com/go)
  • v1.12.0 of cloud.google.com/go/compute

If a repo already depends on a version of cloud.google.com/go < v0.105.0 and tries to import cloud.google.com/go/compute/metadata, it'll hit the problem of ambiguous imports while running go mod tidy - golang/go#27899 (another example - ugorji/go#279).

Note: apiserver requires v0.97.0 of the root module - https://github.com/kubernetes/kubernetes/blob/293bf70916de8ef61d5f868f53959f1e15b3e091/staging/src/k8s.io/apiserver/go.mod#L60

To resolve this, we either need to:

  1. pin cloud.google.com/go to a version which includes the cloud.google.com/go/compute/metadata module i.e. > v0.105.0
  2. Run an explicit go get cloud.google.com/[email protected] before running go mod tidy.

Option (1) brings in additional changes that we might not want right now (like bumping genproto), so we go ahead with (2) until we are ready to bump cloud.google.com/go or remove the dependency on it.

/assign

Create a tool to update publishing-bot rules for new releases

See kubernetes/sig-release#1504 for more context.

The publishing-bot copies/publishes code from staging directories to their own repos. The configuration for how the branches should be published is defined in the master branch of the main k/k repo: https://github.com/kubernetes/kubernetes/blob/master/staging/publishing/rules.yaml

For example, in this code snippet - https://github.com/kubernetes/kubernetes/blob/6572fe4d90173a1e97fda3e0cc28acc95e3f560a/staging/publishing/rules.yaml#L197-L209:

  • The code in the release-1.21 branch of the client-go staging directory is published to the release-1.21 branch of the client-go repo.
  • The go.mod for the release-1.21 branch of the client-go repo uses the release-1.21 of api and apimachinery repos as dependencies.

This issue involves creating a new tool in Go (can go in the cmd/ directory) to update rules automatically. For example, the tool should automatically update rules.yaml in k/k to add the new branches for release-1.21 like in kubernetes/kubernetes#100616.

Adding new branches would basically involve copying over the master branch contents and updating the branch name. Additionally, the go: <version> field can be added to new branches too.

F0220 16:09:13.608470 1882464 main.go:551] Failed to update go.mod and go.sum for tag v0.15.11-beta.0: failed to get tag v0.15.11-beta.0 for "k8s.io/api": failed to get refs/tags/v0.15.11-beta.0: reference not found

Persistent error in kubernetes/kubernetes#56876 (comment)

The failure message comes from here:

// skip if either tag exists at origin
_, nonSemverTagAtOrigin := bTagCommits[bName]
_, semverTagAtOrigin := bTagCommits[semverTag]
if nonSemverTagAtOrigin || (publishSemverTag && semverTagAtOrigin) {
	continue
}
// if any of the tag exists locally,
// delete the tags, clear the cache and recreate them
if tagExists(r, bName) {
	commit, commitTime, err := taggedCommitHashAndTime(r, bName)
	if err != nil {
		glog.Fatalf("Failed to get tag %s: %v", bName, err)
We should not get to that point if the semver tag already exists at the origin, which it does

/assign @nikhita @sttts

Fix or update build files when syncing

A build failure was originally reported in bazelbuild/rules_go#1356. It appears that Bazel BUILD files in api, apimachinery, and client-go refer to non-existent packages.

Steps to reproduce

  • In an empty workspace, create an empty BUILD file and a WORKSPACE file like below:
http_archive(
    name = "io_bazel_rules_go",
    sha256 = "4b14d8dd31c6dbaf3ff871adcd03f28c3274e42abc855cb8fb4d01233c0154dc",
    url = "https://github.com/bazelbuild/rules_go/releases/download/0.10.1/rules_go-0.10.1.tar.gz",
)

http_archive(
    name = "bazel_gazelle",
    sha256 = "6228d9618ab9536892aa69082c063207c91e777e51bd3c5544c9c060cafe1bd8",
    url = "https://github.com/bazelbuild/bazel-gazelle/releases/download/0.10.0/bazel-gazelle-0.10.0.tar.gz",
)

load("@io_bazel_rules_go//go:def.bzl", "go_rules_dependencies", "go_register_toolchains")

go_rules_dependencies()

go_register_toolchains()

load("@bazel_gazelle//:deps.bzl", "gazelle_dependencies")

gazelle_dependencies()

load("@bazel_gazelle//:def.bzl", "go_repository")

git_repository(
    name = "io_k8s_api",
    remote = "https://github.com/kubernetes/api",
    tag = "kubernetes-1.9.3",
)

git_repository(
    name = "io_k8s_apimachinery",
    remote = "https://github.com/kubernetes/apimachinery",
    tag = "kubernetes-1.9.3",
)

git_repository(
    name = "io_k8s_client_go",
    remote = "https://github.com/kubernetes/client-go",
    tag = "v6.0.0",
)
  • Run this command:
$ bazel build @io_k8s_client_go//kubernetes/typed/core/v1:go_default_library
ERROR: /usr/local/google/home/jayconrod/.cache/bazel/_bazel_jayconrod/533bcca7b8c77451fbaf7e2a9f7692b3/external/io_k8s_client_go/kubernetes/typed/core/v1/BUILD:8:1: no such package '@io_k8s_client_go//vendor/k8s.io/apimachinery/pkg/types': BUILD file not found on package path and referenced by '@io_k8s_client_go//kubernetes/typed/core/v1:go_default_library'.
ERROR: Analysis of target '@io_k8s_client_go//kubernetes/typed/core/v1:go_default_library' failed; build aborted: no such package '@io_k8s_client_go//vendor/k8s.io/apimachinery/pkg/types': BUILD file not found on package path.

Analysis

The BUILD file for the package @io_k8s_client_go//kubernetes/typed/core/v1 refers to rules in //vendor/k8s.io/.... That directory doesn't exist in this repository.

I suspect this bot may have copied the source files from k/k without updating the build files. Not sure what the right answer is. If the bot removes build files during sync, it may be possible to regenerate them correctly with Gazelle via go_repository. If the bot updates build files using Gazelle during sync, all dependencies will need to be vendored or declared using external repositories in WORKSPACE.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Version skew on tagging

Our tagging algorithm is not 100% precise:

Assume we have the following commits in k/k (latest to earliest):

  • commit A modifying staging/src/k8s.io/api only, tagged as v1.9.0 in k/k
  • commit B modifying staging/src/k8s.io/client-go only

The publishing bot cherry-picks these as:

  • commit A' in k8s.io/api, tagged as kubernetes-1.9.0 in k8s.io/api
  • commit B' in k8s.io/client-go, tagged as kubernetes-1.9.0 in k8s.io/client-go

The problem: Godeps/Godeps.json in k8s.io/client-go is generated assuming B' belongs to B in k/k. B maps to A'^ in k8s.io/api. Hence, client-go's Godeps.json at B' points to A'^ of k8s.io/api.

In https://github.com/kubernetes/client-go/releases/tag/v6.0.0 this leads to one PR (kubernetes/kubernetes#57075) being missed from k8s.io/api; compare the following k8s.io/api release-1.9 branch screenshot. The highlighted version appears in client-go's Godeps.json, while the kubernetes-1.9.0 tag is one PR merge further up:

[screenshot: k8s.io/api release-1.9 branch, 2017-12-18]

For v6.0.0 this is not critical as this is only a documentation PR. But we have to find a solution for this Godeps version skew soon and do a similar analysis before each new tag (and hope that no important PR is skipped).

k/k/staging... submodules are not processed when publishing

In k/k staging repos like api (staging/src/k8s.io/api) we find stuff like this in go.mod:

replace (
        k8s.io/api => ../api
        k8s.io/apimachinery => ../apimachinery
)

When published, that becomes (https://github.com/kubernetes/api/blob/master/go.mod#L40):

replace k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20230315054728-8d1258da8f38

I'm assuming that happens at publishing time and I found something that looks like it might be the place:

replaceCommand := exec.Command("go", "mod", "edit", "-fmt", "-replace", fmt.Sprintf("%s=%s@%s", depPkg, depPkg, pseudoVersionOrTag))
replaceCommand.Env = append(os.Environ(), "GO111MODULE=on")
replaceCommand.Stdout = os.Stdout
replaceCommand.Stderr = os.Stderr
if err := replaceCommand.Run(); err != nil {

Now the issue. We have a couple of staging repos that use sub-modules to isolate deps. Example: https://github.com/kubernetes/code-generator/tree/master/examples

In that go.mod, we find:

replace (
	k8s.io/api => ../../api
	k8s.io/apimachinery => ../../apimachinery
	k8s.io/client-go => ../../client-go
)

...but these are never reified into a real version. It's not a HUGE deal, but it means that example can ONLY be used from a GOPATH where these other repos are ALSO checked out. That's unpleasant, and if it were reasonable to solve, we should.

Generalizing publishing-bot

At least for now, publishing-bot is designed to work with kubernetes/kubernetes repo. However, as we are encouraging people to have their own apiserver repo, I believe it makes sense to export their own api and client-go repo from their own apiserver.

I have tried to use this repo to export api and client-go for my own apiserver; although there were some places to tweak, I finally got it working for me.

So, a question (or a feature request): could we make this bot general enough to handle any repo that has a staging/src setup similar to kubernetes/kubernetes?

Improve PVC cleanup procedure for publishing-bot

The current PVC cleanup process https://github.com/kubernetes/publishing-bot/blob/master/production.md#how-do-i-clean-up-the-pvc has one issue:

  • If the new pod comes up before the old PVC is completely deleted, the new pod will start using the old PVC, which is in Terminating state, thus preventing the PVC from getting deleted and defeating the cleanup.

One solution is to scale down the publisher ReplicaSet so that the pod is deleted first, and then delete the PVC. This ensures proper deletion of the PVC. The ReplicaSet can be scaled up again after the PVC is recreated, and the new pod will then use the fresh volume.
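
A sketch of that procedure with kubectl, assuming the ReplicaSet is called publisher and lives in the publishing-bot namespace (as in the error below); the PVC name is a placeholder:

kubectl -n publishing-bot scale replicaset publisher --replicas=0
kubectl -n publishing-bot delete pvc <publisher-pvc>
# recreate the PVC (or redeploy so it is recreated), then bring the pod back
kubectl -n publishing-bot scale replicaset publisher --replicas=1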

The problem with the above solution is that additional permissions would have to be granted so that scale can be patched on the ReplicaSet. Currently the permissions are not sufficient to scale the ReplicaSet:

Error from server (Forbidden): replicasets.apps "publisher" is forbidden: User "<email>" cannot patch resource "replicasets/scale" in API group "apps" in the namespace "publishing-bot": requires one of ["container.replicaSets.updateScale" "container.replicaSets.update"] permission(s).

Handle unexpected container shutdowns

In case the container is killed (quota, node down, etc.), we might end up in a weird state if the scripts crash at the wrong time. We should probably handle SIGINT and try to gracefully exit any critical operation the bot is currently doing.
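
A minimal Go sketch of that kind of handling (Go 1.16+ for signal.NotifyContext; the loop body is a stand-in for the bot's actual sync steps, not its real code):

package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Cancel the context on SIGINT/SIGTERM so in-flight work can finish cleanly.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	// Hypothetical publishing loop: check the context between critical operations
	// (git filtering, cherry-picks, go.mod updates) instead of being killed mid-step.
	for step := 1; step <= 3; step++ {
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "received signal, exiting gracefully:", ctx.Err())
			return
		default:
			fmt.Println("running sync step", step) // placeholder for one critical operation
		}
	}
}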

Support publishing staging directories to subdirectories of external repos

As part of the cloud provider removal efforts, it may be beneficial for us to support publishing staged directories to only subdirectories of external repos. This gives us the flexibility to publish the cloud provider implementations developed in k8s.io/kubernetes into their respective external repos without forcing the entire repository to reflect what is in staging.

An example of this would be to publish k8s.io/kubernetes/staging/src/k8s.io/cloud-provider/gce to k8s.io/cloud-provider-gce/provider/. This would keep only k8s.io/cloud-provider-gce/provider/ in sync with what is staged but allow us to freely update any other directory in that repo.

Happy to pick up any work required for this (will need some guidance though).

cc @nikhita @cheftako @mcrute @dims

During git pull of k/k if the references changes, try again sooner

Currently, when we encounter the issue below, we end up waiting a full interval before trying again. We should retry sooner, as this is just an intermittent issue.

E0213 19:10:04.033361       1 publisher.go:396] failed to fetch at /go-workspace/src/k8s.io/kubernetes: reference has changed concurrently
I0213 19:10:04.043938       1 main.go:181] Failed to run publisher: failed to fetch at /go-workspace/src/k8s.io/kubernetes: reference has changed concurrently

Guidance requested: sub-modules go.mod and relative path resolution

These k/k staging sub-modules are clearly broken:

https://github.com/kubernetes/code-generator/blob/master/examples/go.mod#L53-L57
https://github.com/kubernetes/kms/blob/master/internal/plugins/mock/go.mod#L25-L29

They happen to work when you use them in a GOPATH, which is required for other reasons TODAY, but will eventually not be required. Once GOPATH is properly dead, these will fail.

We should be reifying the .. paths into real versions. E.g. something converts https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kms/go.mod#L27-L32 into https://github.com/kubernetes/kms/blob/master/go.mod#L27-L31 - we need that same transformation for sub-modules.

Is this something the promoter bot should handle?

xref kubernetes/kubernetes#117920

go mod pseudo version uses wrong timestamp

We use TZ=GMT git show -q --pretty='format:v0.0.0-%cd-%h' --date='format:%Y%m%d%H%M%S' --abbrev=12 to get the timestamp of the last commit. But this does not work: TZ=GMT has no influence, and we (probably) get the timezone of the last commit instead.
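
One possible fix, as a hedged sketch: git's format-local date mode honors the TZ environment variable instead of keeping the committer's timezone, so the pseudo-version timestamp would come out in GMT:

TZ=GMT git show -q --pretty='format:v0.0.0-%cd-%h' --date='format-local:%Y%m%d%H%M%S' --abbrev=12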
