MULTIRUN: container image promoter version:
Built: 2019-04-23 03:13:59+00:00
Version: v2.0.1-0-g4db722e
Commit: 4db722e801f62b206f983d0a7adfad0fee99d463
MULTIRUN: gcloud version:
Google Cloud SDK 241.0.0
alpha 2019.04.02
app-engine-go
app-engine-java 1.9.73
app-engine-python 1.9.85
app-engine-python-extras 1.9.85
beta 2019.04.02
bigtable
bq 2.0.43
cbt
cloud-datastore-emulator 2.1.0
core 2019.04.02
datalab 20190116
gsutil 4.38
kubectl 2019.04.02
pubsub-emulator 2019.04.02
MULTIRUN: running against k8s.gcr.io/k8s-staging-cluster-api/manifest.yaml
MULTIRUN: activating service account /etc/k8s-artifacts-prod-service-account/service-account.json
Activated service account credentials for: [k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com]
********** START: k8s.gcr.io/k8s-staging-cluster-api/manifest.yaml **********
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x61750d]
goroutine 10 [running]:
encoding/json.(*Decoder).refill(0xc0004242c0, 0xc000056b01, 0x100c0001aa060)
GOROOT/src/encoding/json/stream.go:159 +0xcd
encoding/json.(*Decoder).readValue(0xc0004242c0, 0x0, 0x0, 0x6f14e0)
GOROOT/src/encoding/json/stream.go:134 +0x222
encoding/json.(*Decoder).Decode(0xc0004242c0, 0x68ff80, 0xc000574000, 0xc0005c0020, 0x950268)
GOROOT/src/encoding/json/stream.go:63 +0x78
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.extractRegistryTags(0x0, 0x0, 0x0, 0x0, 0x1)
lib/dockerregistry/inventory.go:744 +0x9f
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.getRegistryTagsFrom(0x6de800, 0xc00000ee80, 0x759860, 0xc000057cc0, 0x0, 0x0, 0x0, 0x0)
lib/dockerregistry/inventory.go:433 +0x110
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.(*SyncContext).ReadRepository.func2(0xc000060900, 0xc000061200, 0xc00006a360, 0xc00012b710, 0xc00012b708)
lib/dockerregistry/inventory.go:589 +0x793
created by github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.(*SyncContext).ExecRequests
lib/dockerregistry/inventory.go:713 +0x144
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x61750d]
goroutine 15 [running]:
encoding/json.(*Decoder).refill(0xc000214420, 0xc000056b01, 0x100c0001a80c0)
GOROOT/src/encoding/json/stream.go:159 +0xcd
encoding/json.(*Decoder).readValue(0xc000214420, 0x0, 0x0, 0x6f14e0)
GOROOT/src/encoding/json/stream.go:134 +0x222
encoding/json.(*Decoder).Decode(0xc000214420, 0x68ff80, 0xc000558000, 0xc000498020, 0x950268)
GOROOT/src/encoding/json/stream.go:63 +0x78
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.extractRegistryTags(0x0, 0x0, 0x0, 0x0, 0x1)
lib/dockerregistry/inventory.go:744 +0x9f
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.getRegistryTagsFrom(0x6de800, 0xc00000ee40, 0x759860, 0xc000057c80, 0xc0000a2858, 0xc00008ab28, 0x4121d3, 0x6af540)
lib/dockerregistry/inventory.go:433 +0x110
github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.(*SyncContext).ReadRepository.func2(0xc000060900, 0xc000061200, 0xc00006a360, 0xc00012b710, 0xc00012b708)
lib/dockerregistry/inventory.go:589 +0x793
created by github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.(*SyncContext).ExecRequests
lib/dockerregistry/inventory.go:713 +0x144
This is where the Decoder (not our code) choked on the io.Reader handle. I'm not fluent in reading Go traces, but it appears that when we called decoder.Decode(&tags), that call panicked with a nil pointer dereference. It could be that the io.Reader we passed along was for some reason invalid or empty. The line

github.com/kubernetes-sigs/k8s-container-image-promoter/lib/dockerregistry.getRegistryTagsFrom(0x6de800, 0xc00000ee80, 0x759860, 0xc000057cc0, 0x0, 0x0, 0x0, 0x0)

from goroutine 10 looks suspicious to me because the last 4 arguments are zeroed out, whereas the corresponding call in goroutine 15 has non-zero values in those positions.
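For what it's worth, a nil io.Reader alone is enough to reproduce this exact panic, which would fit the zeroed-arguments theory (this is a hypothesis check, not a confirmed root cause):

```go
package main

import (
	"encoding/json"
	"io"
)

func main() {
	var r io.Reader // nil reader, as the zeroed args in goroutine 10 suggest
	var tags interface{}
	// Panics inside (*Decoder).refill with a nil pointer dereference,
	// matching the trace above.
	_ = json.NewDecoder(r).Decode(&tags)
}
```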
Anyway, we should add robustness checks so that we don't choke like this. One (admittedly hacky) option would be to read the io.Reader ourselves into a buffer, optionally log it, and then have extractRegistryTags() decode from that buffer directly. That way we would at least get more useful error logs the next time there is a nil pointer dereference inside the JSON library code we use. The disadvantage is that we lose the elegance of passing around file handles, but it's probably worth it.
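A minimal sketch of that buffering idea, assuming extractRegistryTags() currently feeds the io.Reader straight into a json.Decoder (the registryTags struct and the exact signature here are illustrative, not the promoter's real ones):

```go
package main

import (
	"encoding/json"
	"errors"
	"io"
	"io/ioutil"
	"log"
	"strings"
)

// registryTags is a stand-in for whatever struct the promoter decodes
// the GCR tag listing into; the real type lives in lib/dockerregistry.
type registryTags struct {
	Tags []string `json:"tags"`
}

// extractRegistryTags buffers the entire stream before decoding, so a
// decode failure can log the raw payload instead of panicking deep
// inside encoding/json.
func extractRegistryTags(r io.Reader) (*registryTags, error) {
	if r == nil {
		// Guard against the nil reader suspected in the trace above.
		return nil, errors.New("extractRegistryTags: nil io.Reader")
	}
	buf, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}
	tags := &registryTags{}
	if err := json.Unmarshal(buf, tags); err != nil {
		// The raw payload is now available for debugging.
		log.Printf("could not decode registry tags; raw payload: %q", buf)
		return nil, err
	}
	return tags, nil
}

func main() {
	tags, err := extractRegistryTags(strings.NewReader(`{"tags":["v1.0.0"]}`))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("decoded tags: %v", tags.Tags)
}
```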
listx [13 minutes ago]
ok so rerunning it made it succeed. so either (1) there is a race condition in the ReadRepository code (unlikely) or (2) there was an error in the stdout stream from GCR that made the json decoder choke (which is what the trace points to). i wonder if we can log that stdout before we hand it off to the json decoder (the decode happens inside a library function). will add as an issue
listx [12 minutes ago]
at least if we log the stdout before processing it, we'll be able to debug it better in the future
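One way to get that logging without buffering the whole stream up front would be io.TeeReader, which mirrors everything the decoder reads into a side buffer (sketch only; decodeWithCapture is a made-up helper name):

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"strings"
)

// decodeWithCapture decodes JSON from r while copying every byte the
// decoder reads into a buffer, so the raw stream can be logged if
// decoding fails.
func decodeWithCapture(r io.Reader, v interface{}) error {
	var raw bytes.Buffer
	// TeeReader writes everything the decoder consumes into raw.
	dec := json.NewDecoder(io.TeeReader(r, &raw))
	if err := dec.Decode(v); err != nil {
		log.Printf("json decode failed; stream so far: %q", raw.Bytes())
		return err
	}
	return nil
}

func main() {
	var tags struct {
		Tags []string `json:"tags"`
	}
	if err := decodeWithCapture(strings.NewReader(`{"tags":["v1.0.0"]}`), &tags); err != nil {
		log.Fatal(err)
	}
	log.Printf("decoded: %v", tags.Tags)
}
```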