
werf / kubedog

635 stars · 17 watchers · 45 forks · 2.69 MB

Library to watch and follow kubernetes resources in CI/CD deploy pipelines

License: Apache License 2.0

Languages: Go 99.47%, Shell 0.38%, Makefile 0.14%
Topics: cicd, helm, rollout, follow, kubectl, devops, werf, watcher, kubernetes

kubedog's Introduction


werf is a CNCF Sandbox CLI tool to implement full-cycle CI/CD to Kubernetes easily. werf integrates into your CI system and leverages familiar and reliable technologies, such as Git, Dockerfile, Helm, and Buildah.

What makes werf special:

  • Complete application lifecycle management: build and publish container images, test, deploy an application to Kubernetes, distribute release artifacts and clean up the container registry.
  • Ease of use: use Dockerfiles and Helm chart for configuration and let werf handle all the rest.
  • Advanced features: automatic build caching and content-based tagging, enhanced resource tracking and extra capabilities in Helm, a unique container registry cleanup approach, and more.
  • Gluing common technologies: Git, Buildah, Helm, Kubernetes, and your CI system of choice.
  • Production-ready: werf has been used in production since 2017; thousands of projects rely on it to build & deploy various apps.

Quickstart

The quickstart guide shows how to set up the deployment of an example application (a cool voting app in our case) using werf.

Installation

The installation guide helps set up and use werf both locally and in your CI system.

Documentation

Detailed usage and reference documentation for werf is available in multiple languages.

Developers can get all the necessary knowledge about application delivery in Kubernetes (including basic understanding of K8s primitives) in the werf guides. They provide ready-to-use examples for popular frameworks, including Node.js (JavaScript), Spring Boot (Java), Django (Python), Rails (Ruby), and Laravel (PHP).

Community & support

Please feel free to reach out to developers/maintainers and users via GitHub Discussions for any questions regarding werf. You're also welcome on Stack Overflow: when you tag a question with werf, our team is notified and comes to help.

Issues posted to the GitHub issue tracker are processed carefully.

For questions that may require a more detailed and prompt discussion, you can use:

  • #werf channel in the CNCF’s Slack workspace;
  • werf_io Telegram chat. (There is a Russian-speaking Telegram chat werf_ru as well.)

Follow @werf_io to stay informed about important project news, new articles, etc.

Contributing

This contributing guide outlines the process to help get your contribution accepted.

License

Apache License 2.0, see LICENSE.

Featured in

Console - Developer Tool of the Week

kubedog's People

Contributors

alexey-igrychev, diafour, distorhead, dkhachyan, flant-team-sysdev, github-actions[bot], ilya-lesikov, nabadger, shurup, z9r5


kubedog's Issues

Print final status table in multitracker

The final status table should resemble the status-progress table and Helm's final table, listing all resources with their pods. Do not print child resources (too verbose).

No logs if namespace is empty

The tracker does not display logs if the namespace argument is empty (i.e. the default namespace is used implicitly).

Reproduce the problem:

$ kubedog follow deployment example -n ''
# deploy/example appears to be ready
# deploy/example new rs/example-5795c486d added
# deploy/example rs/example-7d5d8f9446 added
# deploy/example rs/example-5764c48d46 added
# deploy/example rs/example-5795c486d(new) po/example-5795c486d-69pfb added
# deploy/example event: po/example-5795c486d-69pfb Killing: Stopping container kube-state-metrics
# deploy/example rs/example-5795c486d(new) po/example-5795c486d-8hwvz added
# deploy/example event: po/example-5795c486d-8hwvz Pulled: Container image "k8s.gcr.io/kube-state-metrics:v1.5.0" already present on machine
# deploy/example event: po/example-5795c486d-8hwvz Created: Created container kube-state-metrics
# deploy/example event: po/example-5795c486d-8hwvz Started: Started container kube-state-metrics
# deploy/example become READY
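
One possible fix, shown here only as a minimal sketch with an illustrative helper name (this is not kubedog's actual code), is to normalize an empty --namespace/-n value to the "default" namespace before the log streams are opened:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // resolveNamespace falls back to "default" when the namespace flag is
    // passed as an empty string, so log requests always target a concrete
    // namespace instead of an empty one.
    func resolveNamespace(flagValue string) string {
        if flagValue == "" {
            return corev1.NamespaceDefault // "default"
        }
        return flagValue
    }

    func main() {
        fmt.Println(resolveNamespace(""))   // default
        fmt.Println(resolveNamespace("ci")) // ci
    }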

CreateContainerConfigError does not interrupt rollout tracker with the error

11:16:50  │ │ php-ml-msql                     1/1        0             1
11:16:50  │ │ │   POD                         READY      RESTARTS      STATUS                        ---
11:16:50  │ │ └── ml-msql-0                   0/1        0             CreateContainerConfigError    Waiting for: ready 0->1
11:16:50  │ │     └── error: Failed: Error: secret "mysql" not found
11:16:50  │ └ Status progress
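
A minimal sketch of how the tracker could fail fast here (the set of fatal waiting reasons and the helper name are assumptions, not kubedog's actual logic): inspect the container statuses and treat certain waiting reasons as terminal errors instead of waiting for readiness forever.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // fatalWaitingReasons lists container "waiting" reasons that will not
    // resolve without user intervention (assumed set, for illustration only).
    var fatalWaitingReasons = map[string]bool{
        "CreateContainerConfigError": true,
        "ErrImagePull":               true,
        "ImagePullBackOff":           true,
    }

    // fatalContainerError returns a non-empty message if any container is
    // stuck in a waiting state from the fatal set.
    func fatalContainerError(pod *corev1.Pod) string {
        for _, cs := range pod.Status.ContainerStatuses {
            if w := cs.State.Waiting; w != nil && fatalWaitingReasons[w.Reason] {
                return fmt.Sprintf("container %q: %s: %s", cs.Name, w.Reason, w.Message)
            }
        }
        return ""
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.ContainerStatuses = []corev1.ContainerStatus{{
            Name: "ml-msql",
            State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{
                Reason:  "CreateContainerConfigError",
                Message: `secret "mysql" not found`,
            }},
        }}
        fmt.Println(fatalContainerError(pod))
    }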

Kubernetes 1.16 support

Hello, guys!

Love your product.

It looks like kubedog 0.3.3 doesn't support the latest version of Kubernetes 1.16.0, which was released 2 days ago.

I'm getting this error:

command:

kubedog rollout track deployment nginx-ingress-controller -n ingress-nginx

output:

E0921 00:01:45.823083   18836 reflector.go:131] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99: Failed to list *v1beta1.Deployment: the server could not find the requested resource
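
For context, Kubernetes 1.16 removed the deprecated extensions/v1beta1 and apps/v1beta1/v1beta2 endpoints for Deployments, so a tracker built against those APIs can no longer list them; the fix is to move to apps/v1. A minimal client-go sketch of listing Deployments via apps/v1 (assuming client-go v0.18+ where List takes a context; this is not kubedog's code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load kubeconfig from the default location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // apps/v1 instead of the removed extensions/v1beta1 or apps/v1beta1.
        deployments, err := clientset.AppsV1().Deployments("ingress-nginx").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range deployments.Items {
            fmt.Printf("%s %d/%d ready\n", d.Name, d.Status.ReadyReplicas, d.Status.Replicas)
        }
    }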

Track a service with kubedog

Hi! I'm fairly new to the Kubernetes world. I was wondering if it would be feasible to add service tracking to kubedog? I'm using it in the Dokku world to ensure things get deployed correctly.

Happy to provide a small bounty - let's say $100? - from our OpenCollective if this seems reasonable to you :)

Arm64 build

Could you add an arm64 build to the release assets?

Panic 'fatal error: concurrent map read and map write'

goroutine 2135 [running]:
runtime.throw(0x283835c, 0x21)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/panic.go:1116 +0x72 fp=0xc0020cbd40 sp=0xc0020cbd10 pc=0x433d42
runtime.mapaccess1_faststr(0x2447cc0, 0xc0050ceb10, 0xc0015a2940, 0x7, 0xc001d99201)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/map_faststr.go:21 +0x43c fp=0xc0020cbdb0 sp=0xc0020cbd40 pc=0x412b8c
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).trackContainer(0xc001ac4f00, 0x2db6680, 0xc001081bc0, 0xc0015a2940, 0x7, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:392 +0x178 fp=0xc0020cbef0 sp=0xc0020cbdb0 pc=0x17f7e58
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers.func1(0xc001ac4f00, 0xc0015a2940, 0x7, 0x2db6680, 0xc001081bc0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:437 +0xb9 fp=0xc0020cbfb8 sp=0xc0020cbef0 pc=0x17f96f9
runtime.goexit()
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0020cbfc0 sp=0xc0020cbfb8 pc=0x463f51
created by github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:432 +0x273

goroutine 1 [select]:
github.com/werf/kubedog/pkg/trackers/rollout/multitrack.Multitrack(0x2e09ca0, 0xc001064c60, 0xc00041db80, 0x4, 0x4, 0xc0004dd900, 0x3, 0x4, 0x0, 0x0, ...)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/trackers/rollout/multitrack/multitrack.go:186 +0xe27
github.com/werf/werf/pkg/deploy/helm.(*ResourcesWaiter).WaitForResources.func1(0xc0000d2001, 0xc001d88b90)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/resources_waiter.go:156 +0x12e
github.com/werf/logboek/internal/stream.(*Stream).logProcess.func1(0x24fdf00, 0xc001d88b90)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/internal/stream/process.go:149 +0x26
github.com/werf/logboek/internal/stream.(*Stream).logProcess(0xc00000c100, 0xc002862120, 0x2d, 0xc003f2c2c0, 0xc005215500, 0x538001, 0xc005215500)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/internal/stream/process.go:156 +0xfb
github.com/werf/logboek/internal/stream.(*LogProcess).DoError(0xc0003db000, 0xc005215500, 0x2d, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/internal/stream/process_types.go:199 +0xd2
github.com/werf/werf/pkg/deploy/helm.(*ResourcesWaiter).WaitForResources(0xc001080100, 0x9d29229e000, 0xc000030400, 0x58, 0x80, 0x57, 0x80)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/resources_waiter.go:155 +0x2298
k8s.io/helm/pkg/kube.(*Client).waitForResources(0xc0002f0e70, 0x9d29229e000, 0xc000030400, 0x58, 0x80, 0x80, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/kube/wait.go:46 +0x258
k8s.io/helm/pkg/kube.(*Client).UpdateWithOptions(0xc0002f0e70, 0xc0018de1e0, 0xf, 0x2d4f1c0, 0xc000a0a450, 0x2d4f1c0, 0xc000a0a480, 0x0, 0x2a30, 0x10001, ...)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/kube/client.go:779 +0xd72
k8s.io/helm/pkg/tiller.(*LocalReleaseModule).Update(0xc0005f1160, 0xc0011ee900, 0xc000c2d290, 0xc000410000, 0xc00042c300, 0x14, 0xc0018de1e0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tiller/release_modules.go:81 +0x32b
k8s.io/helm/pkg/tiller.(*ReleaseServer).performUpdate(0xc000672d80, 0xc0011ee900, 0xc000c2d290, 0xc000410000, 0x1, 0x0, 0x1)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tiller/release_update.go:297 +0x162
k8s.io/helm/pkg/tiller.(*ReleaseServer).UpdateRelease(0xc000672d80, 0x2db6740, 0xc0006736b0, 0xc000410000, 0x1, 0x1, 0xc000192000)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tiller/release_update.go:58 +0x4ec
github.com/werf/werf/pkg/deploy/helm.releaseUpdate(0x2db6740, 0xc0006736b0, 0xc000562080, 0xc0008d2920, 0x14, 0xc000b06100, 0x0, 0x0, 0x2a30, 0x101, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/tiller.go:492 +0x357
github.com/werf/werf/pkg/deploy/helm.ReleaseUpdate(0x2db6740, 0xc0006736b0, 0xc000d0c400, 0x35, 0xc0008d2920, 0x14, 0xc000d84510, 0x3, 0x3, 0xc000b01c40, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/tiller.go:380 +0x37e
github.com/werf/werf/pkg/deploy/helm.DeployHelmChart.func2(0xbfe56a23acbe0299, 0x11d7b30a2)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/release.go:387 +0x25c
github.com/werf/werf/pkg/deploy/helm.runDeployProcess(0x2db6740, 0xc0006736b0, 0xc0008d2920, 0x14, 0x7ffcaf36fda6, 0xf, 0x9d29229e000, 0x0, 0x0, 0x0, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/release.go:487 +0xcf
github.com/werf/werf/pkg/deploy/helm.DeployHelmChart(0x2db6740, 0xc0006736b0, 0xc000d0c400, 0x35, 0xc0008d2920, 0x14, 0x7ffcaf36fda6, 0xf, 0x9d29229e000, 0x0, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/release.go:462 +0x58e
github.com/werf/werf/pkg/deploy/werf_chart.(*WerfChart).Deploy(0xc0008f2420, 0x2db6740, 0xc0006736b0, 0xc0008d2920, 0x14, 0x7ffcaf36fda6, 0xf, 0x9d29229e000, 0x0, 0x0, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/werf_chart/werf_chart.go:78 +0x3fa
github.com/werf/werf/pkg/deploy.Deploy.func2(0x21630bd, 0x2569220)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/deploy.go:107 +0x14f
github.com/werf/werf/pkg/deploy/helm.WerfTemplateEngineWithExtraAnnotationsAndLabels(0xc0004a5b30, 0xc0004a5c50, 0xc0008b3418, 0xc0000b8210, 0x0)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/helm/templates.go:473 +0x69
github.com/werf/werf/pkg/deploy.Deploy(0x2db6740, 0xc0006736b0, 0xc00061e9a0, 0x4, 0xc00005c0c4, 0x2f, 0xc000d0c400, 0x35, 0xc00005c091, 0x28, ...)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/deploy/deploy.go:106 +0x5da
github.com/werf/werf/cmd/werf/deploy.runDeploy(0x0, 0x0)
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/deploy/main.go:313 +0x175a
github.com/werf/werf/cmd/werf/deploy.NewCmd.func1.1(0xbfe56a22821040a8, 0x46246e2)
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/deploy/main.go:67 +0x22
github.com/werf/werf/cmd/werf/common.LogRunningTime(0x2943538, 0xc, 0xc000517670)
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/common/common.go:1480 +0x53
github.com/werf/werf/cmd/werf/deploy.NewCmd.func1(0xc000667080, 0xc0006486c0, 0x1, 0x24, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/deploy/main.go:66 +0x126
github.com/spf13/cobra.(*Command).execute(0xc000667080, 0xc000648240, 0x24, 0x24, 0xc000667080, 0xc000648240)
	/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:842 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0xc000666dc0, 0x0, 0xc00000c100, 0x405b1f)
	/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
	/home/ubuntu/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main()
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/main.go:105 +0x119

goroutine 6 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x443a200)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xd6

goroutine 1986 [select]:
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).trackContainer(0xc000b7ea00, 0x2db6680, 0xc000146100, 0xc001415c60, 0x6, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:390 +0x12c
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers.func1(0xc000b7ea00, 0xc001415c60, 0x6, 0x2db6680, 0xc000146100)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:437 +0xb9
created by github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:432 +0x273

goroutine 23 [syscall]:
os/signal.signal_recv(0x463f56)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/os/signal/signal.go:127 +0x44

goroutine 2026 [select]:
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).trackContainer(0xc000b7f400, 0x2db6680, 0xc0011b4940, 0xc001ddcf64, 0x5, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:390 +0x12c
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers.func1(0xc000b7f400, 0xc001ddcf64, 0x5, 0x2db6680, 0xc0011b4940)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:437 +0xb9
created by github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:432 +0x273

goroutine 12 [select]:
github.com/werf/werf/cmd/werf/common.EnableTerminationSignalsTrap.func1()
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/common/termination_signals.go:22 +0x9e
created by github.com/werf/werf/cmd/werf/common.EnableTerminationSignalsTrap
	/home/ubuntu/actions-runner/_work/werf/werf/cmd/werf/common/termination_signals.go:21 +0x11c

goroutine 13 [sleep]:
time.Sleep(0x3b9aca00)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/time.go:188 +0xba
github.com/werf/werf/pkg/process_exterminator.run(0xc00051e940, 0x5, 0x8)
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/process_exterminator/unix.go:52 +0x241
created by github.com/werf/werf/pkg/process_exterminator.Init
	/home/ubuntu/actions-runner/_work/werf/werf/pkg/process_exterminator/unix.go:28 +0x9f

goroutine 65 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2f8e64f18, 0x72, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc000655a18, 0x72, 0xb700, 0xb72c, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000655a00, 0xc001840000, 0xb72c, 0xb72c, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc000655a00, 0xc001840000, 0xb72c, 0xb72c, 0x203000, 0x670c20, 0xc0003144b8)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc00011a118, 0xc001840000, 0xb72c, 0xb72c, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/net.go:184 +0x8e
crypto/tls.(*atLeastReader).Read(0xc00150a640, 0xc001840000, 0xb72c, 0xb72c, 0x31bf, 0xb727, 0xc0006969a8)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/crypto/tls/conn.go:760 +0x60
bytes.(*Buffer).ReadFrom(0xc0003145d8, 0x2d4f400, 0xc00150a640, 0x40a3a5, 0x249a980, 0x2742fc0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000314380, 0x2d53700, 0xc00011a118, 0x5, 0xc00011a118, 0x31ae)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/crypto/tls/conn.go:782 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc000314380, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/crypto/tls/conn.go:589 +0x115
crypto/tls.(*Conn).readRecord(...)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/crypto/tls/conn.go:557
crypto/tls.(*Conn).Read(0xc000314380, 0xc00056f000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/crypto/tls/conn.go:1233 +0x15b
bufio.(*Reader).Read(0xc001062120, 0xc00066e118, 0x9, 0x9, 0x47be7c, 0xc001338880, 0x31a5)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bufio/bufio.go:226 +0x24f
io.ReadAtLeast(0x2d4f180, 0xc001062120, 0xc00066e118, 0x9, 0x9, 0x9, 0xc001338870, 0x2947928, 0xc001338868)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/io/io.go:310 +0x87
io.ReadFull(...)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/io/io.go:329
golang.org/x/net/http2.readFrameHeader(0xc00066e118, 0x9, 0x9, 0x2d4f180, 0xc001062120, 0x0, 0x0, 0xc001a697d0, 0x0)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x87
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00066e0e0, 0xc001a697d0, 0x0, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:492 +0xa1
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000696fa8, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1794 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000501800)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:1716 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:695 +0x64a

goroutine 1677 [chan receive]:
k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0006a3ec0, 0xc000e5d800)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:124 +0x34
created by k8s.io/client-go/tools/cache.(*controller).Run
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:123 +0xac

goroutine 68 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2f8e64e38, 0x72, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00063b998, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00063b980, 0xc000730000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc00063b980, 0xc000730000, 0x1000, 0x1000, 0xc000115500, 0xc000699d98, 0xc000699c20)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc00011ad78, 0xc000730000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/net.go:184 +0x8e
net/http.(*persistConn).Read(0xc001066ea0, 0xc000730000, 0x1000, 0x1000, 0xc000699eb0, 0x461370, 0xc000699eb0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1884 +0x75
bufio.(*Reader).fill(0xc001062660)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bufio/bufio.go:100 +0x103
bufio.(*Reader).Peek(0xc001062660, 0x1, 0x2, 0x0, 0x0, 0x0, 0xc00062c480)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bufio/bufio.go:138 +0x4f
net/http.(*persistConn).readLoop(0xc001066ea0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:2037 +0x1a8
created by net/http.(*Transport).dialConn
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1706 +0xc56

goroutine 69 [select]:
net/http.(*persistConn).writeLoop(0xc001066ea0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:2336 +0x11c
created by net/http.(*Transport).dialConn
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1707 +0xc7b

goroutine 44 [IO wait]:
internal/poll.runtime_pollWait(0x7fa2f8e64d58, 0x72, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc000562198, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000562180, 0xc00057e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc000562180, 0xc00057e000, 0x1000, 0x1000, 0xc000115560, 0xc000694d98, 0xc000694c20)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc000626720, 0xc00057e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/net.go:184 +0x8e
net/http.(*persistConn).Read(0xc0010c17a0, 0xc00057e000, 0x1000, 0x1000, 0xc000694eb0, 0x461370, 0xc000694eb0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1884 +0x75
bufio.(*Reader).fill(0xc000115a40)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bufio/bufio.go:100 +0x103
bufio.(*Reader).Peek(0xc000115a40, 0x1, 0x2, 0x0, 0x0, 0x0, 0xc000062fc0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/bufio/bufio.go:138 +0x4f
net/http.(*persistConn).readLoop(0xc0010c17a0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:2037 +0x1a8
created by net/http.(*Transport).dialConn
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1706 +0xc56

goroutine 45 [select]:
net/http.(*persistConn).writeLoop(0xc0010c17a0)
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:2336 +0x11c
created by net/http.(*Transport).dialConn
	/home/ubuntu/actions-runner/_work/_tool/go/1.14.10/x64/src/net/http/transport.go:1707 +0xc7b

goroutine 2205 [select]:
golang.org/x/net/http2.awaitRequestCancel(0xc000990e00, 0xc000e944e0, 0x43d176, 0x29476c8)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:309 +0x120
golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc0006d6c60, 0xc000990e00)
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:335 +0x40
created by golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
	/home/ubuntu/go/pkg/mod/golang.org/x/[email protected]/http2/transport.go:2029 +0x749

goroutine 1544 [select]:
k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc001d860d0, 0xbfe56a24e0906700, 0x23b538701, 0x4439960, 0x2d75b80, 0xc001e0e680, 0xc0007dbb40, 0xc001626540, 0xc001375aa0, 0x0, ...)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:432 +0x1a1
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc001d860d0, 0xc001375aa0, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:396 +0xae2
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:177 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc002688ee0)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007dbee0, 0x2d52e60, 0xc001694320, 0xc002688f01, 0xc001375aa0)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xa3
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc001d860d0, 0xc001375aa0)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:176 +0x17e
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:56 +0x2e
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00129a010, 0xc000a62020)
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x51
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/home/ubuntu/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x62

goroutine 2023 [select]:
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).trackContainer(0xc000b7f180, 0x2db6680, 0xc000414cc0, 0xc001e8bfc0, 0x7, 0x0, 0x0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:390 +0x12c
github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers.func1(0xc000b7f180, 0xc001e8bfc0, 0x7, 0x2db6680, 0xc000414cc0)
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:437 +0xb9
created by github.com/werf/kubedog/pkg/tracker/pod.(*Tracker).runContainersTrackers
	/home/ubuntu/go/pkg/mod/github.com/werf/[email protected]/pkg/tracker/pod/tracker.go:432 +0x273
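
The trace shows one goroutine reading a map inside (*Tracker).trackContainer while another goroutine writes it. A sketch of the usual fix pattern (field and method names here are illustrative, not the tracker's actual ones): protect the shared per-container state with a sync.RWMutex so the per-container goroutines and the launching goroutine never touch the map unsynchronized.

    package tracker

    import "sync"

    // ContainerState is shared between the main tracker goroutine and the
    // per-container goroutines started by runContainersTrackers.
    type ContainerState struct {
        mu      sync.RWMutex
        tracked map[string]bool
    }

    func NewContainerState() *ContainerState {
        return &ContainerState{tracked: make(map[string]bool)}
    }

    // MarkTracked is called from the goroutine that launches container trackers.
    func (s *ContainerState) MarkTracked(containerName string) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.tracked[containerName] = true
    }

    // IsTracked is called concurrently from each container-tracking goroutine;
    // the read lock makes the concurrent map access safe.
    func (s *ContainerState) IsTracked(containerName string) bool {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.tracked[containerName]
    }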

More detailed info about containers being waited for

│ │ ┌ Status progress
│ │ │ JOB                     ACTIVE   SUCCEEDED  FAILED   DURATION  AGE
│ │ │ rails-unit-tests        1        0          0        9s        9s
│ │ │ │   POD                  READY  STATUS    RESTARTS  AGE   ---
│ │ │ └── unit-tests-wx97d     0/1    Init:1/2  0         9s    Waiting for: pods should be complete, succeeded 0->1
│ │ └ Status progress

Support flagger canary resources

Hi,

Flagger is a great tool for performing automated canary deployments in k8s: https://github.com/weaveworks/flagger/

Is it possible to support the flagger canary custom resource so that kubedog will output the events from the resource?

example:

kubectl -n test describe canary/podinfo

Status:
  Canary Weight:         0
  Failed Checks:         10
  Phase:                 Failed
Events:
  Type     Reason  Age   From     Message
  ----     ------  ----  ----     -------
  Normal   Synced  3m    flagger  Starting canary deployment for podinfo.test
  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 5
  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 10
  Normal   Synced  3m    flagger  Advance podinfo.test canary weight 15
  Normal   Synced  3m    flagger  Halt podinfo.test advancement success rate 69.17% < 99%
  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 61.39% < 99%
  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 55.06% < 99%
  Normal   Synced  2m    flagger  Halt podinfo.test advancement success rate 47.00% < 99%
  Normal   Synced  2m    flagger  (combined from similar events): Halt podinfo.test advancement success rate 38.08% < 99%
  Warning  Synced  1m    flagger  Rolling back podinfo.test failed checks threshold reached 10
  Warning  Synced  1m    flagger  Canary failed! Scaling down podinfo.test

Thanks!
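
Generic custom-resource tracking would likely be needed for this. Purely as an illustration (this is not kubedog's API; the Canary group/version may differ between Flagger releases, and client-go v0.18+ is assumed), the dynamic client can already watch Canary objects and surface phase changes:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := dynamic.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // GroupVersionResource for the Flagger Canary CRD (group/version is an
        // assumption and may vary per Flagger release).
        canaryGVR := schema.GroupVersionResource{Group: "flagger.app", Version: "v1beta1", Resource: "canaries"}

        w, err := client.Resource(canaryGVR).Namespace("test").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for event := range w.ResultChan() {
            obj, ok := event.Object.(*unstructured.Unstructured)
            if !ok {
                continue
            }
            phase, _, _ := unstructured.NestedString(obj.Object, "status", "phase")
            fmt.Printf("%s canary/%s phase=%s\n", event.Type, obj.GetName(), phase)
        }
    }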

Implement show logs until modes

Kubedog should support "show logs until" modes: ControllerIsDone | PodIsDone (default) | EndOfDeployProcess.

For now it is always PodIsDone; add the other modes.

Option to change the display prefix

Rather than have the output be prefixed with #, I would like to indent the output by some arbitrary number of characters. This would make the output look much better when used as a step within a build pipeline in my case.

Would you take a PR to modify the display prefix when tracking rollouts?

Configurable kube-config

Give the user the ability to use an arbitrary kube config even when running in in-cluster mode.
Also add --kube-config and --kube-context CLI options.
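
A minimal sketch of how --kube-config and --kube-context could be honored with client-go's clientcmd (the wiring and function name are assumptions, not kubedog's actual code), falling back to the in-cluster config only when neither option is set:

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    func buildConfig(kubeConfigPath, kubeContext string) (*rest.Config, error) {
        if kubeConfigPath == "" && kubeContext == "" {
            // No explicit options: prefer in-cluster config when running inside a pod.
            if config, err := rest.InClusterConfig(); err == nil {
                return config, nil
            }
        }
        // Otherwise resolve kubeconfig the same way kubectl does, with the
        // explicit path and context overriding the defaults.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        rules.ExplicitPath = kubeConfigPath
        overrides := &clientcmd.ConfigOverrides{CurrentContext: kubeContext}
        return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
    }

    func main() {
        config, err := buildConfig("", "")
        if err != nil {
            panic(err)
        }
        fmt.Println(config.Host)
    }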

`rollout.TrackJobTillDone` bad error msg: `%!s(MISSING)`

ERROR job/project-mysql-migrate-job-lj29w po/mysql-migrate-job CrashLoopBackOff: Back-off 10s restarting failed container=mysql-migrate-job pod=project-mysql-migrate-job-lj29w_bush-test(3528920b-2280-11e9-8efd-4ab856982a7a) failed: %!s(MISSING)
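
The %!s(MISSING) marker is Go's fmt package reporting a format string with more verbs than arguments. A sketch of the bug pattern and its fix (illustrative code, not the actual kubedog call site):

    package main

    import "fmt"

    func main() {
        reason := "CrashLoopBackOff"

        // Buggy: two %s verbs, one argument -> "... failed: %!s(MISSING)"
        bad := fmt.Errorf("container %s failed: %s", reason)
        fmt.Println(bad)

        // Fixed: pass an argument for every verb.
        cause := fmt.Errorf("back-off restarting failed container")
        good := fmt.Errorf("container %s failed: %s", reason, cause)
        fmt.Println(good)
    }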

Use reduced resource names in commands

kubedog rollout track deployment -n namespace deployment-name is not in sync with kubectl commands.

kubedog rollout track -n namespace deploy/deployment-name is much better.

kubedog follow -n namespace po/application-1341511-w53r1 is much simpler to use after a kubectl get po invocation.
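
For reference, a small sketch of how such kubectl-style short references could be parsed (the helper name and abbreviation map are illustrative assumptions, not kubedog's actual CLI code):

    package main

    import (
        "fmt"
        "strings"
    )

    // shortNames maps common kubectl abbreviations to full resource kinds.
    var shortNames = map[string]string{
        "deploy": "deployment",
        "sts":    "statefulset",
        "ds":     "daemonset",
        "po":     "pod",
    }

    // parseResourceRef splits "kind/name" and expands an abbreviated kind.
    func parseResourceRef(ref string) (kind, name string, err error) {
        parts := strings.SplitN(ref, "/", 2)
        if len(parts) != 2 {
            return "", "", fmt.Errorf("expected KIND/NAME, got %q", ref)
        }
        kind = parts[0]
        if full, ok := shortNames[kind]; ok {
            kind = full
        }
        return kind, parts[1], nil
    }

    func main() {
        kind, name, err := parseResourceRef("deploy/deployment-name")
        if err != nil {
            panic(err)
        }
        fmt.Println(kind, name) // deployment deployment-name
    }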

Weird default value

Hello!

Just saw a strange message:

kubedog help

  -t, --timeout int            Timeout of operation in seconds. 0 is wait forever. Default is 0. (default -1)

The "Default is 0. (default -1)" part looks weird to me; it's not clear which value will actually be used by default.
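
The trailing "(default -1)" is appended automatically by pflag/cobra from the value the flag was registered with, while the hand-written description claims the default is 0. A sketch reproducing the confusing help line and a clearer variant (the flag wiring here is illustrative, not kubedog's actual code):

    package main

    import (
        "fmt"

        "github.com/spf13/pflag"
    )

    func main() {
        flags := pflag.NewFlagSet("kubedog", pflag.ExitOnError)

        // Confusing: the description and the registered default disagree,
        // so pflag prints "... Default is 0. (default -1)".
        flags.IntP("timeout", "t", -1,
            "Timeout of operation in seconds. 0 is wait forever. Default is 0.")

        // Clearer: register the real default and describe the sentinel explicitly.
        flags.IntP("timeout-fixed", "T", 0,
            "Timeout of operation in seconds (0 means wait forever).")

        fmt.Println(flags.FlagUsages())
    }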

Fail fast when resource readiness probe failed

Kubedog should treat readiness probe failures as errors and fail the tracking. For now these failures are only printed to the log.

The status reports of non-ready controllers should include the statuses of their child pods.

Information about readiness probe failures is available in the pod's status conditions fields.
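
A minimal sketch of detecting this condition from the Pod object (an assumed helper, not the tracker's actual code): a running pod whose Ready condition never becomes true while its containers report Ready=false is a strong signal of failing readiness probes.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // readinessFailed reports whether the pod's containers are running but the
    // pod has not become Ready, which typically indicates failing readiness probes.
    func readinessFailed(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                return false
            }
        }
        for _, cs := range pod.Status.ContainerStatuses {
            if cs.State.Running != nil && !cs.Ready {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(readinessFailed(&corev1.Pod{})) // false for an empty pod
    }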

Track an entire helm release

Hello,

I'm using Helm to deploy to Kubernetes, sometimes with hooks. During my CD jobs, if a job fails, Helm displays a message like "Backoff limit exceeded" (saying that one job failed and exceeded its retry limit).

In order to ease the work of developers, I want to display the logs of the failed job directly in the CI output.
Currently, with kubedog, I need to specify the name of each resource that I want to track. It would be really great to be able to use it to follow an entire Helm release (with all the resources created by it).

For example:
running kubedog rollout track helm <release_name>, which would run a kubedog rollout track on each resource created in this release.
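
One way such a feature could work, purely as a sketch under my own assumptions about the wiring (using gopkg.in/yaml.v2): take the rendered release manifest (e.g. the output of helm get manifest <release_name>), split the multi-document YAML into kind/name pairs, and hand each one to the corresponding kubedog tracker.

    package main

    import (
        "fmt"
        "strings"

        "gopkg.in/yaml.v2"
    )

    type manifestObject struct {
        Kind     string `yaml:"kind"`
        Metadata struct {
            Name      string `yaml:"name"`
            Namespace string `yaml:"namespace"`
        } `yaml:"metadata"`
    }

    // releaseResources splits a multi-document manifest and collects the
    // kind/name of every object in it.
    func releaseResources(manifest string) ([]manifestObject, error) {
        var objects []manifestObject
        for _, doc := range strings.Split(manifest, "\n---\n") {
            if strings.TrimSpace(doc) == "" {
                continue
            }
            var obj manifestObject
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
                return nil, err
            }
            if obj.Kind != "" {
                objects = append(objects, obj)
            }
        }
        return objects, nil
    }

    func main() {
        manifest := "kind: Deployment\nmetadata:\n  name: mydeploy2\n---\nkind: Job\nmetadata:\n  name: migrate"
        objects, err := releaseResources(manifest)
        if err != nil {
            panic(err)
        }
        for _, o := range objects {
            fmt.Printf("%s/%s\n", strings.ToLower(o.Kind), o.Metadata.Name)
        }
    }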

Rollout tracker could hang on sts track

I have a chart with a StatefulSet dependency:

$ cat .helm/requirements.yaml 
dependencies:                                                                                       
- name: mariadb                                                                                     
  version: 5.x.x                                                                                    
  repository: https://kubernetes-charts.storage.googleapis.com/                                     
  condition: mariadb.enabled                                                                        

Deploy sometimes hangs on a Pod that does not exist and will never be created:

...
==> v1beta1/Deployment
mydeploy2  6s

==> v1beta1/StatefulSet
ex-dapp-deployment-watcher-dev-mariadb-master  6s
ex-dapp-deployment-watcher-dev-mariadb-slave   6s

==> v1/Pod(related)

NAME                                             READY  STATUS             RESTARTS  AGE
mydeploy2-5b9cd4d486-6ncp2                       0/1    Init:0/1           0         6s
mydeploy2-5b9cd4d486-99prm                       0/1    Init:0/1           0         6s
mydeploy2-5b9cd4d486-dg2pq                       0/1    Init:0/1           0         6s
mydeploy2-5b9cd4d486-gpbb7                       0/1    Init:0/1           0         6s
mydeploy2-5b9cd4d486-mrlgj                       0/1    Init:0/1           0         6s
mydeploy2-5b9cd4d486-p8vsp                       0/1    Init:0/1           0         6s
ex-dapp-deployment-watcher-dev-mariadb-master-0  0/1    ContainerCreating  0         6s
ex-dapp-deployment-watcher-dev-mariadb-slave-0   0/1    ContainerCreating  0         6s

# Run watch for pod 'ex-dapp-deployment-watcher-dev-mariadb-test-scdoh'
HANG

It seems the pod ex-dapp-deployment-watcher-dev-mariadb-test-scdoh runs some tests which finish fast.

As a solution, we should catch the resource deletion event and terminate the corresponding pod tracker. We even have the code that should do this, but it is commented out. Uncomment it in the Pod informer and handle the signals properly:

            switch e.Type {
            case watch.Added:
                p.PodAdded <- object
                // case watch.Modified:
                //     d.resourceModified <- object
                // case watch.Deleted:
                //     d.resourceDeleted <- object
            }

https://github.com/flant/kubedog/blob/master/pkg/tracker/pod_informer.go#L92
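
A sketch of the proposed handling (channel and type names are assumptions rather than kubedog's actual fields): forward watch.Deleted events from the informer so the pod tracker can terminate instead of hanging on a pod that has already been removed.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/watch"
    )

    type PodFeed struct {
        PodAdded   chan *corev1.Pod
        PodDeleted chan *corev1.Pod
    }

    // handleEvent dispatches a single watch event to the feed channels.
    func (p *PodFeed) handleEvent(e watch.Event) {
        pod, ok := e.Object.(*corev1.Pod)
        if !ok {
            return
        }
        switch e.Type {
        case watch.Added:
            p.PodAdded <- pod
        case watch.Deleted:
            // Previously ignored: without this the tracker waits forever for a
            // pod (e.g. a finished helm test pod) that no longer exists.
            p.PodDeleted <- pod
        }
    }

    func main() {
        feed := &PodFeed{PodAdded: make(chan *corev1.Pod, 1), PodDeleted: make(chan *corev1.Pod, 1)}
        feed.handleEvent(watch.Event{Type: watch.Deleted, Object: &corev1.Pod{}})
        fmt.Println("deleted pods queued:", len(feed.PodDeleted))
    }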

Warnings in multitrack output: Reflector ListAndWatch Objects listed

I0819 16:57:30.605052   19492 trace.go:81] Trace[1201317408]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99 ListAndWatch" (started: 2019-08-19 16:57:18.588504865 +0300 MSK m=+44.620805719) (total time: 12.016527903s):
Trace[1201317408]: [12.016456842s] [12.016456842s] Objects listed
I0819 16:57:30.793991   19492 trace.go:81] Trace[798899788]: "Reflector pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:99 ListAndWatch" (started: 2019-08-19 16:57:19.00052773 +0300 MSK m=+45.032828586) (total time: 11.793444041s):

Errors when following a pod

ERROR: logging before flag.Parse: W1225 16:03:56.419831 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
ERROR: logging before flag.Parse: W1225 16:12:36.563961 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
ERROR: logging before flag.Parse: W1225 16:31:59.683147 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.
ERROR: logging before flag.Parse: W1225 16:39:27.725860 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.

ERROR: logging before flag.Parse: W1225 16:54:45.818893 28319 reflector.go:270] k8s.io/client-go/tools/watch/informerwatcher.go:110: watch of *v1.Event ended with: The resourceVersion for the provided watch is too old.

Kubedog gets wrong ReplicaSet

I sometimes get a deployment error.

2019/02/15 17:55:49 # deploy/marketplace-api rs/marketplace-api-7b6bc88574 added
2019/02/15 17:55:49 # deploy/marketplace-api po/marketplace-api-7b6bc88574-phpjd added
2019/02/15 17:55:52 # deploy/marketplace-api FAIL: resource deleted

The deploy started at 17:55:49 and kubedog picked up ReplicaSet marketplace-api-7b6bc88574 at 17:55:49. But that is the old RS, because the new RS was only created 2 seconds later (the cluster was working slowly at the time).

This is the correct RS:

NAME                                                      DESIRED   CURRENT   READY   AGE
marketplace-api-7c5f548fdc                                1         1         1       2d
marketplace-api-fc5cfcf58                                 0         0         0       2d
Name:           marketplace-api-fc5cfcf58
Namespace:      mapi
Selector:       app=marketplace-api,pod-template-hash=971797914,release=marketplace-api
Labels:         app=marketplace-api
                date=2019-02-15T175551Z\

So we have a case where kubedog picked up an earlier RS because the right RS was not created fast enough.
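
A sketch of one way to avoid picking up the stale ReplicaSet (not kubedog's actual code): match ReplicaSets to the current rollout through the deployment.kubernetes.io/revision annotation that the Deployment controller keeps in sync on the Deployment and its new ReplicaSet, and keep waiting while no match exists yet.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    const revisionAnnotation = "deployment.kubernetes.io/revision"

    // newReplicaSetFor returns the ReplicaSet whose revision annotation matches
    // the Deployment's, i.e. the one created for the rollout currently tracked.
    func newReplicaSetFor(d *appsv1.Deployment, rss []appsv1.ReplicaSet) *appsv1.ReplicaSet {
        want := d.Annotations[revisionAnnotation]
        for i := range rss {
            if rss[i].Annotations[revisionAnnotation] == want {
                return &rss[i]
            }
        }
        return nil // the new ReplicaSet may not have been created yet
    }

    func main() {
        d := &appsv1.Deployment{ObjectMeta: metav1.ObjectMeta{
            Annotations: map[string]string{revisionAnnotation: "3"},
        }}
        old := appsv1.ReplicaSet{ObjectMeta: metav1.ObjectMeta{
            Name:        "marketplace-api-7b6bc88574",
            Annotations: map[string]string{revisionAnnotation: "2"},
        }}
        if rs := newReplicaSetFor(d, []appsv1.ReplicaSet{old}); rs == nil {
            fmt.Println("new ReplicaSet not created yet; keep waiting instead of tracking the old one")
        }
    }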
