
client's Issues

Package kn in a container image

This would enable users of Tekton and Google Cloud Build to more easily invoke kn, e.g.:

...
steps:
- name: kn-deploy-image
  image: gcr.io/knative-something/kn:v0.X.Y
  command: ['kn', 'service', 'update', 'foo', '--image', 'gcr.io/my/image']
...

Assuming kn doesn't depend on any other tools, you could build and push the image using ko publish (and crane copy to shorten the image name):

$ KO_DOCKER_REPO=gcr.io/knative-something/ ko publish -P github.com/knative/client/cmd/kn
$ crane copy gcr.io/knative-something/github.com/knative/client/cmd/kn gcr.io/knative-something/kn:${RELEASE}

Support wildcards when selecting resources

When using commands that require a resource name as an argument, like kn revision get <revision> or kn service get <srv> (or kn service list if not merged with kn service get as proposed in #48), wildcard evaluation on the argument would be very helpful, especially when dealing with resource names that have autogenerated suffixes (like revisions).

More generally, glob semantics like those of Unix commands such as ls would be very useful IMO (see the sketch after this list):

  • kn revision get --> get all revisions
  • kn revision get * --> same as above
  • kn revision get myrevision-1-9865 --> get specific revision
  • kn revision get myrevision-1-* --> get revision with wildcard
  • kn revision get myrevision* --> filter on revisions, possibly returning multiple entries.
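A minimal sketch of such matching using Go's path.Match for glob evaluation (filterByGlob and the revision names are hypothetical, not existing kn code):

package main

import (
	"fmt"
	"path"
)

// filterByGlob returns the names matching the glob pattern; an empty
// pattern is treated like "*" (match everything).
func filterByGlob(names []string, pattern string) ([]string, error) {
	if pattern == "" {
		pattern = "*"
	}
	var matches []string
	for _, name := range names {
		ok, err := path.Match(pattern, name)
		if err != nil {
			return nil, err // malformed pattern
		}
		if ok {
			matches = append(matches, name)
		}
	}
	return matches, nil
}

func main() {
	revisions := []string{"myrevision-1-9865", "myrevision-1-4711", "other-2-0001"}
	matches, _ := filterByGlob(revisions, "myrevision-1-*")
	fmt.Println(matches) // [myrevision-1-9865 myrevision-1-4711]
}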

Go panic from kn service delete command

Version: 9c21b8d

Steps to reproduce:

1. Create knative service

(e.g)

$ ./kn service create hello-example --image=gcr.io/knative-samples/helloworld-go 

2. Delete the service by kn service delete command

$ ./kn service delete hello-example 
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x10b00f4]

goroutine 1 [running]:
github.com/knative/client/pkg/kn/commands.NewServiceDeleteCommand.func1(0xc00011fb80, 0xc0003f9800, 0x1, 0x1, 0x0, 0x0)
	/home/knakayam/.go/src/github.com/knative/client/pkg/kn/commands/service_delete.go:44 +0x104
github.com/spf13/cobra.(*Command).execute(0xc00011fb80, 0xc0003f97a0, 0x1, 0x1, 0xc00011fb80, 0xc0003f97a0)
	/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:762 +0x465
github.com/spf13/cobra.(*Command).ExecuteC(0xc000155680, 0x0, 0x0, 0xc000155680)
	/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
	/home/knakayam/.go/src/github.com/knative/client/cmd/kn/main.go:29 +0x45

Actual result

  • See the panic output in step 2 above.

Proposed patch

#73
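A nil-pointer panic inside a cobra RunE handler usually means an error return was ignored before dereferencing the result. A minimal sketch of the guard pattern, with hypothetical names (KnParams and ServingFactory are assumptions, not the actual kn code):

package commands

import (
	"errors"

	"github.com/spf13/cobra"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	servingv1alpha1 "github.com/knative/serving/pkg/client/clientset/versioned/typed/serving/v1alpha1"
)

// KnParams is a hypothetical stand-in for the command's shared parameters.
type KnParams struct {
	Namespace      string
	ServingFactory func() (servingv1alpha1.ServingV1alpha1Interface, error)
}

func NewServiceDeleteCommand(p *KnParams) *cobra.Command {
	return &cobra.Command{
		Use: "delete NAME",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) != 1 {
				return errors.New("requires the service name")
			}
			client, err := p.ServingFactory()
			if err != nil {
				return err // returning here prevents the nil dereference below
			}
			return client.Services(p.Namespace).Delete(args[0], &metav1.DeleteOptions{})
		},
	}
}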

service replace

Write the command to replace a service! It should take all the same arguments as create!

Every error is printed twice

kn service list
Error: Get https://192.168.39.212:8443/apis/serving.knative.dev/v1alpha1/services: dial tcp 192.168.39.212:8443: connect: no route to host
Get https://192.168.39.212:8443/apis/serving.knative.dev/v1alpha1/services: dial tcp 192.168.39.212:8443: connect: no route to host
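This looks like cobra's default behavior: cobra prints the returned error itself, and then main prints it again. A minimal sketch of the usual remedy (the root command here is a stand-in for kn's; SilenceErrors is a real cobra field):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	rootCmd := &cobra.Command{Use: "kn"} // stand-in for kn's real root command
	// With SilenceErrors set, cobra no longer prints the error itself,
	// so the single print below is the only one the user sees.
	rootCmd.SilenceErrors = true
	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}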

allow all builtin auth providers

instead of

_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"

let's do

_ "k8s.io/client-go/plugin/pkg/client/auth"

since that package pulls in all of client-go's known auth providers.
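In context the change is a single blank import; a minimal sketch:

package main

import (
	// Registers all of client-go's built-in auth provider plugins
	// (gcp, oidc, azure, openstack, ...) via their init() functions.
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

func main() {
	// ... build the kubeconfig-based client as usual; provider-specific
	// auth (e.g. a GKE kubeconfig using the gcp provider) now works.
}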

Tests don't pass

I'm not sure what I'm doing wrong; I'm using Go 1.11.5 and this is on master:

dev/k8s/knative-client  master ✔                                                                                                                                            15h29m  ⍉
▶ go test ./pkg/...                                  
go: finding k8s.io/client-go/tools/metrics latest
go: finding k8s.io/client-go/transport latest
go: finding k8s.io/client-go/util/jsonpath latest
go: finding k8s.io/client-go/util/flowcontrol latest
go: finding k8s.io/client-go/testing latest
go: finding k8s.io/client-go/util/homedir latest
go: finding k8s.io/client-go/tools/clientcmd latest
go: finding k8s.io/client-go/util/cert latest
go: finding k8s.io/client-go/util/connrotation latest
go: finding k8s.io/client-go/tools/clientcmd/api latest
go: finding k8s.io/client-go/tools latest
go: finding k8s.io/client-go/util latest
go: finding k8s.io/client-go/tools/cache latest
# github.com/knative/serving/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_configuration.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_revision.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_route.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_service.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
FAIL    github.com/knative/client/pkg/kn/commands [build failed]

and using the other command, go test ./...:

dev/k8s/knative-client  master ✗                                                                                                                                          15h29m ⚑  ⍉
▶ go test ./...
# github.com/knative/serving/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_configuration.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_revision.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_route.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_service.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
        have (schema.GroupVersionResource, string, string, []byte, []string...)
        want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
?       github.com/knative/client/cmd/kn        [no test files]
FAIL    github.com/knative/client/pkg/kn/commands [build failed]

Extra error messages showing up before actual error message

KUBECONFIG='' ./kn service list
ERROR: logging before flag.Parse: W0302 11:05:09.100519   14619 client_config.go:548] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
ERROR: logging before flag.Parse: W0302 11:05:09.100769   14619 client_config.go:553] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Error: invalid configuration: no configuration has been provided
[..]

Lines starting with 'ERROR: logging before' are some kind of internal log messaging leaking through. They should be suppressed.
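These messages come from glog (pulled in via client-go), which warns whenever logging happens before the flag package has been parsed. A commonly used workaround, sketched here under that assumption, is to parse an empty argument list up front:

package main

import (
	"flag"
)

func main() {
	// glog checks flag.Parsed(); parsing an empty argument list up front
	// silences the "logging before flag.Parse" warnings without exposing
	// glog's flags to kn users.
	flag.CommandLine.Parse([]string{})

	// ... construct and execute the root cobra command as usual ...
}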

Control (default?) help usage printing by cobra upon CLI operational errors

If a CLI operation fails, the CLI prints the error and then prints the usage help as well.
This makes the actual error harder to read: one has to locate what went wrong within the text printed on stdout. This appears to be cobra's default behavior. This issue tracks controlling that default.
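cobra exposes a field for exactly this; a minimal sketch (the root command is a stand-in for kn's):

package main

import (
	"github.com/spf13/cobra"
)

func newRootCommand() *cobra.Command {
	rootCmd := &cobra.Command{Use: "kn"} // stand-in for kn's real root command
	// Suppress the usage dump when a command returns an error, so the
	// error message is the only thing printed.
	rootCmd.SilenceUsage = true
	return rootCmd
}

func main() {
	newRootCommand().Execute()
}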

Create a service from pre-built container (no build involved)

As a user, I want to deploy a pre-built container from a container registry into OpenShift using Knative APIs, so that my container can auto-scale based on events and be accessible through a URL.

Input:
Given: Arg: Service name (mandatory)
Given: Flag: Container repository URL (mandatory)
Given: Flag: Env vars (strings) (optional)
Given: Flag: Concurrency (optional)

Output:
Create a kservice that deploys the container and exposes it via a URL (assume a web-app use case).

Sample commands:

  1. Create a service from the container at IMAGE_URL:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL>
  2. Create a service from the container at IMAGE_URL with environment variable ENV1=ENV1_VALUE:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL> --env ENV1=ENV1_VALUE
  3. Create a service and configure it to handle 5 concurrent requests:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL> --concurrency 5

Acceptance:

  • User must be able to issue a command to deploy a container
  • User must be able to specify env var(s) (strings)
  • User must be able to specify concurrency
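For reference, a sketch of the object such a command might construct, assuming the RunLatest shape of the 2019-era v1alpha1 Serving API (treat the exact field names as an assumption, not confirmed kn code):

package commands

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	servingv1alpha1 "github.com/knative/serving/pkg/apis/serving/v1alpha1"
)

// buildService is a hypothetical helper assembling the kservice that
// kn service create would submit.
func buildService(name, namespace, image string, env []corev1.EnvVar, concurrency int) *servingv1alpha1.Service {
	return &servingv1alpha1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: servingv1alpha1.ServiceSpec{
			RunLatest: &servingv1alpha1.RunLatestType{
				Configuration: servingv1alpha1.ConfigurationSpec{
					RevisionTemplate: servingv1alpha1.RevisionTemplateSpec{
						Spec: servingv1alpha1.RevisionSpec{
							Container:            corev1.Container{Image: image, Env: env},
							ContainerConcurrency: servingv1alpha1.RevisionContainerConcurrencyType(concurrency),
						},
					},
				},
			},
		},
	}
}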

listing services and revisions should respect namespace

➜  kubectl create namespace n2        
namespace/n2 created

➜./kn service create svc1 -n n2 --image dev.local/ns/i:v1    

# bug: list without any option returning services from all the namespaces
➜./kn service list
s1
s2
svc1

➜ ./kn service list -n n2
svc1

➜ ./kn service list -n default
s1
s2
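A likely culprit, assuming the list call is built with an empty namespace: client-go's typed clients treat "" as metav1.NamespaceAll, i.e. a cluster-wide list. A sketch of the scoped call (names are hypothetical):

package commands

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	servingv1alpha1 "github.com/knative/serving/pkg/apis/serving/v1alpha1"
	servingclient "github.com/knative/serving/pkg/client/clientset/versioned"
)

func listServices(client servingclient.Interface, namespace string) (*servingv1alpha1.ServiceList, error) {
	// Passing "" here would mean metav1.NamespaceAll, which is exactly the
	// reported bug. Always pass the namespace resolved from the -n flag or
	// the kubeconfig's current context.
	return client.ServingV1alpha1().Services(namespace).List(metav1.ListOptions{})
}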

service update

Write the command to update a service! It should never change the code you are running unless you explicitly specify a new image/build.

Consider keeping serving->CI/CD control

This is a proposal that is a counterpoint to #25

It's also drawing heavily from @mattmoor and his thoughts about our next Serving API rev, even if it disagrees about this one point.

Motivation

Live your truth.

When you start using Knative, your day 1 experience probably doesn't involve a GitOps repository. On day one, you're using kn to deploy directly, and using the etcd of your Kubernetes cluster as your source of truth about what should be serving, and at what percentages, and under what subdomains.

As you grow, you may end up setting up a github repository, and transitioning your source of truth over to that.

Both of these are valid workflows for orgs at various maturity levels. Transitioning between them should be smooth, and not require a phase change (@evankanderson's point 1 in #25).

Well I do declare!

The serving API is a language for describing what programs you want to run, accessible under what names, with what traffic split, configured how. Knative also contains a way of interpreting that language into action on a Kubernetes cluster, but the language itself is useful even without the particular controllers in the serving repo. The PodSpec-able nature of @mattmoor's proposed v1 API is great when you want to declare the thing you run as a fully-formed image, but sometimes the image is just a side effect; the actual main description of the program you want to be running is a particular commit to a source git repo.

A CI/CD system, on the other hand, describes a process, not a declaration of what you want running. Each run of the CI/CD pipeline might build a thing to run and set some percentages, but what you actually run is something like the most recent of those runs to complete successfully, or, even worse, is ill-defined.

In your declarative serving-API language, you should declare exactly what code you want to be running. It should be up to your CI system to make a tested deployable artifact for you, safely.

In your declarative serving-API language, you should declare exactly what percent of traffic should be going to your program. It should be up to your CD system to figure out how to make that so, safely.

Therefore, the serving API specifies what a CI/CD system should accomplish. The serving API knows what, the CI/CD system knows how.

Concrete v1 API suggestions

(with reference to @mattmoor v1 API principles/opinions)

Specifying source to run in the serving API language

Let Service inline the inputs and outputs parts of TaskRun/PipelineRun from the Pipelines API. This is for specifying source instead of an image to run, and is how the CI system knows what to build and test. (Subresource specs can be inlined vs. embedded; [use] known idioms to evoke API familiarity.)

When a Service specifies source to run, the outputs section can by default be considered to include an image in an operator-configured image repo, named something sensible based on the service name and tagged with the commit hash; the image of the container to run also defaults to that image. ([Use] defaulting to graduate complexity.)

Integrating with CI/CD

This is the part where I'm suggesting we NOT invert the flow control, or at least allow a non-inverted flow control.

Service can also inline the PipelineRef/TaskRef field from the Pipelines API, but generalized so you could point to anything (it shouldn't have to be a Pipeline or Task, because we shouldn't limit ourselves to Pipelines as a CI/CD system). (Henceforth when I say Pipeline please read Pipeline, Task, or other CI/CD primitive; they should be pluggable)

When you specify no PipelineRef, you get the behavior you get today for images, and an error if you attempt to specify source. If you specify a particular traffic split, Knative will immediately make the relevant route(s) reflect it.

When you specify a PipelineRef, that pipeline is in charge of all subresource manipulations. That means the Service is treated as the declarative goal for what should be deployed; the Pipeline is in charge of manipulating the Service's Configuration and Route(s) to match. They're still owned by the Service, but the Service's controller won't touch them directly. Instead, it'll instantiate that Pipeline, and the Pipeline will do its thing.

Regarding the pipeline doing its thing: that thing can even be manipulations of another canary Knative service, or of another three Knative clusters. (Embrace GitOps, or die: we must anticipate workflows where the same resource definitions may be applied across multiple clusters, including release scenarios.)

Knative comes with a special ExternalOperation pipeline. When you specify that pipeline for your Service, it does nothing, and expects a human or a system outside Knative to be manipulating the child Configuration and Route(s) of the service. This is the Manual Mode missing from @mattmoor's presentation.

Example

Here's a source-based service using @mattmoor 's suggestions for v1 as a jumping-off point:

apiVersion: serving.knative.dev/v1beta1
kind: Service
metadata:
  name: thingservice
spec:
  inputs:
    resources:
    - resourceSpec:
        type: git
        params:
        - name: url
          value: https://github.com/wizzbangcorp/thing.git
        - name: revision
          value: lolcommithash
  pipelineRef:
    name: build-it-and-roll-it-out-slowly
    kind: Pipeline
    apiVersion: tekton.dev/v1

This specifies the source. The cluster has gcr.io/stuff configured as the image repo for this namespace, so by default the image will be gcr.io/stuff/thingservice:lolcommithash. The pipeline build-it-and-roll-it-out-slowly is invoked with the source input, the image output, and a reference to this Service; it will build the relevant image and then roll it out slowly on this service (or even several others!).

Transitioning from etcdops to full-on gitops

You download your services from etcd into Git. You set up your same relevant pipelines to trigger off commits to the git repo, with the extra initial Task of kubectl apply-ing the directory. You change all the pipelineRef fields to ExternalOperation.


I plan to edit the above as discussion is ongoing, if we end up with discussion that would make this idea better. Please comment on anything that is either a bad idea or unclear.

kubectl plugin

Hi,
I have no real idea what a kubectl plugin is, but I've heard of them and want to open the discussion, mostly to hear opinions and capture the rationale.
I also learnt about krew, a kubectl plugin manager.

Would it make sense to have a kubectl plugin for Knative?

If so, should it be in addition to kn? or could kn be delivered as a kubectl plugin? or should it replace kn?

Having a kubectl plugin that facilitates the usage of Knative resources could be very Kubernetes... native :)

CC @sixolet

Regroup the command implementations in respective sub-packages

Presently, all the command implementations reside under pkg/kn/commands/. We should re-group the implementations by command group, e.g.:

pkg/kn/commands/service/ <-- implementing all the sub-commands under kn service
pkg/kn/commands/revision/
pkg/kn/commands/build/
pkg/kn/commands/

and so on.

kn plugins

Add ability for the kn CLI to have plugins. This is particularly useful for three important scenarios:

  1. allows users of kn to extend it with new functionality
  2. allows experimentation with commands that we don't want to include as core commands, e.g., $ kn doctor
  3. allows those wanting to package kn with their own offering of Knative to extend kn with additional commands that:
    a. are not yet included in the current release
    b. are experimental
    c. are specific to a cloud environment, e.g., IBM Cloud or Google Cloud

There could be additional reasons to have kn plugins, so please add them here. Also, if you are against such a feature, please share your reasoning here too.

Add DEVELOPMENT.md

It would be good for the repo to have a document that helps new contributors ramp up: for example, how to set up their development workstation, and any tools and requirements.
It would be useful to align on the Go version; for example, serving requires Go 1.12 (released on Feb 25 2019).

The doc should include how to build, configure and run the CLI,
how to run tests (and maybe a TDD approach),
and references to docs/tutorials on the libraries being leveraged, like knative/pkg or k8s client-go.

Here are the DEVELOPMENT.md files in other repos as examples:
https://github.com/knative/serving/blob/master/DEVELOPMENT.md
https://github.com/knative/eventing/blob/master/DEVELOPMENT.md

Consider inverting build/serving control

A more detailed proposal is coming; so far this is a thought experiment.

The current Knative Serving stack sometimes performs a build as a side-effect when deploying a new Revision. When this happens, the Revision is given a reference to the build (which must be a resource in the same cluster) which should block the Revision activation until the build reaches the Succeeded=True condition. This has a number of unfortunate side effects:

  1. The initial "getting started" usage suggests starting with Service and having build be orchestrated by Serving. When applications reach higher levels of maturity and begin using CI systems, this ordering becomes reversed, and the build system generates an image and then applies it to one or more clusters (the rollout could include deployment to both a staging and a production cluster, or to multiple production clusters for redundancy). This creates a "jump" where users throw away their old knowledge rather than building on it.

  2. When Serving orchestrates the build, it is more difficult to feed insights from the build steps (e.g. OpenAPI specifications, resource requirements, etc.) into the Serving deployment. This has a few possible mitigations, but reversing this control would make it easier for builds to contribute resource information to Serving:

    1. This information could be stored in the container image in the container registry. This has the possible benefit of having these settings follow the container image, and the drawback that this is a somewhat obscure location without many good tools for examining it.
    2. This information could be stored in the output build resource and then observed by the Serving controller. This ends up adding requirements to the build API contract as well as additional coupling to the Serving controller.
    3. A build step could find a reference to the requesting Service and modify it, possibly causing an additional Revision to be created. Also, this is really ugly and will probably break badly.
  3. I posit that most users know whether or not they want to deploy new source code, and it might be okay to have different commands for "push new code and configuration" vs "only update configuration". With the current Serving-orchestrates-build, this occasionally means we need client conventions like updating an annotation to kick off a build where it might not otherwise be known that one is needed (e.g. if the same zipfile is used but has new contents). Separating these into "update via build" and "direct update" might simplify things for both client and server.

I prototyped this a bit in https://github.com/evankanderson/pyfun/blob/build-experiment/packaging/build-template.yaml#L33, but that's not a very "production" solution.

A benefit of either Serving-orchestrates-build or build-orchestrates-Serving is that it is possible to deploy new code with a single API call, which reduces the total amount of client workflow and the chances of partially-applied changes compared with a "do a build, then do a deploy" client-driven workflow.

/cc @duglin

pick up config from KUBECONFIG

At the moment, the client doesn't pick anything up from the KUBECONFIG env var.

dev/k8s/knative-client  master ✗                                                                                                                                                                                                                                          1d ⚑  ⍉
▶ export KUBECONFIG=~/admin.conf

dev/k8s/knative-client  master ✗                                                                                                                                                                                                                                           1d ⚑  
▶ ./kn service list             
Error: Get https://192.168.42.79:8443/apis/serving.knative.dev/v1alpha1/namespaces/default/services: dial tcp 192.168.42.79:8443: connect: no route to host
Usage:
  kn service list [flags]

Flags:
      --allow-missing-template-keys   If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
  -h, --help                          help for list
  -o, --output string                 Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. (default "jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}")
      --template string               Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

Global Flags:
      --config string       config file (default is $HOME/.kn.yaml)
      --kubeconfig string   kubectl config file (default is $HOME/.kube/config)
  -n, --namespace string    Namespace to use. (default "default")

Get https://192.168.42.79:8443/apis/serving.knative.dev/v1alpha1/namespaces/default/services: dial tcp 192.168.42.79:8443: connect: no route to host

dev/k8s/knative-client  master ✗                                                                                                                                                                                                                                          1d ⚑  ⍉
▶ kubectl get pods
No resources found.
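For reference, a minimal sketch of building the client config so that the KUBECONFIG env var is honored, using client-go's standard loading rules:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honors $KUBECONFIG (including
	// multi-path lists) and falls back to ~/.kube/config.
	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{} // would be populated from --kubeconfig etc.
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides)

	restConfig, err := cfg.ClientConfig()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("API server:", restConfig.Host)
}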

Add integration tests to match the Basic workflow

This should be structured as the start of the e2e tests and, for the first cut, cover the proposed Basic workflow. So it includes:

  1. create a service
  2. service get / service describe the newly created service
  3. service list and verify the service is in the list
  4. delete the service and verify it is deleted

Subsequent e2e tests can cover additional advanced workflows with more commands, e.g., service update and service replace.
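A minimal skeleton of such a test, assuming the suite shells out to a built ./kn binary (the binary path, helper, and service name are placeholders):

package e2e

import (
	"os/exec"
	"strings"
	"testing"
)

// runKn is a hypothetical helper that runs the kn binary and fails the
// test on a non-zero exit.
func runKn(t *testing.T, args ...string) string {
	t.Helper()
	out, err := exec.Command("./kn", args...).CombinedOutput()
	if err != nil {
		t.Fatalf("kn %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func TestBasicWorkflow(t *testing.T) {
	runKn(t, "service", "create", "hello", "--image", "gcr.io/knative-samples/helloworld-go")
	runKn(t, "service", "get", "hello")
	if !strings.Contains(runKn(t, "service", "list"), "hello") {
		t.Fatal("expected 'hello' in service list")
	}
	runKn(t, "service", "delete", "hello")
	if strings.Contains(runKn(t, "service", "list"), "hello") {
		t.Fatal("expected 'hello' to be deleted")
	}
}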

build error

  1. git clone https://github.com/knative/client.git knative-client (currently at 38932b1)
  2. cd knative-client
  3. go install ./cmd/kn
  4. errors out with output:
...
...
go: downloading github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c
go: downloading github.com/emicklei/go-restful v2.8.0+incompatible
go: downloading github.com/go-openapi/spec v0.18.0
go: verifying github.com/knative/[email protected]: checksum mismatch
	downloaded: h1:kALSwD+P5GLdRj2mtZ2W4ZRRD//Q7nKjLqCPSpihkTY=
	go.sum:     h1:O8llkx4VL7C7AZm5wCxcQ9abz8yq02I62ZU++EAwBHk=

Similarly, go clean -modcache doesn't work (it fails with the same error).

Is this only me (go1.11.4)?

Add a command to install Knative

It would be awesome if kn provided an easy way to install Knative. Even more awesome if it could work with a real Kubernetes cluster or let me choose to spin up Knative locally with minikube.

Command usage, aliases, flags and rules

Let's separate the constants, usage text, aliases, flags and their respective operations, and re-organize them to be reusable across command implementations.

Presently, the command implementations define them all together in the same file.

We should have these items organized properly and imported as needed. This should help in testing as well.

  • Command usage text
  • Command example text
  • Help section structure
  • Aliases
  • Flags
  • Common operations

Help me diagnose common issues with `kn doctor`

Like any distributed system, a Knative install includes a lot of moving parts. An error in any one of them can impact the behavior of the system. It would be nice to have a tool that can audit a cluster and check for common configuration issues.

Simple things to check:

  • is knative serving installed
    • are the controller, webhook, activator and autoscaler ready
    • does the knative-ingressgateway service have an external ip (assuming it's a load balancer)
      • does the cluster's dns name resolve to that external ip
  • is knative build installed
    • are the controller and webhook ready
  • is knative eventing installed
    • are the controller and webhook ready
    • is the default channel provisioner installed
  • is Istio installed
    • are the Istio deployments ready

A first pass can be fairly trivial so long as there is the ability to add in more advanced checks in the future.
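A sketch of what one such check might look like, using client-go to verify that the knative-serving control-plane deployments are ready (the namespace name and the pre-context-argument call signatures reflect 2019-era APIs; treat the details as assumptions):

package doctor

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkServingReady reports any knative-serving deployment (controller,
// webhook, activator, autoscaler, ...) that is not fully ready.
func checkServingReady(kube kubernetes.Interface) error {
	deps, err := kube.AppsV1().Deployments("knative-serving").List(metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("is knative serving installed? %v", err)
	}
	for _, d := range deps.Items {
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		if d.Status.ReadyReplicas < want {
			return fmt.Errorf("deployment %s: %d/%d replicas ready",
				d.Name, d.Status.ReadyReplicas, want)
		}
	}
	return nil
}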

Sample API Usage page is a bit confusing

/kind doc

Expected Behavior

Sample API documentation that works

Actual Behavior

Sample API documentation page here:
https://github.com/knative/serving/blob/master/docs/spec/normative_examples.md#4-deploy-a-revision-from-source

talks about a knative tool that doesn't exist, and users find it confusing. It would seem clearer to update the page to use kubectl example files rather than the tool itself.

Steps to Reproduce the Problem

  1. Bring up this page: https://github.com/knative/serving/blob/master/docs/spec/normative_examples.md#4-deploy-a-revision-from-source
  2. Though it mentions that these are samples for a CLI, it's a bit odd to talk about Sample API Usage and then demo it with a CLI tool that doesn't exist.

Additional Info

service describe

Write the command to describe a service! It should by default print yaml, but have a configurable printer.

Gitops mode

Let's have a mode where you can provide a directory instead of a server to connect to. Instead of modifying objects on the server, kn will modify yaml files in that directory, for the kubectl apply -f pleasure of your friendly local CI/CD system.
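A sketch of the idea, assuming the object would be serialized with sigs.k8s.io/yaml and written into the target directory instead of being sent to the API server (all names are hypothetical):

package gitops

import (
	"io/ioutil"
	"path/filepath"

	"sigs.k8s.io/yaml"
)

// writeInsteadOfApply is a hypothetical sink: in gitops mode kn would
// marshal the object and write it into the directory for a later
// `kubectl apply -f <dir>` instead of calling the API server.
func writeInsteadOfApply(dir, name string, obj interface{}) error {
	data, err := yaml.Marshal(obj)
	if err != nil {
		return err
	}
	return ioutil.WriteFile(filepath.Join(dir, name+".yaml"), data, 0644)
}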
