knative / client
Knative developer experience, docs, reference: Knative CLI implementation
License: Apache License 2.0
We'll need a human-friendly printer for resource listing, along with all the printers from genericclioptions (https://godoc.org/k8s.io/cli-runtime/pkg/genericclioptions).
For example, as it's done in Kubernetes: https://github.com/kubernetes/kubernetes/tree/master/pkg/printers
The kn service create command doesn't have an example listed in its help output. Let's add one.
This would enable users of Tekton and Google Cloud Build to more easily invoke kn
, e.g.:
...
steps:
- name: kn-deploy-image
  image: gcr.io/knative-something/kn:v0.X.Y
  command: ['kn', 'service', 'update', 'foo', '--image', 'gcr.io/my/image']
...
Assuming kn
doesn't depend on any other tools, you could build and push the image using ko publish
(and crane copy
to shorten the image name):
$ KO_DOCKER_REPO=gcr.io/knative-something/ ko publish -P github.com/knative/client/cmd/kn
$ crane copy gcr.io/knative-something/github.com/knative/client/cmd/kn gcr.io/knative-something/kn:${RELEASE}
When using commands which require a resource name as an argument, like kn revision get <revision> or kn service get <srv> (or kn service list, if not merged together with kn service get as proposed in #48), having wildcard evaluation on the argument would be very helpful, especially when dealing with resource names that have autogenerated suffixes (like revisions).
More generally, glob semantics like those of common Unix commands such as ls would be very useful IMO:
kn revision get                   --> get all revisions
kn revision get *                 --> same as above
kn revision get myrevision-1-9865 --> get a specific revision
kn revision get myrevision-1-*    --> get revisions matching the wildcard
kn revision get myrevision*       --> filter on revisions, possibly returning multiple entries
$ ./kn service create hello-example --image=gcr.io/knative-samples/helloworld-go
$ ./kn service delete hello-example
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x10b00f4]
goroutine 1 [running]:
github.com/knative/client/pkg/kn/commands.NewServiceDeleteCommand.func1(0xc00011fb80, 0xc0003f9800, 0x1, 0x1, 0x0, 0x0)
/home/knakayam/.go/src/github.com/knative/client/pkg/kn/commands/service_delete.go:44 +0x104
github.com/spf13/cobra.(*Command).execute(0xc00011fb80, 0xc0003f97a0, 0x1, 0x1, 0xc00011fb80, 0xc0003f97a0)
/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:762 +0x465
github.com/spf13/cobra.(*Command).ExecuteC(0xc000155680, 0x0, 0x0, 0xc000155680)
/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
/home/knakayam/.go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
/home/knakayam/.go/src/github.com/knative/client/cmd/kn/main.go:29 +0x45
Change:
Service hello in namespace default deleted
to
Service 'hello' in namespace 'default' deleted
In a way that doesn't seem to relate to any client code
Write the command to create a service!
implement kn service get <service_name>
Currently, the client's dependency on knative serving is still 0.3.0 in go.mod.
Write the command to replace a service! It should take all new arguments just like create!
kn service list
Error: Get https://192.168.39.212:8443/apis/serving.knative.dev/v1alpha1/services: dial tcp 192.168.39.212:8443: connect: no route to host
Get https://192.168.39.212:8443/apis/serving.knative.dev/v1alpha1/services: dial tcp 192.168.39.212:8443: connect: no route to host
Implement kn service delete <service_name>
instead of
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
let's do
_ "k8s.io/client-go/plugin/pkg/client/auth"
since that includes all client-go known auth providers
I'm not too sure what I'm doing wrong. I'm using go 1.11.5
and this is on master:
dev/k8s/knative-client master ✔ 15h29m ⍉
▶ go test ./pkg/...
go: finding k8s.io/client-go/tools/metrics latest
go: finding k8s.io/client-go/transport latest
go: finding k8s.io/client-go/util/jsonpath latest
go: finding k8s.io/client-go/util/flowcontrol latest
go: finding k8s.io/client-go/testing latest
go: finding k8s.io/client-go/util/homedir latest
go: finding k8s.io/client-go/tools/clientcmd latest
go: finding k8s.io/client-go/util/cert latest
go: finding k8s.io/client-go/util/connrotation latest
go: finding k8s.io/client-go/tools/clientcmd/api latest
go: finding k8s.io/client-go/tools latest
go: finding k8s.io/client-go/util latest
go: finding k8s.io/client-go/tools/cache latest
# github.com/knative/serving/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_configuration.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_revision.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_route.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_service.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
FAIL github.com/knative/client/pkg/kn/commands [build failed]
and using the other command go test ./...
dev/k8s/knative-client master ✗ 15h29m ⚑ ⍉
▶ go test ./...
# github.com/knative/serving/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_configuration.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_revision.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_route.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
../../go/pkg/mod/github.com/knative/[email protected]/pkg/client/clientset/versioned/typed/serving/v1alpha1/fake/fake_service.go:131:44: not enough arguments in call to testing.NewPatchSubresourceAction
have (schema.GroupVersionResource, string, string, []byte, []string...)
want (schema.GroupVersionResource, string, string, types.PatchType, []byte, ...string)
? github.com/knative/client/cmd/kn [no test files]
FAIL github.com/knative/client/pkg/kn/commands [build failed]
KUBECONFIG='' ./kn service list
ERROR: logging before flag.Parse: W0302 11:05:09.100519 14619 client_config.go:548] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
ERROR: logging before flag.Parse: W0302 11:05:09.100769 14619 client_config.go:553] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Error: invalid configuration: no configuration has been provided
[..]
Lines starting with 'ERROR: logging before' seem to be some kind of internal error messaging that's popping up. It should be fixed.
If an operation invoked via the CLI fails, the CLI prints the error and then prints the help/usage text as well.
This makes reading the error about the operation more difficult: one has to locate what went wrong in the text printed on stdout. Apparently this is the default behavior of Cobra. This issue is to track controlling that default usage-printing behavior.
As a user I want to deploy a pre-built container from a container registry into OpenShift using Knative APIs, so that my container can be auto-scaled based on events and be accessible through a URL.
Input:
Given: Arg : Service name (mandatory)
Given: Flag: Container repository URL (mandatory)
Given: Flag: Env vars (strings) (optional)
Given: Flag: Concurrency (optional)
Output:
Create a kservice that deploys such a container and exposes it via a URL (assume a web-app use case)
Sample commands:
Deploy image <IMAGE_URL>:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL>
Deploy <IMAGE_URL> with environment variable ENV1=ENV1_VALUE:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL> --env ENV1=ENV1_VALUE
Deploy with a concurrency limit of 5:
kn service create -s <SERVICE_NAME> --image <IMAGE_URL> --concurrency 5
Acceptance:
➜ kubectl create namespace n2
namespace/n2 created
➜./kn service create svc1 -n n2 --image dev.local/ns/i:v1
# bug: list without any option returns services from all namespaces
➜./kn service list
s1
s2
svc1
➜ ./kn service list -n n2
svc1
➜ ./kn service list -n default
s1
s2
Write the command to update a service! It should never change the code you are running unless you specifically specify a new image/build.
kn service list lists only the names of services. It could add new columns to the output to print additional information about each service.
This is a proposal that is a counterpoint to #25
It's also drawing heavily from @mattmoor and his thoughts about our next Serving API rev, even if it disagrees about this one point.
When you start using Knative, your day 1 experience probably doesn't involve a GitOps repository. On day one, you're using kn
to deploy directly, and using the etcd of your Kubernetes cluster as your source of truth about what should be serving, and at what percentages, and under what subdomains.
As you grow, you may end up setting up a github repository, and transitioning your source of truth over to that.
Both of these are valid workflows for various orgs and maturities. Transitioning between them should be smooth, and not require a phase change (@evankanderson's point 1 in #25)
The serving API is a language for describing what programs you want to run, accessible under what names, with what traffic split, configured how. Knative contains a way of interpreting that language into action on a kubernetes cluster too, but the language itself is useful even without the particular controllers in the serving repo. The PodSpec-able nature of @mattmoor's proposed v1 API is great for when you want to declare the thing you run as a fully-formed image, but sometimes the image is just a side effect; the actual main description of the program you want to be running is a particular commit to a source git repo.
A CI/CD system, on the other hand, describes a process, not a declaration of what you want running. Each run of the CI/CD pipeline might build a thing to run, and set some percentages, but what you actually run is something like the most recent of those to complete successfully, or even worse, ill-defined.
In your declarative serving-API language, you should declare exactly what code you want to be running. It should be up to your CI system to make a tested deployable artifact for you, safely.
In your declarative serving-API language, you should declare exactly what percent of traffic should be going to your program. It should be up to your CD system to figure out how to make that so, safely.
Therefore, the serving API specifies what a CI/CD system should accomplish. The serving API knows what, the CI/CD system knows how.
(with reference to @mattmoor v1 API principles/opinions)
Let Service inline the inputs
and outputs
parts of TaskRun
/PipelineRun
from the Pipelines API. This is for specifying source instead of image to run, and is how the CI system knows what to build and test. (Subresource specs can be inlined vs. embedded, [Use] [known] idioms to evoke API familiarity)
When a Service specifies source to run, the outputs section can be considered, by default, to include an image in an operator-configured image repo, named something sensible based on the service name and tagged with the commit hash; the image of the container to run also defaults to that image. ([Use] defaulting to graduate complexity)
This is the part where I'm suggesting we NOT invert the flow control, or at least allow a non-inverted flow control.
Service can also inline the PipelineRef
/TaskRef
field from the Pipelines API, but generalized so you could point to anything (it shouldn't have to be a Pipeline or Task, because we shouldn't limit ourselves to Pipelines as a CI/CD system). (Henceforth when I say Pipeline please read Pipeline, Task, or other CI/CD primitive; they should be pluggable)
When you specify no PipelineRef
, you get the behavior you get today for images, and an error if you attempted to specify source. Specifying a particular traffic split, Knative will immediately make the relevant route(s) reflect it.
When you specify a PipelineRef
, that pipeline is in charge of all subresource manipulations. That means that the Service is treated as the declarative goal for what should be deployed; the Pipeline is in charge of manipulating the Service's Configuration and Route(s) to match. They're still owned by the Service, but the Service's controller won't touch them directly. Instead, it'll instantiate that Pipeline, and the Pipeline will do its thing.
Regarding the pipeline doing its thing, that thing can even be manipulations to another canary Knative service, or even manipulations to another three Knative clusters. (Embrace Gitops, or die: We must anticipate workflows where the same resource definitions may be applied across multiple clusters, including release scenarios.)
Knative comes with a special ExternalOperation pipeline. When you specify your Service has that pipeline, it does nothing, and expects a human or system outside Knative to be manipulating the child Configuration and Route(s) of the service. This is the Manual Mode missing from @mattmoor 's presentation.
Here's a source-based service using @mattmoor 's suggestions for v1 as a jumping-off point:
apiVersion: serving.knative.dev/v1beta1
kind: Service
metadata:
  name: thingservice
spec:
  inputs:
    resources:
    - resourceSpec:
        type: git
        params:
        - name: url
          value: https://github.com/wizzbangcorp/thing.git
        - name: revision
          value: lolcommithash
  pipelineRef:
    name: build-it-and-roll-it-out-slowly
    kind: Pipeline
    apiVersion: tekton.dev/v1
This specifies the source. The cluster has gcr.io/stuff configured as the image repo for this namespace, so the image is going to be by default gcr.io/stuff/thingservice:lolcommithash. The pipeline build-it-and-roll-it-out-slowly is invoked with the source input, the image output, and a reference to this Service, which will build the relevant image and then roll it out slowly on this service (or even several others!).
You download your services from etcd into Git. You set up the same relevant pipelines to trigger off commits to the git repo, with the extra initial Task of kubectl apply-ing the directory. You change all the pipelineRef fields to ExternalOperation.
I plan to edit the above as discussion is ongoing, if we end up with discussion that would make this idea better. Please comment on anything that is either a bad idea or unclear.
Hi,
I have no real idea what a kubectl plugin is, but I heard of them and want to open the dialog, mostly to hear opinions and capture the rationale.
I also learnt about krew
, a kubectl plugin manager.
Would it make sense to have a kubectl plugin for Knative?
If so, should it be in addition to kn
? or could kn
be delivered as a kubectl plugin? or should it replace kn
?
Having a kubectl plugin that facilitates the usage of Knative resources could be very Kubernetes... native :)
CC @sixolet
Presently, all the command implementations reside under pkg/kn/commands/; we should re-group the implementations based on command group.
For example:
pkg/kn/commands/service/  <-- implementing all the sub-commands under kn service
pkg/kn/commands/revision/
pkg/kn/commands/build/
and so on.
Add the ability for the kn CLI to have plugins. This is particularly useful for three important scenarios:
- users of kn who want to extend it with functionality, e.g. $ kn doctor
- vendors who bundle kn with their own offering of Knative and want to extend kn with additional commands
There could be additional reasons to have plugins for kn, so please add them here. Also, if you are against such a feature, please comment your reasoning here too.
We have a --force option when creating a service; the ConfigurationEditFlags structure can be restructured as suggested here: #79 (comment)
Something like https://github.com/knative/serving/blob/1034bf9b3fc304a944cfccb078198bb26c36ce31/DEVELOPMENT.md#iterating
Especially since we're different from other Knative repos - I was able to figure out, with some internet searching, that to update test-infra in my particular case I needed:
GO111MODULE=on go get -u github.com/knative/test-infra
It would be good for the repo to have a document to help new contributors ramp up, for example how to set up their development workstation, and any tools and requirements.
It would be useful to align on the Go version; for example, serving requires Go 1.12 (released on Feb 25 2019).
The doc could include how to build, configure and run the CLI; running tests (and maybe a TDD approach); and references to docs/tutorials on libraries being leveraged, like knative/pkg or k8s client-go.
Here are the DEVELOPMENT.md files in other repos as examples:
https://github.com/knative/serving/blob/master/DEVELOPMENT.md
https://github.com/knative/eventing/blob/master/DEVELOPMENT.md
Use --async
to not wait.
A more detailed proposal coming; so far this is a thought experiment.
The current Knative Serving stack sometimes performs a build as a side-effect when deploying a new Revision. When this happens, the Revision is given a reference to the build (which must be a resource in the same cluster) which should block the Revision activation until the build reaches the Succeeded=True
condition. This has a number of unfortunate side effects:
The initial "getting started" usage suggests starting with Service and having build be orchestrated by Serving. When applications reach higher levels of maturity and begin using CI systems, this ordering becomes reversed, and the build system generates an image and then applies it to one or more clusters (the rollout could include deployment to both a staging and a production cluster, or to multiple production clusters for redundancy). This creates a "jump" where users throw away their old knowledge rather than building on it.
When Serving orchestrates the build, it is more difficult to feed insights from the build steps (i.e. OpenAPI specifications, resource requirements, etc) into the Serving deployment. This has a few possible mitigations, but reversing this control would make it easier for builds to contribute resource information to Serving:
I posit that most users know whether or not they want to deploy new source code, and it might be okay to have different commands for "push new code and configuration" vs "only update configuration". With the current Serving-orchestrates-build, this occasionally means we need client conventions like updating an annotation to kick off a build where it might not otherwise be known (i.e. if the same zipfile is used but has new contents). Separating these into "update via build" and "direct update" might simplify things for both client and server.
I prototyped this a bit in https://github.com/evankanderson/pyfun/blob/build-experiment/packaging/build-template.yaml#L33, but that's not a very "production" solution.
A benefit of either Serving-orchestrates-build or build-orchestrates-Serving is that it is possible to deploy new code with a single API call, which reduces the total amount of client workflow and the chance of partially-created changes when compared with a "do a build, then do a deploy" client-driven workflow.
/cc @duglin
At the moment the client doesn't pick anything up from the KUBECONFIG env var.
dev/k8s/knative-client master ✗ 1d ⚑ ⍉
▶ export KUBECONFIG=~/admin.conf
dev/k8s/knative-client master ✗ 1d ⚑
▶ ./kn service list
Error: Get https://192.168.42.79:8443/apis/serving.knative.dev/v1alpha1/namespaces/default/services: dial tcp 192.168.42.79:8443: connect: no route to host
Usage:
kn service list [flags]
Flags:
--allow-missing-template-keys If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
-h, --help help for list
-o, --output string Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file. (default "jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}")
--template string Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
Global Flags:
--config string config file (default is $HOME/.kn.yaml)
--kubeconfig string kubectl config file (default is $HOME/.kube/config)
-n, --namespace string Namespace to use. (default "default")
Get https://192.168.42.79:8443/apis/serving.knative.dev/v1alpha1/namespaces/default/services: dial tcp 192.168.42.79:8443: connect: no route to host
dev/k8s/knative-client master ✗ 1d ⚑ ⍉
▶ kubectl get pods
No resources found.
This should be structured as the start of e2e tests and for the first cut cover the proposed basic workflow. So it includes:
- create service
- service get / service describe the newly created service
- service list and verify that service is in the list
- delete service and verify it is deleted
Subsequent e2e tests can follow additional advanced workflows with more commands, e.g., service update, service replace.
kn revision list
lists only the names of the revisions. Other useful information about the revision could be printed.
git clone https://github.com/knative/client.git knative-client   # currently at 38932b1
cd knative-client
go install ./cmd/kn
...
...
go: downloading github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c
go: downloading github.com/emicklei/go-restful v2.8.0+incompatible
go: downloading github.com/go-openapi/spec v0.18.0
go: verifying github.com/knative/[email protected]: checksum mismatch
downloaded: h1:kALSwD+P5GLdRj2mtZ2W4ZRRD//Q7nKjLqCPSpihkTY=
go.sum: h1:O8llkx4VL7C7AZm5wCxcQ9abz8yq02I62ZU++EAwBHk=
Similarly, go clean -modcache doesn't work (fails with the same error).
Is this only me (go1.11.4)?
It would be awesome if kn
provided an easy way to install Knative. Even more awesome if it could work with a real Kubernetes cluster or let me choose to spin up Knative locally with minikube.
Let's separate the constants, usage text, aliases, flags and their respective operations, and re-organize them to be reusable across command implementations.
Presently, the command implementations define them all together in the same file.
We should have these items organized properly and imported as needed. This should help with testing as well.
Command usage text
Command example text
Help section structure
Aliases
Flags
Common operations
Like any distributed system, a Knative install includes a lot of moving parts. An error in any one of them can impact the behavior of the system. It would be nice to have a tool that can audit a cluster and check for common configuration issues.
Simple things to check:
A first pass can be fairly trivial so long as there is the ability to add in more advanced checks in the future.
Write a command to describe a revision!
/kind doc
Sample API documentation that works
The sample API documentation page here:
https://github.com/knative/serving/blob/master/docs/spec/normative_examples.md#4-deploy-a-revision-from-source
talks about a knative tool that doesn't exist, and users find it confusing. It would seem clearer to update it to use kubectl example files rather than the tool itself.
kn service create
returns without any hint; let's add a message about the service creation status.
For example:
Created <SERVICE_NAME> in namespace.
Write the command to describe a service! It should by default print yaml, but have a configurable printer.
As we do that, we should conform it to the commands we plan to build in the reference CLI.
Adding this issue to track setting up CI-testing for this repo.
Let's have a mode where you can provide a directory instead of a server to connect to. Instead of modifying objects on the server, kn
will modify yaml files in the directory instead, for the kubectl apply -f
pleasure of your friendly local CI/CD system.
Provide a standard mechanism for tools and libraries to self-report their identity and version. This will allow platforms built on knative to identify what developers are using to deploy services.
Have docs for CLI in the docs repo publish to website