
k8s-conformance's Introduction

Certified Kubernetes Conformance Program

All vendors are invited to submit conformance testing results for review and certification by the CNCF. If your company provides software based on Kubernetes, we encourage you to get certified today. For more information, please check out cncf.io/ck.

Prepare

Learn about the certification requirements and technical instructions to prepare your product for certification.

Run the tests

The submission requires four files, two of which need to be generated by one of the following two applications.

For a number of years Sonobuoy has been used to generate both the e2e.log and junit_01.xml. Please follow the documentation provided in instructions.md.
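
As a rough illustration (the authoritative steps are in instructions.md, and flag names vary between Sonobuoy releases), a recent Sonobuoy can produce and retrieve these two files roughly like this:

$ sonobuoy run --mode=certified-conformance --wait
$ results=$(sonobuoy retrieve)          # downloads the results tarball and prints its name
$ mkdir ./results && tar xzf "$results" -C ./results
# e2e.log and junit_01.xml are under plugins/e2e/results/ in the extracted archive (exact layout varies by version)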

A lightweight runner for Kubernetes tests. It uses the conformance image(s) released by the Kubernetes release team to run either individual tests or the entire Conformance suite. Check out the project README to learn more.

PR Submit

Please check the instructions for details about how to prepare your PR. Also, note that any submission made to this repo must first pass a number of checks that are verified by the verify conformance bot before it is reviewed by the CNCF.
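
As a sketch of what a submission typically looks like (see instructions.md for the authoritative list of required files), the PR adds a product directory under the release folder containing four files:

vX.Y/$dir/PRODUCT.yaml    # metadata about your product
vX.Y/$dir/README.md       # how to reproduce your conformance run
vX.Y/$dir/e2e.log         # generated by the test runner
vX.Y/$dir/junit_01.xml    # generated by the test runner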

Relocating Historical Conformance Files

In our ongoing effort to optimize the k8s-conformance repository size and enhance the user experience, we have relocated older conformance files to the archive repository. This ensures smoother navigation and access to current content. Details about how this was done can be found in the CNCF blog post, Scaling down a Git repo. A tidy up of cncf/k8s-conformance.

Helpful Resources

Reviewing, approving, and driving changes to the conformance test suite; reviewing, guiding, and creating new conformance profiles.

To help the Kubernetes community understand the range of tests required for a release to be conformant. Each KubeConformance release document contains a list of conformance tests required for that release of Kubernetes. Refer to SIG-Architecture: Conformance Testing in Kubernetes for details around the testing requirements.

The bot currently checks 14 scenarios and updates the PR with the results. This automation provides timely feedback and reduces the time required by the CNCF to confirm that the PR meets all policy requirements.

APISnoop tracks the testing and conformance coverage of Kubernetes by analyzing the audit logs created by e2e test runs.

k8s-conformance's People

Contributors

aojea, bobymcbobs, bsctl, dankohn, dghubble, dmitry-irtegov, embik, gardener-robot-ci-1, gardener-robot-ci-2, gardener-robot-ci-3, gjmzj, gwaines, hswong3i, jgiola, kbarnard10, kwmonroe, legacyrj, liggitt, rtheis, shylajadevadiga, sknop-cgn, smira, soltysh, taylorwaggoner, thdrnsdk, timothysc, tnorlin, vitaliy-sn, vivek-shilimkar, williamdenniss


k8s-conformance's Issues

Conformance tests with DenyEscalatingExec failing

Hey,

When you create a cluster with the DenyEscalatingExec admission controller enabled, the following test fails: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]

Output Logs:

/workspace/anago-v1.12.1-beta.0.52+4ed3216f3ec431/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
failed to execute command in pod test-host-network-pod, container busybox-1: pods "test-host-network-pod" is forbidden: cannot exec into or attach to a container using host network
Expected error:
    <*errors.StatusError | 0xc421c81680>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "pods \"test-host-network-pod\" is forbidden: cannot exec into or attach to a container using host network",
            Reason: "Forbidden",
            Details: {
                Name: "test-host-network-pod",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 403,
        },
    }
    pods "test-host-network-pod" is forbidden: cannot exec into or attach to a container using host network
not to have occurred
/workspace/anago-v1.12.1-beta.0.52+4ed3216f3ec431/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104

Related Issue on Sonobuoy: vmware-tanzu/sonobuoy#558
The question @andrewrynhard and I have: is this result still acceptable for certification, or do we need another solution? Perhaps disable the admission controller, run the tests, then enable it again?
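
One possible workaround (our assumption, not official guidance) would be to temporarily remove DenyEscalatingExec from the API server's admission plugins for the duration of the conformance run and restore it afterwards, e.g. on the kube-apiserver command line (plugin list shortened for illustration):

--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota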

Provide an alternate image location instead of gcr.io

As is well known, users cannot reach gcr.io from China, so it is hard to run the Kubernetes conformance program there.
Could an alternate image location be provided (e.g. Docker Hub), with support for passing parameters to use it?
This would require changes to Sonobuoy and the e2e suite.

Prow Job to Automate Verification of Conformance Log Submissions

It might help to have some automation around Conformance Log Submissions

The verification job should:

  • Ensure the correct version of the server is tested
  • Ensure that all Conformance tests have passed by comparing the list of tests for that release to junit.xml

@johnSchnake is looking into using an e2e dry-run reporter combined with the AST walker we use for conformance documentation generation to generate the list of expected tests per release.
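
A rough sketch of such a comparison (file names and the source of the expected-test list are placeholders; this naive version only checks presence, not pass/fail status):

# expected.txt: one required conformance test name per line for the target release
$ grep -o 'name="[^"]*"' junit_01.xml | sed 's/^name="//; s/"$//' | sort -u > ran.txt
$ sort -u expected.txt | comm -23 - ran.txt    # prints tests that were expected but not found in the results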

/assign @hh

the lack of a 1.7.10 release is blocking a product from becoming conformant

We've worked to identify an issue in 1.7.x and backport a fix that is already available for 1.8.x to 1.7.x. Then, I asked the 1.7 branch release manager when we would have a release and was told it would happen this week, most probably yesterday. I've pinged the same release manager and sig-release over Slack but the former seems missing-in-action and the latter can't seem to unblock the situation.

From a product perspective, we want to be part of the initial batch of 1.7.x conformant solutions and right now we're blocked because of this.

Tracking Issue - Conformance Coverage for Graceful Termination

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Tracking Issue - Conformance Coverage for Networking

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Specifically, address CNI networking and identify tests that test pod-to-pod communication.

Client version matching server version doesn't seem like it captures conformance appropriately

https://github.com/cncf/k8s-conformance/blob/master/reviewing.md

says

Look at version.txt. Make sure the client and server versions match each other to the minor version level. Make sure this also matches the vX.Y subdirectory that the PR is in.

That won't be possible if the client is in a container and is produced at different intervals (using sonobuoy 1.7.3 for example). And last I saw we were recommending release-1.8 as the conformance target?

Can we clarify what version.txt is supposed to be in instructions?
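
For reference, version.txt in past submissions has typically just been the captured output of kubectl version, something like this (illustrative values only):

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", ...}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", ...}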

Kubectl logs should be able to retrieve and filter logs

When I used Sonobuoy for software conformance, the following failure occurred:

• Failure [10.441 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
    should be able to retrieve and filter logs  [Conformance] [It]
    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692

    Expected
        <int>: 21
    to equal
        <int>: 1

    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1232

This problem has been mentioned on this page before, and it was reportedly solved.
Why did I still encounter this problem?
Can anyone help me with this? I would be very grateful.

Suggestion in the folder structure

I have a suggestion for the folder structure.
Currently, we have this:

  • vX.Y/$dir/
    X.Y refers to the Kubernetes major and minor version, and $dir is a short subdirectory name to hold the results for your product. Examples would be gke or openshift.

In our case, we can support multiple platforms like aws, gcp, azure, metal...
My suggestion is to add another level inside $dir to set the platform:

  • vX.Y/$dir/$plat
    where $plat can be aws, azure, vmware, ...

Is this relevant?
Thanks!

Tracking issue - Identify API-Machinery e2e tests to be promoted to conformance and evaluate gaps in coverage

As per guidance from Sig-Arch, we will focus our conformance coverage efforts first on areas where functionality can be swapped out by providers, specifically around testing etcd and watch. Towards this, we will be working with Sig-API-Machinery to:
(i) add conformance coverage for the following two scenarios in 1.11 - PR#63947, PR#61424
(ii) identify API endpoints to prioritize and define meaningful test scenarios / user journeys to automate in 1.12

/cc @jennybuckley @fedebongio @jagosan @timothysc @mithrav
/sig api-machinery

Tracking Issue - Conformance Coverage for Container Image updates

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

conformance test result tar.gz does not contain version.txt

I ran the conformance tests on three Kubernetes clusters deployed on AWS, Azure, and GCE, and consistently the plugins/e2e/results/ folder in the tar archive created by Sonobuoy does not contain the version.txt file.

Steps I followed to run the tests:

  • create a Kubernetes cluster; I deployed Kubernetes v1.8.2
  • run $ curl -L https://raw.githubusercontent.com/cncf/k8s-conformance/master/sonobuoy-conformance.yaml | kubectl apply -f - to start the tests
  • run $ k logs -f -n sonobuoy sonobuoy and wait until it shows "level=info msg="no-exit was specified, sonobuoy is now blocking"
  • run $ kubectl cp sonobuoy/sonobuoy:tmp/sonobuoy/201711031447_sonobuoy_58c39447-19d7-49eb-81a7-5b0ad0bb6de6.tar.gz result.tar.gz to download the test result
  • unpack result.tar.gz
  • the archive does not contain a version.txt file but an empty nethealth.txt in ./plugins/e2e/results/:
$ find .
.
./meta
./meta/config.json
./meta/query-time.json
./meta/run.log
./plugins
./plugins/e2e
./plugins/e2e/results
./plugins/e2e/results/e2e.log
./plugins/e2e/results/junit_01.xml
./plugins/e2e/results/nethealth.txt
./resources
./resources/ns
./resources/ns/default
./resources/ns/kube-public
./resources/ns/kube-system
./resources/ns/sonobuoy
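
If the archive is missing version.txt, one workaround (our assumption, not official guidance) is to capture the version information yourself from the same cluster and include it with the results:

$ kubectl version > version.txt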

Dictionary on the conformance tests (& results)

A dictionary / metadata about the failing conformance tests, with a bit of description, would be really helpful for the operator running the e2e tests.
At the moment, some test results' descriptions are quite comprehensible; others are vague.

Can we add conformance results for k8s 1.6.11?

Hi,
We're currently using k8s 1.6.11 as the base version of our product. Is it valid to run against this version and submit the test results?

As far as I can see, the test results seem fine:

Ran 123 of 697 Specs in 2818.596 seconds
SUCCESS! -- 123 Passed | 0 Failed | 0 Pending | 574 Skipped PASS

Conformance test fails if Docker is started with --log-driver=journald option.

The [sig-cli] test Kubectl logs should be able to retrieve and filter logs fails when Docker is started with the --log-driver=journald option. The --limit-bytes option of the kubectl logs command is ignored, and all of the pod's log is returned instead of just the expected number of bytes, which causes the test to fail. Everything works OK if Docker is started with the --log-driver=json-file option. (This is on CentOS 3.10.0-1062.1.2.el7.x86_64.)
@dankohn
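
The behavior this test exercises can be checked by hand. With a working log driver, something like the following should return exactly one byte, while with journald the byte limit is reportedly ignored (my-pod is just a hypothetical pod name):

$ kubectl logs my-pod --limit-bytes=1 | wc -c
1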

panic when running conformance

#kubectl logs -f -n sonobuoy sonobuoy

time="2018-07-12T18:06:01Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2018-07-12T18:06:01Z" level=info msg="unknown template type" filename=..2018_07_12_18_05_58.722885278
time="2018-07-12T18:06:01Z" level=info msg="unknown template type" filename=..data
time="2018-07-12T18:06:01Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2018-07-12T18:06:01Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2018-07-12T18:06:01Z" level=info msg="Scanning plugins in /sonobuoy/plugins.d (pwd: /)"
time="2018-07-12T18:06:01Z" level=info msg="Directory (
/sonobuoy/plugins.d) does not exist"
time="2018-07-12T18:06:01Z" level=info msg="Loading plugin driver Job"
time="2018-07-12T18:06:01Z" level=info msg="Filtering namespaces based on the following regex:.*|sonobuoy"
panic: Get https://10.0.0.1:443/api/v1/namespaces: dial tcp 10.0.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/heptio/sonobuoy/pkg/discovery.FilterNamespaces(0x18ce0c0, 0xc420136b60, 0xc420019e40, 0xb, 0x1, 0xc420019e40, 0xb)
/go/src/github.com/heptio/sonobuoy/pkg/discovery/utils.go:44 +0x480
github.com/heptio/sonobuoy/pkg/discovery.Run(0x18ce0c0, 0xc420136b60, 0xc4200e68c0, 0x0)
/go/src/github.com/heptio/sonobuoy/pkg/discovery/discovery.go:77 +0x5c4
github.com/heptio/sonobuoy/cmd/sonobuoy/app.runMaster(0xc4203fa900, 0xc42007bb00, 0x0, 0x4)
/go/src/github.com/heptio/sonobuoy/cmd/sonobuoy/app/master.go:63 +0x7c
github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra.(*Command).execute(0xc4203fa900, 0xc42007ba00, 0x4, 0x4, 0xc4203fa900, 0xc42007ba00)
/go/src/github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra/command.go:654 +0x2a2
github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x19174c0, 0x1, 0x1917940, 0x1917b80)
/go/src/github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra/command.go:729 +0x2fe
github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra.(*Command).Execute(0x19174c0, 0xc420501f70, 0xf3ecd4)
/go/src/github.com/heptio/sonobuoy/vendor/github.com/spf13/cobra/command.go:688 +0x2b
main.main()
/go/src/github.com/heptio/sonobuoy/cmd/sonobuoy/main.go:25 +0x2d

K8S: 1.10.5
Docker 1.13.1
OS: CentOS Linux release 7.4.1708

In my setup, after upgrading Docker from 1.12 to 1.13, the default Docker setting left FORWARD at DROP on all hosts, which led to that error. So, if you want to reproduce it, apply DROP instead of ACCEPT.

sudo iptables -P FORWARD ACCEPT

Tracking Issue - Conformance Coverage for Scheduler

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Tracking Issue - Conformance Coverage for Shared Container Namespaces

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Automate the generation of the Conformance Document

As part of the release process, we should generate the conformance document and check it into the Kubernetes documentation.

  • the release build should trigger a prow plugin to create conformance document
  • the generated conformance document should be attached to the release (like a release note)
  • the generated document should also be checked under cncf/k8s-conformance/docs (optional)

Tracking Issue - Conformance Coverage for HostAliases

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

latest version of Sonobuoy does not support v1.15.0

Running plugins: e2e, systemd-logs
ERRO[0000] Preflight checks failed
ERRO[0000] maximum kubernetes version is 1.14.99, got v1.15.0

But the question is: How did Microsoft/aks-engine pass the conformance test?
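
Assuming your Sonobuoy build exposes these flags (they have changed across releases, so treat this as a sketch rather than official guidance), you can either upgrade to a Sonobuoy release that knows about v1.15 or bypass the preflight check and pin the conformance image version explicitly:

$ sonobuoy version                  # check which Sonobuoy release you are running
$ sonobuoy run --skip-preflight --kube-conformance-image-version v1.15.0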

Accepted conformance tests run without full test suite

While looking through past conformance test results to compare against our own, I noticed that some results show that the full test suite was not run.

Here is a set of notable examples in the 1.15 conformance tests :

  • sap-cp-aws log
  • sap-cp-azure log
  • sap-cp-openstack log
  • gardener-azure log
  • gardener-aws log
  • gardener-gcp log
  • gardener-openstack log

All of these tests have in common that they run 212 test cases instead of the 215 test cases the official suite runs. Additionally, the configuration is visible right at the beginning of the log, because the line

Conformance test: not doing test setup.

does not appear when running the official conformance e2e test. Here is a valid example from DigitalOcean for comparison: https://raw.githubusercontent.com/cncf/k8s-conformance/master/v1.15/digitalocean/e2e.log

This brings up several questions to me:

  1. How come these runs got accepted as Kubernetes conformant if the logs quite clearly state that they did not run the full test suite?
  2. How can the process be improved in order to prevent issues like this occurring by accident?

In general, this seems like a very unfortunate / accidental error to me; however, it lessens the value of Certified Kubernetes, in my mind at least.
I do not believe that the parties submitting these conformance results acted in bad faith, but I do think that the verification process for conformance submissions should be improved.
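
A quick, admittedly crude check that a reviewer (or the verification bot) could run against a submitted e2e.log, based on the observations above:

$ grep -c "Conformance test: not doing test setup" e2e.log    # should print 0 for a full official run
$ grep "Ran .* of .* Specs in" e2e.log                        # compare the "Ran N of M" count against the expected 215 for v1.15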

Tracking Issue - Conformance Coverage for PreStop Lifecycle Hooks

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Interaction between preStop lifecycle hooks, terminationGracePeriodSeconds, and activeDeadlineSeconds (the only test that mentions the latter is "should allow activeDeadlineSeconds to be updated").

Tracking Issue - Conformance Coverage for Etcd

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Identify tests that test general and pluggable behavior for persistence with respect to etcd.

Best way to run with user-interactive system?

We want to certify OpenUnison's integration with k8s. When I run the test cases to generate the certification data, should I be interacting with our system (and, in turn, with the API server)? (PS: I'm at KubeCon if someone is here to talk to.)

Thanks

Tracking issue - Identify top Pod APIs to prioritize and increase Conformance coverage in v1.12

As per guidance from Sig-Arch, we will focus our conformance coverage efforts first on areas where functionality can be swapped out by providers, specifically around Pod functionality (Node). Towards this, for v1.12 we are working with Sig-Node to identify API endpoints to prioritize in the first round of conformance test authoring and to define meaningful test scenarios / user journeys to automate.

This issue exists as a tracking reference for the community on issues/PRs related to Node e2e tests needed for conformance effort(s).

/cc @wangzhen127 @dchen1107 @timothysc @mithrav
/sig node
@kubernetes/sig-node-bugs

How can I replace the image registry address with my private registry address?

Hi buddy, I want to run "sonobuoy run", but it does not succeed,
because the servers in our company can't access the image registry address, for example gcr.io.


Events:
  Type     Reason     Age                From                   Message
  ----     ------     ----               ----                   -------
  Normal   Scheduled  28s                default-scheduler      Successfully assigned heptio-sonobuoy/sonobuoy-e2e-job-dcc218c3d2284cfe to kf-app-40-14
  Warning  Failed     27s                kubelet, kf-app-40-14  Failed to pull image "gcr.io/heptio-images/kube-conformance:v1.13": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on [::1]:53: read udp [::1]:15151->[::1]:53: read: connection refused
  Normal   Pulled     27s                kubelet, kf-app-40-14  Container image "gcr.io/heptio-images/sonobuoy:v0.15.1" already present on machine
  Normal   Created    27s                kubelet, kf-app-40-14  Created container
  Normal   Started    27s                kubelet, kf-app-40-14  Started container
  Normal   BackOff    25s (x2 over 26s)  kubelet, kf-app-40-14  Back-off pulling image "gcr.io/heptio-images/kube-conformance:v1.13"
  Warning  Failed     25s (x2 over 26s)  kubelet, kf-app-40-14  Error: ImagePullBackOff
  Warning  Failed     11s (x2 over 27s)  kubelet, kf-app-40-14  Error: ErrImagePull
  Normal   Pulling    11s (x2 over 27s)  kubelet, kf-app-40-14  pulling image "gcr.io/heptio-images/kube-conformance:v1.13"
  Warning  Failed     11s                kubelet, kf-app-40-14  Failed to pull image "gcr.io/heptio-images/kube-conformance:v1.13": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on [::1]:53: read udp [::1]:17586->[::1]:53: read: connection refused

So I want to replace it with my private registry address; can you give me some advice? Thanks.
In addition, I also want to change "imagePullPolicy: IfNotPresent" to "imagePullPolicy: IfNotPresent". Thanks.
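
One approach (hedged; flag names depend on the Sonobuoy version in use) is to mirror the required images into your private registry and point the e2e suite at it through a repository-list file, which newer Sonobuoy releases can generate and consume:

$ sonobuoy gen default-image-config > custom-repos.yaml    # edit the registries to point at your mirror
$ sonobuoy run --e2e-repo-config custom-repos.yaml --mode=certified-conformance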

New output?

The output of the tests used to be something like:

Ran 125 of 782 Specs in 2891.461 seconds
SUCCESS! -- 125 Passed | 0 Failed | 0 Pending | 657 Skipped PASS

Now we have noticed that there are two additional lines at the end:

Ran 125 of 782 Specs in 2877.143 seconds
SUCCESS! -- 125 Passed | 0 Failed | 0 Pending | 657 Skipped PASS

Ginkgo ran 1 suite in 47m57.777851116s
Test Suite Passed

I would like to ask where this string Test Suite Passed comes from, and whether it would be enough to just grep for this string to verify a successful test run.
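
For what it's worth, Test Suite Passed is printed by Ginkgo itself after the per-suite summary. A slightly more robust check than grepping for that string alone (a sketch, not an official verification step) is to also confirm the summary line reports zero failures:

$ grep -E 'SUCCESS! -- .* 0 Failed' e2e.log && grep -q 'Test Suite Passed' e2e.log && echo "run looks clean"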

Tracking Issue - Conformance Coverage for Kube-proxy

This is a placeholder to track conformance coverage on this topic.
This issue should remain open until all related work is accomplished in the k/k repo.
This issue will contain an analysis of the coverage: existing tests, and additional tests requested, with links to those issues/PRs.

Conformance test names and descriptions are redundant, missing or inconsistent

ref: https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md#sample-conformance-test

I would like all of the existing conformance tests to have consistent formatting and descriptions. Someone should be able to read this doc and feel like all behaviors of a given resource are adequately captured in it.

I am interested first and foremost on these changes being made for 1.12 and forward. I am open to these changes being backported to earlier release branches just to update docs, so that by reading this report someone can see what new conformance functionality was added (vs. the entire doc changing). I am not open to any test functionality changes being backported.

I plan on having issues in k/k link back to this as an umbrella issue.


Examples from https://github.com/cncf/k8s-conformance/blob/master/docs/KubeConformance-1.9.md

ServiceAccounts should allow opting out of API token automount

Has no description

configmap-in-env-field
Make sure config map value can be used as an environment variable in the container (on container.env field)

Title isn't human readable

Release : v1.9

Nowhere do any of these display what release they were added in.


One nit I have to pick is the redundancy we have in the example in the linked guidelines

  • the ginkgo Describe(...) title: Kubelet
  • the optional ginkgo Context(...) info: when scheduling a busybox command in a pod
  • the ginkgo It(...) behavior: it should print the output to logs
  • the recommended Testname comment: Kubelet: log output
  • the recommended Description comment: By default the stdout and stderr from the process being executed in a pod MUST be sent to the pod's logs.

That's five pieces of redundant info to somehow keep in sync, let alone the actual test code. It's worth noting the suggested comments aren't even in the codebase today, and the test in question doesn't actually verify the stderr part.

Can we simplify this at all before I unleash the hounds on standardizing everything?

How to submit conformance test results multiple times based on maintenance version

If one Kubernetes distribution has shipped two releases based on Kubernetes 1.11, one based on Kubernetes 1.11.2 and another based on Kubernetes 1.11.3, then how should the conformance tests be submitted?

Let us say the version is in the following format:

major.minor.maintenance

I found that the conformance submissions only include a version like major.minor, but if I want to submit conformance results twice with the same major.minor version and different maintenance versions, how do I proceed?

Conformance tests fail with unexpected EOF

Steps I executed

  1. Deployed the yaml in my cluster sonobuoy-conformance.yaml
    kubectl apply -f sonobuoy-conformance.yaml
    Output
    namespace "sonobuoy" configured
    serviceaccount "sonobuoy-serviceaccount" unchanged
    clusterrolebinding "sonobuoy-serviceaccount" configured
    clusterrole "sonobuoy-serviceaccount" configured
    configmap "sonobuoy-config-cm" unchanged
    configmap "sonobuoy-plugins-cm" unchanged
    pod "sonobuoy" configured
    service "sonobuoy-master" unchanged

Now, when I run the following command, it just fails with unexpected EOF. It is as if the e2e tests just close the connection. Is 0.0.0.0:8080 the right IP:port to listen on within the pod?
kubectl logs -f -n sonobuoy sonobuoy

time="2018-05-31T00:54:22Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2018-05-31T00:54:22Z" level=info msg="unknown template type" filename=..2018_05_31_00_54_04.304406279
time="2018-05-31T00:54:22Z" level=info msg="unknown template type" filename=..data
time="2018-05-31T00:54:22Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2018-05-31T00:54:22Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2018-05-31T00:54:22Z" level=info msg="Scanning plugins in /sonobuoy/plugins.d (pwd: /)"
time="2018-05-31T00:54:22Z" level=info msg="Directory (
/sonobuoy/plugins.d) does not exist"
time="2018-05-31T00:54:22Z" level=info msg="Loading plugin driver Job"
time="2018-05-31T00:54:22Z" level=info msg="Filtering namespaces based on the following regex:.*|sonobuoy"
time="2018-05-31T00:54:22Z" level=info msg="Namespace default Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace ingress-nginx Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace jx Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace jx-production Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace jx-staging Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace kube-public Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace kube-system Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Namespace sonobuoy Matched=true"
time="2018-05-31T00:54:22Z" level=info msg="Starting server Expected Results: [{ e2e}]"
time="2018-05-31T00:54:22Z" level=info msg="Running (e2e) plugin"
time="2018-05-31T00:54:22Z" level=info msg="Listening for incoming results on 0.0.0.0:8080\n"
error: unexpected EOF

AKS naming collision

It looks like in the 1.13 conformance results, the aks directory name (which was used by Azure Kubernetes Service prior to 1.13) has now been used by Alauda Kubernetes (AKS).
Reference:

https://github.com/cncf/k8s-conformance/tree/master/v1.13/aks
https://github.com/cncf/k8s-conformance/tree/master/v1.12/aks
https://github.com/cncf/k8s-conformance/tree/master/v1.11/aks
https://github.com/cncf/k8s-conformance/tree/master/v1.10/aks

I suggest that we prefix aks with the provider name, e.g. azure-aks and alauda-aks.

cc @taylorwaggoner @jing2uo
