doitintl / kube-no-trouble

Easily check your clusters for use of deprecated APIs

License: MIT License


kube-no-trouble's Introduction

Kubent (Kube-No-Trouble) logo

Easily check your clusters for use of deprecated APIs

Kubernetes 1.16 is slowly starting to roll out, not only across various managed Kubernetes offerings, and with it comes a number of API deprecations.

Kube No Trouble (kubent) is a simple tool to check whether you're using any of these API versions in your cluster and therefore should upgrade your workloads first, before upgrading your Kubernetes cluster.

This tool's ability to detect deprecated APIs depends on how you deploy your resources, as the original manifest needs to be stored somewhere. In particular, the following tools are supported:

  • file - local manifests in YAML or JSON
  • kubectl - uses the kubectl.kubernetes.io/last-applied-configuration annotation
  • Helm v3 - uses Helm manifests stored as Secrets or ConfigMaps directly in individual namespaces
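
To illustrate what the kubectl collector reads: `kubectl apply` stores the original manifest as JSON in the last-applied-configuration annotation. The snippet below extracts it from an inline sample object with jq (a real run would pipe `kubectl get <kind> <name> -o json` instead):

```shell
# Extract the last-applied-configuration annotation that `kubectl apply`
# stores on a resource. The JSON here is an inline sample; on a real
# cluster you would pipe `kubectl get deploy <name> -o json` instead.
echo '{
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1beta1\",\"kind\":\"Deployment\"}"
    }
  }
}' | jq -r '.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"]'
```

This prints the stored manifest, including its original apiVersion (apps/v1beta1 in this sample), which is what kubent inspects for deprecations.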


Install

Run the following command in your terminal to install kubent using a shell script:

sh -c "$(curl -sSL https://git.io/install-kubent)"

(The script will download the latest version and unpack it to /usr/local/bin.)

Manual Installation

You can download the latest release for your platform and unpack manually.

Third-Party Installation

Please note that third-party installation methods are maintained by the community. The packages may not always be up-to-date with the latest releases of kubent.

Homebrew

kubent is available as a formula on Homebrew. If you're using macOS or Linux, you can run the following command to install kubent:

brew install kubent

Usage

Configure kubectl's current context to point to your cluster; kubent will look for the kubeconfig file in standard locations (you can point it to a custom location using the -k switch).

kubent will collect resources from your cluster and report on found issues.

Please note that you need sufficient permissions to read Secrets in the cluster in order to use the Helm collectors.

$ ./kubent
6:25PM INF >>> Kube No Trouble `kubent` <<<
6:25PM INF Initializing collectors and retrieving data
6:25PM INF Retrieved 103 resources from collector name=Cluster
6:25PM INF Retrieved 0 resources from collector name="Helm v3"
6:25PM INF Loaded ruleset name=deprecated-1-16.rego
6:25PM INF Loaded ruleset name=deprecated-1-20.rego
__________________________________________________________________________________________
>>> 1.16 Deprecated APIs <<<
------------------------------------------------------------------------------------------
KIND         NAMESPACE     NAME                    API_VERSION
Deployment   default       nginx-deployment-old    apps/v1beta1
Deployment   kube-system   event-exporter-v0.2.5   apps/v1beta1
Deployment   kube-system   k8s-snapshots           extensions/v1beta1
Deployment   kube-system   kube-dns                extensions/v1beta1
__________________________________________________________________________________________
>>> 1.20 Deprecated APIs <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME           API_VERSION
Ingress   default     test-ingress   extensions/v1beta1

Arguments

You can list all the available configuration options using the --help switch:

$ ./kubent -h
Usage of ./kubent:
  -A, --additional-annotation strings   additional annotations that should be checked to determine the last applied config
  -a, --additional-kind strings         additional kinds of resources to report in Kind.version.group.com format
  -c, --cluster                         enable Cluster collector (default true)
  -x, --context string                  kubeconfig context
  -e, --exit-error                      exit with non-zero code when issues are found
  -f, --filename strings                manifests to check, use - for stdin
      --helm3                           enable Helm v3 collector (default true)
  -k, --kubeconfig string               path to the kubeconfig file
  -l, --log-level string                set log level (trace, debug, info, warn, error, fatal, panic, disabled) (default "info")
  -o, --output string                   output format - [text|json|csv] (default "text")
  -O, --output-file string              output file, use - for stdout (default "-")
  -t, --target-version string           target K8s version in SemVer format (autodetected by default)
  -v, --version                         prints the version of kubent and exits

  • -A, --additional-annotation Check additional annotations for the last applied configuration. This can be useful if a resource was applied with a tool other than kubectl. The flag can be used multiple times.

  • -a, --additional-kind Tells kubent to flag additional custom resources when found in the specified version. The flag can be used multiple times. The expected format is full Kind.version.group.com form - e.g. -a ManagedCertificate.v1.networking.gke.io.

  • -x, --context Select context from kubeconfig file (current-context from the file is used by default).

  • -k, --kubeconfig Path to the kubeconfig file to use. This takes precedence over the KUBECONFIG environment variable, which is also supported and can contain multiple paths, and over the default ~/.kube/config.

  • -t, --target-version Kubent will try to detect the K8s cluster version and display only relevant findings. This flag allows you to override the detected version, for scenarios like use in CI with the file collector only, when detection from an actual cluster is not possible. Expected format is major.minor[.patch], e.g. 1.16 or 1.16.3.

Docker Image

We also publish an official container image, which can be found at ghcr.io/doitintl/kube-no-trouble:latest (also available tagged with each individual release version).

To run locally, you'll need to provide credentials, e.g. by sharing your kubectl config:

$ docker run -it --rm \
    -v "${HOME}/.kube/config:/.kubeconfig" \
    ghcr.io/doitintl/kube-no-trouble:latest \
    -k /.kubeconfig

You can use kubectl run to run kubent inside a K8S cluster, as a one-off. In that case the credentials will be picked up from the environment via the pod's service account, but you'll want to grant the relevant permissions first (see docs/k8s-sa-and-role-example.yaml):

$ kubectl run kubent --restart=Never --rm -i --tty \
    --image ghcr.io/doitintl/kube-no-trouble:latest \
    --overrides='{"spec": {"serviceAccount": "kubent"}}'
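
The referenced example file isn't reproduced here, but the access kubent needs is roughly the following. This is a hypothetical minimal sketch (names and rule scope are assumptions); check docs/k8s-sa-and-role-example.yaml for the actual definition:

```yaml
# Hypothetical minimal RBAC sketch - names and rule scope are assumptions;
# see docs/k8s-sa-and-role-example.yaml in the repository for the real thing.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubent
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubent
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list"]   # read-only; Secrets access is needed for the Helm collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubent
subjects:
  - kind: ServiceAccount
    name: kubent
    namespace: default
```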

Use in CI

Exit codes

kubent will by default return a 0 exit code if the program succeeds, even if it finds deprecated resources, and a non-zero exit code if there is an error during runtime. Because all informational output goes to stderr, it's easy to check in a shell whether any issues were found:

test -z "$(kubent)"                 # empty stdout means no issues were found
                                    # equivalent to [ -z "$(kubent)" ]

It's actually better to split this into two steps, in order to differentiate between a runtime error and found issues:

if ! OUTPUT="$(kubent)"; then       # check for non-zero return code first
  echo "kubent failed to run!"
elif [ -n "${OUTPUT}" ]; then       # check for empty stdout
  echo "Deprecated resources found"
fi

You can also use the --exit-error (-e) flag, which will make kubent exit with a non-zero return code (200) in case any issues are found.
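
That exit code can be handled explicitly in a shell step. A minimal sketch - the kubent function below is a stub standing in for the real binary so the control flow can be demonstrated:

```shell
# Stub standing in for the real kubent binary (hypothetical - delete this
# function to run the same logic against an actual cluster).
kubent() { return 200; }

rc=0
kubent -e || rc=$?

if [ "$rc" -eq 200 ]; then
  echo "deprecated resources found"
elif [ "$rc" -ne 0 ]; then
  echo "kubent failed to run"
fi
```

With the stub in place this prints "deprecated resources found"; in CI you would typically fail the job in that branch.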

Alternatively, use the JSON output and a tool like jq to check whether the result is empty:

kubent -o json | jq -e 'length == 0'
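
jq's -e flag sets the exit code from the filter's result, which is what fails the CI step when the array is non-empty. Demonstrated below with echoed JSON standing in for live kubent output:

```shell
# `jq -e` sets the exit code from the filter result: true -> 0, false -> 1.
# Echoed JSON stands in for `kubent -o json` output here.
echo '[]' | jq -e 'length == 0' >/dev/null && echo "no issues"
echo '[{"Name":"test-ingress"}]' | jq -e 'length == 0' >/dev/null || echo "issues found"
```

This prints "no issues" for the empty array and "issues found" for the non-empty one.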

Scanning all files in directory

If you want to scan all files in a given directory, you can use the following shell snippet:

FILES=(*.yaml); kubent "${FILES[@]/#/-f}" --helm3=false -c=false
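
The `${FILES[@]/#/-f}` expansion prepends -f to each element of the array. A minimal sketch of that expansion, with hypothetical filenames and printf standing in for the actual kubent invocation:

```shell
# Build one -f<file> argument per YAML file.
# Hypothetical filenames for illustration; in practice use FILES=(*.yaml).
FILES=(example-a.yaml example-b.yaml)
printf '%s\n' "${FILES[@]/#/-f}"
# prints:
#   -fexample-a.yaml
#   -fexample-b.yaml
```

Each element becomes one attached-value file argument (-f<file>), which kubent's flag parsing accepts.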

Development

The simplest way to build kubent is:

# Clone the repository
git clone https://github.com/doitintl/kube-no-trouble.git
cd kube-no-trouble/
# Build
go build -o bin/kubent cmd/kubent/main.go

Otherwise, there's a Makefile:

$ make
all                            Clean, build and pack
help                           Prints list of tasks
build                          Build binary
generate                       Go generate
release-artifacts              Create release artifacts
clean                          Clean build artifacts

Commit messages

We enforce a simple version of Conventional Commits in the form:

<type>: <summary>

[optional body]

[optional footer(s)]

Where type is one of:

  • build - Affects build and/or build system
  • chore - Other non-functional changes
  • ci - Affects CI (e.g. GitHub actions)
  • dep - Dependency update
  • docs - Documentation only change
  • feat - A new feature
  • fix - A bug fix
  • ref - Code refactoring without functionality change
  • style - Formatting changes
  • test - Adding/changing tests

Use the imperative, present tense (Add, not Added), capitalize the first letter of the summary, and do not end it with a dot. The body and footer are optional. Relevant GitHub issues should be referenced in the footer in the form Fixes #123, fixes #456.
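
For instance, a commit message following this convention might look like (hypothetical example):

```
fix: Handle empty manifest files

Skip files with no parseable documents instead of failing.

Fixes #123
```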

Changelog

The changelog is generated automatically from merged PRs using changelog-gen. The template can be found in scripts/changelog.tmpl.

PRs are categorized into the following sections based on their labels:

  • Announcements - announcement label
  • Breaking Changes - breaking-change label
  • Features - feature label
  • Changes - change label
  • Fixes - fix label
  • Internal/Other - everything else

A PR can be excluded from the changelog with the no-release-note label. The PR title is used by default; however, the copy can be customized by including the following block in the PR body:

```release-note
This is an example release note!
```

Issues and Contributions

Please open any issues and/or PRs against github.com/doitintl/kube-no-trouble repository.

Please ensure any contributions are signed with a valid GPG key. We use this to validate that the commit was made by you and no one else. You can learn how to create a GPG key here.

Feedback and contributions are always welcome!

kube-no-trouble's People

Contributors

andreassko, dark0dave, david-ortiz-saez, denis-test-4, dependabot[bot], devenes, fabioantunes, johngmyers, justdan96, kkopachev, liggitt, meringu, ryanrolds, stepanstipl


kube-no-trouble's Issues

Native support for CustomResourceDefinition

First of all thanks a lot for this nice utility.

Problem Statement

I was just wondering why the CustomResourceDefinition object isn't natively supported. So basically, if I have something like:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mystuff.crd.k8s.myorga.com
spec:
  group: crd.k8s.myorga.com
  names:
    kind: MyStuff
...

Attempting to apply the above to a cluster with a recent version of k8s, e.g. 1.20, I get a message like:

Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

That being said, running kubent doesn't print out this object at all.

Proposed Solution

I would love to see native support for the CRD definition object itself (CustomResourceDefinition), similar to other k8s objects like Ingress, ClusterRole, Service, etc.

Current Workaround

The current workaround is to use the -a flag of kubent e.g.:

$  kubent -a CustomResourceDefinition.v1beta1.apiextensions.k8s.io 

In that case I get:

4:17PM INF >>> Kube No Trouble `kubent` <<<
4:17PM INF version 0.4.0 (git sha 3d82a3f0714c97035c27374854703256b3d69125)
4:17PM INF Initializing collectors and retrieving data
4:17PM INF Retrieved 207 resources from collector name=Cluster
4:17PM INF Retrieved 0 resources from collector name="Helm v2"
4:17PM INF Retrieved 266 resources from collector name="Helm v3"
4:17PM INF Loaded ruleset name=custom.rego.tmpl
4:17PM INF Loaded ruleset name=deprecated-1-16.rego
4:17PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Additional resources (custom) <<<
------------------------------------------------------------------------------------------
KIND                       NAMESPACE     NAME                                         API_VERSION                    REPLACE_WITH (SINCE)
CustomResourceDefinition   <undefined>   mystuff.crd.k8s.myorga.com                   apiextensions.k8s.io/v1beta1   <na> (<na>+)
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE     NAME                     API_VERSION          REPLACE_WITH (SINCE)
Ingress   kube-system   ingress-kube-dashboard   extensions/v1beta1   networking.k8s.io/v1beta1 (1.14+)
Ingress   kube-system   ingress-master           extensions/v1beta1   networking.k8s.io/v1beta1 (1.14+)

Tracing bad yaml code - eval_builtin_error: json.unmarshal: invalid character ']'

Thank you for this fantastic tool!

This has worked in several clusters, but in one I receive this message. It may be indicative of some odd yaml somewhere in our cluster, but I'm not sure how to trace it, and since it's being retrieved from the cluster, it suggests that it may be valid yaml that is being misinterpreted. Also, the ruleset reported (1.16 or 1.20) rotates randomly between the two, and --debug doesn't seem to print anything more to screen.

$ kubent --debug -k ~/Kubeconfig-prd1 --helm2=false --helm3=false
10:38AM INF >>> Kube No Trouble kubent <<<
10:38AM INF version 0.2.0 (git sha b9b45f3)
10:38AM INF Initializing collectors and retrieving data
10:38AM INF Retrieved 3843 resources from collector name=Cluster
10:38AM INF Loaded ruleset name=deprecated-1-16.rego
10:38AM INF Loaded ruleset name=deprecated-1-20.rego
10:38AM FTL Failed to evaluate input error="deprecated-1-16.rego:14: eval_builtin_error: json.unmarshal: invalid character ']' after object key:value pair" name=Rego

rbac.authorization.k8s.io/v1beta1 should be flagged deprecated also

All resources within the rbac.authorization.k8s.io/v1beta1 API groups are deprecated in favor of rbac.authorization.k8s.io/v1

$ kubectl get --raw /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings > /dev/null
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding

https://v1-17.docs.kubernetes.io/docs/setup/release/notes/#deprecations-and-removals

All resources within the rbac.authorization.k8s.io/v1alpha1 and rbac.authorization.k8s.io/v1beta1 API groups are deprecated in favor of rbac.authorization.k8s.io/v1, and will no longer be served in v1.20. (#84758, @liggitt)

Continue processing other manifests, even if one fails

Even if one or more manifests fail to be parsed, we should still continue to process others. This is improvement after adding better error reporting in #20 to address #13

This will require some ref. around handling errors, I imagine the collector should return multiple "non-fatal" errors in this case, and these should be logged to user.

Dependencies - Helm v2 blocking from upgrading to k8s.io v0.18+

Helm v2 is not compatible with k8s.io v0.18+ modules.

At the same time, this is stopping us from upgrading to latest Helm v3 (last compatible seems to be v3.1.3).

Go does not seem to support multiple minor versions of the same package. Solutions:

  • Not upgrade
  • Drop Helm v2 support
  • Fork helm v2 & lock dependencies by replace & importing them under a different name

We should also be careful not to lose support for older K8s clusters like v1.15 -> this calls for integration testing.

networking.k8s.io/v1beta1 should be flagged deprecated also

Since 1.19 has moved Ingress to v1, could we alert on both v1beta1 strings?

	"Ingress": {
		"old": ["extensions/v1beta1"],
		"new": "networking.k8s.io/v1beta1"

https://v1-19.docs.kubernetes.io/docs/setup/release/notes/#deprecations-and-removals
Ingress and IngressClass resources have graduated to networking.k8s.io/v1. Ingress and IngressClass types in the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions are deprecated and will no longer be served in 1.22+

Once again thanks for the fantastic tool!

Collect all resources in cluster collector

Currently, when adding/updating new resources, changes have to be made, in sync, at two places - the rego rules and the cluster collector. This is difficult to keep in sync. Also, the specific API versions hardcoded in the cluster collector are expected to cause issues when running against a cluster that does not support them.

I see 3 options to move forward:
a) Collect all resources, in all API versions
b) Collect specific resources, in all API versions
c) Come up with a better/automated mechanism that wouldn't require editing things in two places. Potentially connected with #154 - consider coming up with some format to describe rule metadata, incl. what resources we are interested in.

KUBECONFIG env variable would be nice to support

Firstly thanks for a great tool!

Then a minor issue, and a workaround for anyone having the same problem:

Actual result:

$ kubent
10:54AM INF >>> Kube No Trouble `kubent` <<<
10:54AM INF version 0.3.2 (git sha 919129b596890475965cda2b972cb6fded71f40b)
10:54AM INF Initializing collectors and retrieving data
10:54AM ERR Failed to initialize collector: <nil> error="stat /home/<redacted>/.kube/config: no such file or directory"
...
$ env | grep KUBECONFIG
KUBECONFIG=/path/to/my/kube.config

Expected result:
Use the KUBECONFIG env variable for the path (like kubectl does).

Workaround:

$ kubent -k ${KUBECONFIG}

Add a "dev" build from master

It would be nice to have a binary for the latest version of the master published, so it's easy to download and test latest features before a release is cut.

CI mode

The tool should provide a way for CI to fail. This is typically done by returning a non-zero exit code.

Doesn't find all deprecated APIs

kubent didn't find all the deprecated APIs...

The list of all deprecated APIs is here:
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

kubent finds this:

$ ./kubent -k kubeconfig
3:57PM INF >>> Kube No Trouble `kubent` <<<
3:57PM INF version 0.3.2 (git sha 919129b596890475965cda2b972cb6fded71f40b)
3:57PM INF Initializing collectors and retrieving data
3:57PM INF Retrieved 8 resources from collector name=Cluster
3:57PM INF Retrieved 0 resources from collector name="Helm v2"
3:57PM INF Retrieved 636 resources from collector name="Helm v3"
3:57PM INF Loaded ruleset name=deprecated-1-16.rego
3:57PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.16  <<<
------------------------------------------------------------------------------------------
KIND         NAMESPACE     NAME                   API_VERSION          
Deployment   <undefined>   metabase-metabase      extensions/v1beta1   
Deployment   <undefined>   sonarqube-postgresql   extensions/v1beta1   
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE     NAME                                 API_VERSION          
Ingress   <undefined>   harbor-harbor-ingress-notary         extensions/v1beta1   
Ingress   <undefined>   alice-frontend-generic-app           extensions/v1beta1
Ingress   <undefined>   frontend-ingress                     extensions/v1beta1
Ingress   <undefined>   frontend-generic-app                 extensions/v1beta1   
Ingress   <undefined>   ganache-cli-generic-app              extensions/v1beta1   
Ingress   <undefined>   harbor-harbor-ingress                extensions/v1beta1   
Ingress   <undefined>   api-generic-app                      extensions/v1beta1   
Ingress   <undefined>   metabase-metabase                    extensions/v1beta1   
Ingress   <undefined>   alice-api-generic-app                extensions/v1beta1
Ingress   <undefined>   pomerium                             extensions/v1beta1   
Ingress   <undefined>   chartmuseum-chartmuseum              extensions/v1beta1   
Ingress   <undefined>   sonarqube-sonarqube                  extensions/v1beta1   
Ingress   <undefined>   temp-api-generic-app                 extensions/v1beta1   
Ingress   <undefined>   temp-frontend-generic-app            extensions/v1beta1   
Ingress   <undefined>   verdaccio-verdaccio                  extensions/v1beta1  

but there is no mention of other deprecated APIs like NetworkPolicy:

$ k get networkpolicy -A -o json | jq '.items[] | select(.apiVersion != "networking.k8s.io/v1") | .apiVersion,.metadata.name,.metadata.namespace'
"extensions/v1beta1"
"redis"
"alice-dev"
"extensions/v1beta1"
"redis"
"alice-prod"
"extensions/v1beta1"
"redis"
"alice-test"
"extensions/v1beta1"
"redis"
"barry-dev"
"extensions/v1beta1"
"redis"
"barry-prod"

Detect K8s version and suggest only relevant issues

We can try to detect the version of the cluster a user is running and suggest only updates relevant for the existing/next minor version, since we have info on the version from which the replacement resources are available.

I.e. if a user runs 1.15 and is dealing with an upgrade to 1.16, there is no point flagging deprecations in 1.22 whose replacements are only available from 1.18.

display target api to use

Hello

I did run your script and it's nice, but it would be cool to show some helpful information if possible.

1:24PM INF >>> Kube No Trouble `kubent` <<<
1:24PM INF version 0.2.0 (git sha b9b45f347c7e67cf488d8dd6e20c8b848c595a3e)
1:24PM INF Initializing collectors and retrieving data
1:24PM INF Retrieved 70 resources from collector name=Cluster
1:24PM INF Retrieved 0 resources from collector name="Helm v2"
1:24PM INF Retrieved 0 resources from collector name="Helm v3"
1:24PM INF Loaded ruleset name=deprecated-1-16.rego
1:24PM INF Loaded ruleset name=deprecated-1-20.rego
________________________________________________________________________________________
>>> 1.20 Deprecated APIs <<<
----------------------------------------------------------------------------------------
KIND      NAMESPACE    NAME                 API_VERSION    NEW_API_VERSION
Ingress   test         test-ingress         extensions/v1beta1   networking.k8s.io/v1beta1

or even a link to the current version's deprecation docs.
Example : https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/ or https://v1-14.docs.kubernetes.io/docs/concepts/services-networking/ingress/

Fix very slow `TestNewClusterCollectorValidEmptyCollector` (330.810s)

These collector tests currently take 5m+, that's a bit too slow ;). It seems to be the TestNewClusterCollectorValidEmptyCollector test:

=== RUN   TestNewClusterCollectorValidEmptyCollector
....
--- PASS: TestNewClusterCollectorValidEmptyCollector (330.04s)
ok      github.com/doitintl/kube-no-trouble/pkg/collector       330.810s        coverage: 45.4% of statements

Unable to run binary

Hi Stepan,

I downloaded the Linux release and after unpacking, executing ./kubent does nothing.
chmod 777 kubent did not help.
Executing ./kubent -h also shows nothing.

I have the admin role in the cluster.
The kube config is in the default location.
I'm using Ubuntu 18.04.4 LTS (Bionic Beaver).

Feature request: Customizing ruleset

It would be good to have functionality like checking for objects deployed using a specific Helm version, i.e. Helm v2, which is going to be deprecated soon, and reporting on those as well. In short, allow incorporating custom rules on top of the K8s deprecation rules.

File collector not reporting all resources or reporting them inconsistently

Hi. Thanks for the great tool. We were trying to run it on manifests using the -f option (the File collector) and found that it does not always pick up deprecations in all the resources in the specified files, or that it picks them up differently when run again on the same files.

To reproduce, start with these 3 manifest files (I tried to trim them so they may not actually work as resources in a real cluster but they demonstrate this issue):

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  namespace: prod
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: backend
          servicePort: 8080
        path: /

stateful-set.yaml

---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: memecached
  namespace: prod
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: memcached
      app.kubernetes.io/instance: memcached
  serviceName: memcached
  template:
    metadata:
      labels:
        app.kubernetes.io/name: memcached
        app.kubernetes.io/instance: memcached
    spec:
      securityContext:
        fsGroup: 1001
      containers:
      - name: memcached
        image: memcached:1.5.20
        imagePullPolicy: ""
        securityContext:
          runAsUser: 1001
        command:
        - memcached
        - -m 64
        - -o
        - modern
        - -v
        ports:
        - name: memcache
          containerPort: 11211
        livenessProbe:
          tcpSocket:
            port: memcache
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          tcpSocket:
            port: memcache
          initialDelaySeconds: 5
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
  updateStrategy:
    type: RollingUpdate

combined.yaml - this is just stateful-set.yaml appended to the end of ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  namespace: prod
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: backend
          servicePort: 8080
        path: /
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: memecached
  namespace: prod
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: memcached
      app.kubernetes.io/instance: memcached
  serviceName: memcached
  template:
    metadata:
      labels:
        app.kubernetes.io/name: memcached
        app.kubernetes.io/instance: memcached
    spec:
      securityContext:
        fsGroup: 1001
      containers:
        - name: memcached
          image: memcached:1.5.20
          imagePullPolicy: ""
          securityContext:
            runAsUser: 1001
          command:
            - memcached
            - -m 64
            - -o
            - modern
            - -v
          ports:
            - name: memcache
              containerPort: 11211
          livenessProbe:
            tcpSocket:
              port: memcache
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: memcache
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
  updateStrategy:
    type: RollingUpdate

Running on just the ingress.yaml file seems to always work:

$ kubent -c=false --helm2=false --helm3=false -f ingress.yaml
3:47PM INF >>> Kube No Trouble `kubent` <<<
3:47PM INF version 0.3.1 (git sha dev)
3:47PM INF Initializing collectors and retrieving data
3:47PM INF Retrieved 1 resources from collector name=File
3:47PM INF Loaded ruleset name=deprecated-1-16.rego
3:47PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME      API_VERSION
Ingress   prod        app       extensions/v1beta1
$

As does running on just stateful-set.yaml:

$ kubent -c=false --helm2=false --helm3=false -f stateful-set.yaml
3:48PM INF >>> Kube No Trouble `kubent` <<<
3:48PM INF version 0.3.1 (git sha dev)
3:48PM INF Initializing collectors and retrieving data
3:48PM INF Retrieved 1 resources from collector name=File
3:48PM INF Loaded ruleset name=deprecated-1-16.rego
3:48PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.16  <<<
------------------------------------------------------------------------------------------
KIND          NAMESPACE   NAME         API_VERSION
StatefulSet   prod        memecached   apps/v1beta2
$

But running on both files only ever seems to pick up the resource in the last file, never both:

$ kubent -c=false --helm2=false --helm3=false -f ingress.yaml,stateful-set.yaml
3:50PM INF >>> Kube No Trouble `kubent` <<<
3:50PM INF version 0.3.1 (git sha dev)
3:50PM INF Initializing collectors and retrieving data
3:50PM INF Retrieved 2 resources from collector name=File
3:50PM INF Loaded ruleset name=deprecated-1-16.rego
3:50PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.16  <<<
------------------------------------------------------------------------------------------
KIND          NAMESPACE   NAME         API_VERSION
StatefulSet   prod        memecached   apps/v1beta2
$ kubent -c=false --helm2=false --helm3=false -f stateful-set.yaml,ingress.yaml
3:50PM INF >>> Kube No Trouble `kubent` <<<
3:50PM INF version 0.3.1 (git sha dev)
3:50PM INF Initializing collectors and retrieving data
3:50PM INF Retrieved 2 resources from collector name=File
3:50PM INF Loaded ruleset name=deprecated-1-16.rego
3:50PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME      API_VERSION
Ingress   prod        app       extensions/v1beta1
$

(Notice the flipping of the order of the files in the -f option in the above 2 calls)

And weirdest of all, when run on the combined.yaml file, it sometimes reports one resource and sometimes the other (but I've never seen it report both):

$ kubent -c=false --helm2=false --helm3=false -f combined.yaml
3:52PM INF >>> Kube No Trouble `kubent` <<<
3:52PM INF version 0.3.1 (git sha dev)
3:52PM INF Initializing collectors and retrieving data
3:52PM INF Retrieved 2 resources from collector name=File
3:52PM INF Loaded ruleset name=deprecated-1-16.rego
3:52PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.16  <<<
------------------------------------------------------------------------------------------
KIND          NAMESPACE   NAME         API_VERSION
StatefulSet   prod        memecached   apps/v1beta2
$ kubent -c=false --helm2=false --helm3=false -f combined.yaml
3:52PM INF >>> Kube No Trouble `kubent` <<<
3:52PM INF version 0.3.1 (git sha dev)
3:52PM INF Initializing collectors and retrieving data
3:52PM INF Retrieved 2 resources from collector name=File
3:52PM INF Loaded ruleset name=deprecated-1-16.rego
3:52PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME      API_VERSION
Ingress   prod        app       extensions/v1beta1
$

Sometimes this last one is not easy to reproduce. I had to run it over 10 times (all identical invocations on the same file) to get it to show the Ingress resource instead of the StatefulSet resource. Other times it seems to flip back and forth more often.

Notice that the resource count it's reporting always seems to be correct, even when it's not detecting or reporting all the deprecations.

I tried to see if I could spot where the problem was, but I don't know Go and couldn't see anything obvious.
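For what it's worth, one plausible cause of this class of bug in Go is overwriting a per-document variable in a loop instead of appending to a slice, so only the last parsed document survives. A minimal stdlib-only sketch (the real collector presumably uses a proper YAML library; this just splits on "---" separator lines to illustrate):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocs splits a multi-document YAML manifest on "---" separator
// lines and returns every non-empty document. A bug of the kind
// described above would keep only the last element instead of
// appending each one.
func splitDocs(manifest string) []string {
	var docs []string
	for _, d := range strings.Split(manifest, "\n---") {
		if trimmed := strings.TrimSpace(d); trimmed != "" {
			docs = append(docs, trimmed) // append, don't overwrite
		}
	}
	return docs
}

func main() {
	combined := "kind: StatefulSet\n---\nkind: Ingress\n"
	for _, d := range splitDocs(combined) {
		fmt.Println(d)
	}
}
```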

Thanks!

Refactor to support resource version evolution

As we want to provide the most relevant advice to users, the current implementation has a limitation: it's not able to support incremental recommendations. An example is Ingress:

"Ingress": {
    "old": ["extensions/v1beta1", "networking.k8s.io/v1beta1"],
    "new": "networking.k8s.io/v1",
    "since": "1.14",
},

This does not allow us to reflect the real situation, where networking.k8s.io/v1 is only available since 1.19, and networking.k8s.io/v1beta1 since 1.14.

I.e. for someone running 1.18 we should ideally recommend an upgrade to networking.k8s.io/v1beta1, but not to v1, as that is not available yet. To allow this type of recommendation we need to capture version evolution properly, perhaps smth. like:

"Ingress": {
    "versions": {
        "extensions/v1beta1": { "since": "", "deprecated": "1.16", "removed": "1.22" },
        "networking.k8s.io/v1beta1": { "since": "1.14", "deprecated": "1.16", "removed": "1.22" },
        "networking.k8s.io/v1": { "since": "1.19", "deprecated": "", "removed": "" },
    },
},

Maybe smth. like this might be easier to work with:

"Ingress": {
    "versions": [
        ["1.0", "extensions/v1beta1"],
        ["1.14", "networking.k8s.io/v1beta1"],
        ["1.19", "networking.k8s.io/v1"],
    ],
},
  • We also need to support the case when a version is removed but does not have any replacement.

  • Do we really need info on when a version is removed, or deprecated? We only care whether it has a replacement, and perhaps removal without replacement could be smth. like:

    ["1.22", ""],
    
  • Implementation-wise:

    • How easy is it to find whether a resource needs to be replaced?
    • And with what version?
    • Is this still a good use for rego? Or should we move this to Go...?
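If this moved to Go, the lookup itself could be fairly small. A sketch under the second schema above (a list of (introduced-in, apiVersion) pairs, oldest first; names and data layout are hypothetical) that recommends the newest apiVersion already available on the target cluster version:

```go
package main

import "fmt"

// versionEntry pairs the Kubernetes minor version an apiVersion was
// introduced in with the apiVersion itself.
type versionEntry struct {
	sinceMinor int    // e.g. 14 for 1.14
	apiVersion string // "" could mean "removed without replacement"
}

// Hypothetical rule data for Ingress, oldest first.
var ingressVersions = []versionEntry{
	{0, "extensions/v1beta1"},
	{14, "networking.k8s.io/v1beta1"},
	{19, "networking.k8s.io/v1"},
}

// recommend scans the sorted list from the end and returns the newest
// apiVersion available at the given cluster minor version.
func recommend(entries []versionEntry, clusterMinor int) string {
	for i := len(entries) - 1; i >= 0; i-- {
		if entries[i].sinceMinor <= clusterMinor {
			return entries[i].apiVersion
		}
	}
	return ""
}

func main() {
	fmt.Println(recommend(ingressVersions, 18)) // networking.k8s.io/v1beta1
	fmt.Println(recommend(ingressVersions, 22)) // networking.k8s.io/v1
}
```

Note this deliberately ignores "deprecated"/"removed" fields: per the bullet above, the introduction order alone is enough to answer "what should I migrate to on cluster X".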

Create ARM Build

Lots of M1 Mac chips and people running ARM servers mean we should probably make an ARM build.

panic: interface conversion: interface {} is nil, not string

I'm running into the issue below while running kubent, can someone help?

panic: interface conversion: interface {} is nil, not string

goroutine 1 [running]:
github.com/doitintl/kube-no-trouble/pkg/judge.(*RegoJudge).Eval(0xc00015fc10, 0xc000273500, 0x29c, 0x2a0, 0x29c, 0x2a0, 0x0, 0x1f4, 0xc0002f4390)
	/__w/kube-no-trouble/kube-no-trouble/pkg/judge/rego.go:76 +0x7c2
main.main()
	/__w/kube-no-trouble/kube-no-trouble/cmd/kubent/main.go:97 +0x3be

Go version - go1.15.5 darwin/amd64

kubent version - 0.3.2
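Not a fix, but for context: this panic comes from an unchecked Go type assertion on a nil interface value. The defensive pattern is the comma-ok form; a minimal illustration (unrelated to the actual rego.go internals):

```go
package main

import "fmt"

func main() {
	var v interface{} // nil, e.g. a field missing from evaluated results

	// s := v.(string) would panic exactly like the trace above:
	// "interface conversion: interface {} is nil, not string"

	// The comma-ok form fails gracefully instead of panicking.
	if s, ok := v.(string); ok {
		fmt.Println("got:", s)
	} else {
		fmt.Println("value missing or not a string")
	}
}
```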

Add tests for empty output format

Add tests for empty output format to make sure we keep consistent output with regards to use in CI/integration (#31)

  • text stdout output should be empty if no issues are found
  • json output should return an empty array
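A sketch of what such a test could assert (function names are hypothetical, stdlib only): text output collapses to an empty string, and JSON output to a literal empty array rather than "null", when there are no findings.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type finding struct {
	Kind, Namespace, Name, ApiVersion string
}

// renderText returns "" for no findings, so CI can test for empty output.
func renderText(findings []finding) string {
	out := ""
	for _, f := range findings {
		out += fmt.Sprintf("%s\t%s\t%s\t%s\n", f.Kind, f.Namespace, f.Name, f.ApiVersion)
	}
	return out
}

// renderJSON returns "[]" (not "null") for no findings.
func renderJSON(findings []finding) string {
	if findings == nil {
		findings = []finding{} // a nil slice would marshal to "null"
	}
	b, _ := json.Marshal(findings)
	return string(b)
}

func main() {
	fmt.Printf("text: %q\n", renderText(nil))
	fmt.Println("json:", renderJSON(nil))
}
```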

Cluster/context selection and multi-file KUBECONFIG

I have access to multiple k8s clusters, defined in multiple config files, all pointed to by the KUBECONFIG env var and separated with a colon:

$ echo $KUBECONFIG
/home/foo/.kube/config:/home/foo/.minikube/kubeconfig

kubent doesn't support context selection; it seems to use the default context of the first config file, and if that points to a context defined in another file it fails: ERR Failed to initialize collector: <nil> error="invalid configuration: [context was not found for specified context: bar-context, cluster has no server defined]"

Currently I work around that by setting the current context and editing KUBECONFIG, which is not ideal.

Ideally:

  • kubent supports multi-file KUBECONFIG
  • kubent supports context selection (technically as an indirect selector for cluster) with --context (alas the standard -c is already used for something else)

Fix Git sha in build

When running, kubent should display the correct sha from which it was built, instead of:

3:47PM INF version 0.3.1 (git sha dev)

Make it possible to run kubent against manifests

One issue I found with kubent is that it requires running against the cluster; I would prefer using local manifests.

This would avoid putting credentials inside CI/CD, make it possible to detect deprecated APIs during development, and allow preparing a migration to a new version that is not yet deployed.

I heard from @drewboswell that kubeval could provide the model, e.g.:

kubent --local 1.6.1

Testing and ENV variables

These are some notes on tests with env variables, mostly discussed in the context of #127, so as not to forget:

Tests preferably should not rely on env variables; however, there are some where we test functionality that is directly related to an env variable (such as TestNewFromFlagsKubeconfigHome). In such cases it would be nice if:

  • A test should not rely on the external environment, i.e. it should set/unset its variables at the beginning (i.e. be self-contained)

  • A test should return the env to its previous state, i.e. no variables should be leaked (it should not just unset the var, but actually revert it; also there's a difference between a var being empty and unset)

  • Ideally, we keep in mind that tests can be run in parallel, i.e. in case tests happen to use the same variable this should be accounted for; perhaps some locking mechanism might be needed.

    • By default go does not run tests in the same file in parallel, unless t.Parallel() is called
    • However, tests in different packages are by default run in parallel

    @dark0dave added some notes just not to forget ;)

Using Kube-no-trouble in windows OS

Good morning,

Is there some way to use kubent on a Windows OS system?
Unfortunately I use Windows to work with Kubernetes clusters, and we want to upgrade our clusters to 1.16, so we need some way to use kubent to locate usage of API versions deprecated in 1.16.

Good job and thanks! Regards.

Add simple integration test

Start by checking one resource that should be flagged as deprecated and make sure it is detected, then the same resource with an updated API version and make sure it isn't. Perhaps use Kind to run this inside GH Actions.

kubent 0.3.0 NOT reporting 1.16 usage of deprecated apis that ARE reported by kubent 0.2.0

Version 0.2.0 reports ...

10:33AM INF >>> Kube No Trouble `kubent` <<<
10:33AM INF version 0.2.0 (git sha b9b45f347c7e67cf488d8dd6e20c8b848c595a3e)
...
10:33AM INF Loaded ruleset name=deprecated-1-16.rego
10:33AM INF Loaded ruleset name=deprecated-1-20.rego
..
>>> 1.16 Deprecated APIs <<<
------------------------------------------------------------------------------------------
KIND         NAMESPACE     NAME                          API_VERSION
DaemonSet    <REDACTED>   <REDACTED>        extensions/v1beta1
Deployment   <REDACTED>   <REDACTED>         extensions/v1beta1
Deployment   <REDACTED>   <REDACTED>         apps/v1beta2
Deployment   <REDACTED>   <REDACTED>         extensions/v1beta1
Deployment   <REDACTED>   <REDACTED>         extensions/v1beta1
Deployment   <REDACTED>   <REDACTED>         extensions/v1beta1
Deployment   <REDACTED>   <REDACTED>         extensions/v1beta1
__________________________________________________________________________________________
...
>>> 1.20 Deprecated APIs <<<
------------------------------------------------------------------------------------------
A long list similar to that for 1.16 above

BUT
Latest version 0.3.0 for the same cluster reports:

$ kubent_30 -k kubeconfig-readonly.yaml
10:40AM INF >>> Kube No Trouble `kubent` <<<
10:40AM INF version 0.3.0 (git sha dev)
...
10:40AM INF Loaded ruleset name=deprecated-1-16.rego
10:40AM INF Loaded ruleset name=deprecated-1-22.rego
________________________________________________________________________________________
>>> Deprecated APIs removed in 1.22 <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME                          API_VERSION          
Ingress   <REDACTED> <REDACTED>             extensions/v1beta1  

showing NOTHING for 1.16 Deprecated APIs

Am I doing something wrong?

panic: runtime error: invalid memory address or nil pointer dereference

Error on WSL1 (Windows)

`โฏ kubent
2:44PM INF >>> Kube No Trouble kubent <<<
2:44PM INF version 0.3.0 (git sha dev)
2:44PM INF Initializing collectors and retrieving data
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1154c15]

goroutine 1 [running]:
github.com/doitintl/kube-no-trouble/pkg/collector.(*ClusterCollector).Name(0x0, 0x1823920, 0xc000361b80)
:1 +0x5
main.storeCollector(0x182ac00, 0x0, 0x1823920, 0xc000361b80, 0x2331290, 0x0, 0x0, 0x0, 0x0, 0x16649b5)
/__w/kube-no-trouble/kube-no-trouble/cmd/kubent/main.go:40 +0x151
main.initCollectors(0xc000223630, 0x16649b5, 0x2b, 0xc0002ce540)
/__w/kube-no-trouble/kube-no-trouble/cmd/kubent/main.go:51 +0x3a7
main.main()
/__w/kube-no-trouble/kube-no-trouble/cmd/kubent/main.go:88 +0x31c`
