
k8s's People

Contributors

7oku, adrienpoupa, angelin01, caleb-devops, chenghu2, dhduvall, gaima8, leodiazl, montmanu, mpkorstanje, nooop3, ozbillwang, robbie-demuth, rofreytag


k8s's Issues

Vulnerabilities in this image

This image has some high-severity vulnerabilities:

  • CVE-2023-28840 (helm version should be updated)
    The library github.com/docker/docker version 23.0.1+incompatible was detected in the Golang binary located at /usr/bin/helm and is vulnerable to CVE-2023-28840, which exists in versions >= 23.0.0, < 23.0.3.
  • CVE-2023-38039 (alpine version should be updated)
    The package curl version 8.2.1-r0, detected in the APK package manager on a container image running Alpine 3.18.3, is vulnerable to CVE-2023-38039, which exists in versions < 8.3.0-r0.

docker no latest tag found

The docker pull alpine/k8s command always errors out because there is no :latest tag. Can the most recently pushed image also be tagged as latest?
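
A minimal sketch of how a publish job could retag the newest release, assuming VERSION holds the most recently pushed Kubernetes tag (the variable is hypothetical):

# retag the most recent versioned image as latest and push it (VERSION is hypothetical)
docker pull alpine/k8s:${VERSION}
docker tag alpine/k8s:${VERSION} alpine/k8s:latest
docker push alpine/k8s:latest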

Compatibility of Helm with K8s

I tried out image version 1.20.15 (because that matches my cluster) and found that it contains Helm 3.9.3. According to this matrix, the correct Helm versions for Kubernetes 1.20 would be 3.5 to 3.8, not 3.9. Is this image intentionally "incompatible"?

kubectl broken in latest builds

It looks like kubectl is not built into the latest builds correctly. I get

/usr/bin/kubectl: line 1: syntax error: unexpected redirection

when I run kubectl. The file at /usr/bin/kubectl contains:

<?xml version='1.0' encoding='UTF-8'?><Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Details>No such object: kubernetes-release/release/v1.23/bin/linux/amd64/kubectl</Details></Error>
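
The XML body shows that the build stored a NoSuchKey error page instead of a binary: the download URL points at v1.23 with no patch version. A minimal guard for the Dockerfile, assuming a pinned KUBECTL_VERSION build arg (the variable name is hypothetical):

# fail the build on HTTP errors instead of saving the error page, then smoke-test the binary
RUN curl --fail --location --output /usr/bin/kubectl \
    https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl \
  && chmod +x /usr/bin/kubectl \
  && kubectl version --client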

add helmfile

Please add:

  • helmfile (a possible install step is sketched after this list)
  • ENV PATH=$PATH:/root/.krew/bin
  • completions, like so:
RUN apk add --no-cache bash-completion \
  && mkdir -p /home/$USER/bin \
  && curl --fail --show-error --location --output /home/$USER/bin/complete_alias \
    https://raw.githubusercontent.com/cykerway/complete-alias/master/complete_alias \
  && curl --fail --show-error --location --output /home/$USER/.kubectl_aliases \
    https://raw.githubusercontent.com/ahmetb/kubectl-aliases/master/.kubectl_aliases
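
A minimal sketch of a helmfile install step, assuming a pinned HELMFILE_VERSION build arg (the variable name and version pinning are assumptions, not part of the current Dockerfile):

# download a pinned helmfile release and extract only the binary (HELMFILE_VERSION is hypothetical)
RUN curl --fail --silent --show-error --location \
    https://github.com/helmfile/helmfile/releases/download/v${HELMFILE_VERSION}/helmfile_${HELMFILE_VERSION}_linux_amd64.tar.gz \
  | tar -xz -C /usr/local/bin helmfile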

Please include openssl

Could openssl be included in this image? It is useful for handling certificates, keys, and other TLS-related tasks within a Kubernetes cluster. An example use case is a CronJob that checks self-signed certificates in Kubernetes Secrets and renews them when they are near expiry.
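
A minimal sketch of that use case, assuming a TLS Secret named my-tls (the Secret name is hypothetical):

# pull the cert out of the Secret and exit non-zero if it expires within 7 days (604800 s)
kubectl get secret my-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -checkend 604800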

Can't pull alpine/k8s from docker hub

When trying to pull alpine/k8s from Docker Hub, the image is not found:

$ docker pull alpine/k8s
Using default tag: latest
Error response from daemon: manifest for alpine/k8s:latest not found: manifest unknown: manifest unknown

add latest tag to docker image

Can a latest tag be added to the docker image so that docker pull alpine/k8s works correctly? This is the current output of that command:

C:\Users\james.tupper> docker pull alpine/k8s
Using default tag: latest
Error response from daemon: manifest for alpine/k8s:latest not found: manifest unknown: manifest unknown

build fail at helm-unittest

After merging #12 and #14, the build fails at helm-unittest in tag 1.19.8:

Step 11/19 : RUN helm plugin install https://github.com/quintush/helm-unittest && rm -rf /tmp/helm-*
 ---> Running in e0c1e7fbb23d
Support linux-amd64
Retrieving https://api.github.com/repos/quintush/helm-unittest/releases/latest
No download_url found only searching for linux
Downloading https://github.com/quintush/helm-unittest/releases/download/v0.2.6/helm-unittest-linux-amd64-0.2.6.tgz https://github.com/quintush/helm-unittest/releases/download/v0.2.6/helm-unittest-linux-arm64-0.2.6.tgz to location /tmp/_dist/
curl: (3) URL using bad/illegal format or missing URL
Error: plugin install hook for "unittest" exited with error
Failed to install helm-unittest

In addition, kubeseal is not found.

cc: @gaima8

Adding yq as the sibling to jq

I saw that jq is included. I have a script like the following, which extracts a kubeconfig that is base64 encoded. From that kubeconfig I have to extract the server address, and that is only possible with yq.

I thought: if jq is a wrench in the toolbox, why not yq? :-)

Is it possible to add?

kubectl get secret cluster-details-staging -o jsonpath="{.data.kubeconfig}" | base64 -d | yq e ".clusters[0].cluster.server" -
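
A minimal sketch of an install step for the Go implementation of yq, assuming a pinned YQ_VERSION build arg (the variable name is hypothetical):

# download a pinned yq release binary from mikefarah/yq (YQ_VERSION is hypothetical)
RUN curl --fail --location --output /usr/bin/yq \
    https://github.com/mikefarah/yq/releases/download/v${YQ_VERSION}/yq_linux_amd64 \
  && chmod +x /usr/bin/yq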

about azure-tool

Could you also install the Azure CLI in addition to the tools already mentioned?
I need kubectl, Helm, and the Azure CLI during my GitLab CI/CD pipeline to deploy my applications.

Helm plugins are no longer installed

Helm plugins are installed to ~/.local/share/helm/plugins; however, PR #42 seems to delete /root/.local, deleting the plugins along with it:

> docker run --rm -ti alpine/k8s:1.23.15 helm unittest
Error: unknown command "unittest" for "helm"
Run 'helm --help' for usage.
> docker run --rm -ti alpine/k8s:1.23.14 helm unittest
Error: requires at least 1 arg(s), only received 0
Usage:
  unittest [flags] CHART [...]

Flags:
      --color                  enforce printing colored output even stdout is not a tty. Set to false to disable color
  -d, --debug                  enable debug logging
  -q, --failfast               direct quit testing, when a test is failed
  -f, --file stringArray       glob paths of test files location, default to tests/*_test.yaml (default [tests/*_test.yaml])
  -3, --helm3                  parse helm charts as helm3 charts
  -h, --help                   help for unittest
  -o, --output-file string     output-file the file where testresults are written in JUnit format, defaults no output is written to file
  -t, --output-type string     output-type the file-format where testresults are written in, accepted types are (JUnit, NUnit, XUnit) (default "XUnit")
      --strict                 strict parse the testsuites
  -u, --update-snapshot        update the snapshot cached if needed, make sure you review the change before update
  -v, --values stringArray     absolute or glob paths of values files location, default no values files
  -s, --with-subchart charts   include tests of the subcharts within charts folder (default true)

requires at least 1 arg(s), only received 0
Error: plugin "unittest" exited with error

You can easily find the location using find:

> docker run --rm -ti alpine/k8s:1.23.14 find / -name 'unittest'
/usr/lib/python3.10/unittest
/root/.cache/helm/plugins/https-github.com-quintush-helm-unittest/pkg/unittest
/root/.local/share/helm/plugins/helm-unittest/pkg/unittest
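
If the cleanup in #42 was only meant to drop caches, a hedged sketch of a narrower step that spares the installed plugins:

# prune only Helm's download cache; keep /root/.local/share/helm/plugins intact
RUN rm -rf /root/.cache/helm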

Environment variables not expanded.

I am passing environment variables via a ConfigMap into my Deployment, which uses alpine/k8s:1.26.0:

        - name: scan-rbac
          image: alpine/k8s:1.26.0
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh"]
          args:
            - "-c"
            - >-
              env &&
              echo "Scanning objects for cluster ${CLUSTER_NAME}" &&

The env output does list CLUSTER_NAME correctly, as set in the ConfigMap. However, the echo prints "Scanning objects for cluster ${CLUSTER_NAME}" without expanding CLUSTER_NAME. What am I missing?

Latest versions are not working

Hello.
I was using this image in GitLab pipelines and it broke two days ago. It gives me an error when I try to connect to the k8s cluster.
Command I ran:
kubectl apply --kubeconfig=kubeconfig.yaml --filename https://github.com/knative/serving/releases/download/v0.14.0/serving-crds.yaml
Error I got:
unable to recognize "https://github.com/knative/serving/releases/download/v0.14.0/serving-crds.yaml": Get https://server.region.eks.amazonaws.com/api?timeout=32s: getting credentials: exec: fork/exec /usr/bin/aws-iam-authenticator: exec format error

The strange thing is that I tried to replicate the issue on my local machine by ssh'ing into the local docker image and running the above command; it ran just fine there, but the same thing fails in the pipeline.
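
An "exec format error" usually means the binary was built for a different CPU architecture than the host. A quick hedged check (the tag is an example; substitute the one your pipeline uses):

# the first bytes of a valid ELF binary are 177 E L F; anything else means a bad download or wrong arch
docker run --rm --entrypoint sh alpine/k8s:1.22.10 -c 'head -c 4 /usr/bin/aws-iam-authenticator | od -c'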

kubeseal broken, download URL needs to be fixed in Dockerfile

kubeseal is not working in the latest images (we tested with alpine/k8s:1.22.10):

/ # kubeseal
/usr/bin/kubeseal: line 1: Not: not found

That is because kubeseal seems to have renamed its binary releases and packaged them into a tar.gz. During the docker image build, curl is unable to fetch the file correctly and stores the 404 Not Found response in it.

The old curl expanded to:

curl -sL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.1/kubeseal-linux-amd64 -o kubeseal

but should now expand to something like

curl -L https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.1/kubeseal-0.18.1-linux-amd64.tar.gz -o - | tar xz -C /usr/bin/
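
A hedged refinement of that suggestion: --fail makes curl exit non-zero on a 404 so the build breaks instead of storing the error page, and naming the kubeseal member avoids unpacking the rest of the archive into /usr/bin:

# fail on HTTP errors and extract only the kubeseal binary
curl --fail -sL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.1/kubeseal-0.18.1-linux-amd64.tar.gz \
  | tar -xz -C /usr/bin/ kubeseal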

support azure cli as well

Support the Azure CLI as well in this image, to cover Azure AKS in automation pipelines.

The Azure CLI doesn't work like the AWS CLI: sometimes you need to install extra extensions to manage certain services. This image will only install the basic Azure CLI to keep the image size down.

If you want extra extensions for your pipeline, please add the command to your pipeline tasks:

az extension add --name <extension-name>

Use latest aws-iam-authenticator

Hello there!

Can you please update aws-iam-authenticator to the latest version for your next releases? (aws-iam-authenticator_0.5.9).

When running

docker run -it --rm alpine/k8s:1.22.10 aws-iam-authenticator version

I get

{"Version":"v0.5.0","Commit":"1cfe2a90f68381eacd7b6dcfa2bf689e76eb8b4b"}

It may be that AWS is not hosting the correct binary at this URL:

s3://amazon-eks/1.23.9/2022-07-27/bin/linux/amd64/aws-iam-authenticator

We currently get the latest from this URL:

https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.9/aws-iam-authenticator_0.5.9_linux_amd64
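
A minimal sketch of a download step pinned to that release, assuming an AUTHENTICATOR_VERSION build arg (the variable name is hypothetical):

# fetch a pinned aws-iam-authenticator release from GitHub instead of the S3 mirror
RUN curl --fail --location --output /usr/bin/aws-iam-authenticator \
    https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v${AUTHENTICATOR_VERSION}/aws-iam-authenticator_${AUTHENTICATOR_VERSION}_linux_amd64 \
  && chmod +x /usr/bin/aws-iam-authenticator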

Much appreciated!

Weekly (or nightly) build not published

The README mentions a weekly build:

### Weekly build

Build job runs weekly

Yet there has been no release on Docker Hub for the past 3 months.
The latest changes on GitHub render the image unusable due to a stale openssh version:

error cloning repository: unknown error: ERROR: You're using an RSA key with SHA-1, which is no longer allowed. Please use a newer client or a different key type.
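
A hedged stopgap until a fresh image is published, assuming the Alpine repositories for the base image still carry a newer openssh:

# derive a patched image locally; a workaround, not a substitute for a rebuild
FROM alpine/k8s:1.22.10
RUN apk add --no-cache --upgrade openssh-client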

Add AWS CLI v2

Hey,

It would be great if you could add AWS CLI v2 to the docker image. Currently, it ships awscli v1.18.*.

ansible

Can we add Ansible so that all commands can be run from an Ansible playbook?
Thanks.

Connection refused

Hello,

I used the alpine/k8s image and am getting the below error:

Error: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Expected output: able to see the list of services in the cluster.
Can you help me with what I am missing here?

Is it possible to remove test data keys from the image build to avoid container security scanning alerts?

alpine/k8s:1.23.14 contains four keys under testdata directories:

/root/.local/share/helm/plugins/helm-push/testdata/tls/server.key
/root/.local/share/helm/plugins/helm-push/testdata/tls/client.key
/root/.cache/helm/plugins/https-github.com-chartmuseum-helm-push/testdata/tls/client.key
/root/.cache/helm/plugins/https-github.com-chartmuseum-helm-push/testdata/tls/server.key

Is it possible to modify future builds so these keys are not present?

While I understand they do not represent a security risk, they tend to come up as high-severity findings during container security scanning, which causes a lot of effort arranging exceptions with InfoSec.
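
A minimal sketch of a build-time cleanup, assuming the testdata directories serve no purpose at runtime:

# strip bundled test fixtures (including the TLS test keys) from the plugin trees
RUN find /root/.local /root/.cache -type d -name testdata -prune -exec rm -rf {} +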

Thanks! :)
