
k8s's Introduction



Running NATS on K8S

In this repository you can find several examples of how to deploy NATS, NATS Streaming and other tools from the NATS ecosystem on Kubernetes.

Getting started with NATS using Helm

In this repo you can find the Helm 3-based charts to install NATS.

> helm repo add nats https://nats-io.github.io/k8s/helm/charts/
> helm repo update

> helm repo list
NAME          	URL 
nats          	https://nats-io.github.io/k8s/helm/charts/

> helm install my-nats nats/nats
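To customize the install, values can also be passed in a file. The sketch below is a hypothetical example; the key names (cluster.enabled, cluster.replicas) are assumptions and should be checked against the chart's values.yaml:

# values.yaml — hypothetical example enabling a 3-node cluster
# install with: helm install my-nats nats/nats -f values.yaml
cluster:
  enabled: true
  replicas: 3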

License

Unless otherwise noted, the NATS source files are distributed under the Apache Version 2.0 license found in the LICENSE file.

k8s's People

Contributors

1995parham, andreib1, bruth, caleblloyd, calmera, chris13524, colinsullivan1, dependabot[bot], drpebcak, edribeirojunior, faisal-khalique, filhodanuvem, gnarea, icelynjennings, jarema, jdolce, jonasmatthias, julian-dolce-form3, kylos101, marcosflobo, michaellzc, mvisonneau, nicklarsennz, nickpoorman, nsurfer, samuel-form3, smlx, variadico, wallyqs, xromk

k8s's Issues

Install fails on EKS cluster w/ private networking only

Looks like it's trying to provision an external IP, which is not available in our configuration. Is there any option to use a ClusterIP instead of an ExternalIP?

$ ./setup.sh --without-tls
serviceaccount/nats-setup unchanged
clusterrolebinding.rbac.authorization.k8s.io/nats-setup-binding unchanged
clusterrole.rbac.authorization.k8s.io/nats-setup unchanged
pod/nats-setup created
pod/nats-setup condition met
Defaulting container name to nats-setup.
Use 'kubectl describe pod/nats-setup -n default' to see all of the containers in this pod.

##############################################
#                                            #
#  _   _    _  _____ ____   _  _____ ____    #
# | \ | |  / \|_   _/ ___| | |/ ( _ ) ___|   #
# |  \| | / _ \ | | \___ \ | ' // _ \___ \   #
# | |\  |/ ___ \| |  ___) || . \ (_) |__) |  #
# |_| \_/_/   \_\_| |____(_)_|\_\___/____/   #
#                                            #
#                    nats-setup (v0.1.6)     #
##############################################

 +---------------------+---------------------+
 |                 OPTIONS                   |
 +---------------------+---------------------+
         nats server   | true
         nats surveyor | true
         nats tls      | false
        enable auth    | true
  install cert_manager | false
      nats streaming   | true
 +-------------------------------------------+
 |                                           |
 | Starting setup...                         |
 |                                           |
 +-------------------------------------------+

[PRIVATE KEYS REDACTED]

[ OK ] generated user creds file "/nsc/nkeys/creds/KO/STAN/stan.creds"
[ OK ] added user "stan" to account "STAN"
secret/nats-sys-creds created
secret/nats-test-creds created
secret/nats-test2-creds created
secret/stan-creds created
configmap/nats-accounts created
Unable to connect to the server: read tcp 10.124.5.152:32930->151.101.52.133:443: read: connection reset by peer
Retrying in 3 seconds...
Unable to connect to the server: read tcp 10.124.5.152:32954->151.101.52.133:443: read: connection reset by peer
Retrying in 3 seconds (2 attempts so far)...
Unable to connect to the server: read tcp 10.124.5.152:33002->151.101.52.133:443: read: connection reset by peer
Retrying in 3 seconds (3 attempts so far)...
Unable to connect to the server: read tcp 10.124.5.152:33028->151.101.52.133:443: read: connection reset by peer
Retrying in 3 seconds (4 attempts so far)...
Unable to connect to the server: EOF
Retrying in 3 seconds (5 attempts so far)...
Retrying in 3 seconds (6 attempts so far)Unable to connect to the server: read tcp 10.124.5.152:33102->151.101.52.133:443: read: connection reset by peer
...
Retrying in 3 seconds (7 attempts so far)Unable to connect to the server: read tcp 10.124.5.152:33142->151.101.52.133:443: read: connection reset by peer
...
Unable to connect to the server: EOF
Retrying in 3 seconds (8 attempts so far)...
Unable to connect to the server: read tcp 10.124.5.152:33216->151.101.52.133:443: read: connection reset by peer
Retrying in 3 seconds (9 attempts so far)...
Unable to connect to the server: EOF
Retrying in 3 seconds (10 attempts so far)...
Could not finish setting up NATS due to errors in the cluster
command terminated with exit code 1
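For reference, the kind of internal-only Service being asked about would look roughly like the sketch below (a hypothetical manifest; the selector labels and port must match what the setup script actually deploys):

apiVersion: v1
kind: Service
metadata:
  name: nats-internal          # hypothetical name
spec:
  type: ClusterIP              # no external IP is provisioned
  selector:
    app: nats                  # assumption: the NATS pods are labeled app=nats
  ports:
    - name: client
      port: 4222
      targetPort: 4222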

helm: running into issue upgrading nats

Hi,

I am running into an issue upgrading charts for nats using helm upgrade.

Is there something I am doing wrong?

helm list

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
nats-nfs-client nats            1               2020-07-06 15:50:45.867044 +1200 NZST   deployed        nfs-client-provisioner-1.2.8    3.1.0      
nats-server     nats            1               2020-07-06 15:55:17.151972 +1200 NZST   deployed        nats-0.4.2                      2.1.7      
stan-server     nats            2               2020-07-23 16:22:02.502656 +1200 NZST   deployed        stan-0.5.0                      0.18.0

helm upgrade nats-server nats/nats --namespace=nats --reuse-values

Error: UPGRADE FAILED: cannot patch "nats-server-box" with kind Pod: Pod "nats-server-box" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
  core.PodSpec{
-       Volumes: nil,
+       Volumes: []core.Volume{
+               {
+                       Name: "default-token-7f8wf",
+                       VolumeSource: core.VolumeSource{
+                               Secret: &core.SecretVolumeSource{SecretName: "default-token-7f8wf", DefaultMode: &420},
+                       },
+               },
+       },
        InitContainers: nil,
        Containers: []core.Container{
                {
                        ... // 7 identical fields
                        Env:          []core.EnvVar{{Name: "NATS_URL", Value: "nats-server"}},
                        Resources:    core.ResourceRequirements{Limits: core.ResourceList{"cpu": {i: resource.int64Amount{value: 1}, s: "1", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 524288000}, s: "500Mi", Format: "BinarySI"}}, Requests: core.ResourceList{"cpu": {i: resource.int64Amount{value: 100, scale: -3}, s: "100m", Format: "DecimalSI"}, "memory": {i: resource.int64Amount{value: 104857600}, s: "100Mi", Format: "BinarySI"}}},
-                       VolumeMounts: nil,
+                       VolumeMounts: []core.VolumeMount{
+                               {
+                                       Name:      "default-token-7f8wf",
+                                       ReadOnly:  true,
+                                       MountPath: "/var/run/secrets/kubernetes.io/serviceaccount",
+                               },
+                       },
                        VolumeDevices: nil,
                        LivenessProbe: nil,
                        ... // 10 identical fields
                },
        },
        EphemeralContainers: nil,
        RestartPolicy:       "Always",
        ... // 24 identical fields
  }

helm list

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
nats-nfs-client nats            1               2020-07-06 15:50:45.867044 +1200 NZST   deployed        nfs-client-provisioner-1.2.8    3.1.0      
nats-server     nats            2               2020-07-23 16:25:43.205476 +1200 NZST   failed          nats-0.5.0                      2.1.7      
stan-server     nats            2               2020-07-23 16:22:02.502656 +1200 NZST   deployed        stan-0.5.0                      0.18.0

helm: enabling clustering

Hi there, I submitted #50 to fix the documentation for enabling clustering via helm.

The documentation mentions stan.cluster.enabled, but the helm chart is configured to reference store.cluster.enabled.

Backing up stan file store

When using the file store configuration, how can I back up the rotated files on a regular basis?

In other charts I've seen something like this done by adding another container to the pod. It could be as simple as supporting that in the chart, with a script to upload to S3 and remove the old segments from the PVC.
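A rough sketch of such a sidecar, assuming the chart allowed appending extra containers to the STAN pod and that the data PVC is mounted at /data/stan (the volume name, bucket, and interval below are placeholders):

# Hypothetical backup sidecar for the stan pod template
- name: stan-backup
  image: amazon/aws-cli
  command: ["/bin/sh", "-c"]
  args:
    - while true; do aws s3 sync /data/stan s3://my-stan-backups/$(hostname); sleep 3600; done
  volumeMounts:
    - name: stan-sts-vol       # assumption: the same volume the stan container mounts
      mountPath: /data/stan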

Helm: provide a means to specify ca.crt for STAN

Hi,

I recently set up NATS Operator and secured NATS with TLS following the cert-manager examples. Thank you so much for those! Very slick.

One thing I noticed is that the Helm chart for STAN lacks a way for users to share the ca.crt with STAN. As a result, on start-up I get this:

kubectl logs stan-0 stan
[1] 2020/07/13 13:28:37.149271 [INF] STREAM: Starting nats-streaming-server[stan] version 0.17.0
[1] 2020/07/13 13:28:37.149378 [INF] STREAM: ServerID: 59VFuL42d7arsWQkUJUCa7
[1] 2020/07/13 13:28:37.149382 [INF] STREAM: Go version: go1.13.7
[1] 2020/07/13 13:28:37.149384 [INF] STREAM: Git commit: [f4b7190]
[1] 2020/07/13 13:28:37.166684 [INF] STREAM: Shutting down.
[1] 2020/07/13 13:28:37.166760 [FTL] STREAM: Failed to start: x509: certificate signed by unknown authority

I am guessing that I am getting that because my stan-0 pod has no knowledge of the related ca.crt. For the interim, I'll set up STAN without using Helm, but I thought this would be a nice addition to the chart.

Take care,

Kyle

Provide documentation about setting nats with own credentials/certificates/configuration

It would be extremely useful if you provided additional details on how the one-line installer can use the user's own credentials/certificates/conf files during a k8s deployment. If I read the https://github.com/nats-io/k8s/blob/master/nats-server/nats-server-with-auth-and-tls.yml file correctly, someone should provide the proper secret files, which should contain the proper file names, etc.

I expect that if someone wants to use their own set of certificates/conf, they need to first create the proper secrets and then run the one-line installer. If so, could you please document all the secrets used and provide examples of how to create them.
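As an illustration only, a TLS Secret of the kind such manifests typically reference might look like the hypothetical sketch below; the secret name and keys are assumptions and must match whatever nats-server-with-auth-and-tls.yml actually expects:

apiVersion: v1
kind: Secret
metadata:
  name: nats-server-tls        # hypothetical name; match the manifest's secret references
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>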

Thanks,
Valentin.

Helm: STAN replicas fail to start when there's 2 or more

Hi,

There appears to be an issue when using two replicas.

I don't quite understand it... I am able to recreate it when using the NATS Operator.

For example, when I install STAN like this (where I've cloned this repo):

helm upgrade --install stan $HOME/k8s/helm/charts/stan \
--set stan.replicas=2 \
--set store.type=file,store.file.storageSize=1Gi,store.volume.storageClass=rook-ceph-block \
--set stan.nats.url=nats.default.svc:4222 \
--set stan.logging.debug=true \
--set stan.nats.serviceRoleAuth.enabled=true,stan.nats.serviceRoleAuth.natsClusterName=nats

I get this error:

[1] 2020/07/18 00:32:43.866045 [INF] STREAM: Starting nats-streaming-server[stan] version 0.17.0
[1] 2020/07/18 00:32:43.866139 [INF] STREAM: ServerID: J6zdHu1BZispFbuanU03re
[1] 2020/07/18 00:32:43.866142 [INF] STREAM: Go version: go1.13.7
[1] 2020/07/18 00:32:43.866145 [INF] STREAM: Git commit: [f4b7190]
[1] 2020/07/18 00:32:43.884804 [INF] STREAM: Recovering the state...
[1] 2020/07/18 00:32:43.884923 [INF] STREAM: No recovered state
[1] 2020/07/18 00:32:43.903360 [INF] STREAM: Shutting down.
[1] 2020/07/18 00:32:43.903518 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "stan"

I would assume that when using replicas (instead of clusters), the streaming.id must match, but nodes cannot share the same streaming.cluster.node_id?

If you can point me in the right direction, I might be able to help.
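For what it's worth, a values sketch using the same keys as the command above is shown below; the assumption is that with clustering enabled the chart derives a unique cluster node_id per pod (e.g. from the pod name) while all replicas share the streaming cluster ID:

# Hypothetical values.yaml for a 2-replica, file-store, clustered STAN
stan:
  replicas: 2
store:
  type: file
  file:
    storageSize: 1Gi
  cluster:
    enabled: true              # assumption: enables clustering with per-pod node IDs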

Helm: store.file.path does not seem to be working on NFS

Hi,

I have attached an NFS to our k8s cluster at path /var/mnt.

I have installed nats + stan using the latest helm charts (as of this writing) like this:

helm install nats-server nats/nats --namespace=nats 

helm install stan-server nats/stan --namespace=nats \
  --set stan.nats.url=nats://nats-server:4222 \
  --set store.type=file \
  --set store.file.path=/var/mnt/nats \
  --set store.file.storageSize=1Gi

The startup log from stan shows the NFS path being set:

[1] 2020/05/13 04:03:38.932852 [INF] STREAM: Starting nats-streaming-server[stan-server] version 0.17.0
[1] 2020/05/13 04:03:38.933013 [INF] STREAM: ServerID: xDec25eoUVPr0wxKKaj7xS
[1] 2020/05/13 04:03:38.933029 [INF] STREAM: Go version: go1.13.7
[1] 2020/05/13 04:03:38.933041 [INF] STREAM: Git commit: [f4b7190]
[1] 2020/05/13 04:03:38.943212 [INF] STREAM: Recovering the state...
[1] 2020/05/13 04:03:38.945227 [INF] STREAM: No recovered state
[1] 2020/05/13 04:03:39.196686 [INF] STREAM: Message store is FILE
[1] 2020/05/13 04:03:39.196697 [INF] STREAM: Store location: /var/mnt/nats
[1] 2020/05/13 04:03:39.196721 [INF] STREAM: ---------- Store Limits ----------
[1] 2020/05/13 04:03:39.196723 [INF] STREAM: Channels:                  100 *
[1] 2020/05/13 04:03:39.196725 [INF] STREAM: --------- Channels Limits --------
[1] 2020/05/13 04:03:39.196726 [INF] STREAM:   Subscriptions:          1000 *
[1] 2020/05/13 04:03:39.196727 [INF] STREAM:   Messages     :       1000000 *
[1] 2020/05/13 04:03:39.196728 [INF] STREAM:   Bytes        :     976.56 MB *
[1] 2020/05/13 04:03:39.196730 [INF] STREAM:   Age          :     unlimited *
[1] 2020/05/13 04:03:39.196731 [INF] STREAM:   Inactivity   :     unlimited *
[1] 2020/05/13 04:03:39.196732 [INF] STREAM: ----------------------------------
[1] 2020/05/13 04:03:39.196734 [INF] STREAM: Streaming Server is ready

However, when I look at my NFS, clients.dat, server.dat, and the subjects are being created at /var/mnt instead of /var/mnt/nats.

I even created the folder at /var/mnt/nats, but that didn't seem to help. I also tried setting securityContext to null, but that didn't work either.

Is there something I am doing wrong?

Thanks.

helm: add support for Prometheus service discovery

The default Prometheus service discovery configuration does not discover the endpoints at port 7777 exposed by prometheus-nats-exporter. One can add the endpoint manually to the Prometheus instance, but adding the Prometheus annotations that enable scraping by default to the stan and nats Helm charts would remove this step.

E.g.

"prometheus.io/scrape": "true"
"prometheus.io/path": "/metrics"
"prometheus.io/port": "7777"

Is it possible to add this feature?
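A sketch of what that could look like in the charts' pod templates, using the annotations listed above (exactly where this lands depends on the charts' templates):

spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "7777"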

Allow overriding of service name via values.yaml

Can we modify the name of the Service object so that it can be customized via values.yaml? I.e., change it from this:

 {{- default .Release.Name -}}

To something like so:

 {{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" | trimSuffix "." -}}

I have a top-level chart that pulls in nats-operator alongside other charts, and there are cases where a top-level chart has two third-party sub-charts that use {{- default .Release.Name -}} for the name of a service object (resulting in a naming collision). It'd be great to be able to override this outright.
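If the chart adopted the usual nameOverride convention, usage from a parent chart's values would be as simple as the hypothetical sketch below (key names assumed):

# values.yaml of the top-level chart (hypothetical)
nats-operator:
  nameOverride: my-nats-operator-svc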

Helm: Unable to pass credentials to Stan

It is currently not possible to pass credentials to the NATS Streaming server. The STAN configuration file syntax supports it; the Helm chart is just not aware of it.

Grafana unreachable because targetPorts do not match

The targetPort for the grafana Service doesn't match the container port name in the nats-surveyor-grafana Deployment, e.g. "web" versus "wep":

---
apiVersion: v1
kind: Service
metadata:
  name: grafana
...
  ports:
  - name: web
    # nodePort: 30300
    port: 3000
    protocol: TCP
    targetPort: web

versus

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-surveyor-grafana
...
        ports:
        - containerPort: 3000
          name: wep
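The fix implied here is simply renaming the container port so it matches the Service's targetPort, e.g.:

        ports:
        - containerPort: 3000
          name: web            # was "wep"; must match the Service targetPort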

Microk8s can't bring up nats/stan from minimal setup

I followed the docs at https://github.com/nats-io/nats.docs/blob/master/nats-on-kubernetes/minimal-setup.md and ran the nats-box. But stan is not available.

I already checked the status of the pod:

NAME     READY   STATUS    RESTARTS   AGE
nats-0   1/1     Running   0          89s
stan-0   0/1     Pending   0          5m12s

To exclude this as a cause, I downloaded the provided yaml files and updated them to use the 2.1.7-alpine image of nats and the 0.17.0-linux image of stan. But unfortunately, I still see the same effect.

NATS / K8s : One broker in a 3 broker Kubernetes cluster does not receive requests

Defect

Make sure that these boxes are checked before submitting your issue -- thank you!

Versions of nats-server and affected client libraries used:

nats-server version 2.1.0

OS/Container environment:

Kubernetes (see simple-nats-* yaml attached)

Steps or code to reproduce the issue:

Scenario 1
  • Use a kubernetes cluster (or create an ephemeral k8s "kind" cluster using the command kind create cluster --config 5-node-cluster.yaml )
  • Install a 3-broker NATS cluster into the K8s cluster: kubectl apply -f simple-nats-3.yml
  • Create 3 terminals to observe the log output of each of the pods:
    • kubectl logs -f nats-0
    • kubectl logs -f nats-1
    • kubectl logs -f nats-2
  • Create a nats-box instance within the cluster to message the cluster: kubectl run -i --rm --tty nats-box --image=synadia/nats-box --restart=Never
  • From within nats-box in the k8s cluster, attempt to publish to each of the brokers:
    • nats-pub -s nats-0.nats.default.svc:4222 topic.test "Hello"
    • nats-pub -s nats-1.nats.default.svc:4222 topic.test "Hello"
    • nats-pub -s nats-2.nats.default.svc:4222 topic.test "Hello"
Scenario 2

If a 2-broker config is set up instead of a 3-broker config, both nats-0 and nats-1 work (see attachment for the 2-broker yaml config).

Expected result:

  • nats-pub can communicate with all 3 brokers

Actual result:

  • brokers nats-0 and nats-1 can be reached, but not nats-2
  • Lots of communication errors on brokers nats-0 and nats-2
    nats-communication

config.zip

Notes

Slack discussion with @wallyqs captured at https://natsio.slack.com/archives/C9V1869TR/p1591803566033600?thread_ts=1590350719.031900&cid=C9V1869TR

Failed to start: mkdir /data/stan/log: permission denied

Using a very simple helm deployment, Stan failed to start with message:
Failed to start: mkdir /data/stan/log: permission denied

It might be related to the permissions dropped by the securityContext.

store:
  type: file
  file:
    storageSize: 10Gi
  cluster:
    enabled: true

Helm: /data/stan/log: permission denied for cluster

Hi,

I tried to install stan like this:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install stan nats/stan \
--set stan.nats.url=nats://nats-client.default.svc.cluster.local:4222 \
--set store.cluster.enabled=true

It starts, and provisions a persistent volume, but the logs indicate a permission denied error:

[1] 2020/05/03 17:50:46.354899 [INF] STREAM: Starting nats-streaming-server[stan] version 0.17.0
[1] 2020/05/03 17:50:46.354992 [INF] STREAM: ServerID: e4CikyxM5FvXzfJ1EiIQdU
[1] 2020/05/03 17:50:46.354995 [INF] STREAM: Go version: go1.13.7
[1] 2020/05/03 17:50:46.354998 [INF] STREAM: Git commit: [f4b7190]
[1] 2020/05/03 17:50:46.398915 [INF] STREAM: Recovering the state...
[1] 2020/05/03 17:50:46.401310 [INF] STREAM: Recovered 1 channel(s)
[1] 2020/05/03 17:50:46.401401 [INF] STREAM: Cluster Node ID : stan-0
[1] 2020/05/03 17:50:46.401445 [INF] STREAM: Cluster Log Path: /data/stan/log
[1] 2020/05/03 17:50:46.401570 [INF] STREAM: Shutting down.
[1] 2020/05/03 17:50:46.401961 [FTL] STREAM: Failed to start: mkdir /data/stan/log: permission denied

As a workaround, I am opting to store logs as a sub-folder in the same spot as store.file.path (which resolves to /data/stan/store) like this:

helm install stan nats/stan \
--set store.type=file --set store.file.storageSize=1Gi \
--set stan.nats.url=nats://nats-client.default.svc.cluster.local:4222 \
--set store.cluster.enabled=true --set store.cluster.logPath=/data/stan/store/log

I'm not sure if there's a better solution, but wanted to share here to discuss. Thanks!

NATS supercluster in kubernetes

I am using this helm chart to deploy NATS cluster in AWS.
https://github.com/nats-io/k8s/tree/master/helm/charts/nats

I have created 2 clusters, one in the AWS Singapore region and another in the AWS Sydney region.

Each of the clusters has 3 nodes and uses a StatefulSet so that cluster routes can be hardcoded.

I need to set up communication between these 2 clusters using gateways.
What are the suggestions for setting up gateway URLs? We can't have static IPs/DNS per pod because our K8s setup is very elastic, with auto-scaling enabled, so nodes can come up and go down.

I have set up a Kubernetes Service of type LoadBalancer (AWS) targeting the gateway port and am using this load balancer's DNS as the gateway URL. This seems to work, but I see the connection getting dropped repeatedly with this message in the NATS pod logs:
'Gateway connection closed'.

What are the best practices for forming superclusters in elastic Kubernetes environments?
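As a hedged illustration only (the exact keys depend on the chart version and should be checked against its values.yaml), the gateway section of each region's deployment would point at the other region's load balancer DNS:

# Hypothetical values sketch for the Singapore cluster
gateway:
  enabled: true
  name: singapore
  gateways:
    - name: sydney
      url: nats://<sydney-gateway-lb-dns>:7522   # assumption: 7522 as the gateway port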

[Helm][STAN] Latest version of STAN does not seem to be available

Hi,

The latest (0.5.0) version of STAN's Helm chart does not seem to be available. Trying to pull it gives the following error:

$ helm repo add nats https://nats-io.github.io/k8s/helm/charts/
"nats" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nats" chart repository
$ helm pull nats/stan
Error: failed to fetch https://github.com/nats-io/k8s/releases/download/v0.5.0/stan-0.5.0.tgz : 404 Not Found

I do not know exactly how releases work here, but it should be easily fixable.

Thanks for reading

Chart has compilation issues: Helm lint fails

Thanks for creating this chart!

Getting the below error when I do helm lint:
==> Linting stan
[ERROR] templates/: parse error in "stan/templates/configmap.yaml": template: stan/templates/configmap.yaml:27: function "mustToJson" not defined

nats-0 CrashLoopBackOff, nats-surveyor CrashLoopBackOff, stan-1 Pending

Firstly, thanks for the one-liner install :]

env: osx, docker-desktop/kubernetes latest, reset

test: % curl -sSL https://nats-io.github.io/k8s/setup.sh | sh

pods:

jwtodd> kubectl get pods
NAME                                     READY   STATUS                  RESTARTS   AGE
nats-0                                   0/3     Init:CrashLoopBackOff   5          5m38s
nats-box                                 1/1     Running                 0          5m39s
nats-setup                               0/1     Completed               0          6m9s
nats-surveyor-68fc4bbcc8-8bttt           0/1     CrashLoopBackOff        5          5m33s
nats-surveyor-grafana-55fb5584b6-vjf4f   1/1     Running                 0          5m34s
prometheus-nats-prometheus-0             3/3     Running                 1          5m25s
prometheus-nats-surveyor-0               3/3     Running                 1          5m25s
prometheus-operator-6d899fc8dd-47d78     1/1     Running                 0          5m37s
stan-0                                   2/2     Running                 1          5m38s
stan-1                                   0/2     Pending                 0          5m33s

logs:

jwtodd> kubectl logs nats-0
Error from server (BadRequest): a container name must be specified for pod nats-0, choose one of: [nats reloader metrics] or one of the init containers: [bootconfig]
jwtodd> kubectl logs nats-surveyor-68fc4bbcc8-8bttt
+ cp /etc/nats/certs/ca.crt /usr/local/share/ca-certificates
+ ls -al /usr/local/share/ca-certificates
total 12
drwxr-xr-x    1 root     root          4096 Nov 22 23:19 .
drwxr-xr-x    1 root     root          4096 Jul 18 16:53 ..
-rw-r--r--    1 root     root          1115 Nov 22 23:19 ca.crt
+ update-ca-certificates
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
+ /nats-surveyor '-s=tls://nats:4222' '-c=3' '-creds=/var/run/nats/creds/sys/sys.creds' '-observe=/etc/nats/observations/'
2019/11/22 23:19:26 couldn't start surveyor: nats: no servers available for connection
jwtodd> kubectl logs stan-1
Error from server (BadRequest): a container name must be specified for pod stan-1, choose one of: [stan metrics]

Ask: is there a means to clean up the install? To tear it down, I opted for a docker-desktop/k8s reset.

Is WebSocket configurable when running NATS on K8S?

This is the best thing I found so far. :)
But is WebSocket configurable already with the current scripts?
I am not able to access wss://ip:port. Is there anything else I have to do apart from curl -sSL https://nats-io.github.io/k8s/setup.sh | sh?

I am able to run WebSocket from this example when adding this config from https://github.com/aricart/natsws-sandbox/blob/master/nats.conf:

port: 4222
websocket: {
  port: 9222
  tls: {
    cert_file: "./certs/cert.pem"
    key_file: "./certs/key.pem"
  }
}

helm: support custom client certs in nats-box

Make it possible to mount the certs for a sample client to test mutual TLS. Something like:

natsbox:
  enabled: true
  tls:
    insecure: false
    verify: true
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

nats-box: wrong usage returns exit 0 with kubectl exec

For example:

$ kubectl run -i --rm --tty nats-box --image=synadia/nats-box:0.3.0 --restart=Never
nats-box:~# nats-pub -s asdf foo bar
nats: no servers available for connection 
$ echo $?
1

But after deploying with Helm and entering the container, it does not exit with 1:

$ helm install my-nats nats/nats
$ kubectl exec -it my-nats-box -n default -- /bin/sh
$ nats-pub -s asdf hello world
$ echo $?
0

Retry after error does not work

So after the installation fails with Could not finish setting up NATS due to errors in the cluster and I retry curl -sSL https://nats-io.github.io/k8s/setup.sh | sh, it fails with Error from server (AlreadyExists): pods "nats-setup" already exists. Should the pod not be removed or redeployed if it fails?

No ServiceMonitors with Helm charts

Perhaps I'm missing something here regarding Prometheus monitoring, but it seems you need to create the ServiceMonitors yourself for the NATS / STAN metrics to be picked up, and there's no mention of it in the README.

It would be nice to have either one (ServiceMonitors in the charts, or a mention in the README) to help people on their way.
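For reference, a minimal ServiceMonitor along those lines might look like the sketch below; the label selector and port name are assumptions and must match the Service created by your release:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nats
spec:
  selector:
    matchLabels:
      app: nats                # assumption: label on the metrics Service
  endpoints:
    - port: metrics            # assumption: name of the 7777 exporter port on the Service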

Thanks for these charts! 🙌

"nats-0" not found when using --without-tls

On GKE (1.13.11-gke.14), this works as expected:

curl -sSL https://nats-io.github.io/k8s/setup.sh | sh

On an identical GKE (1.13.11-gke.14) this fails with pods "nats-0" not found after 10 attempts when using --without-tls:

curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls

The nats StatefulSet reports:

create Pod nats-0 in StatefulSet nats failed error: pods "nats-0" is forbidden:
error looking up service account default/nats-server: serviceaccount "nats-server"
not found
kubectl get pods
NAME                                     READY   STATUS             RESTARTS   AGE
nats-box                                 1/1     Running            0          68s
nats-setup                               1/1     Running            0          73s
nats-surveyor-5cfcb988b8-s46fb           0/1     CrashLoopBackOff   3          64s
nats-surveyor-grafana-7d79564b7c-p4kbs   1/1     Running            0          65s
prometheus-nats-prometheus-0             3/3     Running            1          54s
prometheus-nats-surveyor-0               3/3     Running            1          54s
prometheus-operator-55599c56b5-59c9s     1/1     Running            0          66s
stan-0                                   2/2     Running            0          67s
stan-1                                   2/2     Running            0          48s
stan-2                                   2/2     Running            0          29s

Log:

curl -sSL https://nats-io.github.io/k8s/setup.sh | sh -s -- --without-tls
serviceaccount/nats-setup created
clusterrolebinding.rbac.authorization.k8s.io/nats-setup-binding created
clusterrole.rbac.authorization.k8s.io/nats-setup created
pod/nats-setup created
pod/nats-setup condition met

##############################################
#                                            #
#  _   _    _  _____ ____   _  _____ ____    #
# | \ | |  / \|_   _/ ___| | |/ ( _ ) ___|   #
# |  \| | / _ \ | | \___ \ | ' // _ \___ \   #
# | |\  |/ ___ \| |  ___) || . \ (_) |__) |  #
# |_| \_/_/   \_\_| |____(_)_|\_\___/____/   #
#                                            #
#                    nats-setup (v0.1.6)     #
##############################################

 +---------------------+---------------------+
 |                 OPTIONS                   |
 +---------------------+---------------------+
         nats server   | true
         nats surveyor | true
         nats tls      | false
        enable auth    | true
  install cert_manager | false
      nats streaming   | true
 +-------------------------------------------+
 |                                           |
 | Starting setup...                         |
 |                                           |
 +-------------------------------------------+


[ OK ] generated and stored operator key "OAGZKSUZA3LQ6XVX3ZURBKDOXIUXF2VYGEDA5FYEI645GJMMJ2RHN2II"
[ OK ] added operator "KO"
[ OK ] generated and stored account key "ABXDSNI4AWZMDBNE4JKS5MHWBLALGSPGVRINYDVSVTBOKPTWBDDHBUTC"
[ OK ] added account "SYS"
[ OK ] generated and stored user key "UBMGTZZHLQA66SRYSKJHCMT3SWGJQ5IPNWV7ZU2WTKTUY7D2PW5MODMM"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/SYS/sys.creds"
[ OK ] added user "sys" to account "SYS"
[ OK ] generated and stored account key "ADYMVM3TEE6PDJTQEDYEQIBVVJE5LK7LJO4CBOZWDWTJQZRZVDELEZAF"
[ OK ] added account "A"
[ OK ] generated and stored user key "UCOTNH5EIHSJDDJEKQ3VWL4UAO6RWHLC7NNKKA2TIYEF2TJ2NVTM6UIB"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/A/test.creds"
[ OK ] added user "test" to account "A"
[ OK ] added public service export "test"
[ OK ] generated and stored account key "ACQJGNWNDVE6INIYX5J6QFRBMM2XPPG6PCOARJ5ZOPI723M5WZVQ5Y7N"
[ OK ] added account "B"
[ OK ] generated and stored user key "UANCKJBJDNA5ZY5C3OHWLDMSIOIOXOMEGIAVDDA5RUDMS5Q5OT6ECC7C"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/B/test.creds"
[ OK ] added user "test" to account "B"
[ OK ] added service import "test"
[ OK ] generated and stored account key "ACLWTVMZGU4HIXMFTWK6A3NU2IKXYCYAK36A6TVMEEUS6MXXOBM5SSHY"
[ OK ] added account "STAN"
[ OK ] generated and stored user key "UAFPDILQIOQPRSJKKCZ22SK2OXHXJA24PKCYHIJH3J2BVP5UVVFAQIWC"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/STAN/stan.creds"
[ OK ] added user "stan" to account "STAN"
secret/nats-sys-creds created
secret/nats-test-creds created
secret/nats-test2-creds created
secret/stan-creds created
configmap/nats-accounts created
configmap/nats-config created
service/nats created
statefulset.apps/nats created
pod/nats-box created
configmap/stan-config created
service/stan created
statefulset.apps/stan created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
serviceaccount/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/nats-prometheus created
Retrying in 3 secondsunable to recognize "https://raw.githubusercontent.com/nats-io/k8s/93c2a213bd26791fda29da2b7238e3f3b1ca36e1/tools/nats-prometheus.yml": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1"
unable to recognize "https://raw.githubusercontent.com/nats-io/k8s/93c2a213bd26791fda29da2b7238e3f3b1ca36e1/tools/nats-prometheus.yml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
...
service/nats-prometheus unchanged
Retrying in 3 seconds (2 attempts so far)unable to recognize "https://raw.githubusercontent.com/nats-io/k8s/93c2a213bd26791fda29da2b7238e3f3b1ca36e1/tools/nats-prometheus.yml": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1"
unable to recognize "https://raw.githubusercontent.com/nats-io/k8s/93c2a213bd26791fda29da2b7238e3f3b1ca36e1/tools/nats-prometheus.yml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
...
service/nats-prometheus unchanged
prometheus.monitoring.coreos.com/nats-prometheus created
servicemonitor.monitoring.coreos.com/nats created
service/grafana created
deployment.apps/nats-surveyor-grafana created
service/nats-surveyor-prometheus created
configmap/nats-surveyor-observations created
deployment.apps/nats-surveyor created
service/nats-surveyor created
prometheus.monitoring.coreos.com/nats-surveyor created
servicemonitor.monitoring.coreos.com/nats-surveyor created
Retrying in 3 secondsError from server (NotFound): pods "nats-0" not found
...
Retrying in 3 seconds (2 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Retrying in 3 seconds (3 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Retrying in 3 seconds (4 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Retrying in 3 seconds (5 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Error from server (NotFound): pods "nats-0" not found
Retrying in 3 seconds (6 attempts so far)...
Retrying in 3 seconds (7 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Retrying in 3 seconds (8 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Error from server (NotFound): pods "nats-0" not found
Retrying in 3 seconds (9 attempts so far)...
Retrying in 3 seconds (10 attempts so far)Error from server (NotFound): pods "nats-0" not found
...
Could not finish setting up NATS due to errors in the cluster
command terminated with exit code 1

Can't install via quickstart

C:\Users\shaba>curl -sSL https://nats-io.github.io/k8s/setup.sh | sh
serviceaccount/nats-setup unchanged
clusterrolebinding.rbac.authorization.k8s.io/nats-setup-binding unchanged
clusterrole.rbac.authorization.k8s.io/nats-setup unchanged
Error from server (Forbidden): pods "nats-setup" is forbidden: error looking up service account registry/nats-setup: serviceaccount "nats-setup" not found
C:\Users\shaba>kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

After finding the service account in the default namespace instead of my current context's namespace, I set the current context to the default namespace:

C:\Users\shaba>curl -sSL https://nats-io.github.io/k8s/setup.sh | sh
serviceaccount/nats-setup unchanged
clusterrolebinding.rbac.authorization.k8s.io/nats-setup-binding unchanged
clusterrole.rbac.authorization.k8s.io/nats-setup unchanged
pod/nats-setup created
pod/nats-setup condition met
##############################################
#                                            #
#  _   _    _  _____ ____   _  _____ ____    #
# | \ | |  / \|_   _/ ___| | |/ ( _ ) ___|   #
# |  \| | / _ \ | | \___ \ | ' // _ \___ \   #
# | |\  |/ ___ \| |  ___) || . \ (_) |__) |  #
# |_| \_/_/   \_\_| |____(_)_|\_\___/____/   #
#                                            #
#                    nats-setup (v0.1.6)     #
##############################################

 +---------------------+---------------------+
 |                 OPTIONS                   |
 +---------------------+---------------------+
         nats server   | true
         nats surveyor | true
         nats tls      | true
        enable auth    | true
  install cert_manager | true
      nats streaming   | true
 +-------------------------------------------+
 |                                           |
 | Starting setup...                         |
 |                                           |
 +-------------------------------------------+

[ OK ] generated and stored operator key "OD3F4GS2W7QU5FYH2BRNWB6KEZWFNRXPIT2OAKT32J4JOFRUAQ75E5E3"
[ OK ] added operator "KO"
[ OK ] generated and stored account key "AAY2NHFCJWFZIWRDHMCXC2TTOYSANUP3KFSOGRGPOFFEZ2QH4JYOR3F7"
[ OK ] added account "SYS"
[ OK ] generated and stored user key "UCPPAMVDMVYSMT2XGSIAJH7ZRNF4O6HX6QBLWIMWCC3HJHTTDM65WNZX"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/SYS/sys.creds"
[ OK ] added user "sys" to account "SYS"
[ OK ] generated and stored account key "ABJTXIXBUD2BYNCGMZCGA3JLMS52G6PRDWNUMWIABAIVITDOC3HTOXTZ"
[ OK ] added account "A"
[ OK ] generated and stored user key "UAMESDT2UVI55MHW2EN54ZK6X3TBKD7ENBY7JARVKHDFMZYAFZMMWSOC"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/A/test.creds"
[ OK ] added user "test" to account "A"
[ OK ] added public service export "test"
[ OK ] generated and stored account key "AD5Z7JHDHWITERN3GM4T4Y3CL7J2U4EXJC752ASCLP6BL7KTYHKYMAMS"
[ OK ] added account "B"
[ OK ] generated and stored user key "UBLYHPWT6OIFN4H5NJ7TXP643G6NT6VSQSFD4QTWCQLAI7GOZELCQI5P"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/B/test.creds"
[ OK ] added user "test" to account "B"
[ OK ] added service import "test"
[ OK ] generated and stored account key "AD24ZVRHQ4AFKP36NOANJMS4OV2LP7R4JVH2LEVTKRAHODWLZ5OEMYGB"
[ OK ] added account "STAN"
[ OK ] generated and stored user key "UARAQCJ6ZH4WZJTYRDUTXTQNZNBUDU4WKZODQN7SMZCUDQMFLQ2QH44X"
[ OK ] generated user creds file "/nsc/nkeys/creds/KO/STAN/stan.creds"
[ OK ] added user "stan" to account "STAN"
secret/nats-sys-creds created
secret/nats-test-creds created
secret/nats-test2-creds created
secret/stan-creds created
configmap/nats-accounts created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io configured
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/cert-manager configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/cert-manager-cainjector configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/cert-manager configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/cert-manager-webhook configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:auth-delegator configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:webhook-authentication-reader configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:webhook-requester configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
role.rbac.authorization.k8s.io/cert-manager:leaderelection configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim configured
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-leaderelection created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-view configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/cert-manager-edit configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/cert-manager configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/cert-manager-webhook configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
apiservice.apiregistration.k8s.io/v1beta1.webhook.cert-manager.io created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook configured
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"cainjector\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cainjector\",\"helm.sh/chart\":\"cainjector-v0.11.0\"},\"name\":\"cert-manager-cainjector\",\"namespace\":\"cert-manager\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app\":\"cainjector\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cainjector\"}},\"template\":{\"metadata\":{\"annotations\":null,\"labels\":{\"app\":\"cainjector\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cainjector\",\"helm.sh/chart\":\"cainjector-v0.11.0\"}},\"spec\":{\"containers\":[{\"args\":[\"--v=2\",\"--leader-election-namespace=kube-system\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"quay.io/jetstack/cert-manager-cainjector:v0.11.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"cainjector\",\"resources\":{}}],\"serviceAccountName\":\"cert-manager-cainjector\"}}}}\n"},"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cainjector-v0.11.0"}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/managed-by":"Tiller"}},"template":{"metadata":{"annotations":null,"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cainjector-v0.11.0"}},"spec":{"$setElementOrder/containers":[{"name":"cainjector"}],"containers":[{"args":["--v=2","--leader-election-namespace=kube-system"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"quay.io/jetstack/cert-manager-cainjector:v0.11.0","imagePullPolicy":"IfNotPresent","name":"cainjector","resources":{}}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "cert-manager-cainjector", Namespace: "cert-manager"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1"] "creationTimestamp":"2019-12-21T05:50:11Z" "generation":'\x01' "labels":map["app":"cainjector" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cainjector" "helm.sh/chart":"cert-manager-v0.12.0"] "name":"cert-manager-cainjector" "namespace":"cert-manager" "resourceVersion":"4691181" "selfLink":"/apis/apps/v1/namespaces/cert-manager/deployments/cert-manager-cainjector" "uid":"755f626f-7006-41ef-9456-a595b437e729"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x01' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["app":"cainjector" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cainjector"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["app":"cainjector" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cainjector" "helm.sh/chart":"cert-manager-v0.12.0"]] "spec":map["containers":[map["args":["--v=2" "--leader-election-namespace=kube-system"] "env":[map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["apiVersion":"v1" "fieldPath":"metadata.namespace"]]]] "image":"quay.io/jetstack/cert-manager-cainjector:v0.12.0" "imagePullPolicy":"IfNotPresent" "name":"cert-manager" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"cert-manager-cainjector" "serviceAccountName":"cert-manager-cainjector" "terminationGracePeriodSeconds":'\x1e']]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-12-21T05:50:11Z" "lastUpdateTime":"2019-12-21T05:50:16Z" "message":"ReplicaSet \"cert-manager-cainjector-85fbdf788\" has successfully progressed." "reason":"NewReplicaSetAvailable" "status":"True" "type":"Progressing"] map["lastTransitionTime":"2019-12-21T07:41:58Z" "lastUpdateTime":"2019-12-21T07:41:58Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]}
for: "https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml": Deployment.apps "cert-manager-cainjector" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"cainjector", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Tiller", "app.kubernetes.io/name":"cainjector"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"cert-manager\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cert-manager\",\"helm.sh/chart\":\"cert-manager-v0.11.0\"},\"name\":\"cert-manager\",\"namespace\":\"cert-manager\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app\":\"cert-manager\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cert-manager\"}},\"template\":{\"metadata\":{\"annotations\":{\"prometheus.io/path\":\"/metrics\",\"prometheus.io/port\":\"9402\",\"prometheus.io/scrape\":\"true\"},\"labels\":{\"app\":\"cert-manager\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"cert-manager\",\"helm.sh/chart\":\"cert-manager-v0.11.0\"}},\"spec\":{\"containers\":[{\"args\":[\"--v=2\",\"--cluster-resource-namespace=$(POD_NAMESPACE)\",\"--leader-election-namespace=kube-system\",\"--webhook-namespace=$(POD_NAMESPACE)\",\"--webhook-ca-secret=cert-manager-webhook-ca\",\"--webhook-serving-secret=cert-manager-webhook-tls\",\"--webhook-dns-names=cert-manager-webhook,cert-manager-webhook.cert-manager,cert-manager-webhook.cert-manager.svc\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"quay.io/jetstack/cert-manager-controller:v0.11.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"cert-manager\",\"ports\":[{\"containerPort\":9402}],\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"32Mi\"}}}],\"serviceAccountName\":\"cert-manager\"}}}}\n"},"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cert-manager-v0.11.0"}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/managed-by":"Tiller"}},"template":{"metadata":{"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cert-manager-v0.11.0"}},"spec":{"$setElementOrder/containers":[{"name":"cert-manager"}],"containers":[{"image":"quay.io/jetstack/cert-manager-controller:v0.11.0","name":"cert-manager","resources":{"requests":{"cpu":"10m","memory":"32Mi"}}}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "cert-manager", Namespace: "cert-manager"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1"] "creationTimestamp":"2019-12-21T05:50:11Z" "generation":'\x01' "labels":map["app":"cert-manager" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cert-manager" "helm.sh/chart":"cert-manager-v0.12.0"] "name":"cert-manager" "namespace":"cert-manager" "resourceVersion":"4691187" "selfLink":"/apis/apps/v1/namespaces/cert-manager/deployments/cert-manager" "uid":"e7167ce2-f8d4-410f-b019-cefbec0cc466"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x01' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["app":"cert-manager" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cert-manager"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["annotations":map["prometheus.io/path":"/metrics" "prometheus.io/port":"9402" "prometheus.io/scrape":"true"] "creationTimestamp":<nil> "labels":map["app":"cert-manager" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"cert-manager" "helm.sh/chart":"cert-manager-v0.12.0"]] "spec":map["containers":[map["args":["--v=2" "--cluster-resource-namespace=$(POD_NAMESPACE)" "--leader-election-namespace=kube-system" "--webhook-namespace=$(POD_NAMESPACE)" "--webhook-ca-secret=cert-manager-webhook-ca" "--webhook-serving-secret=cert-manager-webhook-tls" "--webhook-dns-names=cert-manager-webhook,cert-manager-webhook.cert-manager,cert-manager-webhook.cert-manager.svc"] "env":[map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["apiVersion":"v1" "fieldPath":"metadata.namespace"]]]] "image":"quay.io/jetstack/cert-manager-controller:v0.12.0" "imagePullPolicy":"IfNotPresent" "name":"cert-manager" "ports":[map["containerPort":'\u24ba' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"cert-manager" "serviceAccountName":"cert-manager" "terminationGracePeriodSeconds":'\x1e']]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-12-21T05:50:11Z" "lastUpdateTime":"2019-12-21T05:50:16Z" "message":"ReplicaSet \"cert-manager-754d9b75d9\" has successfully progressed." "reason":"NewReplicaSetAvailable" "status":"True" "type":"Progressing"] map["lastTransitionTime":"2019-12-21T07:41:58Z" "lastUpdateTime":"2019-12-21T07:41:58Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]}
for: "https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml": Deployment.apps "cert-manager" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"cert-manager", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Tiller", "app.kubernetes.io/name":"cert-manager"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"webhook\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"webhook\",\"helm.sh/chart\":\"cert-manager-v0.11.0\"},\"name\":\"cert-manager-webhook\",\"namespace\":\"cert-manager\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app\":\"webhook\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"webhook\"}},\"template\":{\"metadata\":{\"annotations\":null,\"labels\":{\"app\":\"webhook\",\"app.kubernetes.io/instance\":\"cert-manager\",\"app.kubernetes.io/managed-by\":\"Tiller\",\"app.kubernetes.io/name\":\"webhook\",\"helm.sh/chart\":\"cert-manager-v0.11.0\"}},\"spec\":{\"containers\":[{\"args\":[\"--v=2\",\"--secure-port=6443\",\"--tls-cert-file=/certs/tls.crt\",\"--tls-private-key-file=/certs/tls.key\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"quay.io/jetstack/cert-manager-webhook:v0.11.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"cert-manager\",\"resources\":{},\"volumeMounts\":[{\"mountPath\":\"/certs\",\"name\":\"certs\"}]}],\"serviceAccountName\":\"cert-manager-webhook\",\"volumes\":[{\"name\":\"certs\",\"secret\":{\"secretName\":\"cert-manager-webhook-tls\"}}]}}}}\n"},"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cert-manager-v0.11.0"}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/managed-by":"Tiller"}},"template":{"metadata":{"annotations":null,"labels":{"app.kubernetes.io/managed-by":"Tiller","helm.sh/chart":"cert-manager-v0.11.0"}},"spec":{"$setElementOrder/containers":[{"name":"cert-manager"}],"containers":[{"args":["--v=2","--secure-port=6443","--tls-cert-file=/certs/tls.crt","--tls-private-key-file=/certs/tls.key"],"image":"quay.io/jetstack/cert-manager-webhook:v0.11.0","name":"cert-manager"}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "cert-manager-webhook", Namespace: "cert-manager"
Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1"] "creationTimestamp":"2019-12-21T05:50:11Z" "generation":'\x01' "labels":map["app":"webhook" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"webhook" "helm.sh/chart":"cert-manager-v0.12.0"] "name":"cert-manager-webhook" "namespace":"cert-manager" "resourceVersion":"4691199" "selfLink":"/apis/apps/v1/namespaces/cert-manager/deployments/cert-manager-webhook" "uid":"ccd586ea-d4af-4b27-b179-fe2f1aceab77"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x01' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["app":"webhook" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"webhook"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["app":"webhook" "app.kubernetes.io/instance":"cert-manager" "app.kubernetes.io/managed-by":"Helm" "app.kubernetes.io/name":"webhook" "helm.sh/chart":"cert-manager-v0.12.0"]] "spec":map["containers":[map["args":["--v=2" "--secure-port=10250" "--tls-cert-file=/certs/tls.crt" "--tls-private-key-file=/certs/tls.key"] "env":[map["name":"POD_NAMESPACE" "valueFrom":map["fieldRef":map["apiVersion":"v1" "fieldPath":"metadata.namespace"]]]] "image":"quay.io/jetstack/cert-manager-webhook:v0.12.0" "imagePullPolicy":"IfNotPresent" "livenessProbe":map["failureThreshold":'\x03' "httpGet":map["path":"/livez" "port":'\u17c0' "scheme":"HTTP"] "periodSeconds":'\n' "successThreshold":'\x01' "timeoutSeconds":'\x01'] "name":"cert-manager" "readinessProbe":map["failureThreshold":'\x03' "httpGet":map["path":"/healthz" "port":'\u17c0' "scheme":"HTTP"] "periodSeconds":'\n' "successThreshold":'\x01' "timeoutSeconds":'\x01'] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/certs" "name":"certs"]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "serviceAccount":"cert-manager-webhook" "serviceAccountName":"cert-manager-webhook" "terminationGracePeriodSeconds":'\x1e' "volumes":[map["name":"certs" "secret":map["defaultMode":'\u01a4' "secretName":"cert-manager-webhook-tls"]]]]]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-12-21T05:50:11Z" "lastUpdateTime":"2019-12-21T05:50:38Z" "message":"ReplicaSet \"cert-manager-webhook-76f9b64b45\" has successfully progressed." "reason":"NewReplicaSetAvailable" "status":"True" "type":"Progressing"] map["lastTransitionTime":"2019-12-21T07:42:00Z" "lastUpdateTime":"2019-12-21T07:42:00Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]}
for: "https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml": Deployment.apps "cert-manager-webhook" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"webhook", "app.kubernetes.io/instance":"cert-manager", "app.kubernetes.io/managed-by":"Tiller", "app.kubernetes.io/name":"webhook"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
command terminated with exit code 1
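
Both failures share the same root cause: the setup script applies the cert-manager v0.11.0 manifest on top of an existing v0.12.0 installation, and a Deployment's `spec.selector` cannot be patched in place, so `kubectl apply` rejects it with `field is immutable`. A minimal cleanup sketch, assuming the existing cert-manager install can safely be removed and recreated (the Deployment names and namespace are taken from the error output above; back up any Issuer/Certificate resources first):

```sh
# Delete the conflicting Deployments so their selectors can be recreated
# with the labels expected by the v0.11.0 manifest.
kubectl delete deployment cert-manager cert-manager-webhook -n cert-manager

# Re-apply the manifest the setup script uses (or simply re-run ./setup.sh).
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml
```

Alternatively, removing the whole v0.12.0 release first (e.g. `kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml`) and then re-running `./setup.sh` avoids patching mismatched selectors altogether.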
