
kubernetes-operator's People

Contributors

nmanoogian, ryan-blunden, sgudbrandsson, watsonian


kubernetes-operator's Issues

Random failure publishing new secrets on changes

We successfully deployed the Doppler Operator to our environments a little over two months ago.

We're noticing, however, that while everything works as it should post-deployment (we can change a secret in our config, run kubectl describe secrets --selector=secrets.doppler.com/subtype=dopplerSecret, and see the changes almost immediately), after a while (days) the operator no longer picks up new secrets.

In order to fix this, we have to delete the CRD for that environment and recreate it.

We dug through the logs, but didn't notice anything out of the ordinary.

Happy to provide more info/details/logs that would help in diagnosing this. We are on GKE, k8s 1.23.

Strange double-deployment when doing helm deploy

Hi,

We've been facing some very strange issues in production lately in GKE Autopilot.
When we deploy a new revision of our application, the helm deploy action runs, and then, for some reason, Doppler triggers another deployment very soon after the helm deployment starts.
We are using HPA to automatically scale to N pods and expect helm to just update the deployment, leaving the pod count as is.

When Doppler triggers a deployment immediately after the helm deployment starts, the workload scales down to 1 pod.

You can see the effect in the screenshots attached to the issue.

We deploy multiple times per day, so this is causing a major headache in production.

We're using Doppler 1.2.5 currently, and I don't know if upgrading will do anything.
Have you seen this behavior?
What do you recommend?

Request for change of behavior introduced in 1.2.0 which breaks prior use cases

In c55ad5e, a change was introduced that only allows DopplerSecret objects in the operator namespace. This breaks self-service in multi-tenant clusters.

Instead, I would like to propose a change that takes into account existing RBAC in the cluster while providing the original flexibility.

  1. Add a ValidatingWebhookConfiguration that registers a webhook for all DopplerSecret objects.
  2. Reject any DopplerSecret objects that attempt to overwrite existing managedSecrets.
  3. Reject any DopplerSecret objects that violate the user's RBAC. Combining SelfSubjectAccessReview and impersonation with the userInfo data would allow the operator to determine whether the action is authorized at the time the DopplerSecret is created.
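A rough sketch of the webhook registration from step 1 (the names, service, and path here are hypothetical, not part of the current operator):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: dopplersecret-validation          # hypothetical name
webhooks:
  - name: validate.dopplersecrets.secrets.doppler.com
    rules:
      - apiGroups: ["secrets.doppler.com"]
        apiVersions: ["v1alpha1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["dopplersecrets"]
    clientConfig:
      service:
        namespace: doppler-operator-system
        name: doppler-operator-webhook    # hypothetical service
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

Steps 2 and 3 would then live in the webhook handler, which can read the AdmissionReview's userInfo to drive the SelfSubjectAccessReview/impersonation check.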

For those looking for a solution to maintain the previous behavior

You should be able to pin the previous chart version, 1.1.1, with helm. That is our temporary solution.

Is there an option to automatically create a new config if it's not found?

Issue Description:
I'm using Doppler for production, staging, and local development. To streamline the onboarding process for new developers, it would be beneficial if the Kubernetes operator could automatically generate new configurations if they are not already present. Currently, this requires an additional step, either using the CLI or the interface.

Suggested Approach:
I'm currently using the following doppler.yaml configuration:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: "{{ .Values.name }}-api-doppler"
  namespace: default
spec:
  tokenSecret:
    name: "doppler-api-token-secret"
  project: api
  config:
    name: "{{ .Values.dopplerEnv }}"
    auto: true # Automatically creates the config if not found.
  managedSecret:
    name: "{{ .Values.name }}-api-secrets"
    namespace: default

Question:
Is there an alternative approach using Kubernetes YAML files to achieve this desired behavior, without relying on the CLI or interface?

[Kubernetes] imagePullSecrets: unable to deploy

Thanks again for a nice solution to our problems ;)

When deploying custom images to Kubernetes from a private image repository, an access configuration is required.
Link to Docs

This secret is defined by a key called ".dockerconfigjson". You can probably see where I'm going with this:

There is currently no way to deploy it with the kubernetes operator. https://docs.doppler.com/docs/kubernetes-operator

We would really appreciate a solution for this, as the only secrets "not dopplered" are our image pull secrets =)
The best solution is probably a Name Transformer.
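If later operator versions support non-Opaque secret types and key renaming (both are assumptions here, based on features mentioned elsewhere in these issues), the problem could be addressed with something like:

```yaml
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: registry-credentials
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: regcred
    namespace: default
    # Assumes support for Kubernetes secret types
    # (mentioned for operator 1.4.0 elsewhere in these issues).
    type: kubernetes.io/dockerconfigjson
  # Assumes a `processors` feature that can rename a Doppler secret
  # to the required `.dockerconfigjson` key.
  processors:
    DOCKER_CONFIG_JSON:
      type: plain
      asName: .dockerconfigjson
```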

Thank you!

The operator should allow arbitrary string->string mappings for secrets

Problem: if the desired secret key cannot be produced by any existing nameTransformer, then DopplerSecret cannot be used to sync the secret to Kubernetes. Example: the secret key "tls.cert" cannot be produced from a Doppler secret name by any nameTransformer.

It should be possible to provide a string-string mapping of <doppler_upper_snake_case_name> to <arbitrary_string> so that any secret can be populated.

Example of what I'd suggest:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  # doppler-side secret names cannot contain ":" so it could be used to segment the list entries:
  secrets:
    # directly map "VAR1" from doppler to key "something.totally.different" in the kubernetes secret object
    - 'VAR1:something.totally.different'
  managedSecret:
    name: doppler-test-secret
    namespace: default
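For what it's worth, later operator releases document a processors feature that appears to cover this use case; a sketch, assuming that feature is available (the secret names are illustrative):

```yaml
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: doppler-test-secret
    namespace: default
  # `processors` maps a Doppler secret name to an arbitrary key
  # in the managed secret via `asName`.
  processors:
    TLS_CERT:
      type: plain
      asName: tls.cert
```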

Pod/Deployment doesn't restart although recognized by the operator

Versions

Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.17-gke.1901", GitCommit:"b5bc948aea9982cd8b1e89df8d50e30ffabdd368", GitTreeState:"clean", BuildDate:"2021-05-27T19:56:12Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
Operator: v0.1.0

Problem

Pod does not restart even though the secret was updated

Expected result

Pod should restart as soon as the secret is updated, instead only the secret gets updated

Logs

2021-07-15T19:28:44.164Z    INFO    controllers.DopplerSecret    [/] Secrets have been modified    {"dopplersecret": "external-secrets/dopplersecret-test", "verifyTLS": true, "host": "https://api.doppler.com", "oldVersion": "W/\"70d6dcadc0177a11c86e856195e8be2c1078975aaa2fb7ab37ae1db4b5aa03ec\"", "newVersion": "W/\"f37c20815bb0f7c177425f50e14e8051588f0c011e5
2021-07-15T19:28:44.170Z    INFO    controllers.DopplerSecret    [/] Successfully updated existing Kubernetes secret
2021-07-15T19:28:44.178Z    INFO    controllers.DopplerSecret    Finished reconciling deployments    {"dopplersecret": "external-secrets/dopplersecret-test", "numDeployments": 1}

Configs

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test # DopplerSecret Name
  namespace: external-secrets
spec:
  tokenSecret: # Kubernetes service token secret (namespace defaults to doppler-operator-system)
    name: doppler-token-secret
    namespace: doppler-operator-system
  managedSecret: # Kubernetes managed secret (will be created if does not exist)
    name: doppler-test-secret
    namespace: external-secrets # Should match the namespace of deployments that will use the secret
---
apiVersion: v1
kind: Pod
metadata:
  name: doppler-busybox
  namespace: external-secrets
  annotations:
    secrets.doppler.com/reload: 'true'
spec:
  containers:
  - name: busybox
    image: busybox:glibc
    command:
      - sleep
      - "3600"
    envFrom:
      - secretRef:
          name: doppler-test-secret
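One possible cause: the reload annotation is documented in terms of Deployments, and a bare Pod has no controller that can restart it. A Deployment equivalent of the Pod above (a sketch, assuming that is the issue):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doppler-busybox
  namespace: external-secrets
  annotations:
    secrets.doppler.com/reload: 'true'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: doppler-busybox
  template:
    metadata:
      labels:
        app: doppler-busybox
    spec:
      containers:
        - name: busybox
          image: busybox:glibc
          command: ["sleep", "3600"]
          envFrom:
            - secretRef:
                name: doppler-test-secret
```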

allow custom namespace

Currently helm install doppler-kubernetes-operator doppler/doppler-kubernetes-operator --namespace doppler-operator-system --create-namespace will fail

Set Loglevel

Dear Team,

How can I set the log level of the operator so my Kibana is not spammed with INFO logs that do nothing for me?

Thanks in advance.

To put this into perspective, a screenshot attached to the issue shows the log volume over the last 24 hours for an active Doppler system.

Support for ARM64-based CPUs

Hey Doppler Team!

Our team is planning to explore using this Kubernetes operator on ARM64-based devices (AWS Graviton2 + Raspberry Pi). Would it be possible to support this architecture?

Looking at your GitHub Actions workflow, it looks like this can be done with a one-liner. Within this job, adding platforms: linux/amd64,linux/arm64 after line 32 should provide support. You can refer to this document for more information.

The only caveat with this approach is that builds will take longer due to ARM64 emulation. In my testing, it took roughly 40 minutes to build the Docker container on a self-hosted GitHub Actions runner.
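The suggested change, sketched against a typical docker/build-push-action setup (the step layout and image tag are assumptions, not the repo's actual workflow):

```yaml
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/setup-qemu-action@v1    # provides ARM64 emulation
      - uses: docker/setup-buildx-action@v1
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          push: true
          # The proposed one-liner:
          platforms: linux/amd64,linux/arm64
          tags: dopplerhq/kubernetes-operator:latest  # tag is illustrative
```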

Hoping for your team's support for this though!

Forcing DopplerSecret objects to be created in operator namespace breaks namespace isolation

Edit: This is partially my bad. I had a stale sealed secret obfuscating the real issue (see #45 (comment))

Original message (not overly relevant):

The following simple DopplerSecret definition leads to a kubernetes secret being created, but the keys in that secret are in snake_case rather than the documented default SCREAMING_SNAKE_CASE.

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: oauth2-secrets
spec:
  tokenSecret:
    name: doppler-service-token
  managedSecret:
    name: oauth2-secrets

I tried adding a nameTransformer with a value of upper-snake (which is not a documented supported value), but that leads to an error saying it's not supported: supported values: "upper-camel", "camel", "lower-snake", "tf-var", "dotnet-env", "lower-kebab". None of these get the keys back into the required format.

Unfortunately, this means we can't make use of envFrom, as our applications all expect env vars in SCREAMING_SNAKE_CASE. The workaround is to specify every env var with a valueFrom or to change the applications to use non-idiomatic env var styles.
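For reference, the valueFrom workaround looks like this for each variable (the names here are illustrative):

```yaml
containers:
  - name: app
    image: example/app:latest          # illustrative
    env:
      - name: OAUTH_CLIENT_ID          # the name the application expects
        valueFrom:
          secretKeyRef:
            name: oauth2-secrets
            key: oauth_client_id       # the key as the operator produced it
```

One entry per variable, which is why envFrom support for the original casing matters.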

Using the same project locally (outside of kubernetes) does not have this issue.

Helm Chart dependency `kube-rbac-proxy` deprecation warning

The Helm Chart's kube-rbac-proxy container outputs these logs during start-up.

==== Deprecation Warning ======================
Insecure listen address will be removed.
Using --insecure-listen-address won't be possible!
The ability to run kube-rbac-proxy without TLS certificates will be removed.
Not using --tls-cert-file and --tls-private-key-file won't be possible!
For more information, please go to https://github.com/brancz/kube-rbac-proxy/issues/187
===============================================

v1.5.0 recommended.yaml

      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=10
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.14.1
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https

Snippet from the GitHub issue brancz/kube-rbac-proxy#187:

What

We are removing the option to run kube-rbac-proxy without configured TLS certificates.
This means that:

using --insecure-listen-address won't work any more.
not setting --tls-cert-file and --tls-private-key-file won't work any more.

Upstream H2C should still work, but we might remove verified claims about an identity that are sent to upstream in the future.

Why

We are aware that we create obstacles in running kube-rbac-proxy for testing or debugging purposes.
But we reduce the probability of an insecure setup of kube-rbac-proxy, which is a security-relevant component.

Running kube-rbac-proxy without TLS certificates makes it possible to impersonate kube-rbac-proxy.

The reason we remove this capability is a pre-acceptance requirement for kube-rbac-proxy before we can donate the project to sig-auth of k8s.

Reconcile algorithm overuses the Doppler API

We have been swapping out an in-house k8s reloading/secret-handling solution for this official operator. Previously we used Doppler Webhooks for reloading and an init script at container startup to load the current secrets.

What we have noticed since migrating is that we get lots of these errors in the pod logs:

Doppler Error: Exceeded rate limit of 240 secret read requests within 60 seconds. Retry in 1 seconds. Upgrade to the Enterprise plan to increase your limit

We have many DopplerSecret custom resources (though many of them actually reference the same Doppler config). Despite there being many, they rarely change (on the order of 1-2 changes per week), so we should not be exceeding the API rate limits.

It doesn't make sense to me that the operator needs to HTTP-GET all secrets every N seconds from the Doppler API. It should be able to use functionality similar to Webhooks to do push-based reconciliation instead of polling Doppler. Doing so would drastically reduce the API load on Doppler.

I would propose one of two solutions:

  1. Stick to polling and rely on ETag and If-None-Match headers to decrease load on the Doppler API (it looks like ETag is already implemented), but increase the API rate limit specifically for HTTP 304 responses, since they should be less expensive for Doppler than HTTP 200 responses.
  2. Use functionality similar to Webhooks to do push-based reconciliation instead of polling. For example, I would be okay with exposing the operator via an ingress so it could receive notifications from Doppler. The polling-based solution could then be kept as a fallback, with its frequency significantly decreased.
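As a stopgap until either change lands, the per-resource polling interval can be raised via the resyncSeconds field (it appears elsewhere in these issues; the value here is illustrative):

```yaml
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: example
  namespace: doppler-operator-system
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: example-secret
    namespace: default
  # Raising this above the 60s default reduces read volume against the
  # Doppler API, at the cost of slower secret propagation.
  resyncSeconds: 300
```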

Configure resources for all containers

It would be great if this chart either directly configured resources for the rbac-proxy container or allowed consumers of the chart to specify them manually. Additionally, it would be great if both pods had requests and limits for ephemeral-storage, which is a trackable resource as of Kubernetes 1.25. I'm finding that my nodes often die because the rbac-proxy pod uses more storage than it requests (with a request of 0); this causes the node as a whole to run out of storage and all pods on the node to suddenly get evicted.
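A sketch of what the chart could expose (the value keys below are hypothetical; the chart may not accept them today):

```yaml
# values.yaml (hypothetical keys)
controllerManager:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
      ephemeral-storage: 100Mi
    limits:
      memory: 256Mi
      ephemeral-storage: 500Mi
rbacProxy:
  resources:
    requests:
      cpu: 10m
      memory: 32Mi
      ephemeral-storage: 50Mi
    limits:
      memory: 64Mi
      ephemeral-storage: 200Mi
```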

Can't create managed secret for a project's root config

Logs say:
ERROR controllers.DopplerSecret Unable to update dopplersecret {"dopplersecret": "namespace/dopplersecret-root", "error": "Cannot change existing managed secret type from Opaque to . Delete the managed secret and re-apply the DopplerSecret."}

DopplerSecret manifest:
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/resource-policy: keep
    meta.helm.sh/release-name: namespace
  labels:
    app.kubernetes.io/name: app
  name: dopplersecret-root
  namespace: namespace
spec:
  config: root
  managedSecret:
    name: dopplersecrets-root
    namespace: namespace
  project: project
  tokenSecret:
    name: dopplertoken-root

Status of DopplerSecret object:
status:
  conditions:
  - lastTransitionTime: "2024-02-27T21:24:09Z"
    message: 'Secret update failed: Cannot change existing managed secret type from
      Opaque to . Delete the managed secret and re-apply the DopplerSecret.'
    reason: Error
    status: "False"
    type: secrets.doppler.com/SecretSyncReady
  - lastTransitionTime: "2024-02-27T21:24:09Z"
    message: Deployment reload has been stopped due to secrets sync failure
    reason: Stopped
    status: "False"
    type: secrets.doppler.com/DeploymentReloadReady

I am not sure why it states that the managed secret exists, since it is the DopplerSecret itself that creates it and then complains about an incorrect secret type (and the target type is not being rendered correctly, since the message says 'from Opaque to .').
I tried recreating the DopplerSecret multiple times, but it did not help.

Allow DopplerSecret to be deployed to other namespaces

@nmanoogian I saw:

and would have preferred having the option to limit DopplerSecret to a specific namespace or even only reading tokens from the same namespace

This seems counterintuitive to Kubernetes namespacing. The DopplerSecret should be able to be deployed to other namespaces; it's the cross-namespace access that was problematic, allowing non-operators to enumerate or access secrets they did not have access to.

I have a Doppler token and own it, and I am an application owner, yet I have to coordinate with the team that deploys the Doppler Operator to deploy my DopplerSecret just so my namespace can have a secret. It seems we are artificially limiting who can manage DopplerSecrets.

The External Secrets Operator would also let me do this, using a SecretStore and Secret in the same namespace, so I would suggest having the Doppler Operator mimic that ability.

Thank you for your time :)

Default namespace for TokenSecret not applied correctly

I tried following the sample configuration provided, and the only way it worked for me was by adding a namespace of doppler-operator-system to the tokenSecret in the DopplerSecret file.

secrets_v1alpha1_dopplersecret.yaml

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-test # DopplerSecret Name
  namespace: doppler-operator-system
spec:
  tokenSecret: # Kubernetes service token secret
    name: doppler-token-secret
    # HAD TO ADD THIS FOR IT TO WORK
    namespace: doppler-operator-system
  managedSecret: # Kubernetes managed secret (will be created if does not exist)
    name: doppler-test-secret
    namespace: default # Should match the namespace of deployments that will use the secret

After adding that namespace the operator was able to find the Token secret and generate the ManagedSecret in the desired namespace.

"Cannot change existing managed secret type from Opaque to ." after upgrading to 1.4.0

Hi, since upgrading to 1.4.0 (with Helm), doppler-operator is no longer updating secrets and logs the error Cannot change existing managed secret type from Opaque to .

This probably has something to do with this version's new support for Kubernetes secret types, but I'm not sure exactly how it's causing the issue with our existing secrets.
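If 1.4.0 now compares the existing secret's type against spec.managedSecret.type, explicitly pinning the type may help (a guess based on the error text, not a confirmed fix; per the message, deleting the managed secret and re-applying may also be required):

```yaml
spec:
  managedSecret:
    name: my-secret        # illustrative name
    namespace: default
    type: Opaque           # set explicitly so the comparison matches
```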

Manage CRDs via Helm

Request

Consider supporting management of CRDs via Helm.

Reasoning

We manage our charts, including doppler-kubernetes-operator using ArgoCD. And we use Renovatebot to keep dependencies up to date.

The Renovatebot PR (screenshot attached to the issue) shows a new property in the DopplerSecret spec, which made me think I might have to update my CRDs. But this might not always be the case, and I do not want to do this manually.

How to achieve

Helm cannot upgrade custom resource definitions in the <chart>/crds folder by design.

However, by moving the CRDs inside the <chart>/templates directory, it is possible to keep them in sync with the rest of the templates when there's a new version.

See for example how ArgoCD project is doing the same: https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd#custom-resource-definitions

How to configure a "master token secret"

Hey guys,
First of all, thank you for the operator; we use it constantly.

We've reached a point where we spend more time deploying "token secrets" into the cluster than anything else, which is especially true if you do feature deployments with Kubernetes.

Is there a simple way to deploy a "token secret" with multiple Doppler tokens inside and use that to create other secrets?

Use case:

  1. Have a project that contains all the Doppler tokens for the different configurations you deploy in Kubernetes. At this point I'm deploying around 10 token secrets, and it's getting cumbersome to deploy them by hand.
  2. Deploy these tokens into the cluster using Doppler.
  3. Use the deployed secret of that operation to feed other Doppler configurations inside the cluster.

Basically, I'm looking for something like this:

apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: dopplersecret-backups
  namespace: doppler-operator-system
spec:
  host: https://api.doppler.com
  managedSecret:
    name: dopplersecret-staging-postgres
    namespace: postgres-staging
    type: Opaque
  resyncSeconds: 60
  tokenSecret:
    name: doppler-cluster-tokens
    **KEY**: token-backups
  verifyTLS: true

But maybe there is a better way that I simply can't see?

thanks

GCP GKE INFO logs are showing ERROR

I followed the README to install the operator using kubectl, and it seems to be running normally, but every minute I get a chunk of error logs whose messages say INFO yet are reported as ERROR.

Is there something I need to change to correct the output?


...
logName: "projects/project_id/logs/stderr",
severity: "ERROR"
textPayload: "2023-03-10T14:26:47.498Z	INFO	..."
...

Getting started from Readme not working

Hi,
I'm trying to follow the getting-started instructions from this README, but I'm running into issues at step 2.

I'm using the option with doppler CLI:
kubectl create secret generic doppler-token-secret -n doppler-operator-system --from-literal=serviceToken=$(doppler configs tokens create doppler-kubernetes-operator --plain)

This results in the following error: Doppler Error: You must specify a project

When I add a project (and, after the second error message, the config), I receive the following error message:

Unable to create service token Doppler Error: Please provide a valid config. error: failed to create secret secrets "doppler-token-secret" already exists

Can anyone help here please?

cannot find Service Account

I have this problem: I haven't touched anything Doppler-related in the last 20 days. It just stopped updating, and I found this in the logs:
Cannot find Service Account in pod to build in-cluster rest config: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0000d4001, 0xc000172000, 0xbb, 0x10f)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x251bc80, 0xc000000003, 0x0, 0x0, 0xc0001de150, 0x2472f85, 0x7, 0x18e, 0x0)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printf(0x251bc80, 0x3, 0x0, 0x0, 0x17bca5f, 0x46, 0xc00059d990, 0x1, 0x1)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:733 +0x17a
k8s.io/klog/v2.Fatalf(...)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:1463
main.initKubeConfig(0x0, 0x0, 0x4)
/home/travis/gopath/src/github.com/brancz/kube-rbac-proxy/main.go:398 +0x18f
main.main()
/home/travis/gopath/src/github.com/brancz/kube-rbac-proxy/main.go:151 +0xd5f

goroutine 18 [syscall]:
os/signal.signal_recv(0x0)
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/home/travis/.gimme/versions/go1.13.15.linux.amd64/src/os/signal/signal_unix.go:29 +0x41

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x251bc80)
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
/home/travis/gopath/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xd6

Feature: Support kubernetes.io/tls instead of only Opaque

As I understand from the Kubernetes documentation, the only difference with kubernetes.io/tls is enforcing DER standards and requiring that the key/cert are present, so consider this a nice-to-have feature request.

The current code only supports Opaque Kubernetes secrets.

Having the operator create kubernetes.io/tls secrets when a certificate is present would be nice!
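A sketch of what this could look like on the DopplerSecret side (the type field and key requirements are assumptions, not current operator behavior):

```yaml
spec:
  tokenSecret:
    name: doppler-token-secret
  managedSecret:
    name: my-tls-secret
    namespace: default
    # kubernetes.io/tls requires tls.crt and tls.key entries,
    # so the Doppler config would need to supply both.
    type: kubernetes.io/tls
```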

Feature request: Service Account support

As I understand it, Doppler currently works such that a service token gives access to a single branch config; tokens and branch config locations are thus tightly coupled, with no need for the user to specify where the branch config is located.

This is how the operator knows where to fetch the secrets. Service accounts, however, can be used to fetch secrets from many configs. I suspect we would need to configure through the DopplerSecret where to fetch the config/secrets from, but this would require changes; is this assumption correct?
