sparebankenvest / azure-key-vault-to-kubernetes

Azure Key Vault to Kubernetes (akv2k8s for short) makes it simple and secure to use Azure Key Vault secrets, keys and certificates in Kubernetes.

Home Page: https://akv2k8s.io

License: Apache License 2.0

Languages: Go 93.83%, Shell 0.76%, Dockerfile 2.29%, Makefile 3.12%
Topics: azure, vault, keyvault, kubernetes, controller, secrets

azure-key-vault-to-kubernetes's Introduction

Azure Key Vault to Kubernetes


Azure Key Vault to Kubernetes (akv2k8s) makes Azure Key Vault secrets, certificates and keys available to your applications in Kubernetes, in a simple and secure way.

Documentation available at https://akv2k8s.io. Join our Slack Workspace to ask questions to the akv2k8s community.

Overview

Azure Key Vault to Kubernetes (akv2k8s) will make Azure Key Vault objects available to Kubernetes in two ways:

  • As native Kubernetes Secrets
  • As environment variables directly injected into your Container application

The Azure Key Vault Controller (Controller for short) is responsible for synchronizing Secrets, Certificates and Keys from Azure Key Vault to native Secrets in Kubernetes.

The Azure Key Vault Env Injector (Env Injector for short) is responsible for transparently injecting Azure Key Vault secrets as environment variables into Container applications, without touching disk or exposing the actual secret to Kubernetes.
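
For illustration, a container opts in to env injection simply by referencing an AzureKeyVaultSecret by name in an env var value. This is a minimal sketch based on the manifests quoted in the issues below; the secret names are hypothetical:

env:
- name: MY_SECRET
  value: my-secret@azurekeyvault              # inject the whole secret value
- name: MY_CERT
  value: my-certificate@azurekeyvault?tls.crt # inject one key of a multi-value object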

Goals

The goals for this project were:

  1. Avoid a direct program dependency on Azure Key Vault for getting secrets, and adhere to the 12 Factor App principle for configuration (https://12factor.net/config)
  2. Make it simple, secure and low risk to transfer Azure Key Vault secrets into Kubernetes as native Kubernetes secrets
  3. Securely and transparently be able to inject Azure Key Vault secrets as environment variables to applications, without having to use native Kubernetes secrets

All of these goals are met.
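
To illustrate goal 2, a minimal AzureKeyVaultSecret resource for the Controller might look like this (a sketch assembled from the manifests in the issues below; vault, object, and secret names are hypothetical):

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: my-secret-sync
  namespace: default
spec:
  vault:
    name: my-keyvault       # name of the Azure Key Vault
    object:
      name: my-secret       # name of the object in the vault
      type: secret          # akv object type
  output:
    secret:
      name: my-app-secret   # Kubernetes secret to create
      dataKey: secret-value # key to store the value under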

Installation

For installation instructions, see documentation at https://akv2k8s.io/installation/.

Credits

Credit goes to Banzai Cloud for coming up with the original idea of environment injection for their bank-vaults solution, which they use to inject Hashicorp Vault secrets into Pods.

Contributing

Development of Azure Key Vault to Kubernetes happens in the open on GitHub, and we encourage users to:

  • Send a pull request with
    • any security issues found and fixed
    • your new features and bug fixes
    • updates and improvements to the documentation
  • Report security or other issues you have come across
  • Help new users with issues they may encounter
  • Support the development of this project and star this repo!

Code of Conduct

Sparebanken Vest has adopted a Code of Conduct that we expect project participants to adhere to. Please read the full text so that you can understand what actions will and will not be tolerated.

License

Azure Key Vault to Kubernetes is licensed under Apache License 2.0.

Contribute to the Documentation

The documentation is located in a separate repository at https://github.com/SparebankenVest/akv2k8s-website. We're using Gatsby + MDX (Markdown + JSX) to generate static docs for https://akv2k8s.io.

azure-key-vault-to-kubernetes's People

Contributors

181192, abhilashjoseph, cgroschupp, chaosgramer, daniel-anova, degeens, dependabot[bot], el-memer, elzapp, erdeiattila, hansk-p, howardjones, laozc, lmllr, rogersolsvik, sdwerwed, snyk-bot, svjo, timbuchinger, torresdal, tspearconquest, vsabella, waterfoul, wimi, yix, youjintou


azure-key-vault-to-kubernetes's Issues

[BUG] Critical vulnerability!

Describe the bug
The AKS SP is exposed by the env injector, which introduces an even bigger vulnerability than the problem it is trying to solve.

To Reproduce
First of all, this is my initContainer YAML. Second, my application container runs as user 1000 with a read-only filesystem and all capabilities dropped (needless to say, best practices for running your dockerized applications securely).

  initContainers:
  - command:
    - sh
    - -c
    - cp /usr/local/bin/azure-keyvault-env /azure-keyvault/ && cp /etc/kubernetes/azure.json
      /azure-keyvault/azure.json && chmod 444 /azure-keyvault/azure.json
    image: spvest/azure-keyvault-env:0.1.15
    imagePullPolicy: IfNotPresent
    name: copy-azurekeyvault-env
    volumeMounts:
    - mountPath: /azure-keyvault/
      name: azure-keyvault-env
    - mountPath: /etc/kubernetes/azure.json
      name: azure-config
      readOnly: true

But if you exec into the application container and run ls -l /, this is how it looks:

user@app-6498f5f49f-7hkl6:/$ ls -l /
total 68
drwxr-xr-x   1 root root 4096 Jan  2 13:38 app
drwxrwxrwt   2 root root   80 Jan  3 08:14 azure-keyvault
drwxr-xr-x   1 root root 4096 Nov 22 08:33 bin
drwxr-xr-x   2 root root 4096 Sep  8 10:51 boot
...

The problem is with folder azure-keyvault which contains binary and json file.

If you run env inside the container, the output is nice, like in the examples: the env vars are hidden. But if you run /azure-keyvault/azure-keyvault-env env you get the "decrypted" env vars. And this is not even the worst case.

The worst case is the JSON file inside that directory with READ permission, which contains the AKS SP credentials in plain text. With those you can become AKS admin and access every resource AKS has access to!

user@app-6498f5f49f-7hkl6:/$ ls -l azure-keyvault/
total 37008
-rwxr-xr-x 1 root root 37890317 Jan  3 08:14 azure-keyvault-env
-r--r--r-- 1 root root     1396 Jan  3 08:14 azure.json

Expected behavior
There shouldn't be such "leftovers" in the application container.

Additional context
I am really surprised that this is not covered in the documentation and that no one has raised this concern.

[FEATURE] Multiple vault objects into single K8s secret

Is your feature request related to a problem? Please describe.
I would like to use azure-key-vault-to-kubernetes together with the Flux feature "valuesFrom".
To do that, I need to read multiple secrets from Azure KeyVault and put them into a single Kubernetes secret, under a single key. The value is read later by Flux and used as an additional values file when deploying a Helm release.

Describe the solution you'd like
I would like to be able to specify multiple objects in AzureKeyVaultSecret (or new CRD) so it would look like this:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: secret-sync
  namespace: default
spec:
  vault:
    name: my-keyvault
    objects:
      - name: test-secret
        type: secret
      - name: another-secret
        type: secret
  output:
    secret:
      name: app-secret # kubernetes secret name
      dataKey: values.yaml # key to store object value in kubernetes secret

And the output secret in K8s would be:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: app-secret
  namespace: default
data:
  values.yaml: | # The value would be base64 encoded
    test-secret: test-secret-value
    another-secret: another-secret-value

Describe alternatives you've considered
One alternative could be custom templating support, so the same could be achieved with templating. There is an existing issue about templating: #18.

Additional context
Would that feature make sense for others? Is anyone using azure-key-vault-to-kubernetes with Flux or ArgoCD?
I may try to implement it, but I would like to know what would be a feasible solution for this project.

[BUG] Certificate with bundle in wrong order

Describe the bug
While trying to use akv2k8s with the nginx-ingress controller, I got the following error:

Error obtaining X.509 certificate: unexpected error creating pem file: tls: private key does not match public key.

On investigation I could see akv2k8s was putting my crt below the bundle rather than on top.

To Reproduce
Store a PFX certificate with a bundle in Key Vault, then try to export it as type certificate.

Expected behavior
The certificate and bundle should be in the correct order.
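
For reference, TLS expects the leaf (server) certificate first in the PEM bundle, followed by the intermediates in chain order. The expected tls.crt layout is therefore:

-----BEGIN CERTIFICATE-----
(leaf/server certificate)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(intermediate certificates, in chain order)
-----END CERTIFICATE-----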

[Question] Failed to get secret for 'certificate-sync' from Azure Key Vault '<KEYVAULT_NAME>'

Note: Make sure to check out known issues (https://github.com/sparebankenvest/azure-key-vault-to-kubernetes#known-issues) before submitting

Describe the bug
Hello guys, I have a problem creating the akvs and the secret.

To Reproduce
Steps to reproduce the behavior:

I have this file, named akvs-certificate-sync.yaml. In the Azure portal there is also a valid certificate named cert-from-azure-portal. I followed the instructions in the wiki.

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: certificate-sync
  namespace: akv-test
spec:
  vault:
    name: "KEYVAULT_NAME" # name of key vault
    object:
      name: "cert-from-azure-portal"
      type: "certificate"
  output:
    secret:
      name: "cert-from_k8s" # kubernetes secret name
      type: kubernetes.io/tls # kubernetes secret type
> kubectl apply -f .\kubernetes\akvs-certificate-sync.yaml
azurekeyvaultsecret.spv.no/certificate-sync created


Logs

> kubectl describe -f .\kubernetes\akvs-certificate-sync.yaml
Name:         certificate-sync
Namespace:    akv-test
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"spv.no/v1alpha1","kind":"AzureKeyVaultSecret","metadata":{"annotations":{},"name":"certificate-sync","namespace":"akv-test"...
API Version:  spv.no/v1alpha1
Kind:         AzureKeyVaultSecret
Metadata:
  Creation Timestamp:  2020-04-30T09:25:36Z
  Generation:          1
  Resource Version:    10487268
  Self Link:           /apis/spv.no/v1alpha1/namespaces/akv-test/azurekeyvaultsecrets/certificate-sync
  UID:                 c5adca2b-aff0-411f-ad80-332760bb9216
Spec:
  Output:
    Secret:
      Name:  cert-from-azure-portal
      Type:  kubernetes.io/tls
  Vault:
    Name:  KEYVAULT_NAME
    Object:
      Name:  cert-from_k8s
      Type:  certificate
Events:
  Type     Reason         Age                 From                     Message
  ----     ------         ----                ----                     -------
  Warning  ErrAzureVault  41s (x2 over 101s)  azurekeyvaultcontroller  Failed to get secret for 'certificate-sync' from Azure Key Vault 'KEYVAULT_NAME'

The akvs is created, but it is empty and has not created a secret:

> kubectl -n akv-test get akvs
NAME               VAULT                VAULT OBJECT                  SECRET NAME          SYNCHED
certificate-sync   KEYVAULT_NAME   cert-from-azure-portal

Do you have any advice?

Thanks


[FEATURE] - Is it possible to use with 1 KeyVault per namespace?

Is your feature request related to a problem? Please describe.
No.

Describe the solution you'd like
Is it possible to run the env injector where multiple key vaults are used? We are looking to use one key vault per application, each in its own namespace.

I guess my question ties into the authentication piece of the solution.

Describe alternatives you've considered
N/A

Additional context
N/A

Injected env comes in as name@azurekeyvault instead of actual value

I went through the installation steps detailed in the README, and all appeared to work fine, but when I go to grab the injected secret, the application sees it as my-secret-ci-env@azurekeyvault instead of the actual value from Azure Key Vault.

What would this mean?

My yaml:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: my-secret-ci-env
  namespace: default
spec:
  vault:
    name: my-dev-kv
    object:
      type: secret
      name: CI-ENV

and in my deployment:

        env:
        - name: CI_ENV
          value: my-secret-ci-env@azurekeyvault

And in my PHP application:

echo 'CI_ENV: ' . $_ENV['CI_ENV'];

Which displays: CI_ENV: my-secret-ci-env@azurekeyvault

Am I missing something? Is there a place to find logs to see if the Azure secret is being retrieved correctly?
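
One thing worth verifying (based on other reports in this tracker, not a confirmed diagnosis of this case): the Env Injector only mutates pods in namespaces labeled for injection, so the target namespace needs a label like this:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    azure-key-vault-env-injection: enabled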

[FEATURE]: Support for secrets stored in to Azure KeyVault in base64 encoded format

First of all, thanks for open-sourcing this tool, there aren't that many good solutions for syncing secrets between Azure KeyVault and Kubernetes secrets.

We are running lots of small microservices written in JVM languages like Clojure and Kotlin. The standard format for storing certificates and keys in the JVM realm is the Java KeyStore, aka JKS. Azure KeyVault doesn't support JKS as an input format, so we have been using base64 encoding to get around the limitation and can still use KeyVault as the master for all JKS files. Syncing them to Kubernetes secrets remains a bit of a hassle, and this would simplify the process a lot. Of course this approach could be used for any files, not only JKS.

I forked the project and created a feature branch to test out syncing base64-encoded KeyVault secrets to Kubernetes secrets, and it seems to work pretty nicely. My approach was to define a new vault object type, base64-encoded-secret, which can then be mounted as a file using volume mounts.

The full example of my suggestion:

  - apiVersion: spv.no/v1alpha1
    kind: AzureKeyVaultSecret
    metadata:
      name: trust-store-jks
      labels:
        kv-secrets: trust-store-jks
    spec:
      vault:
        name: development-kv
        object:
          name: trust-store-jks
          type: base64-encoded-secret
      output:
        secret:
          name: trust-store-jks
          dataKey: trust-store.jks

  - kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: example
    spec:
      replicas: 1
      ...
        spec:
          containers:
            - name: example
              image: example
              volumeMounts:
                - name: trust-store-jks
                  mountPath: /etc/trust-store
          volumes:
            - name: trust-store-jks
              secret:
                secretName: trust-store-jks

What do you think about this approach? Are there any apparent disadvantages to it?
Would you be interested in accepting this kind of new feature?

I'll soon clean up my feature branch and open a pull request for this feature, so that we have more beef around the bones to talk about. 😄

Key injection not working on second deployment

I have set up the azure-key-vault-to-kubernetes integration and got it working with one deployment. I have now added another deployment that I want settings injected into, and it is not working, despite the two being set up almost identically.

This is the error I am getting:

level=fatal msg="env-injector: error getting azurekeyvaultsecret resource 'qat-encryption-certificate', error: Get https://app-dev-a--app-dev-aks-1c7826-ac6b8c6e.hcp.northeurope.azmk8s.io:443/apis/spv.no/v1alpha1/namespaces/default/azurekeyvaultsecrets/qat-encryption-certificate: EOF"

So I have set up the controller and pods, and they are running (in the namespace key-vault).

My secrets are defined as follows:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: qat-encryption-certificate
  namespace: default
spec:
  vault:
    name: appvault
    object:
      name: qat-encryption-certificate
      type: certificate
  output:
    secret:
      name: qat-encryption-certificate
      type: kubernetes.io/tls
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: qat-storage-connection-string
  namespace: default
spec:
  vault:
    name: appvault
    object:
      name: qat-storage-connection-string
      type: secret
  output:
    secret:
      name: qat-storage-connection-string
      dataKey: azuresecret

I have setup my namespaces like this:

apiVersion: v1
kind: Namespace
metadata:
  name: key-vault
  labels:
    azure-key-vault-env-injection: enabled
---
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    azure-key-vault-env-injection: enabled
---

My first deployment which is working is defined like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-app-api-qat
  labels:
    oem: app
    app: app-api
    env: qat
    tier: backend
spec:
  replicas: 2
  strategy:
    type: Recreate
    rollingUpdate: null
  selector:
    matchLabels:
      app: company-app-api-qat
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
      labels:
        app: company-app-api-qat
    spec:
      containers:
      - name: company-serviceapi
        image: appsharedcontaineregistryeun.azurecr.io/company-api:25603
        env:
        - name: AzureStorageConnectionString
          value: qat-storage-connection-string@azurekeyvault
        - name: Encryption__PublicKey
          value: qat-encryption-certificate@azurekeyvault?tls.crt
        - name: Encryption__PrivateKey
          value: qat-encryption-certificate@azurekeyvault?tls.key
        resources:
          requests:
            memory: 512Mi
            cpu: "0.25"
          limits:
            memory: 512Mi
            cpu: "0.5"
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
      imagePullSecrets:
        - name: acr-auth

My second deployment is defined like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-app-app1-qat
  labels:
    oem: app
    app: app-app1
    env: qat
    tier: backend
spec:
  replicas: 2
  strategy:
    type: Recreate
    rollingUpdate: null
  selector:
    matchLabels:
      app: company-app-app1-qat
  template:
    metadata:
      labels:
        app: company-app-app1-qat
    spec:
      containers:
      - name: company-app1
        image: appsharedcontaineregistryeun.azurecr.io/company-app1:25606
        env:
        - name: AzureWebJobsStorage
          value: qat-storage-connection-string@azurekeyvault
        - name: Encryption__PublicKey
          value: qat-encryption-certificate@azurekeyvault?tls.crt
        - name: Encryption__PrivateKey
          value: qat-encryption-certificate@azurekeyvault?tls.key
        resources:
          requests:
            memory: 512Mi
            cpu: "0.25"
          limits:
            memory: 512Mi
            cpu: "0.25"
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
      - name: app2
        image: appsharedcontaineregistryeun.azurecr.io/company-app2:25606
        resources:
          requests:
            memory: 512Mi
            cpu: "0.25"
          limits:
            memory: 512Mi
            cpu: "0.5"
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
      imagePullSecrets:
        - name: acr-auth

The copy-azurekeyvault-env Init containers exists for the pods in both deployments.

Why would I get this error in one but not the other? Is there a way I can debug this further?

Looking at the logs for the injector I can see it happily mutating the containers and finding the env vars to replace.

I can see the controller is happily syncing the secrets, and I can see the secrets in the cluster.

Webhook Error: Docker Repo Name must be Canonical

Hello! Thank you for providing this tool. It looks like it addresses a lot of the concerns we had with FlexVolumes as they are now with Azure Key Vault.

I'm trying to use the Env Injector plugin and am seeing some bizarre behavior. When I try to run a simple pod that references a secret, I receive an error from the webhook stating the docker repo name is not canonical. If I comment out that environment variable, the webhook isn't triggered and the pod is deployed without any issues.

Pod Spec

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - name: alpine-cacerts
    image: chrisredfield306/alpine-cacerts:latest
    env:
    - name: test
      value: test
    - name: FOO
      value: foo@azurekeyvault
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Error Message

Error from server (InternalError): error when creating "busybox2.yaml": Internal error occurred: failed calling admission webhook "pods.azure-key-vault-env-injector.admission.spv.no": an error on the server ("{\"response\":{\"uid\":\"260e8e43-9984-11e9-a4cf-660946027c9f\",\"allowed\":false,\"status\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"failed to get auto cmd, error: failed to pull docker image 'chrisredfield306/alpine-cacerts:latest', error: repository name must be canonical\"}}}") has prevented the request from succeeding

MutatingWebHookConfiguration (describe)

Name:         odd-puffin-azure-key-vault-env-injector
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  admissionregistration.k8s.io/v1beta1
Kind:         MutatingWebhookConfiguration
Metadata:
  Creation Timestamp:  2019-06-27T14:52:06Z
  Generation:          1
  Resource Version:    8546
  Self Link:           /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations/odd-puffin-azure-key-vault-env-injector
  UID:                 220fa51a-98eb-11e9-a003-4e4da925a457
Webhooks:
  Client Config:
    Ca Bundle:  LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lSQU1XSElwN1dSR2pJL3pVMWs0Qm92YVl3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2MzWmpMV05oZEMxallUQWVGdzB4T1RBMk1qY3hORFV5TURWYUZ3MHlPVEEyTWpReApORFV5TURWYU1CVXhFekFSQmdOVkJBTVRDbk4yWXkxallYUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUMvRGpPbG5GWTYyWURiUHFFci9MeTJPS3V6czdkMjNYU1F4TUJEYnNzSStoalMKaHF0USt3QjMrMWM3cDZlSStsaWtZdlRsazBBK0ZOZU92TVJEN1laTE5QTml5YVdRUnN5d09nM3pTYlJ2TmZrRAp4L2t4UlJVSXBncXZFUDN3Vi9CWEg3aHJYY2dEQjEveFZXd0FzUFNKVHI1akxpeHd5M1VaUndsQmcwcHBycmlCClFEaG93a3V6UnVlQlpiK3B2Yy83eldJcm0wL2Y4Zm8zQmdtM2xIcDlFa0dzRDRhMXhxTzNGa2ZhdkM2L0JHNXQKaTFETzdFWXFJNDlwVlZUWVdsMU1yUkpLbndtK0hOY21uUXpaZkZRcVFqOS9kUGd0ZE02NVN4bGh6OHRaejFBKwpoemlKTW51RnRxRmNsbEl6NHhuRkpKeGpFdVkrZEwwZnRNWjdhZHlmQWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXdEd1lEVlIwVEFRSC8KQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBV3o3TS9BMFpMMkF1R21wTjl2TURDbW9WMDJ4aQppWWJqMEpvNGlJU1RnOUZZUHhONWpVWUpmMFV3RGZ1bUhZank0V1IzSWtsNnAxZkQyOWFrNytacHYyNzlmalRrCk1iSmJWWDE3L1l5Q3NQMEhGQVJsbEg4dXo0U1FuOGZRSFQ0NWRMaVZVOXBabldPSmJDbWlNQ1JUUEs0TGpvQUgKU0NLa0owak9YUzF5RFFnUjBxenhlSzd6Q0cvRHZnVnNzbXBJVDB6eUI0Uzl1VlJvMkxXRkQzbzJ2aHZTQkdUNwpBSDlmeVlETlZjcWNsdklyS2wrWHljUGpCL3ZWQzlKaE80MUUwcVphUUVhMWpmYWJta1QzS0xlU1oyZHlQT3N0CklhM0hnVlh6QnNZY2RmVDBKbEdsa3JPTStLRnB0N2FFZElrWWQ5VUtNSEVqbFh5S0JNSnFRWFNQdUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    Service:
      Name:        odd-puffin-azure-key-vault-env-injector
      Namespace:   default
      Path:        /pods
  Failure Policy:  Fail
  Name:            pods.azure-key-vault-env-injector.admission.spv.no
  Namespace Selector:
    Match Labels:
      Azure - Key - Vault - Env - Injection:  enabled
  Rules:
    API Groups:
      *
    API Versions:
      *
    Operations:
      CREATE
    Resources:
      pods
  Side Effects:  Unknown

Default Namespace (describe)

Name:         default
Labels:       azure-key-vault-env-injection=enabled
Annotations:  <none>
Status:       Active

No resource quota.

No resource limits.

kubectl version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

If I comment out the FOO environment variable and value, the pod is deployed without any issues. At first I thought it was an issue with the docker repo I had but it looks like the webhook is hitting a snag elsewhere.
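
A possible workaround, assuming the error originates in the injector's image-reference parsing (which these logs do not confirm): fully qualify the image name with its registry host, e.g.

image: docker.io/chrisredfield306/alpine-cacerts:latest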

azureKeyVaultResourceURI should be configurable

It seems that this constant is not configurable.

On some Azure instances (private cloud ones, like Fairfax, Mooncake, etc.), the Azure Key Vault resource URI is different from the value defined; see this link for more details.

We need the ability to configure the controller to run with a different value.

I'll try to create a PR for it in the next few days. This issue is to keep track of it.

Support Multiple Certificates (Chain)

I currently use Ambassador with Kubernetes, which requires my SSL certificate to be stored in a Kubernetes secret named ambassador-certs.

The certificate I have is from GoDaddy and to get this to work with ambassador I need to provide the FULLCHAIN of my certificate as tls.crt in my kubernetes secret. This is what my K8s Certificate looks like... (Base64 Decoded)

tls.crt:
-----BEGIN CERTIFICATE-----
mycert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-cert-1
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-cert-2
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ca-cert
-----END CERTIFICATE-----

tls.key
-----BEGIN PRIVATE KEY-----
my-key
-----END PRIVATE KEY-----

This is currently working well for me, except I would like to store my certs in Azure KeyVault and use your solution to create/sync the k8s certificates for me.

I have created myself a single fullchain.pem file to contain the full chain of required certificates and the private key so that I can upload this into Azure KeyVault / Certificates.

-----BEGIN CERTIFICATE-----
mycert
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-cert-1
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate-cert-2
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ca-cert
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
my-key
-----END PRIVATE KEY-----

This certificate is successfully stored in my Vault.

However if I try to create an AzureKeyVaultSecret CRD in my cluster....

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: azure-keyvault-certificate
  namespace: default
spec:
  vault:
    name: my-vault
    object:
      type: certificate
      name: my-cert
  output:
    secret:
      name: ambassador-certs
      type: kubernetes.io/tls

The pod throws the error: certificate has multiple public keys.

If I upload and use cert.pem file in Azure KeyVault which contains only the one certificate and private key, then all works fine, but I need the generated kubernetes secret to contain the full chain of certificates as mentioned.

Is this a current limitation or is there a workaround for this?

[BUG] Issue retrieving secret resource in container in Istio environment

Note: Make sure to check out known issues (https://github.com/sparebankenvest/azure-key-vault-to-kubernetes#known-issues) before submitting

Describe the bug
I'm able to get the init-container to start up, and I can see it getting mutated in the akvs pod logs; however, in the container log:

level=fatal msg="env-injector: error getting azurekeyvaultsecret resource 'password', error: an error on the server (\"\") has prevented the request from succeeding (get azurekeyvaultsecrets.spv.no password)"

To Reproduce

Kubernetes Version: v1.15.7

AKVS Definition:

apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: password
  namespace: default
spec:
  vault:
    name: vault
    object:
      name: password
      type: secret

Namespace label:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    azure-key-vault-env-injection: enabled
    istio-injection: enabled
  name: default
spec:
  finalizers:
  - kubernetes

Expected behavior
I expect the env variable to get injected from the azure key vault
Logs

Env Injector Pod logs

time="2020-02-21T03:23:32Z" level=info msg="checking for env vars containing '@azurekeyvault' in container stats-api"
time="2020-02-21T03:23:32Z" level=info msg="found env var: password@azurekeyvault"
time="2020-02-21T03:23:32Z" level=info msg="found credentials to use with registry 'repo.azurecr.io'"
time="2020-02-21T03:23:32Z" level=info msg="pulling docker image epo.azurecr.io/com.company/app:b327a2to get entrypoint and cmd, timeout is 120 seconds"
time="2020-02-21T03:23:33Z" level=info msg="docker image epo.azurecr.io/com.company/app:b327a2 pulled successfully"
time="2020-02-21T03:23:33Z" level=info msg="inspecting container image repo.azurecr.io/com.company/app:b327a2, looking for entrypoint and cmd"
time="2020-02-21T03:23:33Z" level=info msg="using cmd from image: [sh -c java $JAVA_OPTS -jar /usr/src/service/service.jar]"
time="2020-02-21T03:23:33Z" level=info msg="using cmd from image: [sh -c java $JAVA_OPTS -jar /usr/src/service/service.jar]"
time="2020-02-21T03:23:33Z" level=info msg="using 'sh -c java $JAVA_OPTS -jar /usr/src/service/service.jar' as arguments for env-injector"
time="2020-02-21T03:23:33Z" level=info msg="containers mutated and pod updated with init-container and volumes"

Additional context
We are running Istio, which has its own init container, but I've tried the annotation to allow all ingress, along with DestinationRules and ServiceEntries to allow anything to the key vaults. I don't see anything in Istio that would be blocking this.

[FEATURE] Please allow adding type to secrets

It would be nice if the secret sync allowed you to specify a type. I would like to inject Docker registry credentials, but it will only let you create an Opaque secret and you cannot specify the type.
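
For illustration, the requested behavior might look like the following hypothetical manifest (kubernetes.io/dockerconfigjson is the standard Kubernetes secret type for Docker registry credentials; other issues in this tracker show kubernetes.io/tls already being used as an output type):

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: registry-creds
spec:
  vault:
    name: my-keyvault
    object:
      name: docker-config
      type: secret
  output:
    secret:
      name: registry-creds
      type: kubernetes.io/dockerconfigjson # requested: a configurable secret type
      dataKey: .dockerconfigjson           # key expected by this secret type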

Deployment never schedules webhook container

I've been playing around with this chart but seem to be missing something critical, as the deployment created from a fresh helm install never yields available pods (note the Replicas line).

$ kubectl describe deployments keyvault-azure-key-vault-env-injector
Name:                   keyvault-azure-key-vault-env-injector
Namespace:              default
Labels:                 app=azure-key-vault-env-injector
                        chart=azure-key-vault-env-injector-0.1.2
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=azure-key-vault-env-injector,release=keyvault
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=azure-key-vault-env-injector
                    release=keyvault
  Service Account:  keyvault-azure-key-vault-env-injector
  Containers:
   azure-key-vault-env-injector:
    Image:      spvest/azure-keyvault-webhook:0.1.10
    Port:       443/TCP
    Host Port:  0/TCP
    Environment:
      TLS_CERT_FILE:            /var/serving-cert/servingCert
      TLS_PRIVATE_KEY_FILE:     /var/serving-cert/servingKey
      AZUREKEYVAULT_ENV_IMAGE:  spvest/azure-keyvault-env:0.1.10
      AAD_POD_BINDING_LABEL:     (v1:metadata.labels['aadpodidbinding'])
      DEBUG:                    false
# mounts & volumes omitted
OldReplicaSets:    keyvault-azure-key-vault-env-injector-c49946b4b (0/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  7m35s  deployment-controller  Scaled up replica set keyvault-azure-key-vault-env-injector-c49946b4b to 1

As you can see there's 1 desired replica and it stays as 1 unavailable.

Creating an AzureKeyVaultSecret resource named my-test-secret (following the README), then scheduling a simple pod that would utilize the injections,

apiVersion: v1
kind: Pod
metadata:
  name: kv-tester
  labels:
    app: kv-tester
spec:
  containers:
    - name: kv-tester
      image: debian
      command: ["printenv"]
      env:
      - name: MY_SECRET
        value: my-test-secret@azurekeyvault

yields

$ kubectl create -f kubernetes/testing/kv-tester.yaml 

Error from server (InternalError): error when creating "kubernetes/testing/kv-tester.yaml": Internal 
error occurred: failed calling admission webhook "pods.azure-key-vault-env-
injector.admission.spv.no": Post https://keyvault-azure-key-vault-env-
injector.default.svc:443/pods?timeout=30s: net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)

It's pretty clear that it's trying to communicate with a service that has a deployment with no pods backing it.

Why isn't the deployment scheduling the pods? I'm assuming things will work as expected from there, because I've confirmed the service principal has the correct access to the key vault instance. Any help would be much appreciated, thank you.
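
A few generic kubectl commands that may help surface why the ReplicaSet is not producing pods (standard Kubernetes debugging, nothing specific to this chart):

$ kubectl describe rs keyvault-azure-key-vault-env-injector-c49946b4b
$ kubectl get events --sort-by=.metadata.creationTimestamp
$ kubectl get pods -l app=azure-key-vault-env-injector -o wide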

help on keyvault-controller with aad-pod-identity

Hi, I am trying to follow the instructions to get the keyvault-controller to work. Here is what I did:

  1. install aad-pod-identity
  2. created user-assigned-identity compes-bedrock-kv-reader, grant appropriate permissions
  3. turn on customAuth, set podIdentitySelector to kv-reader
  4. created AzureIdentity and AzureIdentityBinding
  5. create AzureKeyVaultSecret

I can see the azurekeyvaultcontroller is created and running, but the secret is not created. Can you point me to logs for troubleshooting?

  • AzureKeyVaultSecret
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: tls-cert
  namespace: {{.Values.namespace}}
spec:
  vault:
    name: {{.Values.vault_name}}
    object:
      name: {{.Values.tls_cert_secret_name}}
      type: certificate
  output:
    name: tls-cert
  • AzureIdentity
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: pod-identity
spec:
  type: 0
  ResourceID: /subscriptions/{{.Values.subscription_id}}/resourcegroups/<resourcegroup>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{{.Values.pod_identity_name}}
  ClientID: {{.Values.pod_identity_client_id}}
  • AzureIdentityBinding
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: pod-identity-binding
spec:
  AzureIdentity: pod-identity
  Selector: {{.Values.pod_identity_selector}}
  • values.yaml
subscription_id: ***
pod_identity_client_id: ***
pod_identity_selector: kv-reader
pod_identity_name: compes-bedrock-kv-reader
  • kubectl describe pod keyvault-controller-azure-key-vault-controller-6c59664544-n4j94 -n security
Name:               keyvault-controller-azure-key-vault-controller-6c59664544-n4j94
Namespace:          security
Priority:           0
PriorityClassName:  <none>
Node:               aks-default-33055681-0/10.240.0.4
Start Time:         Tue, 20 Aug 2019 14:26:23 -0700
Labels:             aadpodidbinding=kv-reader
                    app=azure-key-vault-controller
                    pod-template-hash=6c59664544
                    release=keyvault-controller
Annotations:        <none>
Status:             Running
IP:                 10.244.1.20
Controlled By:      ReplicaSet/keyvault-controller-azure-key-vault-controller-6c59664544
Containers:
  azure-keyvault-controller:
    Container ID:  docker://9b8f25938a798d8b99119596422f1a32a101bd95d25e46edc5a6d2b911ff2801
    Image:         spvest/azure-keyvault-controller:0.1.14
    Image ID:      docker-pullable://spvest/azure-keyvault-controller@sha256:332425e3983a0bac3595746d8a6bd343a9dbe6593ab1c3309b6a3ba953423dde
    Port:          <none>
    Host Port:     <none>
    Args:
      --cloudconfig=/etc/kubernetes/azure.json
    State:          Running
      Started:      Tue, 20 Aug 2019 14:26:28 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      AZURE_VAULT_NORMAL_POLL_INTERVALS:     1m
      AZURE_VAULT_EXCEPTION_POLL_INTERVALS:  10m
      AZURE_VAULT_MAX_FAILURE_ATTEMPTS:      5
      AAD_POD_BINDING_LABEL:                  (v1:metadata.labels['aadpodidbinding'])
      CUSTOM_AUTH:                           true
      LOG_LEVEL:                             info
    Mounts:
      /etc/kubernetes/azure.json from azure-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from keyvault-controller-azure-key-vault-controller-token-57xx6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  azure-config:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/azure.json
    HostPathType:  File
  keyvault-controller-azure-key-vault-controller-token-57xx6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyvault-controller-azure-key-vault-controller-token-57xx6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

results

kubectl get azurekeyvaultsecret
No resources found.

Injector error: first record does not look like a TLS handshake

Hi, I deployed injector 0.1.14 and in the logs I see the following error.

This runs on AKS version 1.13.5; on AKS 1.12.4 everything works fine.

2019/08/09 12:06:21 [WARN] no metrics recorder active
2019/08/09 12:06:21 [WARN] no tracer active
2019/08/09 12:06:21 [INFO] listening on :443
2019/08/09 12:06:29 http: TLS handshake error from 10.164.202.20:43382: tls: first record does not look like a TLS handshake

[FEATURE] Vault name from configmap or secret

I would like to inject the key vault name from a secret or configmap value.

In our case we have several AzureKeyVaultSecrets pointing to the same key vault. We also have multiple clusters, so I have to manually edit the key vault name and apply it to every cluster.

So I would like to preconfigure the key vault name in a secret or configmap value. Then I could apply the same YAML files to all clusters without any additional configuration. Also, when adding a new value, I could just roll out the new YAML without defining all the different key vaults again.

[BUG]

How can we sync multiple secrets?
The documentation does not contain enough information: https://akv2k8s.io/tutorials/4-multi-value-secret
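
For reference, the pattern shown elsewhere in this tracker is one AzureKeyVaultSecret per vault object, with multiple resources separated by --- in a single file (names hypothetical):

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: secret-one
spec:
  vault:
    name: my-keyvault
    object:
      name: secret-one
      type: secret
  output:
    secret:
      name: secret-one
      dataKey: value
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: secret-two
spec:
  vault:
    name: my-keyvault
    object:
      name: secret-two
      type: secret
  output:
    secret:
      name: secret-two
      dataKey: value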

[FEATURE] Add an option to synchronize secrets containing certificates as K8S tls secret

Is your feature request related to a problem? Please describe.
In Azure, App Service Certificates store the generated certificates as a secret, not a certificate.
With the Controller, you can synchronize Key Vault certificates to a K8S TLS secret. But if the source is a secret, you cannot choose kubernetes.io/tls as the output.
Certificates provided by App Service Certificates have an auto-renew feature, so it would be very handy for the Controller to check the secret once per day and update the kubernetes.io/tls secret if the certificate has been renewed.
With this system we can ensure that both certificates (the one used by the ingress, and the one used by the application, retrieved with the env-injector or flexvolume or other tools) stay synchronized.
We want to avoid Kubernetes secrets as much as possible, but for ingress we have no choice at the moment. For application containers, the certificate is retrieved at each container start to be sure we have the latest one (we deploy at least once per week, so the certificate will always be OK, since it is renewed 30 days before the expiry date).

We would like to avoid incidents with ingress certificates that need to be updated manually...

Describe the solution you'd like
Add an option as input type like certificate-as-a-secret and provide the output type as kubernetes.io/tls

Describe alternatives you've considered
Manually update the ingress certificate. Copy secrets to certificates in Key Vault with batch jobs (not great...).
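
As a sketch of what the proposed option could look like, following the issue's own suggested type name (certificate-as-a-secret is hypothetical, as are the vault and secret names):

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: app-service-cert-sync
spec:
  vault:
    name: my-keyvault
    object:
      name: app-service-certificate
      type: certificate-as-a-secret # proposed input type
  output:
    secret:
      name: ingress-tls
      type: kubernetes.io/tls       # desired output type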

Original pod is not creating

  • We are using only the Azure Key Vault Env Injector to inject Azure Key Vault secrets into AKS.

Azure Key Vault Env Injector helm release file config.

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: azure-key-vault-injector
  namespace: aks-key-vault
  annotations:
    fluxcd.io/automated: "false"
spec:
  releaseName: azure-key-vault-injector
  chart:
    repository: http://charts.spvapi.no
    name: azure-key-vault-env-injector
    version: 0.1.5
  values:
    installCrd: false
    customAuth:
      enabled: true
      autoInject:
        enabled: true
    debug: true
    env:
      AZURE_TENANT_ID: ""
      AZURE_CLIENT_ID: ""
      AZURE_CLIENT_SECRET: ""

CRD AzureKeyVaultSecret config.

---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: kvtest
  namespace: aks-key-vault
spec:
  vault:
    name: ***** # name of key vault
    object:
      type: secret # akv object type
      name: kvtest
  • Enabled the "azure-key-vault-env-injection" label on another namespace to use the AzureKeyVaultSecret as an env var.

The env var is configured as:

      - name: Key_Vault
        value: kvtest@azurekeyvault

Issue: The original pod is not being created.
Logs from azure-key-vault-injector pod.

time="2020-02-21T15:24:48Z" level=info msg="found pod to mutate in namespace '****'"
time="2020-02-21T15:24:48Z" level=info msg="found container '****' to mutate"
time="2020-02-21T15:24:48Z" level=info msg="checking for env vars containing '@azurekeyvault' in container ****"
time="2020-02-21T15:24:48Z" level=info msg="found env var: kvtest@azurekeyvault"
time="2020-02-21T15:24:48Z" level=info msg="did not find credentials to use with registry '****' - getting default credentials"
time="2020-02-21T15:24:48Z" level=info msg="using default credentials for docker registry with clientid: ****"
time="2020-02-21T15:24:48Z" level=info msg="pulling docker image **** to get entrypoint and cmd, timeout is 120 seconds"
time="2020-02-21T15:24:49Z" level=info msg="docker image **** pulled successfully"
time="2020-02-21T15:24:49Z" level=info msg="inspecting container image ****, looking for entrypoint and cmd"
time="2020-02-21T15:24:49Z" level=info msg="found entrypoint: [dotnet ****.dll]"
time="2020-02-21T15:24:49Z" level=info msg="using 'dotnet ****.dll' as arguments for env-injector"
time="2020-02-21T15:24:49Z" level=info msg="creating secret in new namespace '****'..."
2020/02/21 15:24:49 [ERROR] admission webhook error: Secret "" is invalid: metadata.name: Required value: name or generateName is required

[BUG] could not find the requested resource (get azurekeyvaultsecrets.spv.no)

Note: Make sure to check out known issues (https://github.com/sparebankenvest/azure-key-vault-to-kubernetes#known-issues) before submitting

Describe the bug
I am trying this for the first time, based on the documentation, using default auth in a test environment. I am getting the error could not find the requested resource (get azurekeyvaultsecrets.spv.no) in the controller log.

I see that this name is part of the metadata of the Custom Resource Definition.

I think I understand better now. The controller is created first without the CRD, but it wants it. Then, after installing the env injector, the CRD gets created, so the errors stop coming from the controller?

To Reproduce
#!/bin/bash
RELEASE=key01
helm install $RELEASE spv-charts/azure-key-vault-controller --namespace=akv2k8s --set installCrd=false
RELEASE=key01env
helm install $RELEASE spv-charts/azure-key-vault-env-injector --namespace=akv2k8s

Expected behavior
No errors

Logs
xxshambm@WL362836:~/git/analytics-workspace/scripts$ kubectl --namespace=akv2k8s logs key01-azure-key-vault-controller-65bc586d97-p59vc
time="2020-01-18T12:03:36Z" level=info msg="Log level set to 'info'"
time="2020-01-18T12:03:36Z" level=info msg="Creating event broadcaster"
time="2020-01-18T12:03:36Z" level=info msg="Setting up event handlers"
time="2020-01-18T12:03:36Z" level=info msg="Starting AzureKeyVaultSecret controller"
time="2020-01-18T12:03:36Z" level=info msg="Waiting for informer caches to sync"
E0118 12:03:36.775250 1 reflector.go:134] github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/k8s/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.AzureKeyVaultSecret: the server could not find the requested resource (get azurekeyvaultsecrets.spv.no)
E0118 12:03:37.778798 1 reflector.go:134] github.com/SparebankenVest/azure-key-vault-to-kubernetes/pkg/k8s/client/informers/externalversions/factory.go:120: Failed to list *v1alpha1.AzureKeyVaultSecret: the server could not find the requested resource (get azurekeyvaultsecrets.spv.no)
......
time="2020-01-18T12:03:58Z" level=info msg="Starting workers"
time="2020-01-18T12:03:58Z" level=info msg="Started workers"


Using env-injector with aad-pod-identity allows getting all secrets

Describe the bug
I feel like this is a consequence of the fundamental design of the env-injector, but I'd like to discuss the consequences.

When using the env-injector with customAuth, I have to add an aadpodidbinding label to the pod so that the injector can connect to Key Vault using the credentials of the pod. Consequently, this means that later, once inside the container, I can simply connect to the Key Vault directly and get any secret:

curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true
curl https://<YOUR-KEY-VAULT-URL>/secrets/<secret-name>?api-version=2016-10-01 -H "Authorization: Bearer <ACCESS TOKEN>"

Maybe there is another way to configure the Injector that I'm missing? The way it currently is, I am not gaining much in terms of security.

Steps to Reproduce:

# injector-val.yml

customAuth:
  enabled: true
  autoInject:
    enabled: true
    podIdentitySelector: <pod-selector>
# deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
        aadpodidbinding: <pod-selector>
    spec:
      containers:
        - env:
            - name: password
              value: password@azurekeyvault
            - name: ENV_INJECTOR_LOG_LEVEL
              value: debug
          image: <my-image>
          imagePullPolicy: Always
          name: test

Expected behavior
I was hoping to limit the access to Key Vault secrets in case of a malicious take-over. To me, it seems like using the env-injector that is not the case.

[FEATURE] Configurable az cloud (e.g. AzureChinaCloud)

Is your feature request related to a problem? Please describe.
akv2k8s running in the AzureChinaCloud cloud uses the global endpoint for keyvaultDns.

$ az cloud show --name AzureCloud --output json | grep keyvault
    "keyvaultDns": ".vault.azure.net",
$ az cloud show --name AzureChinaCloud --output json | grep keyvault
    "keyvaultDns": ".vault.azure.cn",

Describe the solution you'd like
It would be great to be able to configure the Azure cloud you are running akv2k8s in. I bet users of the AzureGermanCloud would love this too.

Describe alternatives you've considered
Spinning up an akv instance in the global Azure cloud for a project running in China. Then configuring additional custom auth for it?

Needs retry when using with aad-pod-identity

Currently, when you use aad-pod-identity to configure the controller by setting the labels, aad-pod-identity creates the AzureAssignedIdentities. This is an async process with respect to the controller starting to process the CRDs. We need to add some retry logic to the client making the Key Vault API call, similar to:

waitForAzureAuth()
https://github.com/Azure/application-gateway-kubernetes-ingress/blob/2cbdb2f088e847e584f006231bd01c8031731a5a/cmd/appgw-ingress/main.go

[BUG] Problem after upgrade

Describe the bug
After updating to v1.0.2 the controller shows in the log:
time="2020-03-18T11:44:23Z" level=error msg="error syncing 'default/tls-cluster-wildcard-certificate': AzureKeyVaultSecret.spv.no \"tls-cluster-wildcard-certificate\" is invalid: apiVersion: Invalid value: \"spv.no/v1\": must be spv.no/v1alpha1, requeuing"

To Reproduce
Upgrade from 0.0.x to 1.0.2 with already created AzureKeyVaultSecrets in the Cluster

Expected behavior
The sync should work without complaint.

Logs

time="2020-03-18T11:32:22Z" level=info msg="Log level set to 'info'"
W0318 11:32:22.799495       1 client_config.go:551] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2020-03-18T11:32:22Z" level=info msg="Creating event broadcaster"
time="2020-03-18T11:32:22Z" level=info msg="Setting up event handlers"
time="2020-03-18T11:32:22Z" level=info msg="Starting AzureKeyVaultSecret controller"
time="2020-03-18T11:32:22Z" level=info msg="Waiting for informer caches to sync"
time="2020-03-18T11:32:22Z" level=info msg="Starting workers"
time="2020-03-18T11:32:22Z" level=info msg="Started workers"
time="2020-03-18T11:32:22Z" level=info msg="Successfully synced AzureKeyVaultSecret 'default/tls-cluster-wildcard-certificate' with Kubernetes Secret"
time="2020-03-18T11:32:22Z" level=info msg="Successfully synced AzureKeyVaultSecret 'default/tls-cluster-wildcard-certificate' with Kubernetes Secret"
time="2020-03-18T11:33:22Z" level=info msg="Exporting certificate with private key: true"
time="2020-03-18T11:33:23Z" level=error msg="error syncing 'default/tls-cluster-wildcard-certificate': AzureKeyVaultSecret.spv.no \"tls-cluster-wildcard-certificate\" is invalid: apiVersion: Invalid value: \"spv.no/v1\": must be spv.no/v1alpha1, requeuing"
time="2020-03-18T11:34:22Z" level=info msg="Exporting certificate with private key: true"
time="2020-03-18T11:34:23Z" level=error msg="error syncing 'default/tls-cluster-wildcard-certificate': AzureKeyVaultSecret.spv.no \"tls-cluster-wildcard-certificate\" is invalid: apiVersion: Invalid value: \"spv.no/v1\": must be spv.no/v1alpha1, requeuing"
time="2020-03-18T11:44:22Z" level=info msg="Exporting certificate with private key: true"
time="2020-03-18T11:44:23Z" level=error msg="error syncing 'default/tls-cluster-wildcard-certificate': AzureKeyVaultSecret.spv.no \"tls-cluster-wildcard-certificate\" is invalid: apiVersion: Invalid value: \"spv.no/v1\": must be spv.no/v1alpha1, requeuing"

Helm logs:

helm3 ls
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                   APP VERSION
akv-controller  env-injector    8               2020-03-18 11:30:40.563095938 +0000 UTC deployed        azure-key-vault-controller-1.0.2        1.0.2
akv-injector    env-injector    8               2020-03-18 11:30:35.355556568 +0000 UTC deployed        azure-key-vault-env-injector-1.0.2      1.0.2

Question: why does env injector insist on pulling the docker image?

Hello,

I've got an environment where I'm trying to deploy resources in Azure Kubernetes, and I'm running into an issue where the env-injector fails because it is trying to pull the docker image but cannot find an imagePullSecret or an azure.json to use for proper auth. I've solved the issue by populating an imagePullSecret, but technically I shouldn't have to do that. The env-injector seems to require it, however.

My question is, why is it required that the env-injector pull the image on behalf of the pod admission controller? Is it because mutating the pod via the webhook somehow prevents the scheduler or admission controller from getting the pod itself?

How do we use aad-pod-identity for controllers

As per aad-pod-identity, after you create the AzureIdentity and AzureIdentityBinding, you need to specify the aadpodidbinding label from the AzureIdentityBinding when deploying the controller. The install step is not clear about how to do this. The helm template also does not allow it. https://github.com/SparebankenVest/public-helm-charts/blob/019fdcc58f3bdbd5396958a3e6471632bf0b033e/stable/azure-key-vault-controller/templates/deployment.yaml

Does that mean you need to change the template, set the label, and then run helm install spv-charts/azure-key-vault-controller --set keyVault.customAuth.enabled=true?

Feature Request: Pull all Secrets

Hi Guys!

Thanks for this awesome project!
I hope this is the right place for this Request.

I have an additional Use-Case which I want to explain to you.

We are transforming from Azure App Service with Managed Identity to Azure Kubernetes Service.

In our microservice scenario, every service has its own Azure Key Vault containing the secrets needed to run the service. The service starts and pulls ALL secrets in as env variables.
There might be a better approach, where the secrets are only pulled for the time needed and then dismissed.

Currently, the App Service can connect through its Managed Identity. In Kubernetes, there is no managed identity. One easy solution would be to create a Service Principal and create a Secret for the Pod with the credentials.

What I would like to do is the following:
Use your env controller with a specific Service Principal to access the Key Vault and then pull all secrets into the Kubernetes Pod.
This would be a greater abstraction between secrets and services.

So when a new secret for a given service comes up, I just have to add it to the Key Vault. The env controller will pull it in once the Pod restarts.

Specifying all Secrets manually per Pod/Deployment is sadly not an option for us.

Feature:

  1. Pull out all secrets from Keyvault
  2. Pull out all Certs

Thanks a lot,
Michael

AzureKeyVault API invalid URL error

Hi,

version: spvest/azure-keyvault-controller:0.1.14

When trying to get a secret from AKV, the AKV API responds with an invalid URI error:

failed to get secret value for 'AZURE_SECRET' from Azure Key vault using object name '[NAME]', error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://XX.vault.azure.net/secrets/YY/450e17bd04ac4e58bbb49bd1a67f1?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"\"http://www.w3.org/TR/html4/strict.dtd\">\r\n<HTML><HEAD><TITLE>Bad Request</TITLE>\r\n<META HTTP-EQUIV=\"Content-Type\" Content=\"text/html; charset=us-ascii\"></HEAD>\r\n<BODY><h2>Bad Request - Invalid URL</h2>\r\n<hr><p>HTTP Error 400. The request URL is invalid.</p>\r\n</BODY></HTML>\r\n

If I copy the URL logged by the controller and try to get the secret directly from AKV after removing the suffix "?api-version=2016-10-01", I get no error, but if I append that suffix I get the same invalid URI error.

Does anyone else have the same error?

Consistent timeout issue pulling images

I have three clusters: a dev, a uat, and a prod cluster (don't get me started). The dev and UAT clusters seem to work perfectly with the environment variable injector. Love the way it works, BTW! Though the clusters are identical, save for the vnet they are configured on, I am getting a timeout when trying to pull images in the prod environment. The timeout only occurs in namespaces that have the injector enabled. I am not necessarily looking for anyone to solve this issue directly, but more to offer some troubleshooting suggestions. Here is a bit more information: Helm appears healthy and upgraded, as I use it for plenty of other things in the environment. Kubernetes is running at 1.13.10 in all three environments.

Running kubectl logs on the injector pod (in the default namespace) reveals this:
time="2019-08-27T18:14:42Z" level=info msg="pulling docker image tmocontainers.azurecr.io/eob/web-app:20190827.1 to get entrypoint and cmd, timeout is 30 seconds"
2019/08/27 18:15:12 [ERROR] admission webhook error: failed to get auto cmd, error: failed to pull docker image 'tmocontainers.azurecr.io/eob/web-app:20190827.1', error: context deadline exceeded

I have ensured that authorization policies are in place between the SP in AKS and the ACR. I also added a docker login secret - the logs indicate that both are found fine... but apparently the pods just cannot start.

Running the deployment in another namespace without injection enabled starts the pod just fine... I have created another cluster on the same subnet, and it also fails when injection is enabled in the namespace.

Finally, I'll continue looking for an answer and may offer a pull request if I find something that can be corrected from a code perspective, but honestly it seems more environmental. I would love to have better information in the logs, or to know where else I could look for it. Thanks!

[Question] Multi-tenancy setup

Your question
What is the proper way to achieve a multi-tenancy setup with akv2k8s?

Basically, we have a cluster with multiple tenants, each having their own namespace. For each namespace, we would like to restrict access to the team's own keyvault (so 1 team = 1 namespace = 1 key vault access).

Currently, I am not too sure how we could best achieve this. The credentials seem to be set for the whole chart. I guess we could create multiple controllers and injectors, but since the webhook is triggered based on the "azure-key-vault-env-injection: enabled" label, I am guessing this would trigger all instances?

Since we cannot restrict access to specific secrets in key vault, I am not sure how to get started building a multi-tenancy setup with akv2k8s.

[FEATURE] FileInjector option to mount KV secrets into a volume

Problem: Lazy app developers have a habit of dumping all env vars to logs. Env vars are also exposed in clear text in some UIs (the Azure Log Analytics portal, for example), which is not desirable for secrets.

Solution: Much like the EnvInjector, support a FileInjector that will inject secrets using the init-container into an in-memory volume for the pod.
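
For illustration, the requested behavior roughly corresponds to this plain-Kubernetes pattern (all names and the fetcher image are hypothetical; a real FileInjector would presumably generate something equivalent automatically):

apiVersion: v1
kind: Pod
metadata:
  name: file-injection-sketch
spec:
  volumes:
  - name: akv-secrets
    emptyDir:
      medium: Memory             # tmpfs, so secrets never touch node disk
  initContainers:
  - name: fetch-secrets          # hypothetical init container that reads Key Vault
    image: example/akv-file-fetcher
    volumeMounts:
    - name: akv-secrets
      mountPath: /secrets
  containers:
  - name: app
    image: example/my-app
    volumeMounts:
    - name: akv-secrets
      mountPath: /secrets
      readOnly: true             # app reads secrets as files, not env vars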

[BUG] SecurityContext not supported

Describe the bug
If a container is run as a different user, defined by a SecurityContext, the environment injection fails because the init container does not start.

To Reproduce
I used the examples in the tutorial and added a securityContext section to the akvs-secret-app deployment, so it now looks like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: akvs-secret-app-run-as
  namespace: akv-test
  labels:
    app: akvs-secret-app-run-as
spec:
  selector:
    matchLabels:
      app: akvs-secret-app-run-as
  template:
    metadata:
      labels:
        app: akvs-secret-app-run-as
    spec:
      containers:
      - name: akv2k8s-env-test
        image: spvest/akv2k8s-env-test:2.0.1
        args: ["TEST_SECRET"]
        env:
        - name: TEST_SECRET
          value: "secret-inject@azurekeyvault" # ref to akvs
      securityContext:
        runAsUser: 100

Expected behavior
The init container starts successfully and the secret is made available to the pod.

Logs
deployment logs:
Error from server (BadRequest): container "akv2k8s-env-test" in pod "akvs-secret-app-run-as-644d9cd75c-n4rbr" is waiting to start: PodInitializing


[BUG] when using aad-pod-identity, env-injector fail to pull image from ACR

Your question
Does the env-injector use some custom mechanism to authenticate with a private docker registry in order to inspect the image and overwrite the entrypoint or cmd?
I'm having issues where the env-injector container logs show attempts to pull the image from ACR that fail.

To Reproduce
I'm using AKS + Azure Container Registry with a service principal authentication mechanism that allows me to spin up containers in AKS from images in my private registry (ACR).
The original pod itself was deployed successfully, and I can verify that the AKS cluster has the necessary permission to pull images from ACR.
However, I noticed that no env-injector debug logs were showing up in my container, so I poked around the env-injector container and found some error messages related to its inability to pull docker images from ACR.

Logs
Here is the command I am running to find that information:
$ kubectl logs -n akv2k8s azure-key-vault-env-injector-5458b4bc9d-njxrz -c azure-key-vault-env-injector

Here are the logs indicating the problem mentioned:

time="2020-03-27T19:43:58Z" level=info msg="found pod to mutate in namespace 'argo'"
time="2020-03-27T19:43:58Z" level=info msg="found container 'api-test' to mutate"
time="2020-03-27T19:43:58Z" level=info msg="checking for env vars containing '@azurekeyvault' in container api-test"
time="2020-03-27T19:43:58Z" level=info msg="found env var: db-connection-string@azurekeyvault"
time="2020-03-27T19:43:58Z" level=info msg="did not find credentials to use with registry '****.azurecr.io' - getting default credentials"
time="2020-03-27T19:43:58Z" level=info msg="failed to read azure.json to get default credentials, error: open /etc/kubernetes/azure.json: no such file or directory"
time="2020-03-27T19:43:58Z" level=info msg="pulling docker image ***.azurecr.io/***:latest to get entrypoint and cmd, timeout is 120 seconds"
2020/03/27 19:43:59 [ERROR] admission webhook error: failed to get auto cmd, error: failed to pull docker image '****.azurecr.io/***:latest', error: Error response from daemon: Get https://****.azurecr.io/v2/***/manifests/latest: unauthorized: authentication required

[BUG]: ReplicaSet cannot create a pod

Describe the bug
Following the installation guide, I installed a new Key Vault controller. The pod does not start because the ReplicaSet gives the following error:
Warning FailedCreate 6s (x4 over 24s) replicaset-controller (combined from similar events): Error creating: Pod "secretvault.yaml-azure-key-vault-controller-549b785c9c-8jvqz" is invalid: [spec.volumes[1].name: Invalid value: "secretvault.yaml-azure-key-vault-controller-token-tjbfk": a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'), spec.containers[0].volumeMounts[1].name: Not found: "secretvault.yaml-azure-key-vault-controller-token-tjbfk"]

To Reproduce
Steps to reproduce the behavior: Install a new controller using yaml files

Expected behavior
The pod to be launched by the replicaset

Additional context
We do not use helm, so we generated the yaml files using the --dry-run option. We use Kubernetes 1.15.7

Secret with multiple entries in data field

Hello,

I am deploying Istio into my cluster using helm. The chart requires a Secret with "username" and "passphrase" in the data section. At the moment I am only able to create a Secret with a single data entry. Can you please guide me on how to configure the AzureKeyVaultSecret to achieve this?

This example shows the final Secret I want to create with AzureKeyVaultSecret.

apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: istio
  labels:
    app: grafana
type: Opaque
data:
  username: ...
  passphrase: ...

Thanks!
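
One possible approach, assuming the credentials can be stored in Key Vault as a single JSON document with username and passphrase keys, is the multi-key-value-secret object type (vault and object names below are illustrative):

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: grafana-credentials
  namespace: istio
spec:
  vault:
    name: my-key-vault                 # illustrative vault name
    object:
      name: grafana-credentials        # AKV secret holding {"username": "...", "passphrase": "..."}
      type: multi-key-value-secret
      contentType: application/x-json
  output:
    secret:
      name: grafana                    # each JSON key becomes a data key in this Secret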

[BUG] hostPath type check failed: /etc/kubernetes/azure.json is not a file

Describe the bug
I am trying to run the Azure Key Vault Env Injector in a local minikube Kubernetes cluster, with customAuth.enabled set to true and client credentials specified as env values in the values file.

Here is the full values file I pass to the helm chart:

# Default values 

installCrd: true # if installing both controller and env injector, only one can install CRD
replicaCount: 1
debug: false

image:
  repository: spvest/azure-keyvault-webhook
  tag: 0.1.15
  pullPolicy: IfNotPresent

envImage:
  repository: spvest/azure-keyvault-env
  tag: 0.1.15
  
service:
  name: azure-keyvault-secrets-webhook
  type: ClusterIP
  externalPort: 443
  internalPort: 443

# Set to true to use custom auth - see https://github.com/SparebankenVest/azure-key-vault-to-kubernetes/blob/master/README.md#authentication 
customAuth:
  enabled: true

  # set to true to enable auto injection of credentials
  autoInject: 
    enabled: true

    # if credentials are env vars, specify the name of the
    # secret the env injector should create for
    # storing the credentials
    
    secretName: azure-key-vault-secret
    
    # uncomment to use aad-pod-identity (MSI) and point to a selector
    # if used, secretName must not be set
    
    # podIdentitySelector: <selector name>

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

env:
  AZURE_TENANT_ID: <redacted>
  AZURE_CLIENT_ID: <redacted>
  AZURE_CLIENT_SECRET: <redacted>

My understanding is that this should allow running the Azure Key Vault Env Injector outside of AKS in any Kubernetes cluster, yet the pod tries to access the hostPath /etc/kubernetes/azure.json, which doesn't exist since this is not AKS.

To Reproduce
helm install spv-charts/azure-key-vault-env-injector -f ../values.yaml
where values.yaml is my specified values file.

kubectl describe pod -l app=azure-key-vault-env-injector
It shows in Events:

Events:
  Type     Reason       Age              From               Message
  ----     ------       ----             ----               -------
  Normal   Scheduled    4s               default-scheduler  Successfully assigned default/injector-azure-key-vault-env-injector-96454db4-ln6hh to minikube
  Warning  FailedMount  1s (x4 over 4s)  kubelet, minikube  MountVolume.SetUp failed for volume "azureconf" : hostPath type check failed: /etc/kubernetes/azure.json is not a file

Expected behavior
Pod should start with no issues

[BUG] index out of range when listing container commands

Describe the bug
We have a multi-container pod use case (mainly PHP API services with an nginx container plus a PHP one) where secret injection was failing.
While serving the OAuth token, the request failed with an index out of range error. It was caused by one of the containers (the nginx one) having an empty command list in the pod spec. This is probably because the image's Dockerfile used CMD instead of ENTRYPOINT: when no command is specified in the container spec YAML, Kubernetes (correctly) uses the image's ENTRYPOINT when creating the container spec, so if there is no entrypoint the container's command list is left empty.

To solve this for the multi-container pod case, where only one of the containers needs secret injection and its image has a "correct" Dockerfile, it should be enough to check that the Commands array has length > 0, ANDed with the current condition (at line 39 of azure-keyvault-secrets-webhook/authorization.go). In general, this case should be handled with a proper error message stating the issue. A possible workaround is sketched after the logs below.

Expected behavior
The webhook successfully serves the token.

Logs
time="2020-04-15T09:27:24Z" level=info msg="using '/usr/sbin/php-fpm7.3 --nodaemonize' as arguments for env-injector" time="2020-04-15T09:27:24Z" level=info msg="containers mutated and pod updated with init-container and volumes" 2020/04/15 09:27:50 http: panic serving 10.80.1.188:55488: runtime error: index out of range [0] with length 0 goroutine 45033 [running]: net/http.(*conn).serve.func1(0xc0004500a0) /usr/local/go/src/net/http/server.go:1767 +0x139 panic(0x15ddb60, 0xc000509600) /usr/local/go/src/runtime/panic.go:679 +0x1b2 main.authorize(0xc00013f320, 0xc00072e3c0, 0x11, 0xc000540d15, 0x18, 0xc000540d0a, 0xa, 0x203000, 0x203000) /go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/authorization.go:39 +0xabb main.authHandler(0x18ff400, 0xc0005ba460, 0xc00085a800) /go/src/github.com/SparebankenVest/azure-key-vault-to-kubernetes/cmd/azure-keyvault-secrets-webhook/main.go:178 +0x34a net/http.HandlerFunc.ServeHTTP(0x17614e8, 0x18ff400, 0xc0005ba460, 0xc00085a800) /usr/local/go/src/net/http/server.go:2007 +0x44 github.com/gorilla/mux.(*Router).ServeHTTP(0xc0001bbe00, 0x18ff400, 0xc0005ba460, 0xc00085a600) /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0xe2 net/http.serverHandler.ServeHTTP(0xc0001ae2a0, 0x18ff400, 0xc0005ba460, 0xc00085a600) /usr/local/go/src/net/http/server.go:2802 +0xa4 net/http.(*conn).serve(0xc0004500a0, 0x1905500, 0xc0005e6480) /usr/local/go/src/net/http/server.go:1890 +0x875 created by net/http.(*Server).Serve /usr/local/go/src/net/http/server.go:2927 +0x38e

[BUG] starting container process caused "exec: \"/azure-keyvault/azure-keyvault-env\": stat /azure-keyvault/azure-keyvault-env: no such file or directory"

Describe the bug
Using environment injection with either the supplied test image or my own results in the container not starting, apparently because the expected volume mounts are not actually created.

AKS with kubernetes 1.17.0

To Reproduce
Steps to reproduce the behavior:

Helm installation as per manual.

apiVersion: v1
kind: Namespace
metadata:
  name: akv-test
  labels:
    azure-key-vault-env-injection: enabled
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: db-secret-inject
  namespace: akv-test
spec:
  vault:
    name: akvtest # name of key vault
    object:
      name: login-secret # name of the akv object
      type: secret # akv object type
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: akv2k8s-test-injection
  namespace: akv-test
  labels:
    app: akv2k8s-test-injection
spec:
  selector:
    matchLabels:
      app: akv2k8s-test-injection
  template:
    metadata:
      labels:
        app: akv2k8s-test-injection
    spec:
      containers:
      - name: akv2k8s-env-test
        image: spvest/akv2k8s-env-test
        env:
        - name: TEST_SECRET
          value: "secret-inject@azurekeyvault"
        - name: SP_SECRET
          value: "db-secret-inject@azurekeyvault"
        - name: ENV_INJECTOR_LOG_LEVEL
          value: debug

Expected behavior

SP_SECRET environment variable defined with value from AKV, inside a running container.

Logs

The env-injector pod seems to think it did its work:

time="2020-03-11T09:20:02Z" level=info msg="found pod to mutate in namespace 'akv-test'"
time="2020-03-11T09:20:02Z" level=info msg="found container 'akv2k8s-env-test' to mutate"
time="2020-03-11T09:20:02Z" level=info msg="checking for env vars containing '@azurekeyvault' in container akv2k8s-env-test"
time="2020-03-11T09:20:02Z" level=info msg="found env var: secret-inject@azurekeyvault"
time="2020-03-11T09:20:02Z" level=info msg="found env var: db-secret-inject@azurekeyvault"
time="2020-03-11T09:20:02Z" level=info msg="did not find credentials to use with registry 'spvest' - getting default credentials"
time="2020-03-11T09:20:02Z" level=info msg="registry host 'spvest' is not a acr registry"
time="2020-03-11T09:20:02Z" level=info msg="pulling docker image docker.io/spvest/akv2k8s-env-test:latest to get entrypoint and cmd, timeout is 120 seconds"
time="2020-03-11T09:20:04Z" level=info msg="docker image docker.io/spvest/akv2k8s-env-test:latest pulled successfully"
time="2020-03-11T09:20:04Z" level=info msg="inspecting container image docker.io/spvest/akv2k8s-env-test:latest, looking for entrypoint and cmd"
time="2020-03-11T09:20:04Z" level=info msg="using 'entrypoint.sh' as arguments for env-injector"
time="2020-03-11T09:20:04Z" level=info msg="containers mutated and pod updated with init-container and volumes"

But in the pod description:

Events:
  Type     Reason     Age                   From                                        Message
  ----     ------     ----                  ----                                        -------
  Normal   Scheduled  <unknown>             default-scheduler                           Successfully assigned akv-test/akv2k8s-test-injection-6c796746df-vqspz to aks-nodepool1-40643695-vmss000001
  Normal   Pulled     13m                   kubelet, aks-nodepool1-40643695-vmss000001  Container image "spvest/azure-keyvault-env:1.0.1" already present on machine
  Normal   Created    13m                   kubelet, aks-nodepool1-40643695-vmss000001  Created container copy-azurekeyvault-env
  Normal   Started    13m                   kubelet, aks-nodepool1-40643695-vmss000001  Started container copy-azurekeyvault-env
  Normal   Started    13m                   kubelet, aks-nodepool1-40643695-vmss000001  Started container akv2k8s-env-test
  Normal   Pulled     12m (x4 over 13m)     kubelet, aks-nodepool1-40643695-vmss000001  Successfully pulled image "spvest/akv2k8s-env-test"
  Normal   Created    12m (x4 over 13m)     kubelet, aks-nodepool1-40643695-vmss000001  Created container akv2k8s-env-test
  Warning  Failed     12m (x3 over 12m)     kubelet, aks-nodepool1-40643695-vmss000001  Error: failed to start container "akv2k8s-env-test": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/azure-keyvault/azure-keyvault-env\": stat /azure-keyvault/azure-keyvault-env: no such file or directory": unknown
  Normal   Pulling    11m (x5 over 13m)     kubelet, aks-nodepool1-40643695-vmss000001  Pulling image "spvest/akv2k8s-env-test"
  Warning  BackOff    3m11s (x44 over 12m)  kubelet, aks-nodepool1-40643695-vmss000001  Back-off restarting failed container

The init container does not show any error state (exit code 0). Both containers have the /azure-keyvault/ mount listed.

The logs for the init container simply say that the file was copied:

kubectl logs -n akv-test akv2k8s-test-injection-6c796746df-vqspz -c copy-azurekeyvault-env
Copying /azure-keyvault/azure-keyvault-env to /azure-keyvault/

Additional context

This did work yesterday! Nothing has changed in the AKS cluster. I have redeployed the akv2k8s and application deployments a few times to be sure.

[BUG] Cleaning up controller using terraform doesn't remove the CRD

Describe the bug
We're using terraform to set up our AKS cluster, which also installs akv2k8s using the helm provider. Upon terraform destroy, the following error appears:
Error: uninstallation completed with 1 error(s): uninstall: Failed to purge the release: delete: failed to get release "sh.helm.release.v1.azure-key-vault-controller.v1": release: not found

When trying to apply the terraform plan after that, the following error is thrown:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: , name: azurekeyvaultsecrets.spv.no, existing_kind: apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition, new_kind: apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition

To Reproduce
Set up an AKS cluster (e.g. using this quickstart: https://github.com/Azure/terraform/tree/master/quickstart/201-aks-helm) and add the following to install the helm chart:

provider "kubernetes" {
  host = data.azurerm_kubernetes_cluster.default.kube_config.0.host

  client_certificate = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.client_certificate)
  client_key = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host = data.azurerm_kubernetes_cluster.default.kube_config.0.host

    client_certificate = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.client_certificate)
    client_key = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.default.kube_config.0.cluster_ca_certificate)
  }
}

data "helm_repository" "spv" {
  name = "spv"
  url = "http://charts.spvapi.no"
}

resource "kubernetes_namespace" "default" {
  metadata {
    name = local.aks_namespace
  }
}

resource "helm_release" "azure_keyvault_controller" {
  name = "azure-key-vault-controller"
  repository = data.helm_repository.spv.metadata[0].name
  chart = "azure-key-vault-controller"
  namespace = local.aks_namespace
}

Expected behavior
CRD should be cleaned up properly.
After manually running kubectl delete -f .\crd.yaml, I can perform terraform apply successfully.

[BUG] Multiple AzureKeyVaultSecrets targeting the same Kubernetes Secret are overwriting each other.

Describe the bug
As described in #36 (comment), defining multiple AzureKeyVaultSecret objects that target the same Kubernetes Secret causes them to constantly overwrite each other.

To Reproduce

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: keyvault-secret-one
spec:
  vault:
    name: mykeyvault
    object:
      name: SecretOne
      type: secret
  output:
    secret:
      name: my-only-kubernetes-secret # <-- Same Kubernetes Secret targeted
      dataKey: secret-one
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: keyvault-secret-two
spec:
  vault:
    name: mykeyvault
    object:
      name: SecretTwo
      type: secret
  output:
    secret:
      name: my-only-kubernetes-secret # <-- Same Kubernetes Secret targeted
      dataKey: secret-two

Expected behavior

I would expect an error message in the logs or (best case) one secret with two values:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-only-kubernetes-secret
data:
  secret-one: XXXXXXXX
  secret-two: XXXXXXXX

This would probably be resolved by implementing #36.

No cleanup after the key is deleted from definition

Initial state:
A working secret definition: an AzureKeyVaultSecret exists, an ordinary Kubernetes secret has been created from it by the controller, and synchronization happens.

Next state:
The AzureKeyVaultSecret definition is removed (in my case I actually changed the name, so maybe that has something to do with it), and the changes are applied to Kubernetes.

Outcome:
Both the old AzureKeyVaultSecret and the old Kubernetes secret are still present in the cluster.

Desired outcome:
Only the secrets defined in the configuration file should be present; both old resources should be deleted.

[Question] is it possible to set labels on a AzureKeyVaultSecret

Hi,

is it possible to set a label on a secret resource?

e.g.

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: <name for azure key vault secret>
  namespace: <namespace for azure key vault secret>
spec:
  vault:
    name: <name of azure key vault>
    object:
      name: <name of azure key vault object to sync>
      type: <object type in azure key vault to sync>
      version: <optional - version of object to sync>
      contentType: <only used when type is the special multi-key-value-secret - either application/x-json or application/x-yaml>
  output: # ignored by env injector, required by controller to output kubernetes secret
    secret: 
      name: <name of the kubernetes secret to create>
      label: mylabel
      dataKey: <required when type is opaque - name of the kubernetes secret data key to assign value to - ignored for all other types>
      type: <optional - kubernetes secret type - defaults to opaque>

[Question] Can you inject a secret as part of a template?

Is it possible to inject something like this?

    - name: CONNECTION_STRING
      value: redis:6379,abortConnect=false,password=redispassword@azurekeyvault

In this case I need access to the password in other cases but also need to inject a connection string that contains the same password.
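
For reference, the examples in this repo only show the injector matching when the entire value ends with @azurekeyvault, so a composite connection string would presumably need to be assembled by the application or its entrypoint from a separately injected value such as:

    - name: REDIS_PASSWORD
      value: redispassword@azurekeyvault   # the whole value is replaced with the secret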

datakey expected when output type 'kubernetes.io/tls' is used

Manifest:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: my-cert
spec:
  vault:
    name: my-keyvault-name
    object:
      type: secret
      name: my-keyvault-secret-name
  output:
    secret:
      name: my-keyvault-secret-name
      type: kubernetes.io/tls

When applied, the following Kubernetes logs occur:
failed to get secret value for 'default/my-cert' from Azure Key vault 'my-keyvault-name' using object name 'my-keyvault-secret-name', error: no datakey spesified for output secret

The documentation states:
dataKey: <required when type is opaque - name of the kubernetes secret data key to assign value to - ignored for all other types>

So my understanding is that dataKey shouldn't have to be specified for my type: kubernetes.io/tls.
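
As a sketch (untested here), assuming the object can be stored in AKV as a certificate rather than a secret, the documented pairing for kubernetes.io/tls output uses a certificate object type, which should produce tls.crt and tls.key without a dataKey:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: my-cert
spec:
  vault:
    name: my-keyvault-name
    object:
      type: certificate            # certificate object instead of secret
      name: my-keyvault-cert-name  # hypothetical certificate name
  output:
    secret:
      name: my-cert-tls
      type: kubernetes.io/tls      # controller emits tls.crt / tls.key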

[Question] A better way to handle deletion of sensitive data/files?

I've spent the weekend fixing and improving things around the env-injector. One of the things that now seems to work fine is removing sensitive files (the injector executable azure-keyvault-env and, if running inside Azure without custom auth, the azure.json host file containing AKS credentials - ref #25).

The main problem to solve is that we don't know which privileges (or which user) the executing container has, so my current solution is to chmod the /azure-keyvault directory and its files to 777 (ref: azure-keyvault-secrets-webhook) and then have the executing container (through the azure-keyvault-env executable) delete the files (ref: azure-keyvault-env) as soon as they are no longer needed.

Even though these files only exist in an in-memory volume for a few milliseconds, it still feels weird to use 777.

Does anyone have a better solution? Do you see any real security issues with the current solution?

[BUG] HELM: Can only install one chart on AKS

When trying to install both the controller and the env-injector through helm, I get the following error:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: CustomResourceDefinition, namespace: , name: azurekeyvaultsecrets.spv.no

The first chart always succeeds, but the next fails with the above error.

To Reproduce

  1. helm repo update
  2. helm repo add spv-charts http://charts.spvapi.no
  3. helm install spv-charts/azure-key-vault-controller --generate-name
  4. helm install spv-charts/azure-key-vault-env-injector --set installCrd=false --generate-name

Step 4 fails with the error shown above.

Expected behavior
I expect every step in the installation process to succeed

[FEATURE] Prometheus metrics

The env-injector is implemented as a Mutating Admission Webhook, and any disruption can cause Pods not to start (even Pods that are not managed by the Env Injector). We have taken measures by running two replicas by default and specifying a Pod Disruption Budget, but it would still be nice if both the Controller and the Env Injector exposed relevant metrics.

Describe the solution you'd like
A prometheus endpoint exposing relevant metrics for Controller and Env Injector
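
A sketch of how a scrape target could be declared once such an endpoint exists, using the common prometheus.io annotation convention (port and path are hypothetical, and the annotations only take effect with a matching Prometheus kubernetes_sd scrape configuration):

apiVersion: v1
kind: Service
metadata:
  name: azure-key-vault-controller-metrics
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9000"       # hypothetical metrics port
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: azure-key-vault-controller  # hypothetical label on the controller pods
  ports:
  - name: metrics
    port: 9000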
