hashicorp / vault-plugin-auth-kubernetes

Vault authentication plugin for Kubernetes Service Accounts

Home Page: https://www.vaultproject.io/docs/auth/kubernetes.html

License: Mozilla Public License 2.0


vault-plugin-auth-kubernetes's Introduction

Vault Plugin: Kubernetes Auth Backend

This is a standalone backend plugin for use with Hashicorp Vault. This plugin allows for Kubernetes Service Accounts to authenticate with Vault.

Please note: We take Vault's security and our users' trust very seriously. If you believe you have found a security issue in Vault, please responsibly disclose by contacting us at security@hashicorp.com.


Getting Started

This is a Vault plugin and is meant to work with Vault. This guide assumes you have already installed Vault and have a basic understanding of how Vault works.

Otherwise, first read this guide on how to get started with Vault.

To learn specifically about how plugins work, see documentation on Vault plugins.

Security Model

The current authentication model requires providing Vault with a Service Account token, which can be used to make authenticated calls to Kubernetes. This token should not typically be shared, but in order for Kubernetes to be treated as a trusted third party, Vault must validate something that Kubernetes has cryptographically signed and that conveys the identity of the token holder.

We expect Kubernetes to support less sensitive mechanisms in the future, and the Vault integration will be updated to use those mechanisms when available.

Usage

Please see documentation for the plugin on the Vault website.

This plugin is currently built into Vault and by default is accessed at auth/kubernetes. To enable this in a running Vault server:

$ vault auth enable kubernetes
Successfully enabled 'kubernetes' at 'kubernetes'!

To see all the supported paths, see the Kubernetes auth API docs.

Developing

If you wish to work on this plugin, you'll first need Go installed on your machine.

To compile a development version of this plugin, run make or make dev. This will put the plugin binary in the bin and $GOPATH/bin folders. dev mode generates a binary only for your platform and is faster:

$ make
$ make dev

Put the plugin binary into a location of your choice. This directory will be specified as the plugin_directory in the Vault config used to start the server.

...
plugin_directory = "path/to/plugin/directory"
...

Start a Vault server with this config file:

$ vault server -config=path/to/config.hcl ...
...

Once the server is started, register the plugin in the Vault server's plugin catalog:

$ vault plugin register \
        -sha256=<expected SHA256 Hex value of the plugin binary> \
        -command="vault-plugin-auth-kubernetes" \
        auth kubernetes
...
Success! Data written to: sys/plugins/catalog/kubernetes

Note you should generate a new sha256 checksum if you have made changes to the plugin. Example using openssl:

openssl dgst -sha256 $GOPATH/bin/vault-plugin-auth-kubernetes
...
SHA256(.../go/bin/vault-plugin-auth-kubernetes)= 896c13c0f5305daed381952a128322e02bc28a57d0c862a78cbc2ea66e8c6fa1

Enable the auth plugin backend using the Kubernetes auth plugin:

$ vault auth enable kubernetes
...

Successfully enabled 'kubernetes' at 'kubernetes'!

Tests

If you are developing this plugin and want to verify it is still functioning (and you haven't broken anything else), we recommend running the tests.

To run the tests, invoke make test:

$ make test

You can also specify a TESTARGS variable to filter tests like so:

$ make test TESTARGS='--run=TestConfig'

To run integration tests, you'll need kind installed.

# Create the Kubernetes cluster for testing in
make setup-kind
# Build the plugin and register it with a Vault instance running in the cluster
make setup-integration-test
# Run the integration tests against Vault inside the cluster
make integration-test

vault-plugin-auth-kubernetes's People

Contributors

benashz, briankassouf, catsby, dependabot[bot], eh-steve, fairclothjm, giskarda, hashicorp-copywrite[bot], hashicorp-tsccr[bot], hc-github-team-secure-vault-ecosystem, jefferai, jwong101, kpcraig, malnick, nathkn, ncabatoff, noelledaley, raymonstah, riuvshyn, rvasilevsf, sylwit, thevilledev, thyton, tomhjp, tsaarni, tvoran, tyrannosaurus-becks, violethynes, vishalnayak, zlaticanin


vault-plugin-auth-kubernetes's Issues

Not a compact JWS error

Hey, I have an issue delegating authentication to k8s. I configured my backend as follows:

K8s CA cert

/tmp/example.crt contains the K8s cert to authenticate to the API, from my ~/.kube/config

tokenreview JWT

ACCOUNT_TOKEN=$(kubectl -n default get secret $(kubectl -n default get serviceaccount vault-tokenreview -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode)

vault write -f auth/kube-example/config kubernetes_host=$GKE_URL kubernetes_ca_cert=@/tmp/example.crt token_reviewer_jwt=$ACCOUNT_TOKEN

vault write auth/kube-example/role/myapplication \
    bound_service_account_names=myapplication \
    bound_service_account_namespaces=myapplication \
    policies=myapplication \
    ttl=48h \
    max_ttl=48h

Created before with the config:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-tokenreview
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: vault-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-tokenreview
  namespace: default

Pod SA

curl --request POST --data '{"jwt": "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)", "role": "myapplication"}' https://vault.example.com/v1/auth/kube-example/login
{"errors":["not a compact JWS"]}

I tried playing with both JWTs and base64 encoding but no luck, and this is driving me crazy. Am I missing something?
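(A likely culprit in the curl above: single quotes stop the shell from expanding $(cat ...), so the literal string is sent as the JWT, which is indeed not a compact JWS. A minimal corrected request, assuming the same endpoint and role:)

# double quotes let the shell substitute the token before the request is sent
curl --request POST \
  --data "{\"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\", \"role\": \"myapplication\"}" \
  https://vault.example.com/v1/auth/kube-example/login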

Error validating serviceaccount token when "pem_keys" are specified in the configuration

Hi,

I have been trying to use the Kubernetes authentication plugin, and I was successful until I tried to use "pem_keys" to validate the signature of the service account token provided by the client pod, as documented here.

I am running Vault 0.10.1 and Kubernetes 1.9.8. I am using an ECDSA key to sign the service account tokens, and I extracted the public key from it using openssl ec -in /etc/kubernetes/ssl/serviceaccount.key -pubout

I then updated the Kubernetes auth config by running:

 vault write auth/kubernetes/config kubernetes_host=URL token_reviewer_jwt=VAULT_SERVICE_TOKEN kubernetes_ca_cert=@/etc/ssl/certs/kubernetes-ca.pem pem_keys="PUBLIC_KEY_FROM_OPENSSL_COMMAND"

Doing a read shows the key successfully loaded in the Vault config for Kubernetes auth.

Whenever I try to log in from my client, though, I get an error:

~ $ curl -X POST -d @/dev/shm/payload.json -k https://vault:8200/v1/auth/kubernetes/login                                                           
{"errors":["asn1: structure error: length too large"]}

The very same command works as long as I don't have any "pem_keys" defined.

I also verified that the signature is correct for the public key using https://jwt.io/

Is there anything I am maybe missing in my configuration, or anything else I am doing wrong?

Getting "missing client token" when using --path for multiple kubernetes cluster.

Hello,
I have two k8s clusters and I'm trying to integrate both with the same Vault instance. Trying the solution with "--path=k8s-2" proposed in #19, I am getting "missing client token". I believe it's caused by k8s validating the token against the default endpoint "/v1/auth/kubernetes/login" instead of "/v1/auth/k8s-2/login".

Is there something I'm missing in the k8s config during the Vault integration?
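(For context: Vault answers "missing client token" when no auth method is mounted at the path being hit, because the request is then treated as one that requires a client token. The login URL has to use the same path the method was enabled at; a minimal sketch, assuming the k8s-2 mount from above:)

# each cluster gets its own mount, and logins must target that mount's path
vault auth enable -path=k8s-2 kubernetes
curl --request POST --data @payload.json "$VAULT_ADDR/v1/auth/k8s-2/login"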

vault kubernetes auth error - service account name not authorized

I am facing an issue logging in to the OpenShift approle created using the Kubernetes auth method for Vault authentication.

This could be similar to, or the same as, the issue in the URL below: #49

I have Vault running in minishift, exactly following this URL: https://medium.com/hashicorp-engineering/vault-kubernetes-auth-method-for-openshift-9b9155590a6d

In the openshift side, executed the below commands.

Create OC project and token reviewer JWT:

oc login -u system:admin
oc new-project vault-demo
oc projects
oc create sa vault-auth

Create Cluster role binding for vault-auth

oc adm policy add-cluster-role-to-user \
  system:auth-delegator system:serviceaccount:vault-demo:vault-auth
oc serviceaccounts get-token vault-auth > reviewer_sa_jwt.txt

Let's create two more service accounts for applications:

oc create sa app1
oc create sa app2
My vault addr is as below:

~/github/hashitvault$ echo $VAULT_ADDR
http://vault-myproject.192.168.42.186.nip.io

I am seeing the below error when logging in to "$VAULT_ADDR/v1/auth/ocp/login":

desktop-e470:~/hashitvault$ curl --request POST --data @payload.json "${VAULT_ADDR}/v1/auth/ocp/login"
{"errors":["service account name not authorized"]}

Below are the vault commands executed as part of this exercise:

desktop-e470:~/hashitvault$ vault policy write app1-policy app1-policy.hcl
Success! Uploaded policy: app1-policy

desktop-e470:~/hashitvault$ cat app1-policy.hcl
path "secret/app1" {
capabilities = ["read", "list"]
}
path "database/creds/app1" {
capabilities = ["read", "list"]
}

desktop-e470:~/hashitvault$ vault policy read app1-policy
path "secret/app1" {
capabilities = ["read", "list"]
}
path "database/creds/app1" {
capabilities = ["read", "list"]
}

desktop-e470:~/hashitvault$ vault kv put secret/app1 username=app1 password=supasecr3t
Key              Value
---              -----
created_time     2019-12-19T16:23:58.402322163Z
deletion_time    n/a
destroyed        false
version          1

desktop-e470:~/hashitvault$ vault write "auth/ocp/config" \

token_reviewer_jwt="${reviewer_jwt}"
kubernetes_host="http://192.168.42.186:8443"
kubernetes_ca_cert=@/home/apurb/.minishift/ca.pem
Success! Data written to: auth/ocp/config

desktop-e470:~/hashitvault$ vault write "auth/ocp/role/app1-role" \

bound_service_account_names="default,app1"
bound_service_account_namespaces="vault-demo"
policies="app1-policy" ttl=1h
Success! Data written to: auth/ocp/role/app1-role

desktop-e470:~/hashitvault$ reviewer_jwt="$(cat reviewer_sa_jwt.txt)"

desktop-e470:~/hashitvault$ vault write "auth/ocp/config" token_reviewer_jwt="${reviewer_jwt}" kubernetes_host="http://192.168.42.186:8443" kubernetes_ca_cert=@/home/apurb/.minishift/ca.pem
Success! Data written to: auth/ocp/config

desktop-e470:~/hashitvault$ vault write "auth/ocp/role/app1-role" bound_service_account_names="default,app1" bound_service_account_namespaces="vault-demo" policies="app1-policy" ttl=1h
Success! Data written to: auth/ocp/role/app1-role

desktop-e470:~/hashitvault$ curl -H "X-Vault-Token: s.hswgw3TIjDCTNmxbUSfT5hbP" \
    "${VAULT_ADDR}/v1/secret/data/app1"
{"request_id":"06846c6f-7405-19f0-971b-f4715ae7b180","lease_id":"","renewable":false,"lease_duration":0,"data":{"data":{"password":"supasecr3t","username":"app1"},"metadata":{"created_time":"2019-12-19T16:23:58.402322163Z","deletion_time":"","destroyed":false,"version":1}},"wrap_info":null,"warnings":null,"auth":null}

cat payload.json
{ "role":"app1-role", "jwt":"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJ2YXVsdC1kZW1vIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InZhdWx0LWF1dGgtdG9rZW4taHd4NjciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoidmF1bHQtYXV0aCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjFjNDIxNWQyLTIyN2MtMTFlYS05YjZmLTUyNTQwMDk4YWMzOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDp2YXVsdC1kZW1vOnZhdWx0LWF1dGgifQ.XzvbWRi2DUKnNzYoZYyJfKqHgQdxv8jg_75nHhmqTHAiYuz4-ABaqJokUTlrQGwsvw41V4rqEmc0CVF3MK_jgyUZzmpGnCNMySkRyYQw9TChhHUmOQDH9AKj6OOFcmAV811sQu9-qvVav4QlJPIW4cm6dHe-XHSNxuzqJ7OWScezqVDYaiWXBkcFpzEEisV6puXA7o5Npg-so2u0lW9bGEe9UP363ZyR3AYZ_rlZoRB-Gq7exGlN2TII0xUZDaBwbf9vDE_i3Zs_HFdNSBGsVFsG3-Xlw_iUTPTGTehDkSX7koYTT8GzjS9KR94TMVZdPLGH6txF4QfaRnWAvKgvOg" }

desktop-e470:~/hashitvault$ curl --request POST --data @payload.json "${VAULT_ADDR}/v1/auth/ocp/login"
{"errors":["service account name not authorized"]}

permission denied while creating external secrets

Hi, I am trying to use the Vault Kubernetes auth with the k8s external secrets provided by GoDaddy: https://github.com/godaddy/kubernetes-external-secrets#hashicorp-vault.
Note: the external secrets controller is configured in the vault namespace.
My Vault endpoint is configured in the external secrets env var $VAULT_ADDR.
I created a new secret in Vault's kv engine from my machine, using vault token login:
vault kv put secret/dev/rds @pass.json
Enabled auth using
vault auth enable -path=k8s-cluster kubernetes
Configured auth using

k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"
k8s_cacert="$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)"
secret_name="$(kubectl get serviceaccount vault-auth -n vault -o go-template='{{ (index .secrets 0).name }}')"
tr_account_token="$(kubectl get secret ${secret_name} -n vault -o go-template='{{ .data.token }}' | base64 --decode)"
vault write auth/k8s-cluster/config token_reviewer_jwt="${tr_account_token}" kubernetes_host="${k8s_host}" kubernetes_ca_cert="${k8s_cacert}"

Policy to read all secrets in kv,

vault policy write k8s-policy-read - <<EOF
path "secret/*" {
    capabilities = ["read", "list"]
}
EOF

Role to allow above policy

vault write auth/k8s-cluster/role/sbx \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces=vault \
    policies=k8s-policy-read \
    ttl=1h

The external secrets manifest creates a service account vault-auth in the vault namespace.
After creating the external secret to fetch the above secret as below, I get permission denied.

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: secret-rds
  namespace: vault
spec:
  backendType: vault
  # Your authentication mount point, e.g. "kubernetes"
  vaultMountPoint: k8s-cluster
  # The vault role that will be used to fetch the secrets
  # This role will need to be bound to kubernetes-external-secret's ServiceAccount; see Vault's documentation:
  # https://www.vaultproject.io/docs/auth/kubernetes.html
  vaultRole: sbx
  data:
  - name: pass
    # The full path of the secret to read, as in `vault read secret/data/hello-service/credentials`
    key: secret/dev/rds
    property: pass
  # Vault values are matched individually. If you have several keys in your Vault secret, you will need to add them all separately
  - name: user
    key: secret/dev/rds
    property: user

This should ideally create a secret, but gives the below error in the secret event:

Status:               ERROR, permission denied

The external secret controller gives a similar error:

{"level":50,"time":1591282202905,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-fbdf84d6f-d65st","response":{"statusCode":403,"body":{"errors":["permission denied"]}},"stack":"Error: permission denied\n    at handleVaultResponse (/app/node_modules/node-vault/src/index.js:49:21)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:97:5)\n    at async VaultBackend._get (/app/lib/backends/vault-backend.js:42:21)\n    at async /app/lib/backends/kv-backend.js:29:31\n    at async Promise.all (index 0)\n    at async Promise.all (index 1)\n    at async VaultBackend.getSecretManifestData (/app/lib/backends/kv-backend.js:116:42)\n    at async Poller._createSecretManifest (/app/lib/poller.js:88:18)\n    at async Poller._upsertKubernetesSecret (/app/lib/poller.js:152:28)","type":"Error","msg":"failure while polling the secret vault/secret-rds"}
{"level":30,"time":1591282202919,"pid":17,"hostname":"external-secrets-kubernetes-external-secrets-fbdf84d6f-d65st","msg":"stopping poller for vault/secret-rds"}

This plainly shows an authorization error, but as the policies and roles are mapped properly, the permission denied error should not occur. Not sure if I am missing anything; any help or discussion will be appreciated. Thanks in advance.
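(One way to narrow this down, sketched assuming the variables from the setup above are still in scope: log in manually with the vault-auth token and read the secret using the returned client token. If the manual path works, the problem is on the external-secrets side.)

# manual login with the same SA token the controller would present
login_json=$(curl -s --request POST \
  --data "{\"jwt\": \"${tr_account_token}\", \"role\": \"sbx\"}" \
  "$VAULT_ADDR/v1/auth/k8s-cluster/login")
client_token=$(echo "$login_json" | jq -r '.auth.client_token')
# note: if the kv engine is version 2, the API path is secret/data/dev/rds
curl -s -H "X-Vault-Token: ${client_token}" "$VAULT_ADDR/v1/secret/dev/rds"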

Cannot register vault-plugin-auth-kubernetes plugin

Environment

I'm running on GKE; Kubernetes 1.7.5

Steps to reproduce:

I'm deploying Vault 0.8.2 with the following StatefulSet, which downloads the plugin using an init container: https://gist.github.com/kelseyhightower/f90929935d52144b74a7380404d78e8a

Here is my vault config that configures the plugin directory:

disable_mlock = true

listener "tcp" {
      address = "0.0.0.0:8200"
      tls_cert_file = "/etc/vault/tls/vault.pem"
      tls_client_ca_file = "/etc/vault/tls/ca.pem"
      tls_key_file = "/etc/vault/tls/vault-key.pem"
      tls_min_version = "tls12"
      tls_require_and_verify_client_cert = "false"
}

plugin_directory = "/usr/local/libexec/vault"

storage "consul" {
      address = "127.0.0.1:8500"
      path = "vault/"
}

How I Enabled the Plugin

vault write sys/plugins/catalog/kubernetes \
  sha_256=7da474c70fd4d3ce14508fd19a4cae8160e1f47ebe365e077fd0f8094d1dfacb \
  command="vault-plugin-auth-kubernetes"
Success! Data written to: sys/plugins/catalog/kubernetes

Next configure the plugin

vault write auth/kubernetes/config \
  kubernetes_host=https://35.197.84.128 \
  kubernetes_ca_cert=@ca.pem
Error writing data to auth/kubernetes/config: Error making API request.
Error writing data to auth/kubernetes/config: Error making API request.

URL: PUT https://35.203.143.169:8200/v1/auth/kubernetes/config
Code: 500. Errors:

* 1 error occurred:

* internal error

Reviewing the logs:

kubectl logs vault-0 vault
2017/09/19 10:09:46.119519 [ERROR] core: failed to run existence check: error=plugin exited before we could connect
2017/09/19 10:10:19.565675 [INFO ] expiration: revoked lease: lease_id=sys/wrapping/wrap/c2f3825192a5798055baee43902d43c9d4001bdd
2017/09/19 10:10:19 http: TLS handshake error from 10.16.4.5:38898: remote error: tls: bad certificate
2017/09/19 10:10:19.685566 [ERROR] plugin.vault-plugin-auth-kubernetes: plugin tls init: error="error during token unwrap request: secret is nil"
2017/09/19 10:10:19.686531 [ERROR] rollback: error rolling back: path=auth/kubernetes/ error=plugin exited before we could connect
2017/09/19 10:10:45.978640 [INFO ] expiration: revoked lease: lease_id=sys/wrapping/wrap/f271ef2d3f59a6018d7900be22293f2957676773

10.16.4.5 is the IP address of the Pod where vault is running and where the plugin is installed.

I'm running vault with TLS enabled, and the certificate was generated like this: https://github.com/kelseyhightower/nomad-on-kubernetes/blob/master/docs/04-nomad-infrastructure.md#setup-pki-infrastructure

Notice that the Pod IP address is not included in the TLS cert created for Vault, as all clients use the external IP assigned to Vault. Is the plugin attempting to validate the Vault TLS cert using the local IP address assigned to Vault?
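(A hedged way to check exactly that: inspect the SANs on the certificate the listener presents at the address the plugin dials, here apparently the pod IP. If that IP is not listed, the plugin's TLS client will refuse the connection.)

# assumes a shell that can reach vault-0 on port 8200
echo | openssl s_client -connect 10.16.4.5:8200 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'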

Disabling SSL for k8s auth method

Is your feature request related to a problem? Please describe.
I have two k8s clusters in GKE. One of them runs my application, the other runs Vault. They are in separate projects and are connected over VPC peering. The application cluster is private and has a private master endpoint. That makes it impossible to access that endpoint directly from another VPC. A proxy has to be used: https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies

I've tried using an HTTP proxy; however, the vault-init container has this issue: kelseyhightower/vault-init#16.

So I tried using a TCP proxy to access the master directly. Vault does get to the master through the proxy, but the master cert does not match the IP address of the proxy, and the SSL connection gets refused with:

# curl -k --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "scorpion"}' $VAULT_ADDR/v1/auth/kubernetes/login | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1105  100   220  100   885    367   1479 --:--:-- --:--:-- --:--:--  1844
{
  "errors": [
    "Post https://10.48.24.34:443/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate is valid for 35.236.224.245, 10.48.96.1, 35.245.183.91, 35.245.219.73, 35.236.226.10, 10.48.36.2, not 10.48.24.34"
  ]
}

I tried to see if it's possible to customize the master cert to include the desired IP, but it is not possible to do so on GKE.

Describe the solution you'd like
The only solution I see is to allow a connection without SSL for now, but I can't turn off SSL verification on the kubernetes method. I understand that verification is necessary and that disabling it defeats the whole purpose of the authorization, but I am on an internal network and have literally run out of options to make this work. Vault can still be useful for us, and we still want it for key management, recycling, and auth on a per namespace/cluster/app basis, but I don't want to run a VM and create infra around it. A non-SSL connection for auth methods should be an option.

Later, when Google allows cert modifications or our situation changes, we could fix this. Otherwise it obstructs implementation in our org.
Describe alternatives you've considered
Described above.


bug: authentication to external k8s cluster fails

Given the following situation:

  • vault is running in cluster 1
  • kubernetes authentication setup to external cluster (cluster 2) with no token_reviewer_jwt set

In this situation authentication will always fail, because Vault defaults to using the token that Vault itself runs under for the TokenReviewer JWT. As this has no authority in cluster 2, authentication fails.

Looking over the changes, #83 is the PR that I believe broke this functionality, because it added the following:

if len(tokenReviewer) == 0 && len(localTokenReviewer) > 0 {
  tokenReviewer = string(localTokenReviewer)
}

As you can see, if there is no tokenReviewer specified in the request, it checks for the localTokenReviewer (found if running in a cluster) and defaults to that.

service account name not authorized

I'm trying to get the k8s plugin to work with Vault. I'm setting this up in GKE.
The documentation on how to use this plugin is incomplete, so I followed these steps pretty much: https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets#start-vault-server-on-ec2

Once I launch my pod, log in to the shell and do curl --request POST --data "{\"jwt\": \"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" , \"role\": \"demo\"}" $VAULT_ADDR/v1/auth/kubernetes/login, I'm getting {"errors":["service account name not authorized"]}.

The only difference is I'm running this in GKE rather than AWS. Any help would be appreciated.

Include namespace in display name with hyphen not colon

#31 changed the display name to include the kubernetes namespace (yey!)

When using the Kubernetes auth method to generate MSSQL database credentials, I now get a username with a v-kubernetes-{namespace}: prefix, e.g.:

v-kubernetes-testing:v-GENPRD_0XA_LSTN-clie-cFbDNmp9WeGSJAo3hanx-1532477048

previously the equivalent MSSQL username would have been:

v-kubernetes-vault-aut-GENPRD_0XA_LSTN-clie-1qMciGW3fm0jNFAIGMTz-1531203590

Comparing the same credential generated while using the AWS auth method, I get:

v-aws-Role-Lab_ACP-AWS-GENPRD_0XA_LSTN-clie-e7L3UhmmeGCfKCQNwWlr-1532568250

While a colon in a user name is valid for MSSQL, some older applications unfortunately don't accept it.
Can we change the display name to be hyphen separated instead of colon separated, which would be consistent with (at least) AWS auth?

Make Issuer validation optional

According to the JWT spec, iss validation can be optional, but the audience must be validated.

Sometimes in k8s the iss can be updated, which will cause a massive outage of services that use Vault.

It would be nice if the auth/kubernetes/config path supported delete operation

I'm having a problem with Terraform interacting with Vault through Terraform's Vault Provider. One of the things I'm doing is using that provider's vault_generic_secret resource to write to the auth/kubernetes/config path after redeploying a k8s cluster. However, when I queue a destroy plan in Terraform Enterprise, it fails and I get an error saying:
vault_generic_secret.config: error deleting "auth/kubernetes/config" from Vault ... unsupported operation".

I'm reporting the issue here rather than in the Vault Provider for Terraform repo because I believe the problem is in the Vault Kubernetes Auth backend itself. In particular, if I try to use the vault CLI to do "vault delete auth/kubernetes/config", I get the same "unsupported operation" error.

In addition to not being able to destroy my k8s cluster because of the above error, I cannot even remove the portion of my Terraform code that is using the vault_generic_secret resource that wrote to auth/kubernetes/config. Removing the code causes Terraform to try to send a delete operation to Vault which again fails.

Note that I did not actually provision the Kubernetes auth backend with Terraform, since I had manually created it before deciding that I wanted to update the backend's config through Terraform whenever I redeploy my cluster, since doing so results in a new ca.crt and JWT token.

I recognize that deleting the auth/kubernetes/config path would leave the auth backend in a useless state. That is probably why the delete operation was not added. I imagine the developers assumed that someone who no longer wanted to use the backend would use vault auth-disable (or the corresponding API) instead. In my case, I don't actually want to delete the backend. I don't even care about deleting the auth/kubernetes/config path. Unfortunately, Terraform does want to do that.

Configuration fails if Host ends with a slash

If you configure the kubernetes host with a trailing slash (e.g. https://kube.example.com/), all requests will fail with a cryptic error message (it gets a 404 from Kubernetes, which results in a 500 from Vault).

This is caused by https://github.com/hashicorp/vault-plugin-auth-kubernetes/blob/master/token_review.go#L76, which adds a slash before the start of the path, regardless of whether the host ends in a slash.

Expected behavior: The host configuration should work either with or without the trailing slash. Whether that happens when configuration is entered or in the token_review.go, I don't care, but I shouldn't have to read your source code because I entered a valid URL as the host.
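(Until the plugin normalizes the URL itself, a hedged workaround is to strip the slash at configuration time:)

# ${host%/} removes a single trailing slash if present
host="https://kube.example.com/"
vault write auth/kubernetes/config kubernetes_host="${host%/}"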

Add custom metadata to Vault from ServiceAccount annotations

We would like to create policy rules that use custom entity/alias metadata and have this metadata be supplied from Kubernetes annotations on the ServiceAccount.

The way I envision this working is, if you have a ServiceAccount with annotations such as:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-account
  namespace: default
  annotations:
    # could have different form
    vault.hashicorp.com/auth-metadata/metadata-key: example-value

Then the auth plugin adds following metadata to the entity/alias:

metadata-key: example-value

We could then create a templated policy document (in Terraform) such as:

data "vault_policy_document" "tokens_write_document" {
  path         = "root/{{identity.entity.aliases.<mount-accessor>.metadata.<metadata-key>}}/*"
  capabilities = ["write", "list", "read"]
}

And with this policy in place, the ServiceAccount would have access to any secret under:

/root/example-value/*

Currently, we would have to manually edit entity/alias metadata to achieve this, at which point we might as well create per-service-account policies, which we would really like to avoid doing.

What are your thoughts on this? I would gladly submit a PR.

I'm thinking what would be necessary is to make an extra request to get the annotations for a given service account, parse them, and pass them to Vault in this code block: https://github.com/hashicorp/vault-plugin-auth-kubernetes/blob/master/path_login.go#L112-L133. Extra RBAC permissions would be required for the JWT reviewer service account too.

Can't issue tokens

After following the configuration steps, I end up with the following situation/error:

~$ vault write auth/kubernetes/login role="${MY_ROLE}" jwt="${MY_JWT_TOKEN}"
Error writing data to auth/kubernetes/login: Error making API request.

URL: PUT https://vault.rocks/v1/auth/kubernetes/login
Code: 500. Errors:

* invalid character '<' looking for beginning of value

I believe it might be related to my CA. I use an AWS ACM certificate to reach my Kubernetes cluster TLS endpoint. The CA used to sign my JWT token is different. I couldn't find clear documentation on what is expected in terms of configuration for this use case.

Does anyone have a clue, or has anyone been facing this as well?
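(For context: "invalid character '<' looking for beginning of value" is Go's JSON decoder hitting a response that starts with '<', i.e. HTML rather than JSON, which suggests the configured kubernetes_host is answering with something other than the Kubernetes API, such as a load balancer error page. A quick check against the configured host:)

# if this prints HTML instead of JSON, Vault is not talking to the API server
curl -sk "$K8S_HOST/apis/authentication.k8s.io/v1" | head -c 200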

auth/kubernetes/login fails when configuring /auth/kubernetes/config with REST endpoint

When I configure the kubernetes auth backend using the vault CLI, my subsequent auth/kubernetes/login requests succeed:

SA_NAME="vault-auth"
K8S_HOST=${K8S_HOST:-"https://kubernetes.default:443"}
SECRET=$(kubectl get sa $SA_NAME -o jsonpath='{.secrets[0].name}')
VAULT_CA_CRT=$(kubectl get secret $SECRET -o jsonpath="{.data['ca\.crt']}" | base64 -d)
VAULT_JWT=$(kubectl get secret $SECRET -o jsonpath="{.data['token']}" | base64 -d)

vault write auth/kubernetes/config \
  token_reviewer_jwt="$VAULT_JWT" \
  kubernetes_host="$K8S_HOST" \
  kubernetes_ca_cert="$VAULT_CA_CRT"

This works and I'm able to use the k8s auth backend. Yay.

However I am setting this up in an init container that does not have the Vault CLI, so I'm using the REST endpoint to configure Kubernetes auth. With the same environment variables I do the following:

cat << EOF > data.json
{
  "kubernetes_host": "$K8S_HOST",
  "kubernetes_ca_cert": "$VAULT_CA_CRT",
  "token_reviewer_jwt": "$VAULT_JWT"
}
EOF

curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --silent \
    --write-out '%{http_code}' \
    --request POST \
    --data @data.json \
    ${VAULT_ADDR}/v1/auth/kubernetes/config

When I configure this way, my subsequent /auth/kubernetes/login requests fail with the following error:

{"errors":["Post https://kubernetes.default:443/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority"]}

I suspect it's a newline issue with my certificate, because when I set the curl --data field to an inline JSON string (rather than using the file), I get this error: certificate failed to parse JSON input: invalid character '\\n' in string literal.

How can I convert this:

VAULT_CA_CRT=$(kubectl get secret $SECRET -o jsonpath="{.data['ca\.crt']}" | base64 -d)

into a format that the REST endpoint will accept?
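(One answer, sketched with jq, which performs the JSON escaping so the PEM's embedded newlines survive as \n:)

# jq --arg JSON-encodes each value before it is interpolated
jq -n --arg host "$K8S_HOST" --arg cert "$VAULT_CA_CRT" --arg jwt "$VAULT_JWT" \
  '{kubernetes_host: $host, kubernetes_ca_cert: $cert, token_reviewer_jwt: $jwt}' \
  > data.json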

ServiceAccount name + namespace is missing in Entity

As discussed here: https://groups.google.com/forum/#!topic/vault-tool/WcujbsujdgE it would be ideal if the serviceaccount name (and namespace) was available to templated policies. Right now only the uuid (.metadata.uid) associated with the serviceaccount is exposed.

The uid is used to prevent attacks where the attacker recreates a serviceaccount name that was once deleted and inherits its abilities within Vault. In some (arguably most) environments, convenience outweighs this risk.

Permission Denied when trying to login with Kubernetes Auth Method enabled

My HTTP POST request using the token from the pod's /var/run/secrets/kubernetes.io/serviceaccount/token file and using the role defined by:

vault write auth/kubernetes/role/cf-test \
    bound_service_account_names="vault-acl" \
    bound_service_account_namespaces="default" \
    policies="exampleapp-kv" \
    token_policies="exampleapp-auth" \
    ttl=1h

returns:
HTTP/1.1 403 Forbidden
Content-Length: 33
Cache-Control: no-store
Content-Type: application/json
Date: Mon, 09 Mar 2020 19:09:41 GMT
{"errors":["permission denied"]}

The Vault log in the pod has:

2020-03-09T17:24:28.196Z [INFO] identity: entities restored
2020-03-09T17:24:28.198Z [INFO] identity: groups restored
2020-03-09T17:24:28.228Z [WARN] core: post-unseal upgrade seal keys failed: error="no recovery key found"
2020-03-09T17:24:28.235Z [INFO] core: post-unseal setup complete
2020-03-09T17:24:28.305Z [INFO] core: root token generated
2020-03-09T17:24:28.305Z [INFO] core: pre-seal teardown starting
2020-03-09T17:24:28.305Z [INFO] rollback: stopping rollback manager
2020-03-09T17:24:28.306Z [INFO] core: pre-seal teardown complete
2020-03-09T17:24:28.306Z [INFO] core: stored unseal keys supported, attempting fetch
2020-03-09T17:24:28.336Z [INFO] core.cluster-listener: starting listener: listener_address=[::]:8201
2020-03-09T17:24:28.336Z [INFO] core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
2020-03-09T17:24:28.336Z [INFO] core: entering standby mode
2020-03-09T17:24:28.338Z [INFO] core: vault is unsealed
2020-03-09T17:24:28.338Z [INFO] core: unsealed with stored keys: stored_keys_used=1
2020-03-09T17:24:28.358Z [INFO] core: acquired lock, enabling active operation
2020-03-09T17:24:28.503Z [INFO] core: post-unseal setup starting
2020-03-09T17:24:28.506Z [INFO] core: loaded wrapping token key
2020-03-09T17:24:28.506Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2020-03-09T17:24:28.510Z [INFO] core: successfully mounted backend: type=system path=sys/
2020-03-09T17:24:28.510Z [INFO] core: successfully mounted backend: type=identity path=identity/
2020-03-09T17:24:28.510Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2020-03-09T17:24:28.525Z [INFO] core: successfully enabled credential backend: type=token path=token/
2020-03-09T17:24:28.525Z [INFO] core: restoring leases
2020-03-09T17:24:28.525Z [INFO] rollback: starting rollback manager
2020-03-09T17:24:28.528Z [INFO] expiration: lease restore complete
2020-03-09T17:24:28.530Z [INFO] identity: entities restored
2020-03-09T17:24:28.532Z [INFO] identity: groups restored
2020-03-09T17:24:28.589Z [INFO] core: post-unseal setup complete
2020-03-09T17:24:29.270Z [WARN] core: attempted unseal with stored keys, but vault is already unsealed
2020-03-09T17:24:30.091Z [INFO] core: successful mount: namespace= path=secret/ type=kv
2020-03-09T17:24:33.305Z [INFO] core: enabled credential backend: path=kubernetes/ type=kubernetes
2020-03-09T17:32:40.532Z [ERROR] auth.kubernetes.auth_kubernetes_9d4e0ee2: login unauthorized due to: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","

I'm at a loss to find any reference to this "login unauthorized" error.
Can you shed some light on what this error message means?

Support for multiple Kubernetes clusters

We have multiple Kubernetes clusters deployed in the same VPC. From what I can tell, it is not possible to have more than one cluster registered with Vault, which means we would also have to run multiple instances of Vault and Consul in the same VPC, which we would like to avoid if possible.

Is there a workaround or plan to support multiple clusters in the future?
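(For reference: the usual workaround is one kubernetes auth mount per cluster, each with its own config and roles. A minimal sketch with hypothetical cluster names and hosts:)

# hypothetical mounts, one per cluster
vault auth enable -path=kubernetes-east kubernetes
vault write auth/kubernetes-east/config kubernetes_host="$EAST_HOST" kubernetes_ca_cert="$EAST_CA"
vault auth enable -path=kubernetes-west kubernetes
vault write auth/kubernetes-west/config kubernetes_host="$WEST_HOST" kubernetes_ca_cert="$WEST_CA"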

Login error using service account verification keys

I'm trying to set up Vault/K8s integration with Tectonic Kubernetes using service account verification keys instead of a CA cert. It does not seem to properly validate the JWT key.

Versions:

  • Tectonic Kubernetes 1.9.6
  • Vault 0.11.1

Example:

Configured using public key verification, login fails:

~ # ./vault write auth/kubernetes/login role=vault-auth jwt=${JWT_TOKEN}
Error writing data to auth/kubernetes/login: Error making API request.

URL: PUT https://vault.service.testing.consul/v1/auth/kubernetes/login
Code: 500. Errors:

* invalid character '<' looking for beginning of value

The JWT and key pass validation at jwt.io: link (note that the link will fail to validate the signature due to an encoding issue in their link gen, but replace the key with the value in this issue and you'll see it passes).

I do not think that CA-cert-based signing is how things are done in Tectonic K8s, as when I paste the CA value for the K8s cluster into jwt.io the signature fails. When I use the CA-based setup in Vault for the k8s cluster, I get an x509 cert error.

Setup Script:

#!/bin/bash
SA_NAME="vault-reviewer"
K8S_HOST=${K8S_HOST:-"https://vault.service.testing.consul:443"}
SECRET=$(kubectl get sa $SA_NAME --namespace=kube-system -o jsonpath='{.secrets[0].name}')
VAULT_JWT=$(kubectl get secret $SECRET --namespace=kube-system -o jsonpath="{.data['token']}" | base64 -D)

vault auth enable kubernetes
vault write auth/kubernetes/config \
  token_reviewer_jwt="$VAULT_JWT" \
  kubernetes_host="$K8S_HOST" \
  pem_keys="${VAULT_JWT}"

vault write auth/kubernetes/role/vault-auth \
  bound_service_account_names=vault-auth \
  bound_service_account_namespaces=pse \
  policies=kube-auth \
  period=60s

Public Key:

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwzbaW/mLp3btu9tkpT7k
supji8h6cXnM6QXZJE+BUc0HlnHksDWbxGZDGS1blluPj3GfebgvIrcaVXJ/Phzp
lFKuuzHjQm/1Qzqq294ZkD9o+0zP+Q5fcNfWIu2r9kf/ElzQLP7DVvkfLvKyLdCq
7pvWLNBNV8JBThlECMuSdx14JMtcwQJx9+z70PiMFcN2Dv4hBW0FtkvcpUsMQJmz
i31/sUuzNLx8ee19N2Xiym4G3xruhweW6veJ/6S9oSMyD1HbHPZJDW3908YFab1H
xAZVdUsNwC1RDIa4eeUaYeVxdbr1pCuzUMdmrlGdVDWyacp4OBY5am1g7vf5vbN8
WwIDAQAB
-----END PUBLIC KEY-----

JWT:

eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJ2YXVsdC1yZXZpZXdlci10b2tlbi05a2Z0ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJ2YXVsdC1yZXZpZXdlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJlZDNlMmQwLWNlNTQtMTFlOC04NjkwLTIyY2I4MGFkZDBmZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTp2YXVsdC1yZXZpZXdlciJ9.DzWBl1ALDaGctzEtpwLIizK5B491L2nJJnRUbZk0yJtSW-EOKmTxLVTazL2KJUb69J6fKZSAi5W_bAhvJQnKt4Gza5uGpDs-agcpEDp3k5kGaXP7ae2wRZcmdbWFXQkn_olx0OOGIs0egIMZ7Op8Q0i446CPyrIdotQL1RVh_bXuTm5Vd6KoFY2dy1WmhkLA-nPazqOzU4R-xnuDWAKsNmqxrQKwlhWqK8g5vdDHojcHIu6qc-VOLZaub-PFQOoUcKSBqEJu0s5WAjCznl2vfpTkmnDiP7h5Y29cr3qvEE_P1WIf4mxIUFerXyo2NeDiEtB-B9Rwlj-dqIjBlhq9gA

UI Screenshot: (screenshot not included: "screen shot 2018-10-15 at 2 31 15 pm")

Would you be so kind as to point me in the right direction as to what I am doing wrong?
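(Two hedged observations on the setup script above: pem_keys is set to "${VAULT_JWT}", i.e. the reviewer JWT, where the plugin expects the PEM-encoded verification key, and K8S_HOST points at vault.service.testing.consul rather than the Kubernetes API server. A sketch of a corrected write, assuming the public key above is saved as sa-signer.pub.pem:)

# pem_keys takes the PEM public key, not the JWT; key=@file reads file contents
vault write auth/kubernetes/config \
  token_reviewer_jwt="$VAULT_JWT" \
  kubernetes_host="https://<k8s-apiserver>:443" \
  pem_keys=@sa-signer.pub.pem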

Wildcard support on roles

As discussed here: https://groups.google.com/forum/#!topic/vault-tool/WcujbsujdgE it would be ideal to support wildcards for service_account_names when defining roles for this backend. This allows an administrator to provide a templated policy for Kubernetes services in Vault and moves "account management" from Vault to the Kubernetes API (kubectl create serviceaccount ..).

In effect, an administrator should be able to run:

vault write auth/kubernetes/role/kubernetes-service \
    bound_service_account_names=* \
    bound_service_account_namespaces=default \
    policies=templated-kubernetes-service \
    ttl=1h

Then any ServiceAccount in the default namespace would match this role and be entitled to the resolved templated-kubernetes-service policy.

Further, it should be possibly to do simple postfix wildcards (like policies do in path matching), such that:

  bound_service_account_names=sa-*

Would match any service account starting with sa-.

k8s auth with PSATs and custom audience fails

Hey guys! I was trying to get the changes from #70 working and got stuck with this:

{
  "errors": [
    "lookup failed: [invalid bearer token, token audiences [\"vault\"] is invalid for the target audiences [\"api\"]]"
  ]
}

while the audience on my PSAT is vault:

  "aud": [
    "vault"
  ],
  "exp": 1573731794,
  "iat": 1573724594,
  "iss": "api",

and the audience on the role is also vault:


Key                                 Value
---                                 -----
audience                            vault
bound_service_account_names         [default]
bound_service_account_namespaces    [service]
token_bound_cidrs                   []
token_explicit_max_ttl              0s
token_max_ttl                       0s
token_no_default_policy             false
token_num_uses                      0
token_period                        0s
token_policies                      [vault]
token_ttl                           0s
token_type                          default

It works ONLY if I use audience = api on both the role and the PSAT, so this looks like a bug.
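(A hedged reading of the error: the rejection comes from the Kubernetes TokenReview rather than Vault's role check, and target audiences ["api"] look like the API server's own configured audiences. On clusters where the flags can be changed, accepting "vault" as an audience should let the PSAT validate; a sketch:)

# hypothetical kube-apiserver flag change; "vault" must be an accepted
# audience for TokenReview to validate a PSAT minted with audience=vault
kube-apiserver ... --api-audiences=api,vault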

vault HTTP/1.x transport connection broken: malformed HTTP response

Hi Team,

I have deployed a sample application to talk to the Vault server, but it seems to be having token-related issues.

Environment:
Running minishift locally in ubuntu desktop.

In the myproject namespace, vault is running.
In the vault-demo namespace, running the sample application using the below deployment.yaml

Expecting the application to log in to the Vault server via the kubernetes path mentioned in the environment variables, but it is showing the error. It seems some tokens or secrets are missing.
(error screenshot not included)

Please suggest what configuration is missing, such as any secrets or mount paths missing in the deployment.yaml file.

Below is the configuration of the deployment.yaml file:
hashitvault$ cat deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: basic-example
  namespace: vault-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: basic-example
    spec:
      serviceAccountName: app1
      containers:
      - name: app
        image: "172.30.1.1:5000/vault-demo/vault-example-init:0.0.7"
        imagePullPolicy: Always
        env:
        - name: VAULT_ADDR
          value: "172.30.238.95:8200"
        - name: VAULT_ROLE
          value: "app1-role"
        - name: SECRET_KEY
          value: "secret/app1"
        - name: VAULT_LOGIN_PATH
          value: "auth/ocp/login"

The below is the ClusterIP of the VAULT SERVER:

- name: VAULT_ADDR
  value: "172.30.238.95:8200"

Allow matching also on pod names

As described in issue #65, a scoped token also includes the name of the pod it has been injected into.

What do you think about adding an additional configuration parameter that would allow a vault auth role specification to match not only on the kubernetes namespace and service account names, but also on the pod name (possibly using wildcards)?

This would allow assigning roles based on pod identity in all those cases where it is not possible to assign different service accounts to different pods, like when building (as in the referenced issue) with the GitLab runner, which assigns the same service account to all the spawned build jobs.

If you think this is a reasonable feature to add, I can work to contribute a pull request.

Unable to login to auth/kubernetes backend using JWT

Hi

I'm trying to get the Vault Kubernetes auth working, but I'm stumbling on a weird problem when trying to log in to Vault using a k8s pod's service account (I'm running Kubernetes on GKE):

### In the pod container

$ cat /var/run/secrets/kubernetes.io/serviceaccount/token ; echo
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFwcC1hLXRva2VuLXR6ZjdiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFwcC1hIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjA4MDlhYmQtOGMyNy0xMWU4LTkwNjUtNDIwMTBhODQwMDE2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmE6YXBwLWEifQ.JASwQlSOZmqQfB_EKw__XB4Vsogl3jlQHDzKWBGSuWhahgdhC5v3i87k0463os1GA6pCY6DG5eFayYq0l-B-iXv2fvUkF-hSjfeTNauVJNu4PFRlxv9TahPznGYHrpxQ4PodmbilumkEXE9R1VcQEJ9IEBopyMqiudnw90wQqNt-WHNWlPaS9I8pZMLWa89ksQ_y3XeySToUfz3w89LloUM1cBDOnySNaZQR_se4ugc1aqWo0zvnA6ydSamNF0NmYbjLq1u5pCJx_eAjtRECo-UOfaiYrWn-FOEXjX_w1ojc7xobLJ0DNzzVv665ZHEgcvaewxoFWB4ctkDubtTbuQ

$ curl --request POST --data '{"jwt":"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFwcC1hLXRva2VuLXR6ZjdiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFwcC1hIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjA4MDlhYmQtOGMyNy0xMWU4LTkwNjUtNDIwMTBhODQwMDE2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmE6YXBwLWEifQ.JASwQlSOZmqQfB_EKw__XB4Vsogl3jlQHDzKWBGSuWhahgdhC5v3i87k0463os1GA6pCY6DG5eFayYq0l-B-iXv2fvUkF-hSjfeTNauVJNu4PFRlxv9TahPznGYHrpxQ4PodmbilumkEXE9R1VcQEJ9IEBopyMqiudnw90wQqNt-WHNWlPaS9I8pZMLWa89ksQ_y3XeySToUfz3w89LloUM1cBDOnySNaZQR_se4ugc1aqWo0zvnA6ydSamNF0NmYbjLq1u5pCJx_eAjtRECo-UOfaiYrWn-FOEXjX_w1ojc7xobLJ0DNzzVv665ZHEgcvaewxoFWB4ctkDubtTbuQ","role":"app-a"}' http://vault:8200/v1/auth/kubernetes/login
{"errors":["Post https://x.x.x.x/apis/authentication.k8s.io/v1/tokenreviews: net/http: invalid header field value \"Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InZhdWx0LXRva2VuLXBobnZ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjEyNzVhNTYtOGMxNi0xMWU4LWEzYTYtNDIwMTBhODQwMDAzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6dmF1bHQifQ.K247b5dAFxE12dGYFf1JDxY2T2M3clVPpTbrvM1Wd1Gux12-A4GGpINSudZspQgmqqpiaeEN_XxC46RsOxn4T3i2YFjGUGnYPwxia8lCBFDnTaxCvqhqc7WQwPnoBItmAaMkJNAP1wC-VgovOvjLGipFEoL08Oe_5spWtVXh_N97DbmaDkuC-O5aIDVZQKBplaJpVLj6y1wHF9_6wMm5zPaHiUug-06pVbWSayVwRODk4mtdwD1PaYQEIrtfq2NSfoh8I_empue_nSjtu_BxPQq6JSb9w9IS0IVU3cqfozeCSrNkuOmTHgsziRUoaVzCoeY516M4ose7MdGjfKp0Xg\\n\" for key Authorization"]}

Looks like the JWT Vault passes to the k8s TokenReview API is different from the one received with the login request; is that normal?

I get the same issue using the CLI vault write auth/kubernetes/login ... command.

Here are some detail on my setup, let me know if you need more:

  • Vault version: v0.10.3
  • Kubernetes server version: v1.10.5-gke.0
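(A hedged pointer: the rejected header value in the error ends in \n, and the embedded JWT decodes to the reviewer token from secret vault-token-phnvw, so the token_reviewer_jwt stored in the config likely has a trailing newline. Re-writing the config with the newline stripped is a cheap test:)

# strip any trailing newline before storing the reviewer JWT
# (host and CA shown as placeholders matching the error output above)
REVIEWER_JWT=$(kubectl get secret vault-token-phnvw -o jsonpath="{.data.token}" \
  | base64 -d | tr -d '\n')
vault write auth/kubernetes/config \
  token_reviewer_jwt="$REVIEWER_JWT" \
  kubernetes_host="https://x.x.x.x" \
  kubernetes_ca_cert=@ca.crt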

Only use the TokenReview API when validating service account tokens.

Not all Kubernetes users have access to the service account signing public key, so this auth backend cannot be used in those cases. One solution is to force users of this auth backend to obtain the signing keys required for validation.

Another solution, which may be more correct, is to avoid doing any extra validation outside of the Kubernetes TokenReview API.

pods are not starting in custom namespace.

vault-reviewer service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-reviewer
  namespace: app
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-reviewer
  namespace: app

vault write auth/kubernetes/login role=demo jwt="@sa.token"

Key                                       Value
---                                       -----
token                                     --------------------------------
token_accessor                            ----------------------------
token_duration                            48h
token_renewable                           true
token_policies                            ["default" "test-policy"]
identity_policies                         []
policies                                  ["default" "test-policy"]
token_meta_service_account_secret_name    vault-reviewer-token-9sv66
token_meta_service_account_uid            d62de135-8335-4a58-b2c2-263c8a4ce847
token_meta_role                           demo
token_meta_service_account_name           vault-reviewer
token_meta_service_account_namespace      app

The same configuration works for the default namespace, but when I migrate it to a different namespace, the pod does not initialize. Moreover, it does not show any logs.

Error when vault login via kubernetes service account: "missing client token"

Hi Guys,

Pardon my knowledge about Vault, as I am just a beginner. I am trying to set up Vault for Kubernetes following this article:

https://medium.com/@gmaliar/dynamic-secrets-on-kubernetes-pods-using-vault-35d9094d169

After following the steps successfully, including the Vault setup and login etc., I don't see anything wrong with the Vault or Kubernetes SA setup. But when, after following all the steps correctly, another pod tries to log in to Vault using the k8s Service Account, I see the error "missing client token".

Can someone help me understand what could be the possible cause of this error? Just to reiterate, I am able to log in to Vault from the Vault client, but not via the Kubernetes service account.

$ vault read auth/kubernetes/config
Key                   Value
---                   -----
kubernetes_ca_cert    -----BEGIN CERTIFICATE-----
MIIFVTCCAz2gAwIBAgIJANipEBaZrDFBMA0GCSqGSIb3DQEBCwUAMCUxIzAhBgNV
BAMMGmRpYW1hbnRpLXNpZ25lckAxNTMzNzY2Mzg1MB4XDTE4MDgwODIyMTMwNloX
DTI4MDgwNTIyMTMwNlowJTEjMCEGA1UEAwwaZGlhbWFudGktc2lnbmVyQDE1MzM3
NjYzODUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDBYmFRzHpIcOR+
RwRcnNoe8s7ePjPN7KYRq/Ty/XL8etYfIAQoIG1vnzxNGFfqOQMhmU7y7v4+ber0
95A7nOgv8hAOm7g/b0iPNK15YQMrPUJXVRLW+v8MFaMnrw0apiBD2D+kJEw9ReHe
hF8JoXbeaKgtv2Z5sqCvpIeKII8lHjtwJxlHpexQIIKXveNacmEtfRDivwGjYNuH
jDEi1oX7bL/V4oYwS3y9KK3ePTdxIduHXUtbViQ4BJz94FeBbQWOGksXvxyR8OBe
4ng+UyTubIcVjMQara/AK3e1CIANCjcmVvlddHMLotPdhTeLSakkflv4IR8noatM
ysUKlcxbxgULUgxDRiefYSh0nUlOlP7NADQHBUE85GaE13Ewtmr/o8k5cMWVk1vB
yEIKgrCFq37TCklc9W1CQNztfMiFPkNdYe861amcpq43oQ9fBFrnhm2Ozqp7iq9M
cQ+KR1UGj1hTrFC2eIVEyd4iPNU2L8ZClrc9yFkh0pGYkykGS0ccQQlNEb+vbrYL
GYGG77KnSAFu6G11wgkb/K3dIxvmIDOGGhDdb5ZsrXDO3FJVdH75i+Xwsqs4HRZC
LrxFHqFUx9HmtE7B5dhsIX6uOpiQ6L+OU6BNgeac1Gh7bZSBdDTkgDM6Piw8c6at
yFpIB1Toz5zyZgjvvuT6g538RapnhwIDAQABo4GHMIGEMD8GA1UdIwQ4MDahKaQn
MCUxIzAhBgNVBAMMGmRpYW1hbnRpLXNpZ25lckAxNTMzNzY2Mzg1ggkA2KkQFpms
MUEwEgYDVR0TAQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAqQwHQYDVR0OBBYE
FEKdGQ54ZNd6RraemHipKMcEMniRMA0GCSqGSIb3DQEBCwUAA4ICAQBUMO3nnJhF
WUXRseCUibSBAShtLKOlltoANOSDSQ+GkNxjHmARyR7eFc9XMFyOXMsSC1HvP0IN
pdymebbzKPpevkbaIqyS1m0/lZYLbvuxzQF+IrPGQbwTtAWic3k7wwyDVVizm9Rq
s/SemiKcrAdCHYMxeKMctWiXwGXwwuv22D8dIjew+mO5zs5/IOYsfd0RCsiZl4HI
WsHGiyZ/oJ0NBTmP8F90Q2fVTHWGwdExWQQiDMXpI11orPcivmxl5Pt3oGZzFiTj
5lqlgVejYuNJ2s81GbVDWv3DIJN7Q9iqHb5Mxshf54V/E2zzKl/I4A4ZbBSY7Lie
reSurDCChR+j82XOO3D6NIIhHyUTcKNOVKdnzeFG2V+VXwQEOinvXTYvBW9YJpQb
8GCCNvrpfTIdnU/grzLPre14xSYLqeIfGD63ABq3vfh/GrPo1niX2ce5EtjXUEvM
tdouZKKyJ4wI9MDy/xvd9726qP+aoL3Z0TGptmpLVhoV9OkRmewkoxaYVp+w296F
4ewSZEJfd3dgBa3uPJPIXh7r+bfqATTfjgf6ZNL6aJufAi8pve8r7ksOfEs3Q51L
I2lTxwZ5aVUoe6BH96rXsbInQvMITjSqJb9zguhIgh7CdaqLcJj98bco0mJB2hnD
PAm55R3fhMzKx5hcXqnRHo+uqGjYLFh+kA==
-----END CERTIFICATE-----
kubernetes_host https://172.16.220.102:443
pem_keys []

$ kubectl run tmp --rm -i --tty --serviceaccount=postgres-vault --image alpine
/ # apk update
/ # apk add curl postgresql-client jq

/ # echo $KUBE_TOKEN
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InBvc3RncmVzLXZhdWx0LXRva2VuLWh6eHZnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InBvc3RncmVzLXZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWUzYzdjZWEtYTY3Ni0xMWU4LWE5MmEtYTRiZjAxNGIyOWZjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6cG9zdGdyZXMtdmF1bHQifQ.YrvlggZgckoW962gCXkh0kjQVeA-qkUf0nf1bf168XxfJhMgi40Xaof1Y_3Iy_CYukI_b_PiGdKw6ttkDk3gc1C0AzTE3fln1qKaWd7z6VvLdDXvLwdbDSmQ14t9Z4A-PU1OJkauM8b0o9hadmRYET6Liv8GnBQrEAf7lU-kVQDfRpzbY1khT8CGzVZJe-9lifAHmxHD4zYFhoyMoFp7X0moIubSsKI91dEaRNuUEFJrMYBpo0shnKWTi9H0G9HbDQygIEtMMosBRJtBNqE8yQ5ibZjWnJpwSNwQBLWnG9rpm5aK-GvG5-NRdwvw4q0cp59b2Gb7fovNoWxDS8YD4Q

/ # VAULT_K8S_LOGIN=$(curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://masked-beetle-vault:8200/v1/auth/kubernetes/login)
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 941 100 36 100 905 6000 147k --:--:-- --:--:-- --:--:-- 183k
/ # echo $VAULT_K8S_LOGIN
{"errors":["missing client token"]}

Thanks,
-Arvind
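(A hedged suggestion: the login endpoint itself never requires a client token, so "missing client token" usually means the request did not reach a kubernetes auth mount at auth/kubernetes at all, for example because it is enabled at a different path or not enabled. Verifying the mount is a quick first step:)

# confirm where (and whether) the kubernetes auth method is mounted
vault auth list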

Don't mandate that the "token_reviewer_jwt" authentication token is a JWT

Given that this plugin takes the value of the "token_reviewer_jwt" configuration parameter and stuffs it into an "Authorization" HTTP header as a bearer token, why does the plugin care at all about the format of this token?

For clusters with custom Webhook token authentication, tokens can be in any format: they can be service account tokens, JWTs, PASETOs, or something more exotic understood by the Webhook. The API server's authentication subsystem will take any value submitted as a bearer token and pass it through all the active token-consuming authentication methods. In our clusters, we have a Webhook that can consume tokens that are not JWTs. At present, we can't allow this plugin to authenticate using such a token.

I understand that it's not tenable to change the name of the "token_reviewer_jwt" configuration parameter today, but can we at least consider relaxing the enforcement of the format? If the intended consumer of the tokens doesn't mandate a particular format, why should this plugin be an overly stringent intermediary? In the interest of trying to help catch mistakes early, it's actually cutting off otherwise valid input.

This request may overlap with the earlier #34.

Role Period not used on Auth tokens

Given the following role:

$ curl -s -H "X-Vault-Token: my-token" https://vault.external/v1/auth/kubernetes/role/test_aws_auth | jq '.'
{
...
  "data": {
    "bound_service_account_names": [
      "default"
    ],
    "bound_service_account_namespaces": [
      "default"
    ],
    "max_ttl": 0,
    "num_uses": 0,
    "period": 3600,
    "policies": [
      "test_aws"
    ],
    "ttl": 0
  },
...
}

Calling vaultapi.Logical().Write() returns an auth token without a Period:

resp, _ := vaultapi.Logical().Write("auth/kubernetes/login", map[string]interface{}{
	"role": "test_aws_auth",
	"jwt":  "my-jwt",
})
log.Printf("%+v", resp.Auth)

{
	ClientToken:my-client-token
	Accessor:my-client-token-accessor
	Policies:[default, test_aws]
	Metadata:map[
		role:test_aws_auth
		service_account_name:default
		service_account_namespace:default
		service_account_secret_name:default-token-12345
		service_account_uid:5481711b-eb16-4d7a-a10f-a9c3e9fff05e
	]
	LeaseDuration:3600
	Renewable:true
}

If I understand https://github.com/hashicorp/vault-plugin-auth-kubernetes/blob/master/path_login.go#L103 correctly, Period should be set in the token received from the login call to the value on the associated role.

Use case: I have a custom k8s controller responsible for authenticating with Vault, parsing a custom SecretClaim resource, and storing a k8s Secret with the actual leased secret. I'm finding that my kubernetes auth tokens are expiring despite their associated roles having a set period, and so all the leases created by the controller are getting revoked when their parent token expires.

Return service account name from AliasLookaheadOperation

Currently AliasLookaheadOperation returns the service account's uid. This means that if I want to create an Entity for my service, with an alias for kubernetes auth, I need to know the kubernetes ServiceAccount uid. This is troublesome, as the ServiceAccount may not exist when configuring vault, or it can be recreated at any time during a deployment.

Please add a config option so we can use either the service account name (kubernetes.io/serviceaccount/service-account.name), the kubernetes auth role name, or maybe even the namespace+name combination; see the sketch below for what the UID requirement means in practice.
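
For reference, this is roughly what creating the alias requires today. A hedged sketch using the Vault Go client (the identity/entity-alias path is real; the entity ID and mount accessor values are placeholders):

package sketch

import vaultapi "github.com/hashicorp/vault/api"

// createAlias shows today's requirement: the alias name must be the
// ServiceAccount UID, which is only known after the SA exists.
func createAlias(client *vaultapi.Client, entityID, mountAccessor string) error {
	_, err := client.Logical().Write("identity/entity-alias", map[string]interface{}{
		"name":           "5481711b-eb16-4d7a-a10f-a9c3e9fff05e", // the SA UID
		"canonical_id":   entityID,                               // ID of the pre-created entity
		"mount_accessor": mountAccessor,                          // accessor of the kubernetes auth mount
	})
	return err
}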

clarification of service account authorization

I am trying to fully understand the scope of this plugin. I can obviously authenticate a Kubernetes service account with Vault, but does this extend to letting me populate Kubernetes Secrets and/or volumes with secrets from Vault? I would like to use Vault as the backend for my secrets in Kubernetes. If not, are you aware of something that fulfills this role? If not, I intend to build it.

Default JWT token and CA cert when running in k8s?

I'm running Vault in Kubernetes using the official helm chart, so ca.crt and the k8s token are available in the Vault pod under /var/run/secrets/kubernetes.io/serviceaccount/.

The helm chart also sets up the Vault service account with the cluster role system:auth-delegator for token validation, so the pod token can be used as the token_reviewer_jwt.

Would it be acceptable for the plugin to check whether ca.crt and token exist in /var/run/secrets/kubernetes.io/serviceaccount/ and use them when the plugin is enabled without any CA cert or token specified?

This would greatly simplify my provisioning of Vault when running in k8s.

I'm willing to work on a patch if this sounds good to you.
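
A minimal sketch of the proposed fallback (the paths are the standard in-cluster mounts; the function name is illustrative):

package sketch

import "os"

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// inClusterDefaults probes the standard in-cluster paths and returns the CA
// cert and token when both are present, so the plugin could fall back to
// them if neither was explicitly configured.
func inClusterDefaults() (caCert, token string, ok bool) {
	ca, errCA := os.ReadFile(saDir + "/ca.crt")
	tok, errTok := os.ReadFile(saDir + "/token")
	if errCA != nil || errTok != nil {
		return "", "", false
	}
	return string(ca), string(tok), true
}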

Better support for K8S projected/scoped tokens

Below is the payload of a general service account token. It can be used just fine with this plugin, but it also allows access to the Kubernetes API itself:

{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "gitlab-runner",
  "kubernetes.io/serviceaccount/secret.name": "christian-simon--test-token-pml6t",
  "kubernetes.io/serviceaccount/service-account.name": "christian-simon--test",
  "kubernetes.io/serviceaccount/service-account.uid": "f101dff5-98b6-11e9-ac8d-42010a8400cb",
  "sub": "system:serviceaccount:gitlab-runner:christian-simon--test"
}

I was trying to set up a projected/scoped token, which can't be used to access the k8s API server and comes with a custom audience. The pod was created like this:

kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
          audience: https://vault.mycorp.net

This is the resulting JWT:

{
  "aud": [
    "https://vault.mycorp.net"
  ],
  "exp": 1561638806,
  "iat": 1561631606,
  "iss": "https://container.googleapis.com/v1/projects/mycorp-gitlab-workers/locations/europe-west1-b/clusters/mycorp-gitlab-workers",
  "kubernetes.io": {
    "namespace": "gitlab-runner",
    "pod": {
      "name": "runner-9fswzwaf-project-382-concurrent-0l99cc",
      "uid": "ff505f37-98c6-11e9-ac8d-42010a8400cb"
    },
    "serviceaccount": {
      "name": "christian-simon--test",
      "uid": "f101dff5-98b6-11e9-ac8d-42010a8400cb"
    }
  },
  "nbf": 1561631606,
  "sub": "system:serviceaccount:gitlab-runner:christian-simon--test"
}

Now trying this token with the auth plugin fails as the issuer is not as expected:

Code: 500. Errors:

* claim "iss" is invalid

I also can't use the JWT auth plugin, because Google's GKE doesn't give me access to the signing keys.

So I am proposing:

  • Allow specifying both audiences and issuers on roles, and verify them during login (see the sketch below).
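
A hedged sketch of what that check could look like at login time (the roleEntry fields and names are illustrative, not the plugin's actual types):

package sketch

import "fmt"

// roleEntry carries the proposed per-role expectations; an empty field
// skips the corresponding check.
type roleEntry struct {
	Issuer    string
	Audiences []string
}

// validateClaims accepts a token only when its issuer matches the role's
// configured issuer and at least one of its audiences is expected.
func validateClaims(iss string, aud []string, role roleEntry) error {
	if role.Issuer != "" && iss != role.Issuer {
		return fmt.Errorf(`claim "iss" is invalid`)
	}
	if len(role.Audiences) == 0 {
		return nil
	}
	for _, want := range role.Audiences {
		for _, got := range aud {
			if got == want {
				return nil
			}
		}
	}
	return fmt.Errorf(`claim "aud" is invalid`)
}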

Wdyt?

I am more than happy to provide a PR for that.

Update JWT parsing to use a well-maintained library

It makes more sense to use coreos/go-oidc rather than briankassouf/jose. The coreos repo appears very well maintained: as of today its last commit was 11 days ago, versus 2 years for briankassouf/jose. It also appears to have healthier community backing.

docs should indicate that token_reviewer_jwt should be set to actual JWT token

The docs at https://www.vaultproject.io/docs/auth/kubernetes.html#configuration give the following for writing to the auth/kubernetes/config path:

$ vault write auth/kubernetes/config \
    token_reviewer_jwt="reviewer_service_account_jwt" \
    kubernetes_host=https://192.168.99.100:8443 \
    kubernetes_ca_cert=@ca.crt

I had interpreted "reviewer_service_account_jwt" as a literal and actually used that. I think it would be better if this were changed to "<your_reviewer_service_account_jwt>", using brackets and changing the name to indicate that this should be replaced with an actual JWT token.

We could also add brackets around your_service_account_jwt in the login example. But at least the name in this case makes it fairly clear that a user should not simply use that string.

Do not make ca cert or pem keys required

What is the reason for requiring a CA cert or pem keys? It doesn't make sense to me, since the CA cert is only used for establishing a connection to the Kubernetes API. For example, if the Kubernetes API is behind an AWS load balancer with a certificate from AWS Certificate Manager, you have to set kubernetes_ca_cert to one of the AWS root CAs or set pem_keys. Setting a public root CA looks odd to me, and setting pem_keys is not always possible, as described in #3 and changed in #4. Maybe we can get rid of this behavior? I think it would do no harm: even if the Kubernetes API were secured with a self-signed certificate and a user forgot to set kubernetes_ca_cert, they would just see an error like "x509: certificate signed by unknown authority", which makes it clear that they need to set kubernetes_ca_cert to the appropriate CA cert.
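
A sketch of the relaxed behavior under these assumptions (this is not the plugin's actual client setup): only override the trust roots when a CA cert was explicitly configured, otherwise fall back to the system pool, which already trusts public CAs such as the ones AWS Certificate Manager uses.

package sketch

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
)

// newAPIClient builds an HTTP client for the Kubernetes API. With no
// configured CA it trusts the system roots; a self-signed API cert would
// then fail with "x509: certificate signed by unknown authority", pointing
// the user at kubernetes_ca_cert.
func newAPIClient(caCertPEM []byte) *http.Client {
	pool, err := x509.SystemCertPool()
	if err != nil || pool == nil {
		pool = x509.NewCertPool()
	}
	if len(caCertPEM) > 0 {
		pool.AppendCertsFromPEM(caCertPEM) // an explicit CA still takes effect when provided
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
}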

Use pod service account instead of provided JWT for authentication

Right now the auth method relies on token_reviewer_jwt as the way to supply the JWT for authentication. This makes it a lot harder to configure when managing the configuration as code (e.g. using Terraform). Why not just use the service account of the pod itself? That would also support the token projection API, which renews tokens. Maybe this could be another config option? I'll be happy to open a PR.
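
A sketch of the idea (the path is the standard in-cluster token mount; the function name is illustrative): re-read the pod's own token from disk on each use instead of storing it in config, which also picks up kubelet-rotated projected tokens.

package sketch

import "os"

// podServiceAccountToken re-reads the pod's mounted token on every call, so
// rotated (projected) tokens are picked up without reconfiguring the plugin.
func podServiceAccountToken() (string, error) {
	b, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		return "", err
	}
	return string(b), nil
}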

Auth requests seem nondeterministic

I'm following this tutorial https://medium.com/@gmaliar/dynamic-secrets-on-kubernetes-pods-using-vault-35d9094d169 (important bits: https://github.com/gmaliar/vault-k8s-dynamic-secrets/blob/master/COMMANDS.md) to get Vault and Kubernetes to play nicely together.

This is my login command (run manually from inside the pod):

curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login

Sometimes it works, sometimes it doesn't. ¯\_(ツ)_/¯

As expected, working looks like this:

{
  "request_id": "18f4b4d1-58a7-d19e-bc27-44457e3958f6",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "05cdb644-eeca-85fc-18c7-0423f6357d42",
    "accessor": "bc287bb2-0995-0e30-1d85-b250508d73b5",
    "policies": [
      "default",
      "postgres-policy"
    ],
    /*... blah blah blah */
  }
}

Not working looks like this:

{
  "errors": [
    "missing client token"
  ]
}

Neither delay, nor frequency, nor manual revocation seems to have any meaningful impact: either it should work, or it shouldn't. The token is definitely not only sometimes there, so at the very least, the error is unhelpful and misleading.

I cannot stress enough that I am repeatedly running the identical command.

Proof
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"request_id":"ecee2ffd-2bfb-d746-a051-bda1e5c42f78","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":null,"auth":{"client_token":"406d1b21-b65d-4dbe-1cf4-06d44a8f1687","accessor":"65068c80-0447-29d3-7254-33e5694b6c29","policies":["default","postgres-policy"],"metadata":{"role":"postgres","service_account_name":"postgres-vault","service_account_namespace":"default","service_account_secret_name":"postgres-vault-token-577sq","service_account_uid":"75014f32-8ac1-11e8-b1f6-080027cfdb6c"},"lease_duration":86400,"renewable":true,"entity_id":"49e65f4d-88d5-eef4-b989-15587c7b89ba"}}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"request_id":"ea7f0dd5-6583-5945-e040-bf513c7a6201","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":null,"auth":{"client_token":"0ce78e3e-4da9-fd92-9e5b-6adb7357b063","accessor":"185e58c2-6655-f10c-1899-71d96256ff37","policies":["default","postgres-policy"],"metadata":{"role":"postgres","service_account_name":"postgres-vault","service_account_namespace":"default","service_account_secret_name":"postgres-vault-token-577sq","service_account_uid":"75014f32-8ac1-11e8-b1f6-080027cfdb6c"},"lease_duration":86400,"renewable":true,"entity_id":"49e65f4d-88d5-eef4-b989-15587c7b89ba"}}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}
/ # curl --request POST --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "postgres"}' http://oily-puma-vault:8200/v1/auth/kubernetes/login
{"errors":["missing client token"]}

Software versions:
Vault: 0.10.1
Kubernetes: 1.10.0

Docs should call out using the backend gives it some control over the cluster

Handing over K8s service account tokens to third parties is not an authentication behavior we want to encourage in general. In Vault's case this is likely to be acceptable in many environments since Vault tends to be the root of trust and an attack on the system requires control of Vault.

The long term plan is for K8s to make it easier for integrators to deliver their own creds via a solution being designed by the Kubernetes Container Identity Working Group:
https://docs.google.com/document/d/1bCK-1_Zy2WfsrMBJkdaV72d2hidaxZBhS5YQHAgscPI/edit

In the meantime the docs for this auth backend should make it clear that enabling it gives the Vault installation you're talking to control over the cluster. It should only be used with Vault installations that you trust with powerful access to your cluster.

Read config endpoint does not indicate if token_reviewer_jwt is set

Currently the read config endpoint does not expose the token_reviewer_jwt field, for security reasons, but there is also no indication of whether it is set. Because this field is optional, it doesn't seem possible to tell whether it's set other than by trying to log in with a Kubernetes JWT that does not have access to the token review API. It would be nice if the API returned the token_reviewer_jwt masked, or just a field indicating that it was set (something like a token_reviewer_jwt_provided boolean); see the sketch below.
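
A sketch of the proposed response shape (the token_reviewer_jwt_provided field name and the config type are illustrative, not the plugin's actual types):

package sketch

// config mirrors the relevant plugin settings (field names illustrative).
type config struct {
	Host             string
	CACert           string
	TokenReviewerJWT string
}

// readConfigResponse returns the config with the reviewer JWT replaced by a
// boolean indicating whether it was set.
func readConfigResponse(c config) map[string]interface{} {
	return map[string]interface{}{
		"kubernetes_host":             c.Host,
		"kubernetes_ca_cert":          c.CACert,
		"token_reviewer_jwt_provided": c.TokenReviewerJWT != "",
	}
}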

Happy to provide a PR if this is something you'd be interested in supporting.

Thanks for the great product!

Is the token_reviewer_jwt mandatory to authenticate client requests?

In our Vault docker init script file, we only inject the kubernetes_ca_cert for the k8s config:

vault write auth/kubernetes/config kubernetes_host=$KUBERNETES_HOST kubernetes_ca_cert=@/vault/data/cert/k8cert.crt 

And it's able to authenticate clients with k8s. Is the token_reviewer_jwt (present in almost all the official samples and demos) still mandatory?
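
For what it's worth, the documented behavior is that token_reviewer_jwt is optional: when it is unset, the plugin uses the JWT submitted at login to call the TokenReview API, so the logging-in service account itself must hold system:auth-delegator. A minimal sketch of that fallback (names are illustrative):

package sketch

// reviewerCredential picks the credential used for the TokenReview call:
// the configured reviewer JWT when present, otherwise the client's own
// login JWT (which must then have system:auth-delegator).
func reviewerCredential(configuredReviewerJWT, loginJWT string) string {
	if configuredReviewerJWT != "" {
		return configuredReviewerJWT
	}
	return loginJWT
}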

about TokenReview API permissions

Hello folks,
I have a conceptual question here:

  • Why do applications wishing to obtain a Vault token need their service account bound to a ClusterRole? In other words, why do they need TokenReview API access if it's Vault, not the application, that needs to verify that the provided token is legitimate and valid?
