ory / k8s

Kubernetes Helm Charts for the ORY ecosystem.

Home Page: https://k8s.ory.sh/helm

License: Apache License 2.0

Languages: Shell 12.36%, Makefile 14.78%, Smarty 9.35%, Mustache 63.52%
Topics: k8s, kubernetes, helm, ory-oathkeeper, ory-hydra, ory-keto, ory-hive, cloud, charts, helm-chart

k8s's Introduction

Kubernetes Helm Charts for ORY


This repository contains Helm charts for Kubernetes. All charts are in the incubation phase; use them at your own risk.

Please go to k8s.ory.sh/helm for a list of helm charts and their configuration options.

NOTE

All charts present in this repository require Kubernetes 1.18+. Please refer to releases 0.18.0 and older for versions supporting older releases of Kubernetes.

Development

You can test and develop charts locally using Minikube.

To render a chart locally without applying it to Kubernetes, and then to install or upgrade it, do:

$ helm install --debug --dry-run <name> .   # render the chart without applying it
$ name=<name>
$ helm install $name .                      # install the chart
$ helm upgrade $name .                      # upgrade after making changes
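
For example, to render the hydra chart from a checkout of this repository (chart path as used in this repo), you might run:

$ helm install --debug --dry-run hydra ./helm/charts/hydra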

Ingress

If you wish to test ingress, run:

$ minikube addons enable ingress

Next, you need to set up /etc/hosts to route traffic from the domains - in this example for ORY Oathkeeper:

  • api.oathkeeper.localhost
  • proxy.oathkeeper.localhost

to the ingress IP. You can find the ingress IP using:

$ kubectl get ingress
NAME                           HOSTS                        ADDRESS        PORTS     AGE
kilted-ibex-oathkeeper-api     api.oathkeeper.localhost     192.168.64.3   80        1d
kilted-ibex-oathkeeper-proxy   proxy.oathkeeper.localhost   192.168.64.3   80        1d

Then, append the following entries to your hosts file (/etc/hosts):

192.168.64.3    api.oathkeeper.localhost
192.168.64.3    proxy.oathkeeper.localhost
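
Alternatively, the entries can be appended programmatically - a sketch that assumes the release name kilted-ibex from the output above and that the ingress exposes a load-balancer IP:

$ echo "$(kubectl get ingress kilted-ibex-oathkeeper-api -o jsonpath='{.status.loadBalancer.ingress[0].ip}') api.oathkeeper.localhost" | sudo tee -a /etc/hosts
$ echo "$(kubectl get ingress kilted-ibex-oathkeeper-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}') proxy.oathkeeper.localhost" | sudo tee -a /etc/hosts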

Testing

To run helm test, do:

$ helm lint .
$ helm install <name> .
$ helm test <name>

Remove all releases

To remove all releases (only in test environments), do:

$ helm del $(helm ls --all --short) --purge
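
Note that --purge is Helm v2 syntax; with Helm v3 the equivalent is:

$ helm uninstall $(helm ls --all --short)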

k8s's People

Contributors

adamstrawson, adamwalach, aeneasr, alexandrebrg, alexgnx, bkomraz1, christian-roggia, clement-buchart, cpwc, david-wobrock, demonsthere, dkimprowised, ermik, ixday, jorgagu, kevgo, koenmtb1, omissis, ory-bot, paulbdavis, phsym, piotrmsc, pjediny, pommelinho, scottcrossen, supercairos, thomasboni, tricky42, zepatrik, zhming0


k8s's Issues

hydra-maester: redirectURIs and clientSecret not supported by latest release 0.0.47

Description

The docs mention the CRD properties redirectURIs and clientSecret, but the current release 0.0.47 does not support those values. These values are essential; I cannot use hydra-maester without them.

Solution

Publish a new release that includes commit 9c1566c.

9c1566c#diff-aa0d96bd300dbf559b37535472d28f74

Alternative

Change the docs to be consistent with 0.0.47

Comments

Is there a version that I can use for now that does include those properties?
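
For reference, a minimal OAuth2Client manifest built from the CRD schema rendered later in this document (see the dry-run output under the maester.enabled issue) might look as follows; the names and redirect URI are placeholders:

apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: example-client                  # placeholder name
spec:
  grantTypes:
    - client_credentials
  scope: "read write"
  secretName: example-client-secret     # placeholder secret holding the client ID and password
  redirectUris:                         # the field this issue asks about
    - https://example.com/callback      # placeholder URI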

Installing hydra and oathkeeper in one namespace results in conflict

Describe the bug

When installing hydra and oathkeeper in the same namespace, there is a conflict over the maester-account service account:

Error: serviceaccounts "maester-account" already exists

which, as far as I checked, comes from the hydra-maester and oathkeeper-maester charts.

To Reproduce

Steps to reproduce the behavior:

  1. helm create ory
  2. Add dependencies in Chart.yaml (see the full Chart.yaml sketch after this list):
     dependencies:
       - name: hydra
         version: 0.3.4
         repository: https://k8s.ory.sh/helm/charts
       - name: oathkeeper
         version: 0.3.4
         repository: https://k8s.ory.sh/helm/charts
  3. Delete the templates directory.
  4. Try helm install ...
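
A complete umbrella Chart.yaml for these steps might look like the following sketch (apiVersion, chart name, and chart version are assumptions):

apiVersion: v2          # Helm 3 chart API
name: ory               # hypothetical umbrella chart name
version: 0.1.0          # hypothetical chart version
dependencies:
  - name: hydra
    version: 0.3.4
    repository: https://k8s.ory.sh/helm/charts
  - name: oathkeeper
    version: 0.3.4
    repository: https://k8s.ory.sh/helm/charts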

Expected behavior

The charts are deployed successfully (or complain about missing values in values.yaml).

Actual behavior

Error: serviceaccounts "maester-account" already exists

is thrown.

Environment

  • Version: 0.3.4
  • Environment: helm3

Additional context

N/A

Migrate Helm to v3

Is your feature request related to a problem? Please describe.

The Helm charts and the release pipeline were written for Helm v2. We want to deprecate Helm v2 support and move to Helm v3, as it is much easier to install and set up and comes with other improvements.

Describe the solution you'd like

All helm charts should be updated to work with v3. In the optimal case, they would be backwards compatible.

The CI chain also needs to be upgraded to Helm v3.

Issue with hydra.dangerousAllowInsecureRedirectUrls

Describe the bug

hydra.dangerousAllowInsecureRedirectUrls isn't implemented correctly

To Reproduce

Steps to reproduce the behavior:

Expected behavior

It should install with the correct args passed through to the container.

Environment

N/A

Additional context

Find the problem here:
https://github.com/ory/k8s/blob/master/helm/charts/hydra/templates/deployment.yaml#L57-L62

Not sure whether hydra accepts the "--dangerous-allow-insecure-redirect-urls" flag multiple times or whether the value should be comma-separated.
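
A hypothetical template fragment that joins the values with commas - assuming the value is changed to a list of URLs, and pending verification of what the hydra CLI actually expects - could look like:

{{- if .Values.hydra.dangerousAllowInsecureRedirectUrls }}
# Assumption: the flag accepts a comma-separated list; verify against the hydra CLI.
- "--dangerous-allow-insecure-redirect-urls={{ join "," .Values.hydra.dangerousAllowInsecureRedirectUrls }}"
{{- end }}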

Considering moving to kustomize

Is your feature request related to a problem? Please describe.
Yes; while installing through Helm I felt somewhat limited by the templates, e.g. I want to create some additional environment variables, use Istio's VirtualService in place of an Ingress, add some CRDs along with hydra, etc.

Describe the solution you'd like
I propose moving to Kustomize. Apart from the main benefit that it is built into kubectl, it allows a better way of overriding the base, is closer to vanilla YAML, and is template-free.

Describe alternatives you've considered
helm convert, a plugin I used to convert my own deployments. I will send a PR connecting this issue.
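
For illustration, a minimal kustomization.yaml in that style (the base path and patch file are hypothetical):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base/hydra              # hypothetical base directory
patchesStrategicMerge:
  - hydra-env-patch.yaml    # hypothetical patch adding extra environment variables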

Maester still installed with `maester.enabled=false`

Describe the bug

When installing hydra with maester.enabled=false, the hydra-maester still gets installed.

To Reproduce

Steps to reproduce the behavior:

  1. in a local minikube environment:
➜  hydra git:(master) ✗ minikube version
minikube version: v1.6.1
commit: 42a9df4854dcea40ec187b6b8f9a910c6038f81a
  2. before installing hydra list all pods:
➜  hydra git:(master) ✗ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       hydra-idp-example-idp-6546f97574-2f79j   1/1     Running   1          9h
kube-system   coredns-6955765f44-4h67k                 1/1     Running   4          12d
kube-system   coredns-6955765f44-8bqz6                 1/1     Running   4          12d
kube-system   etcd-minikube                            1/1     Running   4          12d
kube-system   kube-addon-manager-minikube              1/1     Running   4          12d
kube-system   kube-apiserver-minikube                  1/1     Running   4          12d
kube-system   kube-controller-manager-minikube         1/1     Running   12         12d
kube-system   kube-proxy-492ns                         1/1     Running   4          12d
kube-system   kube-scheduler-minikube                  1/1     Running   11         12d
kube-system   storage-provisioner                      1/1     Running   7          12d
  3. install hydra using the following command:
helm install hydra ory/hydra \
    --set 'maester.enabled=false' \
    --set 'hydra.dangerousForceHttp=true' \
    --set 'hydra.config.dsn=memory' \
    --set 'hydra.config.urls.self.issuer=http://localhost:4444' \
    --set 'hydra.config.urls.login=http://localhost:3000/login' \
    --set 'hydra.config.urls.consent=http://localhost:3000/consent' \
    --set 'hydra.config.oauth2.expose_internal_errors=true'
  4. as a result, hydra-maester is also installed along with hydra:
➜  hydra git:(master) ✗ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
default       hydra-674bd4dcfb-fj74l                   0/1     Running   0          7s
default       hydra-hydra-maester-754c678b64-6qvkt     1/1     Running   0          7s
default       hydra-idp-example-idp-6546f97574-2f79j   1/1     Running   1          9h
kube-system   coredns-6955765f44-4h67k                 1/1     Running   4          12d
kube-system   coredns-6955765f44-8bqz6                 1/1     Running   4          12d
kube-system   etcd-minikube                            1/1     Running   4          12d
kube-system   kube-addon-manager-minikube              1/1     Running   4          12d
kube-system   kube-apiserver-minikube                  1/1     Running   4          12d
kube-system   kube-controller-manager-minikube         1/1     Running   12         12d
kube-system   kube-proxy-492ns                         1/1     Running   4          12d
kube-system   kube-scheduler-minikube                  1/1     Running   11         12d
kube-system   storage-provisioner                      1/1     Running   7          12d

Expected behavior

The hydra-maester k8s controller should not be installed when hydra is installed with the attribute maester.enabled set to false.

For instance, listing all pods using kubectl get pods -A should not list any pod for hydra-maester.

Environment

K8s version:

➜  hydra git:(master) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:11:03Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Helm version:

➜  hydra git:(master) ✗ helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}

Helm repo:

➜  hydra git:(master) ✗ helm repo list
NAME	URL
ory 	https://k8s.ory.sh/helm/charts

Additional context

Using the --debug and --dry-run flags when executing the helm install command gives the following output:

➜  hydra git:(master) ✗ helm install hydra ory/hydra \
    --debug \
    --dry-run \
    --set 'maester.enabled=false' \
    --set 'hydra.dangerousForceHttp=true' \
    --set 'hydra.config.dsn=memory' \
    --set 'hydra.config.urls.self.issuer=http://localhost:4444' \
    --set 'hydra.config.urls.login=http://localhost:3000/login' \
    --set 'hydra.config.urls.consent=http://localhost:3000/consent' \
    --set 'hydra.config.oauth2.expose_internal_errors=true'
install.go:148: [debug] Original chart version: ""
install.go:165: [debug] CHART PATH: /Users/garryyuan/Library/Caches/helm/repository/hydra-0.0.48.tgz

NAME: hydra
LAST DEPLOYED: Mon Dec 30 21:59:38 2019
NAMESPACE: default
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
hydra:
  config:
    dsn: memory
    oauth2:
      expose_internal_errors: true
    urls:
      consent: http://localhost:3000/consent
      login: http://localhost:3000/login
      self:
        issuer: http://localhost:4444
  dangerousForceHttp: true
maester:
  enabled: false

COMPUTED VALUES:
affinity: {}
deployment:
  annotations: {}
  labels: {}
  nodeSelector: {}
  resources: {}
  tolerations: []
fullnameOverride: ""
hydra:
  autoMigrate: false
  config:
    dsn: memory
    oauth2:
      expose_internal_errors: true
    secrets: {}
    serve:
      admin:
        port: 4445
      public:
        port: 4444
      tls:
        allow_termination_from:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    urls:
      consent: http://localhost:3000/consent
      login: http://localhost:3000/login
      self:
        issuer: http://localhost:4444
  dangerousAllowInsecureRedirectUrls: false
  dangerousForceHttp: true
hydra-maester:
  adminService:
    name: null
    port: null
  affinity: {}
  deployment:
    annotations: {}
    nodeSelector: {}
    resources: {}
    tolerations: []
  enabledNamespaces: []
  global: {}
  image:
    pullPolicy: IfNotPresent
    repository: oryd/hydra-maester
    tag: v0.0.5
  rbacJob:
    image:
      repository: eu.gcr.io/kyma-project/test-infra/alpine-kubectl
      tag: v20190325-ff66a3a
  replicaCount: 1
image:
  pullPolicy: IfNotPresent
  repository: oryd/hydra
  tag: v1.0.0
imagePullSecrets: []
ingress:
  admin:
    annotations: {}
    enabled: false
    hosts:
    - host: admin.hydra.localhost
      paths:
      - /
  public:
    annotations: {}
    enabled: false
    hosts:
    - host: public.hydra.localhost
      paths:
      - /
maester:
  adminService: null
  enabled: false
nameOverride: ""
replicaCount: 1
service:
  admin:
    annotations: {}
    enabled: true
    port: 4445
    type: ClusterIP
  public:
    annotations: {}
    enabled: true
    port: 4444
    type: ClusterIP

HOOKS:
---
# Source: hydra/charts/hydra-maester/templates/crd-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hydra-crd-init
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": "before-hook-creation"
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "get", "list", "watch", "patch"]
---
# Source: hydra/charts/hydra-maester/templates/crd-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hydra-crd-init
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": "before-hook-creation"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hydra-crd-init
subjects:
  - kind: ServiceAccount
    name: hydra-crd-init
    namespace: default
---
# Source: hydra/charts/hydra-maester/templates/crd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hydra-crd-init
  namespace: default
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": "before-hook-creation"
---
# Source: hydra/charts/hydra-maester/templates/crd-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: default
  name: hydra-crd-oauth2clients
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": "before-hook-creation"
data:
  oauth2clients.yaml: |-
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      creationTimestamp: null
      name: oauth2clients.hydra.ory.sh
    spec:
      group: hydra.ory.sh
      names:
        kind: OAuth2Client
        plural: oauth2clients
      scope: ""
      subresources:
        status: {}
      validation:
        openAPIV3Schema:
          description: OAuth2Client is the Schema for the oauth2clients API
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the latest
                internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
              type: string
            metadata:
              properties:
                annotations:
                  additionalProperties:
                    type: string
                  description: 'Annotations is an unstructured key value map stored with
                    a resource that may be set by external tools to store and retrieve
                    arbitrary metadata. They are not queryable and should be preserved
                    when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations'
                  type: object
                clusterName:
                  description: The name of the cluster which the object belongs to. This
                    is used to distinguish resources with same name and namespace in different
                    clusters. This field is not set anywhere right now and apiserver is
                    going to ignore it if set in create or update request.
                  type: string
                creationTimestamp:
                  description: "CreationTimestamp is a timestamp representing the server
                    time when this object was created. It is not guaranteed to be set
                    in happens-before order across separate operations. Clients may not
                    set this value. It is represented in RFC3339 form and is in UTC. \n
                    Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata"
                  format: date-time
                  type: string
                deletionGracePeriodSeconds:
                  description: Number of seconds allowed for this object to gracefully
                    terminate before it will be removed from the system. Only set when
                    deletionTimestamp is also set. May only be shortened. Read-only.
                  format: int64
                  type: integer
                deletionTimestamp:
                  description: "DeletionTimestamp is RFC 3339 date and time at which this
                    resource will be deleted. This field is set by the server when a graceful
                    deletion is requested by the user, and is not directly settable by
                    a client. The resource is expected to be deleted (no longer visible
                    from resource lists, and not reachable by name) after the time in
                    this field, once the finalizers list is empty. As long as the finalizers
                    list contains items, deletion is blocked. Once the deletionTimestamp
                    is set, this value may not be unset or be set further into the future,
                    although it may be shortened or the resource may be deleted prior
                    to this time. For example, a user may request that a pod is deleted
                    in 30 seconds. The Kubelet will react by sending a graceful termination
                    signal to the containers in the pod. After that 30 seconds, the Kubelet
                    will send a hard termination signal (SIGKILL) to the container and
                    after cleanup, remove the pod from the API. In the presence of network
                    partitions, this object may still exist after this timestamp, until
                    an administrator or automated process can determine the resource is
                    fully terminated. If not set, graceful deletion of the object has
                    not been requested. \n Populated by the system when a graceful deletion
                    is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata"
                  format: date-time
                  type: string
                finalizers:
                  description: Must be empty before the object is deleted from the registry.
                    Each entry is an identifier for the responsible component that will
                    remove the entry from the list. If the deletionTimestamp of the object
                    is non-nil, entries in this list can only be removed.
                  items:
                    type: string
                  type: array
                generateName:
                  description: "GenerateName is an optional prefix, used by the server,
                    to generate a unique name ONLY IF the Name field has not been provided.
                    If this field is used, the name returned to the client will be different
                    than the name passed. This value will also be combined with a unique
                    suffix. The provided value has the same validation rules as the Name
                    field, and may be truncated by the length of the suffix required to
                    make the value unique on the server. \n If this field is specified
                    and the generated name exists, the server will NOT return a 409 -
                    instead, it will either return 201 Created or 500 with Reason ServerTimeout
                    indicating a unique name could not be found in the time allotted,
                    and the client should retry (optionally after the time indicated in
                    the Retry-After header). \n Applied only if Name is not specified.
                    More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency"
                  type: string
                generation:
                  description: A sequence number representing a specific generation of
                    the desired state. Populated by the system. Read-only.
                  format: int64
                  type: integer
                initializers:
                  description: "An initializer is a controller which enforces some system
                    invariant at object creation time. This field is a list of initializers
                    that have not yet acted on this object. If nil or empty, this object
                    has been completely initialized. Otherwise, the object is considered
                    uninitialized and is hidden (in list/watch and get calls) from clients
                    that haven't explicitly asked to observe uninitialized objects. \n
                    When an object is created, the system will populate this list with
                    the current set of initializers. Only privileged users may set or
                    modify this list. Once it is empty, it may not be modified further
                    by any user. \n DEPRECATED - initializers are an alpha field and will
                    be removed in v1.15."
                  properties:
                    pending:
                      description: Pending is a list of initializers that must execute
                        in order before this object is visible. When the last pending
                        initializer is removed, and no failing result is set, the initializers
                        struct will be set to nil and the object is considered as initialized
                        and visible to all clients.
                      items:
                        properties:
                          name:
                            description: name of the process that is responsible for initializing
                              this object.
                            type: string
                        required:
                        - name
                        type: object
                      type: array
                    result:
                      description: If result is set with the Failure field, the object
                        will be persisted to storage and then deleted, ensuring that other
                        clients can observe the deletion.
                      properties:
                        apiVersion:
                          description: 'APIVersion defines the versioned schema of this
                            representation of an object. Servers should convert recognized
                            schemas to the latest internal value, and may reject unrecognized
                            values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
                          type: string
                        code:
                          description: Suggested HTTP return code for this status, 0 if
                            not set.
                          format: int32
                          type: integer
                        details:
                          description: Extended data associated with the reason.  Each
                            reason may define its own extended details. This field is
                            optional and the data returned is not guaranteed to conform
                            to any schema except that defined by the reason type.
                          properties:
                            causes:
                              description: The Causes array includes more details associated
                                with the StatusReason failure. Not all StatusReasons may
                                provide detailed causes.
                              items:
                                properties:
                                  field:
                                    description: "The field of the resource that has caused
                                      this error, as named by its JSON serialization.
                                      May include dot and postfix notation for nested
                                      attributes. Arrays are zero-indexed.  Fields may
                                      appear more than once in an array of causes due
                                      to fields having multiple errors. Optional. \n Examples:
                                      \  \"name\" - the field \"name\" on the current
                                      resource   \"items[0].name\" - the field \"name\"
                                      on the first array entry in \"items\""
                                    type: string
                                  message:
                                    description: A human-readable description of the cause
                                      of the error.  This field may be presented as-is
                                      to a reader.
                                    type: string
                                  reason:
                                    description: A machine-readable description of the
                                      cause of the error. If this value is empty there
                                      is no information available.
                                    type: string
                                type: object
                              type: array
                            group:
                              description: The group attribute of the resource associated
                                with the status StatusReason.
                              type: string
                            kind:
                              description: 'The kind attribute of the resource associated
                                with the status StatusReason. On some operations may differ
                                from the requested resource Kind. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
                              type: string
                            name:
                              description: The name attribute of the resource associated
                                with the status StatusReason (when there is a single name
                                which can be described).
                              type: string
                            retryAfterSeconds:
                              description: If specified, the time in seconds before the
                                operation should be retried. Some errors may indicate
                                the client must take an alternate action - for those errors
                                this field may indicate how long to wait before taking
                                the alternate action.
                              format: int32
                              type: integer
                            uid:
                              description: 'UID of the resource. (when there is a single
                                resource which can be described). More info: http://kubernetes.io/docs/user-guide/identifiers#uids'
                              type: string
                          type: object
                        kind:
                          description: 'Kind is a string value representing the REST resource
                            this object represents. Servers may infer this from the endpoint
                            the client submits requests to. Cannot be updated. In CamelCase.
                            More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
                          type: string
                        message:
                          description: A human-readable description of the status of this
                            operation.
                          type: string
                        metadata:
                          description: 'Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
                          properties:
                            continue:
                              description: continue may be set if the user set a limit
                                on the number of items returned, and indicates that the
                                server has more data available. The value is opaque and
                                may be used to issue another request to the endpoint that
                                served this list to retrieve the next set of available
                                objects. Continuing a consistent list may not be possible
                                if the server configuration has changed or more than a
                                few minutes have passed. The resourceVersion field returned
                                when using this continue value will be identical to the
                                value in the first response, unless you have received
                                this token from an error message.
                              type: string
                            resourceVersion:
                              description: 'String that identifies the server''s internal
                                version of this object that can be used by clients to
                                determine when objects have changed. Value must be treated
                                as opaque by clients and passed unmodified back to the
                                server. Populated by the system. Read-only. More info:
                                https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency'
                              type: string
                            selfLink:
                              description: selfLink is a URL representing this object.
                                Populated by the system. Read-only.
                              type: string
                          type: object
                        reason:
                          description: A machine-readable description of why this operation
                            is in the "Failure" status. If this value is empty there is
                            no information available. A Reason clarifies an HTTP status
                            code but does not override it.
                          type: string
                        status:
                          description: 'Status of the operation. One of: "Success" or
                            "Failure". More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status'
                          type: string
                      type: object
                  required:
                  - pending
                  type: object
                labels:
                  additionalProperties:
                    type: string
                  description: 'Map of string keys and values that can be used to organize
                    and categorize (scope and select) objects. May match selectors of
                    replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels'
                  type: object
                managedFields:
                  description: "ManagedFields maps workflow-id and version to the set
                    of fields that are managed by that workflow. This is mostly for internal
                    housekeeping, and users typically shouldn't need to set or understand
                    this field. A workflow can be the user's name, a controller's name,
                    or the name of a specific apply path like \"ci-cd\". The set of fields
                    is always in the version that the workflow used when modifying the
                    object. \n This field is alpha and can be changed or removed without
                    notice."
                  items:
                    properties:
                      apiVersion:
                        description: APIVersion defines the version of this resource that
                          this field set applies to. The format is "group/version" just
                          like the top-level APIVersion field. It is necessary to track
                          the version of a field set because it cannot be automatically
                          converted.
                        type: string
                      fields:
                        additionalProperties: true
                        description: Fields identifies a set of fields.
                        type: object
                      manager:
                        description: Manager is an identifier of the workflow managing
                          these fields.
                        type: string
                      operation:
                        description: Operation is the type of operation which lead to
                          this ManagedFieldsEntry being created. The only valid values
                          for this field are 'Apply' and 'Update'.
                        type: string
                      time:
                        description: Time is timestamp of when these fields were set.
                          It should always be empty if Operation is 'Apply'
                        format: date-time
                        type: string
                    type: object
                  type: array
                name:
                  description: 'Name must be unique within a namespace. Is required when
                    creating resources, although some resources may allow a client to
                    request the generation of an appropriate name automatically. Name
                    is primarily intended for creation idempotence and configuration definition.
                    Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names'
                  type: string
                namespace:
                  description: "Namespace defines the space within each name must be unique.
                    An empty namespace is equivalent to the \"default\" namespace, but
                    \"default\" is the canonical representation. Not all objects are required
                    to be scoped to a namespace - the value of this field for those objects
                    will be empty. \n Must be a DNS_LABEL. Cannot be updated. More info:
                    http://kubernetes.io/docs/user-guide/namespaces"
                  type: string
                ownerReferences:
                  description: List of objects depended by this object. If ALL objects
                    in the list have been deleted, this object will be garbage collected.
                    If this object is managed by a controller, then an entry in this list
                    will point to this controller, with the controller field set to true.
                    There cannot be more than one managing controller.
                  items:
                    properties:
                      apiVersion:
                        description: API version of the referent.
                        type: string
                      blockOwnerDeletion:
                        description: If true, AND if the owner has the "foregroundDeletion"
                          finalizer, then the owner cannot be deleted from the key-value
                          store until this reference is removed. Defaults to false. To
                          set this field, a user needs "delete" permission of the owner,
                          otherwise 422 (Unprocessable Entity) will be returned.
                        type: boolean
                      controller:
                        description: If true, this reference points to the managing controller.
                        type: boolean
                      kind:
                        description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
                        type: string
                      name:
                        description: 'Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names'
                        type: string
                      uid:
                        description: 'UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids'
                        type: string
                    required:
                    - apiVersion
                    - kind
                    - name
                    - uid
                    type: object
                  type: array
                resourceVersion:
                  description: "An opaque value that represents the internal version of
                    this object that can be used by clients to determine when objects
                    have changed. May be used for optimistic concurrency, change detection,
                    and the watch operation on a resource or set of resources. Clients
                    must treat these values as opaque and passed unmodified back to the
                    server. They may only be valid for a particular resource or set of
                    resources. \n Populated by the system. Read-only. Value must be treated
                    as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency"
                  type: string
                selfLink:
                  description: SelfLink is a URL representing this object. Populated by
                    the system. Read-only.
                  type: string
                uid:
                  description: "UID is the unique in time and space value for this object.
                    It is typically generated by the server on successful creation of
                    a resource and is not allowed to change on PUT operations. \n Populated
                    by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids"
                  type: string
              type: object
            spec:
              properties:
                grantTypes:
                  description: GrantTypes is an array of grant types the client is allowed
                    to use.
                  items:
                    enum:
                    - client_credentials
                    - authorization_code
                    - implicit
                    - refresh_token
                    type: string
                  maxItems: 4
                  minItems: 1
                  type: array
                hydraAdmin:
                  description: HydraAdmin is the optional configuration to use for managing
                    this client
                  properties:
                    endpoint:
                      description: Endpoint is the endpoint for the hydra instance on
                        which to set up the client. This value will override the value
                        provided to `--endpoint` (defaults to `"/clients"` in the application)
                      pattern: (^$|^/.*)
                      type: string
                    forwardedProto:
                      description: ForwardedProto overrides the `--forwarded-proto` flag.
                        The value "off" will force this to be off even if `--forwarded-proto`
                        is specified
                      pattern: (^$|https?|off)
                      type: string
                    port:
                      description: Port is the port for the hydra instance on which to
                        set up the client. This value will override the value provided
                        to `--hydra-port`
                      maximum: 65535
                      type: integer
                    url:
                      description: URL is the URL for the hydra instance on which to set
                        up the client. This value will override the value provided to
                        `--hydra-url`
                      maxLength: 64
                      pattern: (^$|^https?://.*)
                      type: string
                  type: object
                redirectUris:
                  description: RedirectURIs is an array of the redirect URIs allowed for
                    the application
                  items:
                    pattern: \w+:/?/?[^\s]+
                    type: string
                  type: array
                responseTypes:
                  description: ResponseTypes is an array of the OAuth 2.0 response type
                    strings that the client can use at the authorization endpoint.
                  items:
                    enum:
                    - id_token
                    - code
                    - token
                    type: string
                  maxItems: 3
                  minItems: 1
                  type: array
                scope:
                  description: Scope is a string containing a space-separated list of
                    scope values (as described in Section 3.3 of OAuth 2.0 [RFC6749])
                    that the client can use when requesting access tokens.
                  pattern: ([a-zA-Z0-9\.\*]+\s?)+
                  type: string
                secretName:
                  description: SecretName points to the K8s secret that contains this
                    client's ID and password
                  maxLength: 253
                  minLength: 1
                  pattern: '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
                  type: string
              required:
              - grantTypes
              - scope
              - secretName
              type: object
            status:
              properties:
                observedGeneration:
                  description: ObservedGeneration represents the most recent generation
                    observed by the daemon set controller.
                  format: int64
                  type: integer
                reconciliationError:
                  properties:
                    description:
                      description: Description is the description of the reconciliation
                        error
                      type: string
                    statusCode:
                      description: Code is the status code of the reconciliation error
                      type: string
                  type: object
              type: object
          type: object
      versions:
      - name: v1alpha1
        served: true
        storage: true
    status:
      acceptedNames:
        kind: ""
        plural: ""
      conditions: []
      storedVersions: []
---
# Source: hydra/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hydra
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
  annotations:
    # Create the secret before installation, and only then. This saves the secret from regenerating during an upgrade
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-delete-policy": "before-hook-creation"
type: Opaque
data:
  # Generate a random secret if the user doesn't give one. User given password has priority
  secretsSystem: "emwzSGFwQTMxS08xY0RNOU9KT3dqVHFRY3ZUUHFHa0c="
  secretsCookie: "SFBsNm01d3M0aXhXZ1h3c1J1cHJ5R0lIWEF3b2NxUVE="
  dsn: "bWVtb3J5"
---
# Source: hydra/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "hydra-test-connection"
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: healthcheck-ready
      image: busybox
      command: ['wget']
      args:  ['hydra-admin:4445/health/ready']
  restartPolicy: Never
---
# Source: hydra/charts/hydra-maester/templates/crd-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  namespace: default
  name: hydra-crd-init
  annotations:
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
    "helm.sh/hook": "pre-install,pre-upgrade"
    "helm.sh/hook-weight": "10"
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      serviceAccountName: hydra-crd-init
      containers:
      - name: hydra-crd-oauth2clients
        image: "eu.gcr.io/kyma-project/test-infra/alpine-kubectl:v20190325-ff66a3a"
        volumeMounts:
        - name: crd-oauth2clients
          mountPath: /etc/crd
          readOnly: true
        command: ["kubectl",  "apply", "-f", "/etc/crd/oauth2clients.yaml"]
      volumes:
      - name: crd-oauth2clients
        configMap:
          name: hydra-crd-oauth2clients
      restartPolicy: OnFailure
MANIFEST:
---
# Source: hydra/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hydra
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
data:
  "config.yaml": |

    oauth2:
      expose_internal_errors: true
    serve:
      admin:
        port: 4445
      public:
        port: 4444
      tls:
        allow_termination_from:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    urls:
      consent: http://localhost:3000/consent
      login: http://localhost:3000/login
      self:
        issuer: http://localhost:4444
---
# Source: hydra/charts/hydra-maester/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hydra-maester-account
  namespace:  default
---
# Source: hydra/charts/hydra-maester/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hydra-maester-role
  namespace:  default
rules:
  - apiGroups: ["hydra.ory.sh"]
    resources: ["oauth2clients", "oauth2clients/status"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list", "watch", "create"]
---
# Source: hydra/charts/hydra-maester/templates/rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hydra-maester-role-binding
  namespace:  default
subjects:
  - kind: ServiceAccount
    name: hydra-maester-account # Service account assigned to the controller pod.
    namespace:  default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hydra-maester-role
---
# Source: hydra/charts/hydra-maester/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hydra-maester-role-default
  namespace:  default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create"]
---
# Source: hydra/charts/hydra-maester/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: hydra-maester-role-binding-default
  namespace:  default
subjects:
  - kind: ServiceAccount
    name: hydra-maester-account # Service account assigned to the controller pod.
    namespace:  default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hydra-maester-role-default
---
# Source: hydra/templates/service-admin.yaml
apiVersion: v1
kind: Service
metadata:
  name: hydra-admin
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
spec:
  type: ClusterIP
  ports:
    - port: 4445
      targetPort: http-admin
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: hydra
    app.kubernetes.io/instance: hydra
---
# Source: hydra/templates/service-public.yaml
apiVersion: v1
kind: Service
metadata:
  name: hydra-public
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
spec:
  type: ClusterIP
  ports:
    - port: 4444
      targetPort: http-public
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: hydra
    app.kubernetes.io/instance: hydra
---
# Source: hydra/charts/hydra-maester/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hydra-hydra-maester
  labels:
    app.kubernetes.io/name: hydra-maester
    helm.sh/chart: hydra-maester-0.0.5-alpha10
    app.kubernetes.io/instance: hydra
    app.kubernetes.io/version: "0.0.5-alpha10"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      control-plane: controller-manager
      app.kubernetes.io/name: hydra-maester
      app.kubernetes.io/instance: hydra
  template:
    metadata:
      labels:
        control-plane: controller-manager
        app.kubernetes.io/name: hydra-maester
        app.kubernetes.io/instance: hydra
    spec:
      containers:
        - name: hydra-maester
          image: "oryd/hydra-maester:v0.0.5"
          imagePullPolicy: IfNotPresent
          command:
            - /manager
          args:
            - --metrics-addr=127.0.0.1:8080
            - --hydra-url=http://hydra-admin
            - --hydra-port=4445
          resources:
            {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      serviceAccountName: hydra-maester-account
      nodeSelector:
---
# Source: hydra/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hydra
  namespace: default
  labels:
    "app.kubernetes.io/name": "hydra"
    "app.kubernetes.io/instance": "hydra"
    "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
    "app.kubernetes.io/managed-by": "Helm"
    "helm.sh/chart": "hydra-0.0.48"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hydra
      app.kubernetes.io/instance: hydra
  template:
    metadata:
      labels:
        "app.kubernetes.io/name": "hydra"
        "app.kubernetes.io/instance": "hydra"
        "app.kubernetes.io/version": "v1.0.0-rc.14_oryOS.12"
        "app.kubernetes.io/managed-by": "Helm"
        "helm.sh/chart": "hydra-0.0.48"
      annotations:
    spec:
      volumes:
        - name: hydra-config-volume
          configMap:
            name: hydra
      containers:
        - name: hydra
          image: "oryd/hydra:v1.0.0"
          imagePullPolicy: IfNotPresent
          command: ["hydra"]
          volumeMounts:
            - name: hydra-config-volume
              mountPath: /etc/config
              readOnly: true
          args: [
            "serve",
            "all",
            "--dangerous-force-http",
            "--config",
            "/etc/config/config.yaml"
          ]
          ports:
            - name: http-public
              containerPort: 4444
              protocol: TCP
            - name: http-admin
              containerPort: 4445
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /health/alive
              port: http-admin
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /health/ready
              port: http-admin
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 5
          env:
            - name: URLS_SELF_ISSUER
              value: "http://localhost:4444"
            - name: DSN
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: dsn
            - name: SECRETS_SYSTEM
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: secretsSystem
            - name: SECRETS_COOKIE
              valueFrom:
                secretKeyRef:
                  name: hydra
                  key: secretsCookie
          resources:
            {}

NOTES:
The ORY Hydra HTTP Public API is available via:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=hydra,app.kubernetes.io/instance=hydra" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:4444 to use your application"
  kubectl port-forward $POD_NAME 4444:4444
  export HYDRA_PUBLIC_URL=http://127.0.0.1:4444/
  curl $HYDRA_PUBLIC_URL/.well-known/openid-configuration

If you have the ORY Hydra CLI installed locally, you can run commands
against this endpoint:

  hydra token client \
    --endpoint $HYDRA_PUBLIC_URL \
    # ...

The ORY Hydra HTTP Admin API is available via:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=hydra,app.kubernetes.io/instance=hydra" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:4445 to use your application"
  kubectl port-forward $POD_NAME 4445:4445
  export HYDRA_ADMIN_URL=http://127.0.0.1:4445/
  curl $HYDRA_ADMIN_URL/clients

If you have the ORY Hydra CLI installed locally, you can run commands
against this endpoint:

  hydra clients list \
    --endpoint $HYDRA_ADMIN_URL

The notable part of the output above is that, under COMPUTED VALUES, it shows:

maester:
  adminService: null
  enabled: false
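
Yet the HOOKS and MANIFEST sections above still render the hydra-maester ServiceAccount, RBAC objects, and Deployment. The usual Helm mechanism for gating a subchart on such a flag is a dependency condition; a sketch of how the parent chart could declare it, assuming hydra-maester is listed as a dependency in the hydra chart's requirements.yaml or Chart.yaml:

dependencies:
  - name: hydra-maester
    version: 0.0.5-alpha10      # version taken from the rendered labels above
    condition: maester.enabled  # render the subchart only when this value is true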

Chart for kratos using old version

Describe the bug

The Helm chart for kratos references version 0.1.1, which is outdated.

To Reproduce

Steps to reproduce the behavior:

  1. helm install kratos
  2. Observe old version is deployed

Expected behavior

  1. helm install kratos
  2. Have a recent (or, once kratos is stable, a stable) version deployed. At the moment that is probably 0.3.0.

Environment

  • Version: helm chart version 0.1.0
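
As a workaround, the deployed image tag can usually be overridden at install time; the image.tag key is an assumption based on the value layout the hydra chart uses:

$ helm install kratos ory/kratos --set image.tag=v0.3.0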

Configurable annotations

Is your feature request related to a problem? Please describe.
I'm trying to deploy Oathkeeper behind Ambassador, and for that to work you have to add annotations to Oathkeeper's Service. Right now this isn't configurable in the chart.

Describe the solution you'd like
A way to add annotations with either a file or when deploying with an extra variable that can be passed.

Describe alternatives you've considered
The only solution right now is to fork and edit it myself, but I think this is a key feature that should be implemented in this chart.
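
A minimal sketch of the kind of values option being requested, assuming the chart exposed per-service annotation maps (the key paths below are a proposal, not the current chart API, and the service name/port are placeholders):

service:
  proxy:
    annotations:
      # classic Ambassador annotation-based Mapping attached to the proxy Service
      getambassador.io/config: |
        ---
        apiVersion: ambassador/v1
        kind: Mapping
        name: oathkeeper_proxy_mapping
        prefix: /
        service: oathkeeper-proxy:4455
  api:
    annotations: {}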

Kratos chart requires configuration that kratos doesn't need

Describe the bug

The kratos chart requires a non-empty value for kratos.config.secrets.session, however kratos 0.3.0 happily starts without this value defined.

To Reproduce

Steps to reproduce the behavior:

  1. Modify the chart secrets.yml template to allow an empty value for kratos.config.secrets.session
  2. helm install and note that kratos doesn't complain

Expected behavior

Either:

  1. The chart allows an empty value for kratos.config.secrets.session, or
  2. kratos fails to start with an empty value for kratos.config.secrets.session.

Environment

  • Version: kratos chart version 0.1.0

oathkeeper-maester fails to start

Hello, I started Oathkeeper in Kubernetes with Helm, but a pod named oathkeeper-maester failed to start. This is the pod log for oathkeeper-maester. What is the reason for this?

2020-01-01T08:18:44.365Z INFO setup using default values for authenticatorsAvailable
2020-01-01T08:18:44.365Z INFO setup using default values for authorizersAvailable
2020-01-01T08:18:44.365Z INFO setup using default values for mutatorsAvailable
2020-01-01T08:18:44.366Z INFO controller-runtime.controller Starting EventSource {"controller": "rule", "source": "kind source: /, Kind="}
2020-01-01T08:18:44.366Z ERROR controller-runtime.source if kind is a CRD, it should be installed before calling Start {"kind": "Rule.oathkeeper.ory.sh", "error": "no matches for kind "Rule" in version "oathkeeper.ory.sh/v1alpha1""}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:88
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122
sigs.k8s.io/controller-runtime/pkg/builder.(*Builder).doWatch
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/builder/build.go:191
sigs.k8s.io/controller-runtime/pkg/builder.(*Builder).Build
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/builder/build.go:180
sigs.k8s.io/controller-runtime/pkg/builder.(*Builder).Complete
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/builder/build.go:147
github.com/ory/oathkeeper-maester/controllers.(*RuleReconciler).SetupWithManager
/go/src/github.com/ory/oathkeeper-maester/controllers/rule_controller.go:120
main.main
/go/src/github.com/ory/oathkeeper-maester/main.go:100
runtime.main
/usr/local/go/src/runtime/proc.go:200
2020-01-01T08:18:44.366Z ERROR setup unable to create controller {"controller": "Rule", "error": "no matches for kind "Rule" in version "oathkeeper.ory.sh/v1alpha1""}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
main.main
/go/src/github.com/ory/oathkeeper-maester/main.go:102
runtime.main
/usr/local/go/src/runtime/proc.go:200

[helm] repo not resolving

Describe the bug

Helm cannot find hydra and example-idp charts, as chart index only contains oathkeeper

To Reproduce

Steps to reproduce the behavior:

#!/usr/bin/env bash
set -e

helm repo add ory https://k8s.ory.sh/helm/charts

helm install \
    --set hydraAdminUrl=http://hydra-example-admin:4445/ \
    --set hydraPublicUrl=http://public.hydra.localhost/ \
    --set ingress.enabled=true \
    --name hydra-example-idp \
    ory/example-idp

helm install \
    --set hydra.config.secrets.system=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | base64 | head -c 32) \
    --set hydra.config.dsn=memory \
    --set hydra.config.urls.self.issuer=http://public.hydra.localhost/ \
    --set hydra.config.urls.login=http://example-idp.localhost/login \
    --set hydra.config.urls.consent=http://example-idp.localhost/consent \
    --set hydra.config.urls.logout=http://example-idp.localhost/logout \
    --set ingress.public.enabled=true \
    --set ingress.admin.enabled=true \
    --set hydra.dangerousForceHttp=true \
    --name hydra-example \
    ory/hydra

Expected behavior

Charts should be found. The helm repo add ... step should perhaps be included in the hydra and oathkeeper md files.

Environment

  • Version: any
  • Environment: Linux

readinessProbe w/http-admin does not create valid health check on GKE loadbalancer

Describe the bug

On GKE, the load balancer fails because the correct health check for the public service is not found.

To Reproduce

My fan out ingress contains multiple hosts, the hydra section looks like this

    - host:  hydra.example.com
      http:
        paths:
        - backend:
            serviceName: releasename-hydra-public
            servicePort: http

Expected behavior

A valid health check would be found

Environment

  • GKE
  • HELM 3
  • hydra helm chart v0.0.43

Additional context

Changing the readiness probe in deployment.yaml as follows fixes the issue for me:

          readinessProbe:
            httpGet:
              path: /health/ready
              port: http-public

High Availability for hydra in k8s

Currently, Hydra can't start with more than one replica. When using Hydra as an application authorization provider (not a cluster authorization provider), so that every request passes through Hydra, a single replica is not acceptable, especially given that Kubernetes does not guarantee the liveness of any particular pod.
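
For context, a hedged sketch of the values one would expect to use once multiple replicas work; replicaCount is a common chart key, the DSN is a placeholder, and a shared SQL backend is a precondition because the in-memory store cannot be replicated:

replicaCount: 3
hydra:
  config:
    # an in-memory DSN ties all state to a single pod; a shared database is required for HA
    dsn: postgres://hydra:secret@postgresql:5432/hydra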

login consent tag rc16 doesn't exist

Describe the bug

The deployment fails with Failed to pull image "oryd/hydra-login-consent-node:v1.0.0-rc.16" because the tag oryd/hydra-login-consent-node:v1.0.0-rc.16 doesn't exist.

To Reproduce

  1. Run the hydra idp example according to docs
$ helm install --set hydraAdminUrl=http://hydra-example-admin:4445/ --set hydraPublicUrl=http://public.hydra.localhost/ --set ingress.enabled=true --name hydra-example-idp ory/example-idp
  2. Get pod name
$ kubectl get pods
NAME                                 READY     STATUS         RESTARTS   AGE
hydra-example-idp-69bdc9bd5c-5b8sp   0/1       ErrImagePull   0          2d
  3. Describe pod
kubectl describe pod hydra-example-idp-69bdc9bd5c-5b8sp
Name:               hydra-example-idp-69bdc9bd5c-5b8sp
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.64.2
Start Time:         Sat, 06 Jul 2019 06:17:27 -0400
Labels:             app.kubernetes.io/instance=hydra-example-idp
                    app.kubernetes.io/name=example-idp
                    pod-template-hash=69bdc9bd5c
Annotations:        <none>
Status:             Pending
IP:                 172.17.0.5
Controlled By:      ReplicaSet/hydra-example-idp-69bdc9bd5c
Containers:
  example-idp:
    Container ID:
    Image:          oryd/hydra-login-consent-node:v1.0.0-rc.16
    Image ID:
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HYDRA_ADMIN_URL:   http://hydra-example-admin:4445/
      HYDRA_PUBLIC_URL:  http://public.hydra.localhost/
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-twt6t (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-twt6t:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-twt6t
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  2d               default-scheduler  Successfully assigned default/hydra-example-idp-69bdc9bd5c-5b8sp to minikube
  Normal   BackOff    2d               kubelet, minikube  Back-off pulling image "oryd/hydra-login-consent-node:v1.0.0-rc.16"
  Warning  Failed     2d               kubelet, minikube  Error: ImagePullBackOff
  Normal   Pulling    2d (x2 over 2d)  kubelet, minikube  Pulling image "oryd/hydra-login-consent-node:v1.0.0-rc.16"
  Warning  Failed     2d (x2 over 2d)  kubelet, minikube  Failed to pull image "oryd/hydra-login-consent-node:v1.0.0-rc.16": rpc error: code = Unknown desc = Error response from daemon: manifest for oryd/hydra-login-consent-node:v1.0.0-rc.16 not found
  Warning  Failed     2d (x2 over 2d)  kubelet, minikube  Error: ErrImagePull

Expected behavior

Docker pulls the image successfully and the example runs.

Environment

  • minikube version 1.2.0
  • helm version 2.14.1

Additional context

I was following the instructions in the tutorial here: https://k8s.ory.sh/helm/hydra

Kratos chart doesn't insist on all the required configuration

Describe the bug

By careful eyeballing of the kratos config schema, I've figured out that 0.3.0 requires:

  • kratos.config.dsn
  • kratos.config.identity.traits.default_schema_url
  • kratos.config.courier.smtp.connection_uri
  • kratos.config.urls.self.public
  • kratos.config.urls.self.admin

It doesn't seem like the chart enforces presence of the latter 4 keys in the configuration. The chart documentation says the two required values are:

  • kratos.config.dsn
  • kratos.config.secrets.session

I have opened #152 to clarify whether kratos.config.secrets.session is really needed.
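
For reference, a minimal values sketch covering the five keys listed above; the values themselves are placeholders and the exact key paths may differ in later chart versions:

kratos:
  config:
    dsn: postgres://kratos:secret@postgresql:5432/kratos
    identity:
      traits:
        default_schema_url: file:///etc/config/identity.traits.schema.json
    courier:
      smtp:
        connection_uri: smtps://user:password@mailhog:1025
    urls:
      self:
        public: http://kratos-public.example.com/
        admin: http://kratos-admin.example.com/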

To Reproduce

Steps to reproduce the behavior:

  1. Follow the chart documentation
  2. helm install
  3. Observe kratos fail to start without the above required configuration

Expected behavior

  1. Documentation directs the chart user to set the required configuration
  2. The chart enforces required configuration

Environment

  • Version: chart version 0.1.0

hydra-automigrate is logging database DSN with password

Describe the bug

Hydra auto-migrate init container in Kubernetes logs multiple lines, which include the full DSN (which includes the database password). Info log level lines are also there, which hide database user and password with asterisks.

To Reproduce

Steps to reproduce the behavior:

  1. Enable autoMigrate in helm chart
  2. Deploy the chart.
  3. Check initContainer logs.

Expected behavior

Database password does not get logged.

Environment

  • Docker image oryd/hydra:v1.0 with image ID oryd/hydra@sha256:c60c647f6f34502ec6807a8423fb9cde0128abed3128c3d203750b68bb2ef81f (Docker Hub gives timestamp 3 days ago)
  • Tested in a Google Kubernetes Engine cluster.

Additional context

Logs from the init container pod.

➜ kubectl logs -f hydra-58fd8cb8dd-h5tnm -c hydra-automigrate
Config file not found because "Config File ".hydra" Not Found in "[/]""
migrate dsn set viper 2: postgres://hydra:<PASSWORD>@<HOST>:5432/hydra
migrate dsn set viper 3: postgres://hydra:<PASSWORD>@<HOST>:5432/hydra
time="2019-09-25T21:39:30Z" level=info msg="No tracer configured - skipping tracing setup"
time="2019-09-25T21:39:30Z" level=info msg="Establishing connection with SQL database backend" dsn="postgres://*:*@<HOST>:5432/hydra?"
time="2019-09-25T21:39:30Z" level=info msg="Successfully connected to SQL database backend" dsn="postgres://*:*@<HOST>:5432/hydra?"
Got dsn: postgres://hydra:<PASSWORD>@<HOST>:5432/hydraThe following migration is planned:

| DRIVER | MODULE | ID | # |        QUERY         |
|--------|--------|----|---|----------------------|
|--------|--------|----|---|----------------------|
Successfully applied 0 SQL migrations!

adminService in hydra's values yaml causes warning

Describe the bug
While installing hydra as a dependency with Helm3, there is a warning:

coalesce.go:196: warning: cannot overwrite table with non table for adminService (map[name:<nil> port:<nil>])

To Reproduce
Have a parent chart that depends on hydra-0.3.3 Helm chart, override some values, like the fullnameOverride.

Chart.yaml:

dependencies:
  - name: hydra
    version: 0.3.3
    repository: https://k8s.ory.sh/helm/charts

values.yaml:

hydra:
  fullnameOverride: &HYDRA_FULLNAME "release-hydra"
  maester:
    hydraFullnameOverride: *HYDRA_FULLNAME
    fullnameOverride: hydra-maester

Expected behavior

There is no warning while installing hydra as a dependency.

Environment

  • Helm Version: version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}

Possible Fix
Having adminService as a map removes the warning.

# Configures controller setup
maester:
  enabled: true
  # Values for the hydra admin service arguments to hydra-maester
  adminService: {}

helm/hydra: rework configMap and how secrets are handled

Is your feature request related to a problem? Please describe.
The generation of secrets in the current helm chart is not secure, as the values are stored in plaintext as part of the configMap.

In order to provide values for dsn, secrets.system and secrets.cookie, someone has to set or provide values for the matching config keys such as hydra.config.dsn. Later, in _helpers.tpl, hydra.dsn (as an example here) will be populated with the value of the provided key hydra.config.dsn.

Once they are provided and part of the config structure they aren't removed at any time and end up in the configMap in plaintext. Eventually the configMap will be mounted as a volume for the application.

As a result we end up having secrets we inject via environment variables in a secure way, while we are still mounting the same keys/values in plain text through the configMap.

Describe the solution you'd like
Remove secrets from the config structure and allow setting the keys directly:
hydra.config.secrets.system -> hydra.secrets.system
hydra.config.secrets.cookies -> hydra.secrets.cookie
hydra.config.dsn -> hydra.dsn
Also, if those keys are provided through the config object, those keys and their values should be removed from it before the configMap is stored.

As this is a breaking change to the current implementation, I'd like to get your input first, but I would be willing to bring the changes in as a PR.
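
A hedged sketch of the proposed values layout; the top-level keys follow the mapping above and are a proposal, not the chart's current API, and all values are placeholders:

hydra:
  # secrets and DSN live outside of hydra.config and are only injected as env vars
  dsn: postgres://hydra:secret@postgresql:5432/hydra
  secrets:
    system: some-long-random-string
    cookie: another-long-random-string
  config:
    # everything else stays here and is rendered into the configMap,
    # with dsn and secrets stripped before the configMap is stored
    urls:
      self:
        issuer: https://hydra.example.com/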

Maester not synchronizing when upgrading the chart?

Describe the bug

I modified the issuer url and upgraded the chart after that.

  • I'm unable to generate tokens with valid credentials
  • when looking at /clients it returns null

I do see the OAuth2Client resources with kubectl, and the same for the secrets (same as before the upgrade).

To Reproduce

DSN: memory

Expected behavior

That Hydra pod is fully synchronized with all clients

Additional context

When upgrading the issuer URL, only the Hydra pod was restarted, not the maester one.

I tried to force a restart of the maester pod, but nothing happens except that it logs Successfully Reconciled for all my clients...

In the Hydra pod I just see the following when trying to log in:

time="2020-01-20T16:41:25Z" level=info msg="started handling request" method=POST remote="127.0.0.1:57982" request=/oauth2/token
time="2020-01-20T16:41:25Z" level=error msg="An error occurred" debug="Not Found" description="Client authentication failed (e.g., unknown client, no client authentication included, or unsupported authentication method)" error=invalid_client
time="2020-01-20T16:41:25Z" level=info msg="completed handling request" measure#hydra/public: https://localhost:4444/.latency=868013 method=POST remote="127.0.0.1:57982" request=/oauth2/token status=401 text_status=Unauthorized took="868.013µs"

hydra helm chart does not seem to be using the urls.login, consent & logout values in the deployment

Describe the bug

hydra provides environment variables to set the login, consent & logout URLs. But the helm chart doesn't seem to be using those values in the deployment file. Currently, only the following variables are being used in the chart: URLS_SELF_ISSUER, SECRETS_SYSTEM, SECRETS_COOKIE

To Reproduce

Install hydra helm chart version 0.0.1 using helm v3

Expected behavior

helm values of urls.logout, urls.login, urls.consent environment variables are to be used in the hydra deployment.

  • Version: helm v3, hydra chart 0.0.1
  • Environment: MacOS

I can submit a PR fixing this issue.
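
A hedged sketch of the env entries the deployment template would need; the variable names follow Hydra's convention of upper-casing config paths, and the .Values paths are assumed from the chart's documented keys:

            - name: URLS_LOGIN
              value: {{ .Values.hydra.config.urls.login | quote }}
            - name: URLS_CONSENT
              value: {{ .Values.hydra.config.urls.consent | quote }}
            - name: URLS_LOGOUT
              value: {{ .Values.hydra.config.urls.logout | quote }}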

[Helm] Remove if-else handling of the namespace key

Since 09b61e5 we added a namespace key to the kubernetes manifests generated in Hydra helm chart. I hoped to address it via #15 & #18, but perhaps it's better to discuss here.

First of all, Helm always writes a namespace, and more than that kubernetes doesn't have a notion of "no namespace" — rather they have a "default". The principles of working with namespaces are decided by the owner of the cluster, and it is most common to separate it into staging / prod, for multiple versions of the same grouping of applications, and/or define layers and semantic groupings in the cluster such as core, monitoring, edge, etc.

Then, Helm never actually deploys without a namespace. If you template the current chart, you will see that helm ignores the if-else clause by virtue of specifying the namespace default.

Lastly, it is a common administrative practice to forbid deployment to default for reasons of security and clear team/org boundaries. But this may not be the case for someone trying out a chart in minikube or on their personal cloud.

With these in mind, I usually specify namespace key explicitly, as we already did, to bring the template and the result of helm's execution closer together (no magic keys appearing, everything is easily readable in either form, and easy to compare). It does not need a conditional. We should advise deployment in a dedicated namespace (also protects secrets).
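
For illustration, a minimal sketch of the explicit form being advocated, with no if-else around the namespace key (the helper name is illustrative):

metadata:
  name: {{ include "hydra.fullname" . }}
  namespace: {{ .Release.Namespace }}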

Consolidate subcharts of ORY Oathkeeper and ORY Hydra

Is your feature request related to a problem? Please describe.

Both ORY Oathkeeper and ORY Hydra have subcharts (example) which install the *-maester CRD Controllers.

This has resulted in several issues, apparently also related to changes in how Helm v3 processes subcharts:

I believe that the issues apply for ORY Oathkeeper and ORY Hydra alike as they both use subcharts.

Describe the solution you'd like

We either move away from subcharts and provide them as standalone charts - which would require documentation on how to set them up alongside the core project if needed - or we figure out a way to resolve those issues by keeping the subcharts.

I would prefer the second solution, if it is possible.

[helm] maester uses wrong service name

Environment

I am using the hydra chart as a requirement in a project.

dependencies:
  - name: hydra
    version: 0.0.46
    repository: "@ory"

Issue

When I install my helm chart with a random release name, maester can't reach the hydra-admin service.

maester logs:

ERROR	controller-runtime.controller	Reconciler error	{"controller": "oauth2client", "request": "default/fashionable-kangaroo-user-panel", "error": "Post http://fashionable-kangaroo-admin:4445/clients: dial tcp: lookup fashionable-kangaroo-admin on 10.43.0.10:53: no such host"}

Cause of the issue

There is obviously a mismatch between the service name and the name maester is trying to look up:

-> # kubectl get services
fashionable-kangaroo-hydra-admin                  ClusterIP   10.43.218.112   <none>        4445/TCP   3m20s
fashionable-kangaroo-hydra-public                 ClusterIP   10.43.22.59     <none>        4444/TCP   3m20s

Note that in the maester configuration the hydra url is missing the -hydra after fashionable-kangaroo as the following output illustrates:

 -> # kubectl get deployment fashionable-kangaroo-hydra-maester -o json
.......
            "spec": {
                "containers": [
                    {
                        "args": [
                            "--metrics-addr=127.0.0.1:8080",
                            "--hydra-url=http://fashionable-kangaroo-admin",
                            "--hydra-port=4445"
                        ],
........

Workaround

If the helm release name ends with -hydra, maester works. But in that case all the generated names of the hydra resources are not as I expected. I hope the following examples make it clear.

Helm release name not ending with -hydra (fashionable-kangaroo):

-> # kubectl get deployments
NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
fashionable-kangaroo-hydra                        1/1     1            1           4m43s
fashionable-kangaroo-hydra-maester                1/1     1            1           4m43s
-> # kubectl get services
NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
fashionable-kangaroo-hydra-admin                  ClusterIP   10.43.218.112   <none>        4445/TCP   3m20s
fashionable-kangaroo-hydra-public                 ClusterIP   10.43.22.59     <none>        4444/TCP   3m20s

Helm release name ending with -hydra (random-hydra):

-> # kubectl get deployments
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
random-hydra                              0/1     1            0           17s
random-hydra-hydra-maester                1/1     1            1           17s
-> # kubectl get services
NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
random-hydra-admin                        ClusterIP   10.43.186.131   <none>        4445/TCP   50s
random-hydra-public                       ClusterIP   10.43.59.170    <none>        4444/TCP   50s

Solution

Imho two things need to be solved.

  • Fix the naming issues so that the resources always contain the additional -hydra
  • Fix the maester template so that it uses the full name including -hydra
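
A hedged sketch of what the maester container args would need to look like so the hostname matches the actual admin service; the hydra.fullname helper name is illustrative and may not match the chart's actual helpers:

args:
  - --metrics-addr=127.0.0.1:8080
  - --hydra-url=http://{{ include "hydra.fullname" . }}-admin
  - --hydra-port={{ .Values.adminService.port | default 4445 }}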

Release strategy for ory/k8s

Hello 👋

It might be a good thing to have some automation for these charts. I'd like to check the interest in it before trying to implement it. First of all, I ask myself if released files from Helm should be hosted somewhere other than under docs/. I haven't researched any alternatives, but I'm thinking we will accumulate a lot of files that don't need to be there.

Secondly, how about some scripts that update these charts automatically according to the versions available for each program.

Thirdly, how about releasing versions for each helm chart? I could do some research and see if there are some nice solutions to it.

PS: I'd like to contribute, just like to know what would be positive PRs!

Installation fails with helm3 on minikube

Describe the bug

The ory/hydra chart fails to install under helm3

To Reproduce

Steps to reproduce the behavior:

  1. install helm3
  2. helm repo add ory https://k8s.ory.sh/helm/charts
  3. helm repo update
  4. try to run helm install as per the official guide
  5. the installation fails with Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(RoleBinding.roleRef): unknown field "namespace" in io.k8s.api.rbac.v1beta1.RoleRef

Expected behavior

The helm chart is installed successfully

Environment

  • Version: v1.0.0-rc.14_oryOS.12
  • Environment: helm v3.0.0, minikube v1.5.2 on Ubuntu
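
For reference, a minimal sketch of a RoleBinding whose roleRef is valid for the Kubernetes RBAC API: roleRef carries no namespace field, while the binding's own metadata does (resource names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hydra-maester-role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hydra-maester-role
subjects:
  - kind: ServiceAccount
    name: hydra-maester-account
    namespace: default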

Oathkeeper Helm Chart (Maester as a side car) - Wrong service account name

Describe the bug

When deploying the Oathkeeper helm chart with Maester as a sidecar, Oathkeeper is not deployed due to the service account not having the correct name.

Error creating: pods "oathkeeper-6877c9cd84-" is forbidden: error looking up service account oathkeeper/oathkeeper-maester-account: serviceaccount "oathkeeper-maester-account" not found

While debugging, I discovered that the sidecar wants a service account named {{ include "oathkeeper-maester.name" . }}-maester-account (see oathkeeper deployment-sidecar.yaml) but the service account in Maester is named {{ include "oathkeeper-maester.name" . }}-account (see maester rbac.yaml)

Now, that's part of the problem... When Oathkeeper calls the helper oathkeeper-maester.name from Maester's chart, the default value of .Chart.Name is not resolved to Maester's chart name but rather to the chart it's being called from (Oathkeeper in this case).

By default, maester's service account is called maester-account and in the sidecar deployment, it wants a certain prefix: {{ include "oathkeeper-maester.name" . }}-maester-account

To Reproduce

Steps to reproduce the behavior:

  1. minikube start
  2. git clone --branch v0.3.3 https://github.com/ory/k8s.git
  3. kubectl create ns oathkeeper
  4. helm install oathkeeper -n oathkeeper ./k8s/helm/charts/oathkeeper/ --set global.ory.oathkeeper.maester.mode=sidecar
  5. kubectl describe replicasets.apps oathkeeper-<random_id>

Expected behavior

Oathkeeper should be deployed correctly as a sidecar using the right maester's service account name.

Environment

  • minikube version: v1.9.2
  • kubectl version: v1.18.2

External secret for Oathkeeper JWKS data

Is your feature request related to a problem? Please describe.
When configuring Oathkeeper there is no mechanism in the chart to depend on an externally deployed secret for the JWKS data.

Describe the solution you'd like
There should be some way to configure the chart to use a secret that is already present in the deployment environment. It would then mount this secret instead of the one that is always created and that can be configured using oathkeeper.mutatorIdTokenJWKs.

Describe alternatives you've considered
The most straightforward alternative is to just use the oathkeeper.mutatorIdTokenJWKs configuration value. The caveat of this is that you then need to make this secret available in plain text to Helm as it wants to encode and create the secret for you. This could have security implications depending on where Helm is running.

Additional context
None
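
A hedged sketch of the kind of values option being requested; the existingSecret-style key below is hypothetical, as only oathkeeper.mutatorIdTokenJWKs exists today per this report:

oathkeeper:
  mutatorIdTokenJWKs: ""
  # hypothetical: reference a secret that is created and rotated outside of Helm
  mutatorIdTokenJWKsSecretName: my-existing-jwks-secret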

Configure leader election in oathkeeper-maester

Is your feature request related to a problem? Please describe.
Currently we can't reliably scale oathkeeper-maester when it runs in controller mode, because every replica would try to re-create the ConfigMap with Rules.

Describe the solution you'd like
We should configure leader election by adding the --enable-leader-election=true flag to the controller arguments.

Describe alternatives you've considered
If there's always only one replica, there's no problem.

Additional context
We can't add this flag until ory/oathkeeper-maester#34 is fixed: when the flag is set to true, the internal logger is triggered.
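
For reference, a hedged sketch of the controller args with the proposed flag added, alongside the metrics arg the controller already uses (the exact arg list may differ per chart version):

args:
  - --metrics-addr=127.0.0.1:8080
  - --enable-leader-election=true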

Hydra Helm chart timed out

Hi,
I'm trying to install Hydra on my kubernetes cluster using the Helm chart.
I followed the documentation found here: https://k8s.ory.sh/helm/

But when I try to install it using this command:

helm install --name hydra \
    --set 'hydra.config.secrets.system=strongpassword' \
    --set 'hydra.config.dsn=postgres://xxx.xxx.xxx' \
    --set 'hydra.config.urls.self.issuer=https://localhost:9000/' \
    --set 'hydra.config.urls.login=http://localhost:9020/login' \
    --set 'hydra.config.urls.consent=http://localhost:9020/consent' \
    --namespace hydra \
    ory/hydra

Nothing happens, and after a certain time I get this error message:

UPGRADE FAILED
Error: timed out waiting for the condition
Error: UPGRADE FAILED: timed out waiting for the condition

I'm using v2.14.3 of helm (client and server).

Thank you

Add documentation for kratos chart

Is your feature request related to a problem? Please describe.

It's related to a problem setting up a basic kratos instance. I get a lot of validation errors and need to fix them one by one, because I don't know what's required and what isn't. It fails at runtime because it cannot validate the config.

Describe the solution you'd like

A documentation like e.g. hydra has https://k8s.ory.sh/helm/hydra.html which would tell me exactly which values are required.

Describe alternatives you've considered

I found this document: https://www.ory.sh/kratos/docs/reference/configuration but it looks like it doesn't cover everything. Additionally, it looks like the defaults are not taken into account in the helm chart.

Additional context

N/A

hydra maester: DNS resolution fails when using fullnameOverride

Describe the bug

When using a fullname override that contains multiple "parts", the generated name for the admin service can fail to resolve

To Reproduce

Steps to reproduce the behavior:

Deploy the chart with the following name overrides included with the required values:

fullnameOverride: foo-ory-hydra
hydra-maester:
  fullnameOverride: foo-ory-hydra-maester

Expected behavior

I would expect the template in _helpers.yaml to first check for .Values.adminService when setting up the hostname and port, for example

adminService:
  name: foo-ory-hydra-admin
  port: 9999

(the port number is not an issue here, as I am using the defaults, but it should not be hard coded into the maester subchart anyway, in case custom values are used for .Values.service.admin.port in the main chart)

Perhaps the maester stuff should be in the same chart rather than a sub chart so that these values can be shared (unless you can grab main chart values from a subchart? not totally sure)?

Environment

chart version 0.0.42

Helm Hydra secrets failing

Followed the instructions to create a secret; the secret was created OK, but the helm chart fails with the error below. I can install other helm charts OK.

Error: create: failed to create: Secret "sh.helm.release.v1.hydra.existingSecret=my-secret.v1" is invalid: [metadata.name: Invalid value: "sh.helm.release.v1.hydra.existingSecret=my-secret.v1": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is 'a-z0-9?(.a-z0-9?)'), metadata.labels: Invalid value: "hydra.existingSecret=my-secret": a valid label must be an empty string or consist of alphanumeric characters, '-', '' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9.])?[A-Za-z0-9])?')]

helm install -f ./hydraconfig.yaml
--set 'ingress.public.enabled=true'
--set 'ingress.admin.enabled=true'
--set 'hydra.dangerousForceHttp=true'
--set 'hydra.autoMigrate=true'
'hydra.existingSecret=my-secret'
ory/hydra

Hydra / Maester: Make the install of custom CRDs optional and rely on Helm 3 convention for CRDs installation

Is your feature request related to a problem? Please describe.
My current dev workflow (Skaffold + Helm 3) does a lot of helm upgrade runs when iterating. The Hydra Maester chart takes a long time to upgrade, due to the usage of a Job to install CRDs.

As there is now a standardized way of installing CRDs with Helm 3 that would be faster / more standard, would it be possible to offer a way to rely on it?

Describe the solution you'd like

  • Make the installation of CRDs through a job at each helm upgrade optional with a new value parameter (that can default to true)
  • Move the CRD file into a /crds directory so that Helm 3 can install them at the first chart install only.

Describe alternatives you've considered
I see no other solution. Change my dev workflow maybe?

However, Skaffold and Helm seem to be popular choices.

Additional context
https://developer.ibm.com/blogs/kubernetes-helm-3/#simplified-custom-resource-definition-crd-support
helm/helm#5871
helm/helm#6243

-> Please note that, from what I understand, this would prevent the chart from updating the CRDs without changing their apiVersion, while the current behavior does update them.

If you're OK with the approach, I can do a PR.
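
A hedged sketch of the two pieces proposed above; the enabledCrdJob value name is hypothetical, and the crds/ path is the Helm 3 convention where files are installed only on the first install:

# Helm 3 convention: CRD manifests placed in the chart's crds/ directory are
# installed on the first `helm install` only and are skipped on upgrades.
#   hydra-maester/
#     crds/
#       oauth2clients.hydra.ory.sh.yaml
#
# Parent values.yaml, keeping the legacy CRD-installing Job optional
# (enabledCrdJob is a hypothetical value name, defaulting to true for backwards compatibility):
hydra-maester:
  enabledCrdJob: false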

Make use of helm test framework

Is your feature request related to a problem? Please describe.

Currently, we execute only very basic tests that use health checks to see if a service is up or not after installation.

We had several bugs in the helm charts that could have been caught if we used more advanced tests.

Describe the solution you'd like

Introduce better tests, for example:

  • Was the config map created? Does it have the right content?
  • Was the secret created and mounted? Does it have the right content?
  • Is the CRD Controller running properly and does it update the service when a CRD changes?
  • ...

Describe alternatives you've considered

Open to other ideas!

Additional context

Help would be appreciated!
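
As a starting point, a hedged sketch of a helm test hook that asserts on the rendered config (the hydra.fullname helper, the image, and the grep target are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "hydra.fullname" . }}-test-config"
  annotations:
    # Helm runs pods with this hook annotation only on `helm test`
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: check-config
      image: busybox:1.32
      # fail the test if the mounted configMap does not contain the expected key
      command: ["sh", "-c", "grep -q issuer /etc/config/config.yaml"]
      volumeMounts:
        - name: config
          mountPath: /etc/config
          readOnly: true
  volumes:
    - name: config
      configMap:
        name: "{{ include "hydra.fullname" . }}"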

hydra autoMigration pod crashing

Describe the bug

When deploying ory/hydra using the ory/k8s charts, if autoMigration is set to true the pod will crash and back off with the error below.

This is due to no --config being given. A fix would be to pass the --config argument to the initContainer, instead of fixing it in Hydra's rootCmd.

Config file not found because "Config File ".hydra" Not Found in "[/]""
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xd8 pc=0xac53f9]

goroutine 1 [running]:
github.com/ory/x/urlx.ParseOrFatal(0x0, 0x0, 0xc000038004, 0x75, 0x0)
	/go/pkg/mod/github.com/ory/[email protected]/urlx/parse.go:22 +0x69
github.com/ory/hydra/driver.(*RegistrySQL).CanHandle(0xc00040f710, 0xc000038004, 0x75, 0x0)
	/go/src/github.com/ory/hydra/driver/registry_sql.go:160 +0x54
github.com/ory/x/dbal.GetDriverFor(0xc000038004, 0x75, 0x75, 0x174e740, 0x7fbc8c5236d0, 0x0)
	/go/pkg/mod/github.com/ory/[email protected]/dbal/driver.go:36 +0x85
github.com/ory/hydra/driver.NewRegistry(0x105dec0, 0xc000076d20, 0x0, 0x0, 0x0, 0xc000076b40)
	/go/src/github.com/ory/hydra/driver/registry.go:59 +0x61
github.com/ory/hydra/driver.NewDefaultDriver(0x1058580, 0xc000076b40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/ory/hydra/driver/driver_default.go:21 +0xf6
github.com/ory/hydra/cmd/cli.(*MigrateHandler).MigrateSQL(0x1770720, 0x173b7c0, 0xc0004368c0, 0x0, 0x2)
	/go/src/github.com/ory/hydra/cmd/cli/handler_migrate.go:50 +0x9ff
github.com/spf13/cobra.(*Command).execute(0x173b7c0, 0xc000436860, 0x2, 0x2, 0x173b7c0, 0xc000436860)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0x173ba20, 0xdba640, 0x1770720, 0x0)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
github.com/ory/hydra/cmd.Execute()
	/go/src/github.com/ory/hydra/cmd/root.go:59 +0x32
main.main()
	/go/src/github.com/ory/hydra/main.go:33 +0x50

To Reproduce

Steps to reproduce the behavior:

  1. Deploy a brand new hydra using ory/hydra helm chart with autoMigration: true
  2. hydra pod will never be ready due to initContainer for migration crashing.

Expected behavior

Migration should happen and eventually hydra pod will be ready.

Environment

  • Version: v1.2.3, git sha hash
  • Environment: Debian, Docker, ...

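For reference, a hedged sketch of the chart-side fix, passing --config to the automigrate initContainer; the values paths, volume names, and exact migrate flags are illustrative and may differ per Hydra/chart version:

initContainers:
  - name: hydra-automigrate
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    command: ["hydra"]
    # pass --config so the init container sees the same config file as the main container
    args: ["migrate", "sql", "-e", "--yes", "--config", "/etc/config/config.yaml"]
    env:
      - name: DSN
        valueFrom:
          secretKeyRef:
            name: hydra
            key: dsn
    volumeMounts:
      - name: hydra-config-volume
        mountPath: /etc/config
        readOnly: true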

Hydra maester cannot be deployed to multiple namespaces

Describe the bug

Trying to deploy Hydra to multiple k8s namespaces results in

Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: , name: maester-role, existing_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole, new_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole

To Reproduce

Steps to reproduce the behavior:

helm install hydra ory/hydra --version 0.3.3 -n namespace1
helm install hydra ory/hydra --version 0.3.3 -n namespace2

Results in:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "maester-role" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "namespace2": current value is "namespace1"

Expected behavior

I expect to be able to successfully deploy Hydra into different namespaces without any errors.

Environment

  • Version:
$ helm version
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Is this intended behavior? Can hydra only be deployed once (maybe use enabledNamespaces for multiple namespaces)?

There is no oryd/hydra-login-consent-node version v1.4.6

Describe the bug

The latest version of oryd/hydra-login-consent-node available on Docker Hub is v1.4.3; the Helm chart of the example-idp instead points to version v1.4.6.

image:
  repository: oryd/hydra-login-consent-node
  tag: v1.4.6
  pullPolicy: IfNotPresent

To Reproduce

Try to install the example-idp and watch it fail.
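
Until the chart default is fixed, a hedged workaround is to pin the image tag to one that exists on Docker Hub (v1.4.3 per the report above):

image:
  repository: oryd/hydra-login-consent-node
  tag: v1.4.3
  pullPolicy: IfNotPresent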

Set additional sidecar + volumes

Hi,

To make a secure connection to the storage when using Google Cloud SQL, Google advises using a proxy that manages credentials through secrets (https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine?hl=fr)... it implies having this container alongside the Hydra container.

See chart examples:
https://github.com/helm/charts/blob/master/stable/sonatype-nexus/templates/deployment-statefulset.yaml#L208-L210
https://github.com/helm/charts/blob/master/stable/sonatype-nexus/templates/deployment-statefulset.yaml#L115-L117

Same here:
https://github.com/helm/charts/tree/master/stable/grafana
they use extraContainers and extraVolumes.

Having this inside your chart would help 👍

Thank you,
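
A hedged sketch of the kind of values this would enable; the extraContainers/extraVolumes key names follow the Grafana chart convention linked above and are not part of the Hydra chart today, and the project/instance names are placeholders:

extraContainers: |
  - name: cloud-sql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.17
    command:
      - /cloud_sql_proxy
      - -instances=my-project:europe-west1:hydra-db=tcp:5432
      - -credential_file=/secrets/cloudsql/credentials.json
    volumeMounts:
      - name: cloudsql-credentials
        mountPath: /secrets/cloudsql
        readOnly: true
extraVolumes: |
  - name: cloudsql-credentials
    secret:
      secretName: cloudsql-credentials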

Update oathkeeper version for kratos

Kratos is using a different version of oathkeeper (v.0.35-beta1) than the latest version of this helm chart provides (v.0.32-beta1)

Can you please update that?

hydra-maester custom resource definition should have scope Namespaced

The hydra-maester documentation describes the CRs handled by hydra-maester as bound to a namespace, but the CR definition has no scope set. This could lead to confusion on clusters where the default is set to Cluster. An explicit scope set to Namespaced would avoid such confusion.
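
For illustration, a hedged sketch of the explicit scope in the CRD manifest; the group, kind and apiVersion follow hydra-maester's OAuth2Client naming but may differ per chart version:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: oauth2clients.hydra.ory.sh
spec:
  group: hydra.ory.sh
  version: v1alpha1
  names:
    kind: OAuth2Client
    plural: oauth2clients
  # explicit, to avoid relying on any default
  scope: Namespaced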

hydra: DSN should be excluded from the configmap

Describe the bug

The DSN definition is making its way into the configMap, while the value is being consumed via the secretRef env configuration. This is in the hydra chart (though I have not tested the other charts yet).

To Reproduce

Install the chart as normal

Expected behavior

I would expect the DSN to only be stored in the secret that is set up

Environment

chart version 0.0.42
