
helm-charts's Issues

Update default workload resource cpu/memory request sizes

Currently, the default resource requests and limits for all products are:

    resources:
      requests:
        cpu: 500m
        memory: 500Mi
      limits:
        cpu: 4
        memory: 8Gi

For some products, like pingfederate-admin, 500Mi is woefully small; for others, like pingdelegator, it is quite large.

An example from a simple baseline looks like:

│ NAME                                       PF READY RESTARTS STATUS             CPU MEM %CPU/R %MEM/R %CPU/L %MEM/L IP            NODE              │
│ r044-pingfederate-admin-d99547768-r9dts    ●  1/1          0 Running             12 718      2    143      0      8 10.52.11.161  ip-10-52-17-104.… │
│ r044-pingfederate-engine-5db68d687c-zr78n  ●  1/1          0 Running             14 777      2    155      0      9 10.52.48.87   ip-10-52-33-50.u… │

Recommending:

| Product               | CPU Request | CPU Limit | Mem Request | Mem Limit |
|-----------------------|-------------|-----------|-------------|-----------|
| pingaccess-*          | 0           | 2         | 1Gi         | 4Gi       |
| pingdataconsole       | 0           | 2         | 500Mi       | 2Gi       |
| pingdatagovernance    | 0           | 2         | 1500Mi      | 4Gi       |
| pingdatagovernancepap | 0           | 2         | 750Mi       | 2Gi       |
| pingdatasync          | 0           | 2         | 750Mi       | 2Gi       |
| pingdelegator         | 0           | 2         | 32Mi        | 64Mi      |
| pingdirectory         | 0           | 2         | 1500Mi      | 4Gi       |
| pingfederate-*        | 0           | 2         | 1Gi         | 4Gi       |

The resulting template output when using the everything.yaml example:

#-------------------------------------------------------------------------------------
#
#           Product         Workload   cpu-R cpu-L mem-R mem-L  Ing 
#    --------------------- ----------- ----- ----- ----- ----- -----
#  √ pingaccess-admin      deployment  0     2     1Gi   4Gi   false
#  √ pingaccess-engine     deployment  0     2     1Gi   4Gi   false
#  √ pingdataconsole       deployment  0     2     .5Gi  2Gi   false
#  √ pingdatagovernance    deployment  0     2     1.5Gi 4Gi   false
#  √ pingdatagovernancepap deployment  0     2     .75Gi 2Gi   false
#  √ pingdatasync          deployment  0     2     .75Gi 2Gi   false
#  √ pingdelegator         deployment  0     500m  32Mi  64Mi  false
#  √ pingdirectory         statefulset 0     2     2Gi   8Gi   false
#  √ pingfederate-admin    deployment  0     2     1Gi   4Gi   false
#  √ pingfederate-engine   deployment  0     2     1Gi   4Gi   false
#
#  √ ldap-sdk-tools        deployment  0     0     0     0     false
#  √ pd-replication-timing deployment  0     0     0     0     false
#
#-------------------------------------------------------------------------------------
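
Deployers could still override these defaults per product in their values.yaml, for example (a sketch following the chart's container.resources structure, using the recommended pingfederate values):

  pingfederate-admin:
    container:
      resources:
        requests:
          cpu: 0
          memory: 1Gi
        limits:
          cpu: 2
          memory: 4Gi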

Improved Default Naming on Global vars

The current output of the default ports looks redundant:

apiVersion: v1
data:
  PA_ENGINE_ACME_PRIVATE_PORT: "8080"
  PA_ENGINE_ADMIN_PRIVATE_PORT: "9000"
  PA_ENGINE_ENGINE_PRIVATE_PORT: "3000"
  PA_ENGINE_PRIVATE_HOSTNAME: pingaccess
  PD_ENGINE_HTTPS_PRIVATE_PORT: "443"
  PD_ENGINE_LDAP_PRIVATE_PORT: "389"
  PD_ENGINE_LDAPS_PRIVATE_PORT: "636"
  PD_ENGINE_PRIVATE_HOSTNAME: pingdirectory
  PDG_ENGINE_HTTPS_PRIVATE_PORT: "443"
  PDG_ENGINE_PRIVATE_HOSTNAME: pingdatagovernance
  PF_ADMIN_ADMIN_PRIVATE_PORT: "9999"
  PF_ADMIN_PRIVATE_HOSTNAME: pingfederate-admin
  PF_ENGINE_ENGINE_PRIVATE_PORT: "9031"
  PF_ENGINE_PRIVATE_HOSTNAME: pingfederate-engine
kind: ConfigMap
metadata:
  name: global-env-vars

This reflects the naming structure:

  <product>_<port_name>_<PUBLIC|PRIVATE>_PORT

Since this naming structure allows ports to be renamed, the default port names should be changed to produce more comprehensible variable names, similar to the PingDirectory variables.

Requested enhancement: change the PingFederate admin and engine port names to "https".

This should give better names that can be used more consistently as a convention:

  PF_ADMIN_HTTPS_PRIVATE_PORT: "9999"
  PF_ADMIN_HTTPS_PUBLIC_PORT: "443"

ClusterIP Services port/targetPort be set to the containerPort

  services:
    ldap:
      servicePort: 389
      containerPort: 1389
      dataService: true
    ldaps:
      servicePort: 636
      containerPort: 1636
      dataService: true
      clusterService: true
    https:
      servicePort: 443
      containerPort: 1443
      ingressPort: 443
      dataService: true

When the service ports are different from the container ports, it leads to:

# Source: ping-devops/templates/pingdirectory/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: samir-pd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pingdirectory
    helm.sh/chart: ping-devops-0.5.6
  name: samir-pd-pingdirectory
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 1443
  - name: ldap
    port: 389
    protocol: TCP
    targetPort: 1389
  - name: ldaps
    port: 636
    protocol: TCP
    targetPort: 1636
  selector:
    app.kubernetes.io/instance: samir-pd
    app.kubernetes.io/name: pingdirectory

and:

  PD_ENGINE_PRIVATE_PORT_HTTPS: "443"
  PD_ENGINE_PRIVATE_PORT_LDAP: "389"
  PD_ENGINE_PRIVATE_PORT_LDAPS: "636"

and:

----- Starting hook: /opt/staging/hooks/80-post-start.sh
Waiting until PingDirectory service is running on this Server (samir-pd-pingdirectory-1.samir-pd-pingdirectory-cluster)
        samir-pd-pingdirectory-1.samir-pd-pingdirectory-cluster:1636

This means PingDirectory is attempting to replicate over a port that doesn't actually exist on the service. Kubernetes appears to handle the translation, which is convenient, but the mismatch may cause issues somewhere that we aren't catching.
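
For illustration, with port and targetPort both set to the containerPort, the rendered ldaps entry and the corresponding global-env-vars value would look roughly like this (a sketch based on the values above):

  - name: ldaps
    port: 1636
    protocol: TCP
    targetPort: 1636

  PD_ENGINE_LDAPS_PRIVATE_PORT: "1636"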

Add securityContexts to testFramework containers

When trying to use an image with a specific user/group (e.g. pingtoolkit), there is currently no way to set the securityContext on initContainers or the finalStep container.

Proposed changes:

  1. Add the ability to provide a securityContext at the levels shown below.
  2. Change the default finalStep image to busybox.
testFramework:
  ...
  #########################################################
  # SecurityContext for all containers
  #########################################################
  securityContext:
    runAsUser: 1000
    runAsGroup: 2000
    ...
    testSteps:
    - name: 01-init-example
      ...
      securityContext:
        runAsUser: ...
    ...
    finalStep:
      securityContext:
        runAsUser: ...

Provide ability to add additional alt-names/alt-ips to private cert generation.

Allow for a privateCert structure to contain optional arrays additionalHosts and additionalIPs:

pingaccess-admin:
  privateCert:
    generate: true
    additionalHosts:
    - pingaccess-admin.west-cluster.example.com
    - pa-admin.west-cluster.example.com
    additionalIPs:
    - 123.45.67.8

In addition, if the ingress for the product is enabled, the host(s) created for that ingress will also be added to the alt-names.

The above example (with an ingress) will create a cert used by pingaccess-admin containing:

Certificate:
    Data:
        ...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=pingaccess-admin
        ...
        X509v3 extensions:
            ...
            X509v3 Subject Alternative Name:
                DNS:rel050-pa-pingaccess-admin.ping-devops.com, DNS:pingaccess-admin.west-cluster.example.com, DNS:pa-admin.west-cluster.example.com, IP Address:123.45.67.8

Update default serviceAccount in workload related to Issue 95

When Issue #95 was resolved, the vault serviceAccountName was still pulling from:

  vault:
    hashicorp:
      serviceAccountName: vault-auth

but that default was removed from the default values.yaml. The new proper location is:

  vault:
    hashicorp:
      annotations:
        serviceAccountName: vault-auth

The workload template should use the new location:

  serviceAccountName: {{ $v.vault.hashicorp.annotations.serviceAccountName }}

Calculate checksum of ConfigMaps based on the data rather than entire file

Currently, checksums are calculated over the entirety of the generated ConfigMaps, which includes not only the data but also labels, annotations, etc.

If an upgrade of the chart deployment is run and some labels change (e.g. the Helm chart version), the checksum changes and everything is redeployed.

The change is to compute the checksum only over the data attribute of the ConfigMap, so the checksum won't change unless the actual data changes.
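
A minimal sketch of how such a checksum annotation could be computed in a workload template, hashing only the rendered ConfigMap's data block (the template path and annotation key are illustrative, not necessarily the chart's actual names):

  {{- $cm := include (print $.Template.BasePath "/global/configmap.yaml") . | fromYaml }}
  annotations:
    checksum/config: {{ $cm.data | toYaml | sha256sum }}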

Add support for PingAccess clustering

With the recent changes to PingAccess 6.2.0, we can now generate an internal private cert to be used to join PingAccess admin and engines into a cluster.

This will remove the product "pingaccess" from the chart and introduce two new product names:

  • pingaccess-admin
  • pingaccess-engine

similar to the names with PingFederate.

Update helm-chart default global image tag value to 2101

The current default image tag value in helm charts is set to 2010 (Oct 2020).

With some of the updates to products and associated helm chart k8s resources (i.e. pingdirectory-cluster), it would be best to update the helm chart default image tag to 2012 (Dec 2020).

Create global-env-vars hosts/ports for all products regardless if enabled

Currently, the hostnames and ports in the configmap global-env-vars are only created if that specific product is enabled. This is great as it keeps the number of environment variables to a minimum.

However, this ConfigMap is used to form the checksum for the products. When a new product is enabled while others are running, the ConfigMap changes and ALL containers are forced to restart, because they detect a change to global-env-vars.

0.3.6 uses incorrect port for Ingresses

In _ingress.tpl, the ingressPort is used as the service port.
I believe the intent is that the ingressPort from values.yaml only be used for variables; the ingress backend should remain pointed at the service port.

Generate x509 certificates for PRIVATE_HOSTNAMEs

To support products (like PingAccess) enabling clustering between admin/engine over a secure channel, a private certificate is required that matches the service name (i.e. PRIVATE_HOSTNAME).

Use the Helm genSignedCert mechanism to create a TLS secret that the workloads can use to set up SSL communication between them.
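
A minimal sketch of this mechanism using the standard Helm genCA and genSignedCert functions (the resource name and SAN list are illustrative):

  {{- $ca := genCA "pingaccess-admin-ca" 365 }}
  {{- $cert := genSignedCert "pingaccess-admin" nil (list "pingaccess-admin" "pingaccess-admin-cluster") 365 $ca }}
  apiVersion: v1
  kind: Secret
  type: kubernetes.io/tls
  metadata:
    name: pingaccess-admin-private-cert
  data:
    tls.crt: {{ $cert.Cert | b64enc }}
    tls.key: {{ $cert.Key | b64enc }}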

Enable creation of a non-default service account for the workload

Currently, pods use either the default service account or, if Vault is enabled, the service account name configured for Vault. However, the chart does not have the ability to create this service account and requires that it has been created externally. In a cluster where workloads are separated by namespace, there would not be a named service account in the namespace to use for Vault. Ideally, I would like the ability for the release to create a named service account for the workload rather than requiring additional setup for the workload on the cluster side.
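
For illustration, a minimal sketch of the ServiceAccount the release could create (the name is illustrative):

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: myrelease-vault-auth
    labels:
      app.kubernetes.io/managed-by: Helm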

Cleaning and making services/ingresses easier to use

For the values.yamls, there are currently a couple of different structures to help build out k8s services and ingresses. This information is used to create hostnames in the global-env-vars configmap. As an example:

  services:
    engine:
      port: 9031
      targetPort: 9031
      dataService: true
  ingress:
    hosts:
      - host: pingfederate-engine._defaultDomain_
        paths:
        - path: /
          backend:
            servicePort: 443

creates:

  PF_ENGINE_PUBLIC_HOSTNAME=pingfederate-engine
  PF_ENGINE_PRIVATE_HOSTNAME=pingfederate-engine.{_defaultDomain_}

We also want to create additional variables of the type:

  PF_ENGINE_PUBLIC_ENGINE_PORT=9031
  PF_ENGINE_PRIVATE_ENGINE_PORT=443

It looks duplicative; however, the second use of ENGINE is there because some products (e.g. PingDirectory) expose multiple ports (e.g. 443, 389, 2443, 636, ...) that you may want lit up publicly, potentially with different hostnames (we'll save that for later).

Having the service and ingress ports in separate structures makes it difficult to (a) manage and (b) process in a predictable manner. We would propose the following; the example uses PingDirectory to help make the point of multiple ports:

  services:
    api:
      containerPort: 8443 <--- changed from targetPort
      servicePort: 1443   <--- changed from port
      ingressPort: 443    <--- new.  moved from ingress
      dataService: true
    data-api:
      containerPort: 9443 <--- changed from targetPort
      servicePort: 2443   <--- changed from port
      ingressPort: 2443   <--- new.  moved from ingress
      dataService: true
  ingress:
    hosts:
      - host: pingdirectory.example.com
        paths:
        - path: /api
          backend:
            serviceName: api
        - path: /directory/v1
          backend:
            serviceName: data-api

would create:

  PD_PUBLIC_HOSTNAME=pingdirectory
  PD_PUBLIC_API_PORT=443
  PD_PUBLIC_DATA_API_PORT=2443

  PD_PRIVATE_HOSTNAME=pingdirectory.example.com
  PD_PRIVATE_API_PORT=443
  PD_PRIVATE_DATA_API_PORT=2443

Change default pingdirectory values (container.resources.requests.cpu=50m and container.replicaCount=1)

Currently, the default cpu request is set to 0, which tells Kubernetes to reserve no CPU when a PingDirectory pod starts. Kubernetes uses this information to find nodes with available resources when scheduling pods. PingDirectory is a resource hog, grabbing up to a couple of CPUs in short bursts, so multiple PingDirectory pods running on the same node can quickly bog down its CPU.

Setting the cpu request to 50m provides at least some reservation of CPU, so that if there are multiple nodes, the load evens out better.

Additionally, the replicaCount will default to 1, since in many development cases there isn't a great need for multiple replicas. If more are needed, simply set pingdirectory.container.replicaCount=2 (or any other number of replicas).
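
The corresponding defaults in values.yaml would look roughly like this (a sketch following the chart's existing container structure):

  pingdirectory:
    container:
      replicaCount: 1
      resources:
        requests:
          cpu: 50m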

Default pingaccess-admin to StatefulSet

In order to provide HA with a PingAccess cluster of admin/engine nodes, the PingAccess Admin must deploy as a StatefulSet with persistence. Otherwise, if the PingAccess Admin goes down, the engines lose connectivity to it, are unable to get further config updates, and subsequently have to bounce and lose their web-session information.

For now, PingFederate console won't be a StatefulSet by default, as it's not required.

Support ability to add annotations to all resources generated

Add the ability to add annotations to all generated resources, similar to the current support for labels. This will allow deployers to specify additional annotations at the global and/or product level. An example of the values yaml would look like:

global:
  annotations:
    app.ping-devops.com/test: test-name

pingaccess-admin:
  annotations:
    app.pingaccess/version: v1234

Rearrange $v.services

We can rearrange how we accept service definitions from the values yaml to allow more flexibility.
The new solution should allow us to:

  1. set any number of annotations on any service
  2. add loadbalancer services as needed for folks that may want to expose ldaps
  3. optionally, prepare for istio virtual service and gateway instead of ingress.

Current:

  services:
    https:
      servicePort: 9999
      containerPort: 9999
      ingressPort: 443
      dataService: true
    clusterbind:
      servicePort: 7600
      containerPort: 7600
      clusterService: true
    clusterfail:
      servicePort: 7700
      containerPort: 7700
      clusterService: true
    clusterExternalDNSHostname:

Proposed:

  services:
    data:
      annotations: {}
      https:
        servicePort: 9999
        containerPort: 9999
        ingressPort: 443
    cluster: 
      annotations: {}
      clusterbind:
        servicePort: 7600
        containerPort: 7600
        clusterService: true
      clusterfail:
        servicePort: 7700
        containerPort: 7700
        clusterService: true
    loadbalancer:
      annotations: {}

Add Support for custom sidecars and initContainers

Sidecars and initContainers are valuable for a multitude of reasons: log forwarding, metric exporting, backup jobs. Because of this, they can also be configured in many ways.

So, for our first foray into sidecars with Helm, it would be nice to have the option available and non-limiting.

Allow for defining three top level maps to provide details for:

  • sidecars
  • initContainers
  • volumes

Examples include:

sidecars:
  pd-access-logger:
    name: pd-access-log-container
    image: pingidentity/pingtoolkit:2105
    volumeMounts:
      - mountPath: /tmp/pd-access-logs/
        name: pd-access-logs
        readOnly: false
  statsd-exporter:
    name: statsd-exporter
    image: prom/statsd-exporter:v0.14.1
    args:
    - "--statsd.mapping-config=/tmp/mapping/statsd-mapping.yml"
    - "--statsd.listen-udp=:8125"
    - "--web.listen-address=:9102"
    ports:
      - containerPort: 9102
        protocol: TCP
      - containerPort: 8125
        protocol: UDP

initContainers:
  init-1:
    name: 01-init
    image: pingidentity/pingtoolkit:2105
    command: ['sh', '-c', 'echo "Initing 1" && touch /tmp/pd-access-logs/init-1']
    volumeMounts:
      - mountPath: /tmp/pd-access-logs/
        name: pd-access-logs
        readOnly: false

volumes:
  pd-access-logs:
    emptyDir: {}
  statsd-mapping:
    configMap:
      name: statsd-config
      items:
        - key: config
          path: statsd-mapping.yml

And within the product definition, allow for 3 includes:

  • includeSidecars
  • includeInitContainers
  • includeVolumes

For example:
pingdirectory:
  enabled: false
  name: pingdirectory
  volumeMounts:
    - mountPath: /opt/access-logs/
      name: pd-access-logs
  includeSidecars:
    - pd-access-logger
  includeInitContainers:
    - init-1
  includeVolumes:
    - pd-access-logs

Unable to set numerous Vault configuration options

Currently, the chart only supports configuring a few of the Hashicorp Vault annotations listed at https://www.vaultproject.io/docs/platform/k8s/injector/annotations on a workload.

In some enterprise environments the default values will not work, and it is necessary to customize the configuration in order to use the chart with Vault. Examples include multiple clusters configured with different auth-paths, and a custom tls-secret when Vault instances are only available internally and use an enterprise CA.
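
For example, a deployer may need to set injector annotations such as these on the workload pods (values are illustrative; the annotation keys come from the Vault injector documentation linked above):

  annotations:
    vault.hashicorp.com/auth-path: auth/k8s-west-cluster
    vault.hashicorp.com/tls-secret: vault-internal-ca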

Support Annotations at Workload Level

Right now annotations are at the top level (global/product) in values.yaml and this adds them to all resources under that product. This may be misleading. We should support annotations at the workload level. For workloads, annotations should be added at .spec.template.metadata as this is where most 3rd party operators look (examples: vault, telegraf-operator).

pingfederate-engine:
  workload:
    annotations:
      telegraf.influxdata.com/class: app

would lead to:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: samir
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pingfederate-engine
    helm.sh/chart: ping-devops-0.5.1
  name: samir-pingfederate-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: samir
      app.kubernetes.io/name: pingfederate-engine
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/class: app

Default ingress annotations on global cannot be overridden

The current annotations for the global ingress are set as follows:

annotations:
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  kubernetes.io/ingress.class: "nginx-public"

We have found that if these are not appropriate for the environment and we need to supply our own ingress annotations block, the pre-seeded annotations are merged with the new ones, causing failures.
E.g., if we set the following in our values file (notice we've removed the ingress class and changed the name of the backend-protocol annotation):

annotations:
      ingress.kubernetes.io/backend-protocol: HTTPS

we get the resulting definition, which causes a failed ingress:

│ Annotations:                     ingress.kubernetes.io/backend-protocol: HTTPS                                    │
│                                  kubernetes.io/ingress.class: nginx-public                                        │
│                                  meta.helm.sh/release-name: [obscured]                                                │
│                                  meta.helm.sh/release-namespace: [obscured]             │
│                                  nginx.ingress.kubernetes.io/backend-protocol: HTTPS                              │

It may be best to ship the default annotations unset, like so:

annotations: {}
    #  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    #  kubernetes.io/ingress.class: "nginx-public"

Revamp vault.hashicorp.secrets value .yaml and support per path secret

Folks usually want one secrets management tool: either completely within the cluster (Bitnami sealed secrets) or completely separate (Vault).

To complete the Vault use case, we should be able to bring env variables into /run/secrets and put other secret files in their respective locations.

Potentially something like this:

  vault:
    enabled: true
    hashicorp:
      annotations:
        secret-volume-path: /run/secrets
      secretPrefix: secret/<user>@pingidentity.com/<namespace>/
      secrets:
      - name: devops-secret.env
        secret: devops-secret.env
      - name: pingaccess.lic
        secret: license
        secret-volume-path: /opt/in/somewhere

This would place the secrets:

  • devops-secret.env into the default path /run/secrets
  • pingaccess.lic into the overridden path /opt/in/somewhere/pingaccess.lic

Add an initContainer that waits for PingDirectory when starting a PingFederate container

Currently, PingFederate Admin cannot start until PingDirectory is running. An initContainer with a wait-for on the PingDirectory server should be added when pingdirectory.enabled=true.

In cases where PingFederate is not actually dependent on PingDirectory, this adds a small startup delay for PingFederate Admin.

The workaround today is to simply bounce the PF Admin server after PD is up and running.
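
A minimal sketch of such an initContainer, assuming an image that provides the common wait-for utility; the image tag, service name, port, and timeout are illustrative:

  initContainers:
  - name: wait-for-pingdirectory
    image: pingidentity/pingtoolkit:2103   # assumes wait-for is on the PATH in this image
    command: ["sh", "-c", "wait-for myrelease-pingdirectory-cluster:1636 -t 300"]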

Server Profiles hardcoded to `master` branch

You should consider removing SERVER_PROFILE_BRANCH=master from each chart to allow the images to pull from whatever the repo default is.

New repos in GitHub are defaulting to `main`, which breaks the images when not using a Ping baseline repo.

Latest helm won't install

Version 0.3.5 is fine, but on 0.3.6 I get:

Error: template: ping-devops/templates/pingfederate-engine/workload.yaml:1:4: executing "ping-devops/templates/pingfederate-engine/workload.yaml" at <include "pinglib.workload" (list . "pingfederate-engine")>: error calling include: template: ping-devops/templates/pinglib/_workload.tpl:167:4: executing "pinglib.workload" at <include "pinglib.merge.templates" (append . "workload")>: error calling include: template: ping-devops/templates/pinglib/_merge-util.tpl:24:30: executing "pinglib.merge.templates" at <include $baseTemplate $paramList>: error calling include: template: ping-devops/templates/pinglib/_workload.tpl:42:28: executing "pinglib.workload.tpl" at <include (print $top.Template.BasePath "/global/configmap.yaml") $top>: error calling include: template: ping-devops/templates/global/configmap.yaml:1:4: executing "ping-devops/templates/global/configmap.yaml" at <include "pinglib.configmap" (list . "global")>: error calling include: template: ping-devops/templates/pinglib/_configmap.tpl:16:6: executing "pinglib.configmap" at <include "pinglib.merge.templates" (append . "configmap")>: error calling include: template: ping-devops/templates/pinglib/_merge-util.tpl:23:36: executing "pinglib.merge.templates" at <include $overrideTemplate $paramList>: error calling include: template: ping-devops/templates/global/configmap.yaml:32:5: executing "global.configmap" at <include "global.public.host.port" (list $top $v "PF_ENGINE" "pingfederate-engine")>: error calling include: template: ping-devops/templates/global/configmap.yaml:70:57: executing "global.public.host.port" at <"_">: invalid value; expected string

Change pingfederate-engine HPA to a default of disabled

With the recent changes in release 0.4.8 and issue #89, the HPA should be disabled by default, since the default cpu request is set to 0. When the request is 0, the HPA can no longer calculate CPU utilization, which results in many warning/error events in k8s.

The fix is simply to change the default value pingfederate-engine.clustering.autoscaling.enabled=false.
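
In values.yaml terms, the changed default would be:

  pingfederate-engine:
    clustering:
      autoscaling:
        enabled: false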

Run Ping containers with non-root security context

Currently working through a deployment into a controlled environment that performs hardened image scans prior to deployment. We have to add the following to the deployment/statefulset manifests to get it to work:

securityContext:
  runAsNonRoot: true
  runAsUser: 1001

Can this please be added in the core chart?

When creating configMapRef's, take into account the proper release name to include

Currently, when a configMapRef is created in the workload, it always assumes a name with a preceding Release.Name.

        envFrom:
        - configMapRef:
            name: {{ $top.Release.Name }}-global-env-vars
            optional: true
        - configMapRef:
            name: {{ $top.Release.Name }}-env-vars
            optional: true

An include should be used to build the configMapRef name so that it honors the addReleaseNameToResources value:

name: {{ include "pinglib.fullname" . }}-env-vars
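
Applied to the workload template, that would look roughly like this (a sketch combining the two snippets above):

        envFrom:
        - configMapRef:
            name: {{ include "pinglib.fullname" . }}-global-env-vars
            optional: true
        - configMapRef:
            name: {{ include "pinglib.fullname" . }}-env-vars
            optional: true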

Add support for importing a secret containing license into the container

Support the ability to add a secret or configMap to a container via a volumeMount. A good use of this practice is bringing product licenses into the container.

The following values.yaml can be added to any product container (using pingfederate-admin as an example).

pingfederate-admin:
  secretVolumes:
    pingfederate-license:
      items:
        license: /opt/in/instance/server/default/conf/pingfederate.lic
        hello: /opt/in/instance/server/default/hello.txt

  configMapVolumes:
    pingfederate-license-props:
      items:
        license.props: /opt/in/etc/pingfederate.properties

In this case, a secret (called pingfederate-license) and a configMap (called pingfederate-license-props) bring a few keys (license, hello, and license.props) into the container as specific files. The result looks like:

Containers:
  pingfederate-admin:
    Mounts:
      /opt/in/etc/pingfederate.properties from pingfederate-license-props (ro,path="pingfederate.properties")
      /opt/in/instance/server/default/conf/pingfederate.lic from pingfederate-license (ro,path="pingfederate.lic")
      /opt/in/instance/server/default/hello.txt from pingfederate-license (ro,path="hello.txt")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qfwz (ro)
Volumes:
  pingfederate-license:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  pingfederate-license
    Optional:    false
  pingfederate-license-props:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      pingfederate-license-props
    Optional:  false

Update default global image tag to 2102 (Feb 2021)

Update the default global image tag in the base values.yaml and remove edge from the example yamls.

This is in part to update the default to the latest GA'd version of the images. Additionally, as of sprint 2103 (March 2021), all images will, by default, run as unprivileged. A separate issue will be created to add default security contexts for some images (e.g. PingFederate and PingDirectory).

Unable to mount secretVolume and configMapVolumes simultaneously

I have the following in my custom values.yaml for pingfederate-admin:

pingfederate-admin:
  enabled: true

  secretVolumes:
    pingfederate-license:
      items:
        license: /opt/in/instance/server/default/conf/pingfederate.lic

  configMapVolumes:
    pingfederate-props:
      items:
        pf-props: /opt/in/etc/pingfederate.properties 

I created the secret and configmap using the following kubectl commands:

kubectl create secret generic pingfederate-license --from-file=license=<path-to-license-file>
kubectl create configmap pingfederate-props --from-file=pf-props=<path-to-properties-file>

However, the helm chart seems to only use the configMapVolume if both volumes are defined as shown in my values.yaml config. When I remove the configMapVolumes block, and re-run helm, the license mount shows up.

I even tried switching the order these are defined in values.yaml and saw the same behavior in chart versions 0.5.1 and 0.5.3.

Let me know if I am doing something wrong here and how I can fix this.

Add clusterServiceName to product services with clusters

When PingFederate is deployed across multiple k8s clusters with pingfederate-admin in only one of them, the other k8s clusters are left without an admin, and the cluster-service name is hard to set automatically based on the product name (i.e. pingfederate-admin -vs- pingfederate-engine). Some customers want to provide their own clusterServiceName.

The request is to add an additional setting in the product.services structure to specify a custom clusterServiceName.

Examples for both pingfederate-admin and pingfederate-engine:

pingfederate-admin:
  services:
    clusterServiceName: pingfederate-cluster

pingfederate-engine:
  services:
    clusterServiceName: pingfederate-cluster

In this example it's intentional that the cluster service name be the same for both the admin and the engine; by default, this would be the only one set. If clusterServiceName isn't set, it defaults to {product.name}-cluster.

And in all cases, the Helm release name would be appended/prepended as specified by the addReleaseNameToResources setting.

Update image.tag to 2103 (March 2021)

In the 2103 tag, all containers moved to a default unprivileged state. The default tag needs to be modified to 2103. Some areas that will need to change include:

  • Security Context on StatefulSets to include an fsGroup
  • Update the services containerPort to unprivileged ports (i.e. 636 --> 1636)
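
A sketch of the kind of changes involved (the securityContext placement under workload and the fsGroup value are assumptions; the services keys follow the chart's existing structure):

  pingdirectory:
    workload:
      securityContext:
        fsGroup: 9999        # illustrative group id
    services:
      ldaps:
        servicePort: 636
        containerPort: 1636  # unprivileged port inside the container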
