pingidentity / helm-charts
License: Apache License 2.0
Currently, the default resource requests and limits for all products are:
resources:
  requests:
    cpu: 500m
    memory: 500Mi
  limits:
    cpu: 4
    memory: 8Gi
For some products, such as pingfederate-admin, the 500Mi request is woefully small. For others, such as pingdelegator, it is far larger than needed.
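As a point of comparison, a per-product override in values.yaml might look like the following sketch (the container.resources path is an assumption based on the chart's conventions; verify it against the chart's values schema):

```yaml
# Hypothetical per-product override; key paths are illustrative
pingfederate-admin:
  container:
    resources:
      requests:
        cpu: 0        # reserve no CPU, per the recommendation table
        memory: 1Gi   # PF admin needs more than the 500Mi default
      limits:
        cpu: 2
        memory: 4Gi
```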
An example from a simple baseline looks like:
│ NAME PF READY RESTARTS STATUS CPU MEM %CPU/R %MEM/R %CPU/L %MEM/L IP NODE │
│ r044-pingfederate-admin-d99547768-r9dts ● 1/1 0 Running 12 718 2 143 0 8 10.52.11.161 ip-10-52-17-104.… │
│ r044-pingfederate-engine-5db68d687c-zr78n ● 1/1 0 Running 14 777 2 155 0 9 10.52.48.87 ip-10-52-33-50.u… │
Recommending:
| Product | CPU Request | CPU Limit | Mem Request | Mem Limit |
| --- | --- | --- | --- | --- |
| pingaccess-* | 0 | 2 | 1Gi | 4Gi |
| pingdataconsole | 0 | 2 | 500Mi | 2Gi |
| pingdatagovernance | 0 | 2 | 1500Mi | 4Gi |
| pingdatagovernancepap | 0 | 2 | 750Mi | 2Gi |
| pingdatasync | 0 | 2 | 750Mi | 2Gi |
| pingdelegator | 0 | 2 | 32Mi | 64Mi |
| pingdirectory | 0 | 2 | 1500Mi | 4Gi |
| pingfederate-* | 0 | 2 | 1Gi | 4Gi |
The resulting template output when using the everything.yaml example:
#-------------------------------------------------------------------------------------
#
# Product Workload cpu-R cpu-L mem-R mem-L Ing
# --------------------- ----------- ----- ----- ----- ----- -----
# √ pingaccess-admin deployment 0 2 1Gi 4Gi false
# √ pingaccess-engine deployment 0 2 1Gi 4Gi false
# √ pingdataconsole deployment 0 2 .5Gi 2Gi false
# √ pingdatagovernance deployment 0 2 1.5Gi 4Gi false
# √ pingdatagovernancepap deployment 0 2 .75Gi 2Gi false
# √ pingdatasync deployment 0 2 .75Gi 2Gi false
# √ pingdelegator deployment 0 500m 32Mi 64Mi false
# √ pingdirectory statefulset 0 2 2Gi 8Gi false
# √ pingfederate-admin deployment 0 2 1Gi 4Gi false
# √ pingfederate-engine deployment 0 2 1Gi 4Gi false
#
# √ ldap-sdk-tools deployment 0 0 0 0 false
# √ pd-replication-timing deployment 0 0 0 0 false
#
#-------------------------------------------------------------------------------------
The PingFederate engine curl initContainer should have a resources block to avoid pod-scheduling issues in clusters/namespaces that require all pod resources to be set.
I'd like to be able to override this image path to point to our local repo, as the cluster has no internet access.
Thanks
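A sketch of what such an override could look like in values.yaml; the key names below are illustrative, not the chart's actual schema, and the registry hostname is a placeholder:

```yaml
# Hypothetical override; verify key names against the chart's values schema
pingfederate-engine:
  initContainers:
    curl:
      image: registry.internal.example.com/curlimages/curl:latest
```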
The current output of the default ports looks redundant:
apiVersion: v1
data:
  PA_ENGINE_ACME_PRIVATE_PORT: "8080"
  PA_ENGINE_ADMIN_PRIVATE_PORT: "9000"
  PA_ENGINE_ENGINE_PRIVATE_PORT: "3000"
  PA_ENGINE_PRIVATE_HOSTNAME: pingaccess
  PD_ENGINE_HTTPS_PRIVATE_PORT: "443"
  PD_ENGINE_LDAP_PRIVATE_PORT: "389"
  PD_ENGINE_LDAPS_PRIVATE_PORT: "636"
  PD_ENGINE_PRIVATE_HOSTNAME: pingdirectory
  PDG_ENGINE_HTTPS_PRIVATE_PORT: "443"
  PDG_ENGINE_PRIVATE_HOSTNAME: pingdatagovernance
  PF_ADMIN_ADMIN_PRIVATE_PORT: "9999"
  PF_ADMIN_PRIVATE_HOSTNAME: pingfederate-admin
  PF_ENGINE_ENGINE_PRIVATE_PORT: "9031"
  PF_ENGINE_PRIVATE_HOSTNAME: pingfederate-engine
kind: ConfigMap
metadata:
  name: global-env-vars
This reflects the naming structure <port_name>_<PRIVATE|PUBLIC>_PORT. Since this structure provides the flexibility of renaming ports, the default port names should be changed to produce more comprehensible variable names, like the PingDirectory vars.
Requested enhancement: change the PF admin and engine port names to "https". This should give better names that can more appropriately be used as a convention:
PF_ADMIN_HTTPS_PRIVATE_PORT: "9999"
PF_ADMIN_HTTPS_PUBLIC_PORT: "443"
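In values.yaml terms, the rename might look like the following sketch (the service map keys shown are assumptions modeled on the service structures used elsewhere in the chart):

```yaml
# Hypothetical: "https" replaces the product-specific default port name
pingfederate-admin:
  services:
    https:
      servicePort: 9999
      containerPort: 9999
      ingressPort: 443
      dataService: true
```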
services:
  ldap:
    servicePort: 389
    containerPort: 1389
    dataService: true
  ldaps:
    servicePort: 636
    containerPort: 1636
    dataService: true
    clusterService: true
  https:
    servicePort: 443
    containerPort: 1443
    ingressPort: 443
    dataService: true
When service ports differ from container ports, this leads to:
# Source: ping-devops/templates/pingdirectory/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: samir-pd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pingdirectory
    helm.sh/chart: ping-devops-0.5.6
  name: samir-pd-pingdirectory
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 1443
  - name: ldap
    port: 389
    protocol: TCP
    targetPort: 1389
  - name: ldaps
    port: 636
    protocol: TCP
    targetPort: 1636
  selector:
    app.kubernetes.io/instance: samir-pd
    app.kubernetes.io/name: pingdirectory
and:
PD_ENGINE_PRIVATE_PORT_HTTPS: "443"
PD_ENGINE_PRIVATE_PORT_LDAP: "389"
PD_ENGINE_PRIVATE_PORT_LDAPS: "636"
and:
----- Starting hook: /opt/staging/hooks/80-post-start.sh
Waiting until PingDirectory service is running on this Server (samir-pd-pingdirectory-1.samir-pd-pingdirectory-cluster)
samir-pd-pingdirectory-1.samir-pd-pingdirectory-cluster:1636
Meaning, PD is looking to replicate on a port of a service that doesn't actually exist. It looks like Kubernetes is handling the interpretation, which is nice, but this may cause an issue somewhere that we aren't catching.
As soon as the docker images for 2105 are released, the default tag will be switched to 2105.
When trying to use an image with a specific user/group (i.e. pingtoolkit), there is currently no way to affect the securityContext on initContainers or the finalStep container.
testFramework:
  ...
  #########################################################
  # SecurityContext for all containers
  #########################################################
  securityContext:
    runAsUser: 1000
    runAsGroup: 2000
  ...
  testSteps:
    - name: 01-init-example
      ...
      securityContext:
        runAsUser: ...
      ...
  finalStep:
    securityContext:
      runAsUser: ...
Allow the privateCert structure to contain optional arrays additionalHosts and additionalIPs:
pingaccess-admin:
  privateCert:
    generate: true
    additionalHosts:
      - pingaccess-admin.west-cluster.example.com
      - pa-admin.west-cluster.example.com
    additionalIPs:
      - 123.45.67.8
In addition, if the ingress for the product is enabled, the host(s) created for that ingress will also be added to the alt-names.
The above example (with an ingress) will create a cert used by pingaccess-admin containing:
Certificate:
Data:
...
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=pingaccess-admin
...
X509v3 extensions:
...
X509v3 Subject Alternative Name:
DNS:rel050-pa-pingaccess-admin.ping-devops.com, DNS:pingaccess-admin.west-cluster.example.com, DNS:pa-admin.west-cluster.example.com, IP Address:123.45.67.8
When Issue #95 was resolved, the vault serviceAccountName was still pulling from:
vault:
  hashicorp:
    serviceAccountName: vault-auth
but that default was removed from the default values.yaml. The new proper location is:
vault:
  hashicorp:
    annotations:
      serviceAccountName: vault-auth
The workload template should use the new location:
serviceAccountName: {{ $v.vault.hashicorp.annotations.serviceAccountName }}
The current DNS_QUERY_LOCATION:
DNS_QUERY_LOCATION: "{{ include "pinglib.fullname" . }}-cluster.{{ $top.Release.Namespace }}.svc.cluster.local"
should reference the admin cluster rather than the engine cluster.
Specifically preStop and postStart. Some of our products have in-memory state (PF), and others need some work done for a graceful shutdown (PA, PD). As a start, could we just add these hooks as empty objects {} in the appropriate location for sts and deploy type workloads?
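A minimal sketch of what those empty hook objects might look like in values.yaml (the workload.lifecycle path is an assumption, not the chart's confirmed schema):

```yaml
# Hypothetical location for the empty lifecycle hooks
pingfederate-engine:
  workload:
    lifecycle:
      postStart: {}   # deployers can fill in an exec/httpGet handler
      preStop: {}
```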
The default for persistentvolume.enabled in the values.yaml is set to:
global:
  workload:
    statefulSet:
      persistentvolume:
        enabled: true
However, if the deployer sets this to false, when the merge happens it will merge true and false ==> true.
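For reference, the deployer override that triggers the problem would be a sketch like this, where the false value is lost when merged with the true default:

```yaml
# Deployer values.yaml: this false is overridden back to true by the merge
global:
  workload:
    statefulSet:
      persistentvolume:
        enabled: false
```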
Currently, checksums are calculated over the entirety of the generated ConfigMaps, which includes not only the data but also labels, annotations, etc.
If an upgrade of the chart deployment is run and some labels change (e.g. the Helm chart version), the checksum changes and everything is redeployed.
The change will compute the checksum over only the data attribute of the ConfigMap, so the checksum won't change unless the actual data changes.
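A sketch of the data-only checksum in the workload's pod template, assuming a hypothetical helper that renders just the ConfigMap's data map (the helper name "pinglib.configmap.data" is illustrative, not the chart's actual template):

```yaml
# Pod template annotation: hash only the rendered data, not labels/annotations
annotations:
  checksum/config: {{ include "pinglib.configmap.data" . | sha256sum }}
```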
With the recent changes to PingAccess 6.2.0, we can now generate an internal private cert to be used to join PingAccess admin and engines into a cluster.
This will remove the product "pingaccess" from the chart and introduce two new product names, pingaccess-admin and pingaccess-engine, similar to the naming with PingFederate.
The current default image tag value in helm charts is set to 2010 (Oct 2020).
With some of the updates to products and associated helm chart k8s resources (i.e. pingdirectory-cluster), it would be best to update the helm chart default image tag to 2012 (Dec 2020).
Currently, the hostnames and ports in the configmap global-env-vars are only created if that specific product is enabled. This is great, as it keeps the number of environment variables to a minimum.
However, this config map is used to form the checksum for the products. Once a new product is enabled while others are running, the config map changes, and ALL containers will be forced to restart, as they sense a change to global-env-vars.
_ingress.tpl uses the ingressPort as the service port.
I believe the correct intention is that the ingressPort from values.yaml be used only for variables; the ingress itself should remain pointing at the service port.
To support products (like PingAccess) enabling clustering between admin/engine over a secure channel, a private certificate is required that matches the service name (i.e. PRIVATE_HOSTNAME).
Use the helm genSignedCert mechanisms to create a tls secret to be used by the workloads to set up SSL communication between workloads.
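Helm's built-in Sprig functions genCA and genSignedCert can produce such a secret; a sketch of a template doing so (the resource and DNS names here are illustrative, and a real template would derive them from the chart's naming helpers):

```yaml
{{- $ca := genCA "pingaccess-private-ca" 365 }}
{{- $cert := genSignedCert "pingaccess-admin" (list) (list "pingaccess-admin" "pingaccess-admin-cluster") 365 $ca }}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: pingaccess-admin-private-cert
data:
  tls.crt: {{ $cert.Cert | b64enc }}
  tls.key: {{ $cert.Key | b64enc }}
```

Note that certs generated this way are regenerated on every `helm upgrade`, which may or may not be acceptable for cluster joining.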
In environments where the curl docker image is not available (or not yet approved for use), it may be useful to have a values parameter to switch the curl initContainer for the PingFederate engine on or off, using a wait-for command on the main container instead.
Currently, pods use either the default service account or, if Vault is enabled, the service account name configured for Vault. However, the chart does not have the ability to create this service account and requires that it has been created externally. In a cluster where workloads are separated by namespace, there would not be a named service account in the namespace to use for Vault. Ideally, I would like the ability for the release to create a named service account for the workload rather than requiring additional setup for the workload on the cluster side.
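A sketch of the requested capability, modeled on the common Helm chart convention for service accounts (the key names are assumptions, not the chart's current schema):

```yaml
# Hypothetical values sketch following the usual serviceAccount convention
serviceAccount:
  create: true          # the release creates the ServiceAccount
  name: vault-auth      # name the workload (and Vault auth) will use
```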
For the values.yamls, there are currently a couple of different structures to help build out k8s services and ingresses. This information is used to create hostnames in the global-env-vars configmap. As an example:
services:
  engine:
    port: 9031
    targetPort: 9031
    dataService: true
ingress:
  hosts:
    - host: pingfederate-engine._defaultDomain_
      paths:
        - path: /
          backend:
            servicePort: 443
creates:
PF_ENGINE_PUBLIC_HOSTNAME=pingfederate-engine
PF_ENGINE_PRIVATE_HOSTNAME=pingfederate-engine.{_defaultDomain_}
We also want to create additional variables of the type:
PF_ENGINE_PUBLIC_ENGINE_PORT=9031
PF_ENGINE_PRIVATE_ENGINE_PORT=443
It looks duplicative; however, the second use of ENGINE is because there are some products (e.g. PingDirectory) where you might have multiple ports (e.g. 443, 289, 2443, 636, ...) that you want lit up publicly, and with potentially different hostnames (we'll save that for later).
Having the service and ingress ports in separate structures makes it difficult to a) manage and b) process this in a predictable manner. I think we would propose the following. I'm using an example with PD to help make the point of multiple ports:
services:
  api:
    containerPort: 8443   # changed from targetPort
    servicePort: 1443     # changed from port
    ingressPort: 443      # new; moved from ingress
    dataService: true
  data-api:
    containerPort: 9443   # changed from targetPort
    servicePort: 2443     # changed from port
    ingressPort: 2443     # new; moved from ingress
    dataService: true
ingress:
  hosts:
    - host: pingdirectory.example.com
      paths:
        - path: /api
          backend:
            serviceName: api
        - path: /directory/v1
          backend:
            serviceName: data-api
would create:
PD_PUBLIC_HOSTNAME=pingdirectory
PD_PUBLIC_API_PORT=443
PD_PUBLIC_DATA_API_PORT=2443
PD_PRIVATE_HOSTNAME=pingdirectory.example.com
PD_PRIVATE_API_PORT=443
PD_PRIVATE_DATA_API_PORT=2443
Currently, the default cpu request is set to 0, which tells Kubernetes to reserve no CPU when a PingDirectory pod starts. Kubernetes uses this information to find nodes with available resources when scheduling pods. PingDirectory is a resource hog, grabbing up to a couple of CPUs in short timeframes. When multiple PingDirectory pods run on the same node, they can quickly bog down the CPU.
Setting the cpu request to 50m provides at least some reservation of CPU, so that if there are multiple nodes, the load will even out better.
Additionally, set the replicaCount to 1 by default, as in many development cases there isn't a great need for multiple replicas. If more are needed, simply set pingdirectory.container.replicaCount=2 or any number of replicas.
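Taken together, the proposed defaults would look like this in values.yaml (the container path follows the replicaCount reference above, but should be verified against the chart's schema):

```yaml
pingdirectory:
  container:
    replicaCount: 1       # single replica by default for development
    resources:
      requests:
        cpu: 50m          # small reservation so the scheduler spreads pods
```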
In order to provide HA with a PingAccess cluster between admin/engine nodes, the PingAccess Admin must deploy as a StatefulSet with persistence. Otherwise, if the PingAccess Admin goes down, the engines would lose connectivity to that node, be unable to get further config updates, and subsequently have to bounce and lose their web-session information.
For now, PingFederate console won't be a StatefulSet by default, as it's not required.
Add the ability to add annotations to all resources generated similar to current support for Labels. This will allow deployers to specify additional annotations at either the global and/or product level. An example of the values yaml would look like:
global:
  annotations:
    app.ping-devops.com/test: test-name
pingaccess-admin:
  annotations:
    app.pingaccess/version: v1234
We can rearrange how we accept service definitions from the values yaml to allow more flexibility.
The new solution should allow us to:
Current:
services:
  https:
    servicePort: 9999
    containerPort: 9999
    ingressPort: 443
    dataService: true
  clusterbind:
    servicePort: 7600
    containerPort: 7600
    clusterService: true
  clusterfail:
    servicePort: 7700
    containerPort: 7700
    clusterService: true
  clusterExternalDNSHostname:
Proposed:
services:
  data:
    annotations: {}
    https:
      servicePort: 9999
      containerPort: 9999
      ingressPort: 443
  cluster:
    annotations: {}
    clusterbind:
      servicePort: 7600
      containerPort: 7600
      clusterService: true
    clusterfail:
      servicePort: 7700
      containerPort: 7700
      clusterService: true
  loadbalancer:
    annotations: {}
Now that the helm-chart is set for 2101, the default should not check for 2012.
Once official images are released, change the default image tag to 2106
Sidecars and initContainers are valuable for a multitude of reasons: log forwarding, metric exporting, backup jobs. Because of this, they can also be configured in many ways.
So for our first foray into sidecars with helm, it would be nice to just have the option available and non-limiting.
Allow for defining three top-level maps (sidecars, initContainers, and volumes) to provide details for each. Examples include:
sidecars:
  pd-access-logger:
    name: pd-access-log-container
    image: pingidentity/pingtoolkit:2105
    volumeMounts:
      - mountPath: /tmp/pd-access-logs/
        name: pd-access-logs
        readOnly: false
  statsd-exporter:
    name: statsd-exporter
    image: prom/statsd-exporter:v0.14.1
    args:
      - "--statsd.mapping-config=/tmp/mapping/statsd-mapping.yml"
      - "--statsd.listen-udp=:8125"
      - "--web.listen-address=:9102"
    ports:
      - containerPort: 9102
        protocol: TCP
      - containerPort: 8125
        protocol: UDP
initContainers:
  init-1:
    name: 01-init
    image: pingidentity/pingtoolkit:2105
    command: ['sh', '-c', 'echo "Initing 1" && touch /tmp/pd-access-logs/init-1']
    volumeMounts:
      - mountPath: /tmp/pd-access-logs/
        name: pd-access-logs
        readOnly: false
volumes:
  pd-access-logs:
    emptyDir: {}
  statsd-mapping:
    configMap:
      name: statsd-config
      items:
        - key: config
          path: statsd-mapping.yml
And within the product definition, allow for 3 includes:
pingdirectory:
  enabled: false
  name: pingdirectory
  volumeMounts:
    - mountPath: /opt/access-logs/
      name: pd-access-logs
  includeSidecars:
    - pd-access-logger
  includeInitContainers:
    - init-1
  includeVolumes:
    - pd-access-logs
Update the default image.tag to 2104 reflecting the April 2021 release of images.
Currently, the chart only supports configuring a few of the Hashicorp Vault annotations listed at https://www.vaultproject.io/docs/platform/k8s/injector/annotations on a workload.
In some enterprise environments, the default values will not work, and it is necessary to customize the configuration in order to use the chart with Vault. Examples include when multiple clusters are configured at different auth-paths, and when a custom tls-secret is needed because Vault instances are only available internally and use an enterprise CA.
Right now, annotations are at the top level (global/product) in values.yaml, and this adds them to all resources under that product. This may be misleading. We should support annotations at the workload level. For workloads, annotations should be added at .spec.template.metadata, as this is where most 3rd-party operators look (examples: vault, telegraf-operator).
pingfederate-engine:
  workload:
    annotations:
      telegraf.influxdata.com/class: app
would lead to:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: samir
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pingfederate-engine
    helm.sh/chart: ping-devops-0.5.1
  name: samir-pingfederate-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: samir
      app.kubernetes.io/name: pingfederate-engine
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        telegraf.influxdata.com/class: app
The current annotations for the global ingress are set in helm-charts/charts/ping-devops/values.yaml (lines 39 to 41 at commit 9f5b096).
We have found that if these are not appropriate for the environment and we need to have our own ingress annotations block, these pre-seeded annotations are mixed with the new ones, causing failure.
E.g., if we set the following in our values file (notice we've removed the ingress class and changed the name of the backend-protocol annotation):
annotations:
  ingress.kubernetes.io/backend-protocol: HTTPS
we get the resulting definition, which causes a failed ingress:
│ Annotations: ingress.kubernetes.io/backend-protocol: HTTPS │
│ kubernetes.io/ingress.class: nginx-public │
│ meta.helm.sh/release-name: [obscured] │
│ meta.helm.sh/release-namespace: [obscured] │
│ nginx.ingress.kubernetes.io/backend-protocol: HTTPS │
It may be best to unset the default annotations as such:
annotations: {}
# nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# kubernetes.io/ingress.class: "nginx-public"
Folks usually want one secrets-management tool: either completely within the cluster (Bitnami sealed secrets) or completely separate (Vault).
To complete the Vault use case, we should be able to bring env variables into /run/secrets and put other secret files in their respective locations.
Potentially something like this:
vault:
  enabled: true
  hashicorp:
    annotations:
      secret-volume-path: /run/secrets
    secretPrefix: secret/<user>@pingidentity.com/<namespace>/
    secrets:
      - name: devops-secret.env
        secret: devops-secret.env
      - name: pingaccess.lic
        secret: license
        secret-volume-path: /opt/in/somewhere
This would place the secrets:
- devops-secret.env into the default path /run/secrets
- pingaccess.lic into the specified path /opt/in/somewhere/pingaccess.lic
Currently, PingFederate Admin cannot start until PingDirectory is running. An init container with a wait-for on the PingDirectory server needs to be added if pingdirectory.enabled = true.
In the case where it's not dependent, this will add a little delay to PingFederate Admin starting up.
The workaround today is to simply bounce the PF Admin server after PD is up and running.
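A sketch of such a wait-for init container (the image, service name, and port here are illustrative; the chart would template the actual values from the PingDirectory service definition):

```yaml
# Hypothetical initContainer added to the pingfederate-admin workload
initContainers:
  - name: wait-for-pingdirectory
    image: busybox:1.33
    # Poll the PingDirectory LDAPS service port until it accepts connections
    command:
      - sh
      - -c
      - until nc -z pingdirectory 1636; do echo "waiting for pingdirectory"; sleep 3; done
```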
Hi. Would you be able to add support for affinity for the engines? We'd like to avoid the pods clustering within the same node and spread them instead for further resilience.
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
Thanks
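For reference, the kind of pod anti-affinity being asked for might look like this in values.yaml (the top-level affinity key is an assumption about where the chart would accept it; the inner structure is standard Kubernetes):

```yaml
# Hypothetical values sketch: prefer spreading engine pods across nodes
pingfederate-engine:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/name: pingfederate-engine
            topologyKey: kubernetes.io/hostname
```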
You should consider removing SERVER_PROFILE_BRANCH=master from each chart to allow the images to just pull from whatever the repo default is.
New repos in GitHub are defaulting to `main`, which breaks the images when not using a Ping baseline repo.
Version 0.3.5 is fine but on 0.3.6 I get
Error: template: ping-devops/templates/pingfederate-engine/workload.yaml:1:4: executing "ping-devops/templates/pingfederate-engine/workload.yaml" at <include "pinglib.workload" (list . "pingfederate-engine")>: error calling include: template: ping-devops/templates/pinglib/_workload.tpl:167:4: executing "pinglib.workload" at <include "pinglib.merge.templates" (append . "workload")>: error calling include: template: ping-devops/templates/pinglib/_merge-util.tpl:24:30: executing "pinglib.merge.templates" at <include $baseTemplate $paramList>: error calling include: template: ping-devops/templates/pinglib/_workload.tpl:42:28: executing "pinglib.workload.tpl" at <include (print $top.Template.BasePath "/global/configmap.yaml") $top>: error calling include: template: ping-devops/templates/global/configmap.yaml:1:4: executing "ping-devops/templates/global/configmap.yaml" at <include "pinglib.configmap" (list . "global")>: error calling include: template: ping-devops/templates/pinglib/_configmap.tpl:16:6: executing "pinglib.configmap" at <include "pinglib.merge.templates" (append . "configmap")>: error calling include: template: ping-devops/templates/pinglib/_merge-util.tpl:23:36: executing "pinglib.merge.templates" at <include $overrideTemplate $paramList>: error calling include: template: ping-devops/templates/global/configmap.yaml:32:5: executing "global.configmap" at <include "global.public.host.port" (list $top $v "PF_ENGINE" "pingfederate-engine")>: error calling include: template: ping-devops/templates/global/configmap.yaml:70:57: executing "global.public.host.port" at <"_">: invalid value; expected string
With the recent changes in release 0.4.8 and issue #89, the HPA should be disabled by default, since the default cpu request is set to 0. When set to 0, the HPA can no longer calculate CPU utilization, which results in many warning/error events in k8s.
The fix will be to simply change the default value pingfederate-engine.clustering.autoscaling.enabled=false.
Currently working through a deployment into a controlled environment that performs hardened image scans prior to deployment. We have to add the following to the deployment/statefulset manifests to get it to work:
securityContext:
  runAsNonRoot: true
  runAsUser: 1001
Can this please be added in the core chart?
With the release of PingAuthorize/PAP 8.3, add support for deploying pingdatagovernancepap instances.
Currently, when a configMapRef is created in the workload, it always assumes a name with a preceding Release.Name:
envFrom:
  - configMapRef:
      name: {{ $top.Release.Name }}-global-env-vars
      optional: true
  - configMapRef:
      name: {{ $top.Release.Name }}-env-vars
      optional: true
The pinglib.fullname include should be used instead, to mimic the name used when creating the ConfigMap and to honor the setting of the addReleaseNameToResources value:
name: {{ include "pinglib.fullname" . }}-env-vars
Support the ability to add a secret or configMap to a container via a VolumeMount. A good use of this practice is to bring in product licenses into the container.
The following values.yaml can be added to any product (using pingfederate-admin as an example) container:
pingfederate-admin:
  secretVolumes:
    pingfederate-license:
      items:
        license: /opt/in/instance/server/default/conf/pingfederate.lic
        hello: /opt/in/instance/server/default/hello.txt
  configMapVolumes:
    pingfederate-license-props:
      items:
        license.props: /opt/in/etc/pingfederate.properties
In this case, a secret (called pingfederate-license) and a configMap (called pingfederate-license-props) will bring a couple of key values (license, hello) and (license.props) into the container as specific files. The results will look like:
Containers:
  pingfederate-admin:
    Mounts:
      /opt/in/etc/pingfederate.properties from pingfederate-license-props (ro,path="pingfederate.properties")
      /opt/in/instance/server/default/conf/pingfederate.lic from pingfederate-license (ro,path="pingfederate.lic")
      /opt/in/instance/server/default/hello.txt from pingfederate-license (ro,path="hello.txt")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qfwz (ro)
Volumes:
  pingfederate-license:
    Type: Secret (a volume populated by a Secret)
    SecretName: pingfederate-license
    Optional: false
  pingfederate-license-props:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: pingfederate-license-props
    Optional: false
Update the default global image tag in base values.yaml and remove edge from example yamls.
This was in part to update the default to the latest GA'd version of the images. Additionally, as of sprint 2103 (March 2021), all images will, by default, run as unprivileged. A different issue will be created to create some default security contexts for some images (i.e. PingFederate and PingDirectory)
Effect Summary: If you change the image name, PD/PF clusters will not be created properly.
Cause: https://github.com/pingidentity/helm-charts/blob/master/charts/ping-devops/templates/pinglib/_service-cluster.tpl uses pinglib.fullimagename.
I have the following in my custom values.yaml for pingfederate-admin:
pingfederate-admin:
  enabled: true
  secretVolumes:
    pingfederate-license:
      items:
        license: /opt/in/instance/server/default/conf/pingfederate.lic
  configMapVolumes:
    pingfederate-props:
      items:
        pf-props: /opt/in/etc/pingfederate.properties
I created the secret and configmap using the following kubectl commands:
kubectl create secret generic pingfederate-license --from-file=license=<path-to-license-file>
kubectl create configmap pingfederate-props --from-file=pf-props=<path-to-properties-file>
However, the helm chart seems to use only the configMapVolumes block when both volume types are defined as shown in my values.yaml config. When I remove the configMapVolumes block and re-run helm, the license mount shows up. I even tried switching the order they are defined in values.yaml and saw the same behavior in chart versions 0.5.1 and 0.5.3. Let me know if I am doing something wrong here and how I can fix this.
Is there currently a way with this helm chart to get a k8s Job that does backups to a given destination? Like we have here in the aws k8s files: https://github.com/pingidentity/ping-cloud-base/blob/master/k8s-configs/ping-cloud/base/pingdirectory/server/aws/backup.yaml ?
Or what is the common way of doing a backup of the PingDirectory data in k8s?
Best regards
Dominik
There is a situation with PingFederate when deployed across k8s clusters where the pingfederate-admin is in only one k8s cluster. This leaves the other k8s clusters without an admin, and the naming for the cluster-service is hard to set automatically based on the product name (i.e. pingfederate-admin -vs- pingfederate-engine). Some customers want to provide their own clusterServiceName.
Request is to add an additional setting in the product.services structure to specify a clusterServiceName if a custom one is requested.
Examples for both pingfederate-admin and pingfederate-engine
pingfederate-admin:
  services:
    clusterServiceName: pingfederate-cluster
pingfederate-engine:
  services:
    clusterServiceName: pingfederate-cluster
In this example, it's intentional that the cluster service name be the same for both the admin and the engine; by default, this would be the only one set. If clusterServiceName isn't set, it would default to {product.name}-cluster. In all cases, the helm release name would be appended/prepended as specified by the addReleaseNameToResource setting.
In the 2103 tag, all containers moved to a default unprivileged state. The default tag needs to be modified to 2103. Some areas that will need to change include: