kong/charts
Helm chart for Kong
License: Apache License 2.0
Gents,
I am experiencing an issue here, and I am sure I am just missing something really small. I am test-driving Kong in my dev Kubernetes cluster. The Kong configuration documentation explains that Postgres and Cassandra are the two supported databases. This chart has a Postgres database as a dependency that can be used to run Kong; the default chart configuration runs Kong in DB-less mode using ConfigMaps.
My use case is to run Kong against an external, already-configured Postgres database. The documentation states that one needs to add the database connection parameters to the `env` section of values.yaml to achieve that. I have done exactly that, but unfortunately I am not able to get my DB initialized by Kong. The job that is supposed to initialize the DB hangs on "waiting for database". Everything works well when I enable the database sub-chart, and also when I run in DB-less mode, but it hangs when I use my external database.
Here is my values.yaml file:
# Default values for Kong's Helm Chart.
# Declare variables to be passed into your templates.
#
# Sections:
# - Kong parameters
# - Ingress Controller parameters
# - Postgres sub-chart parameters
# - Miscellaneous parameters
# - Kong Enterprise parameters
# -----------------------------------------------------------------------------
# Kong parameters
# -----------------------------------------------------------------------------
# Specify Kong configurations
# Kong configurations guide https://docs.konghq.com/latest/configuration
# Values here take precedence over values from other sections of values.yaml,
# e.g. setting pg_user here will override the value normally set when postgresql.enabled
# is set below. In general, you should not set values here if they are set elsewhere.
env:
  log_level: "info"
  plugins: "bundled,oidc"
  database: "postgres"
  cassandra_contact_points: ${db_host}
  pg_host: ${db_host}
  pg_port: ${db_port}
  pg_user: ${db_username}
  pg_password: ${db_password}
  pg_database: ${db_name}
  pg_ssl: "off"
  pg_ssl_verify: "off"
  nginx_worker_processes: "1"
  proxy_access_log: /dev/stdout
  admin_access_log: /dev/stdout
  admin_gui_access_log: /dev/stdout
  portal_api_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  admin_error_log: /dev/stderr
  admin_gui_error_log: /dev/stderr
  portal_api_error_log: /dev/stderr
  prefix: /kong_prefix/
# Specify Kong's Docker image and repository details here
image:
  repository: ${repositoryUrl}/${image}
  tag: ${tag}
  # kong-enterprise-k8s image (Kong OSS + Enterprise plugins)
  # repository: kong-docker-kong-enterprise-k8s.bintray.io/kong-enterprise-k8s
  # tag: "2.0.2.0-alpine"
  # kong-enterprise image
  # repository: kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition
  # tag: "1.5.0.0-alpine"
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## If using the official Kong Enterprise registry above, you MUST provide a secret.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  pullSecrets:
    - ${imagePullSecrets}
# Specify Kong admin API service and listener configuration
admin:
  # Enable creating a Kubernetes service for the admin API
  # Disabling this is recommended for most ingress controller configurations
  # Enterprise users that wish to use Kong Manager with the controller should enable this
  enabled: true
  type: ClusterIP
  # If you want to specify annotations for the admin service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  http:
    # Enable plaintext HTTP listen for the admin API
    # Disabling this and using a TLS listen only is recommended for most configurations
    enabled: true
    servicePort: 8001
    containerPort: 8001
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for the admin API
    enabled: true
    servicePort: 8444
    containerPort: 8444
    # Set a target port for the TLS port in the admin API service, useful when using TLS
    # termination on an ELB.
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
      - http2
  # Kong admin ingress settings. Useful if you want to expose the Admin
  # API of Kong outside the k8s cluster.
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    tls: kadmin.${domain}.tls
    # Ingress hostname
    hostname: kadmin.${domain}
    # Map of ingress annotations.
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/whitelist-source-range: ${source_range}
      cert-manager.io/cluster-issuer: ${issuer}
    # Ingress path.
    path: /
# Specify Kong status listener configuration
# This listen is internal-only. It cannot be exposed through a service or ingress.
status:
  http:
    # Enable plaintext HTTP listen for the status listen
    enabled: true
    containerPort: 8100
  tls:
    # Enable HTTPS listen for the status listen
    # Kong does not currently support HTTPS status listens, so this should remain false
    enabled: false
    containerPort: 8543
# Specify Kong proxy service and listener configuration
proxy:
  # Enable creating a Kubernetes service for the proxy
  enabled: true
  type: ClusterIP
  # If you want to specify annotations for the proxy service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  http:
    # Enable plaintext HTTP listen for the proxy
    enabled: true
    servicePort: 80
    containerPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for the proxy
    enabled: true
    servicePort: 443
    containerPort: 8443
    # Set a target port for the TLS port in proxy service, useful when using TLS
    # termination on an ELB.
    # overrideServiceTargetPort: 8000
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
      - http2
  # Define stream (TCP) listen
  # To enable, remove "{}", uncomment the section below, and select your desired
  # ports and parameters. Listens are dynamically named after their servicePort,
  # e.g. "stream-9000" for the below.
  stream: {}
    # # Set the container (internal) and service (external) ports for this listen.
    # # These values should normally be the same. If your environment requires they
    # # differ, note that Kong will match routes based on the containerPort only.
    # - containerPort: 9000
    #   servicePort: 9000
    #   # Optionally set a static nodePort if the service type is NodePort
    #   # nodePort: 32080
    #   # Additional listen parameters, e.g. "ssl", "reuseport", "backlog=16384"
    #   # "ssl" is required for SNI-based routes. It is not supported on versions <2.0
    #   parameters: []
  # Kong proxy ingress settings.
  # Note: You need this only if you are using another Ingress Controller
  # to expose Kong outside the k8s cluster.
  ingress:
    enabled: false
    hosts:
      - api.${domain}
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/affinity: cookie
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/whitelist-source-range: ${source_range}
      cert-manager.io/cluster-issuer: ${issuer}
    tls:
      - secretName: api.${domain}.tls
        hosts:
          - api.${domain}
    # Ingress path.
    path: /
  externalIPs: []
# Custom Kong plugins can be loaded into Kong by mounting the plugin code
# into the file-system of Kong container.
# The plugin code should be present in ConfigMap or Secret inside the same
# namespace as Kong is being installed.
# The `name` property refers to the name of the ConfigMap or Secret
# itself, while the pluginName refers to the name of the plugin as it appears
# in Kong.
# Subdirectories (which are optional) require separate ConfigMaps/Secrets.
# "path" indicates their directory under the main plugin directory: the example
# below will mount the contents of kong-plugin-rewriter-migrations at "/opt/kong/rewriter/migrations".
plugins: {}
  # configMaps:
  # - pluginName: rewriter
  #   name: kong-plugin-rewriter
  #   subdirectories:
  #   - name: kong-plugin-rewriter-migrations
  #     path: migrations
  # secrets:
  # - pluginName: rewriter
  #   name: kong-plugin-rewriter
# Inject specified secrets as a volume in Kong Container at path /etc/secrets/{secret-name}/
# This can be used to override default SSL certificates.
# Be aware that the secret name will be used verbatim, and that certain types
# of punctuation (e.g. `.`) can cause issues.
# Example configuration
# secretVolumes:
# - kong-proxy-tls
# - kong-admin-tls
secretVolumes: []
# Enable/disable migration jobs, and set annotations for them
migrations:
  # Enable pre-upgrade migrations (run "kong migrations up")
  preUpgrade: true
  # Enable post-upgrade migrations (run "kong migrations finish")
  postUpgrade: true
  # Annotations to apply to migrations jobs
  # By default, these disable service mesh sidecar injection for Istio and Kuma,
  # as the sidecar containers do not terminate and prevent the jobs from completing
  annotations:
    sidecar.istio.io/inject: "false"
    kuma.io/sidecar-injection: "disabled"
# Kong's configuration for DB-less mode
# Note: Use this section only if you are deploying Kong in DB-less mode
# and not as an Ingress Controller.
dblessConfig:
  # Either Kong's configuration is managed from an existing ConfigMap (with Key: kong.yml)
  configMap: ""
  # Or the configuration is passed in full-text below
  config:
    _format_version: "1.1"
    services:
      # Example configuration
      # - name: example.com
      #   url: http://example.com
      #   routes:
      #     - name: example
      #       paths:
      #         - "/example"
# -----------------------------------------------------------------------------
# Ingress Controller parameters
# -----------------------------------------------------------------------------
# Kong Ingress Controller's primary purpose is to satisfy Ingress resources
# created in k8s. It uses CRDs for more fine grained control over routing and
# for Kong specific configuration.
ingressController:
  enabled: false
  image:
    repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
    tag: 0.8.0
  args: []
  # Specify Kong Ingress Controller configuration via environment variables
  env:
    # The controller disables TLS verification by default because Kong
    # generates self-signed certificates by default. Set this to false once you
    # have installed CA-signed certificates.
    kong_admin_tls_skip_verify: true
    # If using Kong Enterprise with RBAC enabled, uncomment the section below
    # and specify the secret/key containing your admin token.
    # kong_admin_token:
    #   valueFrom:
    #     secretKeyRef:
    #       name: CHANGEME-admin-token-secret
    #       key: CHANGEME-admin-token-key
  admissionWebhook:
    enabled: false
    failurePolicy: Fail
    port: 8080
  ingressClass: kong
  rbac:
    # Specifies whether RBAC resources should be created
    create: true
  serviceAccount:
    # Specifies whether a ServiceAccount should be created
    create: true
    # The name of the ServiceAccount to use.
    # If not set and create is true, a name is generated using the fullname template
    name:
    # The annotations for the service account
    annotations: {}
  installCRDs: true
  # general properties
  livenessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    httpGet:
      path: "/healthz"
      port: 10254
      scheme: HTTP
    initialDelaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3
  resources: {}
# -----------------------------------------------------------------------------
# Postgres sub-chart parameters
# -----------------------------------------------------------------------------
# Kong can run without a database or use either Postgres or Cassandra
# as a backend datastore for its configuration.
# By default, this chart installs Kong without a database.
# If you would like to use a database, there are two options:
# - (recommended) Deploy and maintain a database and pass the connection
#   details to Kong via the `env` section.
# - You can use the below `postgresql` sub-chart to deploy a database
#   along with Kong as part of a single Helm release.
# PostgreSQL chart documentation:
# https://github.com/helm/charts/blob/master/stable/postgresql/README.md
postgresql:
  enabled: false
  # postgresqlUsername: kong
  # postgresqlDatabase: kong
  # service:
  #   port: 5432
# -----------------------------------------------------------------------------
# Miscellaneous parameters
# -----------------------------------------------------------------------------
waitImage:
  repository: busybox
  tag: latest
  pullPolicy: IfNotPresent
# update strategy
updateStrategy: {}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: "100%"
# maxUnavailable: "0%"
# If you want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# readinessProbe for Kong pods
# If using Kong Enterprise with RBAC, you must add a Kong-Admin-Token header
readinessProbe:
  httpGet:
    path: "/status"
    port: metrics
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3
# livenessProbe for Kong pods
livenessProbe:
  httpGet:
    path: "/status"
    port: metrics
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 5
  periodSeconds: 10
  successThreshold: 1
  failureThreshold: 3
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# Annotation to be added to Kong pods
podAnnotations: {}
# Kong pod count
replicaCount: 1
# Annotations to be added to Kong deployment
deploymentAnnotations:
  kuma.io/gateway: enabled
  traffic.sidecar.istio.io/includeInboundPorts: ""
# Enable autoscaling using HorizontalPodAutoscaler
autoscaling:
  enabled: false
  minReplicas: 2
  maxReplicas: 5
  ## targetCPUUtilizationPercentage only used if the cluster does not support autoscaling/v2beta
  targetCPUUtilizationPercentage:
  ## Otherwise for clusters that do support autoscaling/v2beta, use metrics
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
# Kong Pod Disruption Budget
podDisruptionBudget:
  enabled: false
  maxUnavailable: "50%"
podSecurityPolicy:
  enabled: false
  spec:
    privileged: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    runAsGroup:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - 'configMap'
      - 'secret'
      - 'emptyDir'
    allowPrivilegeEscalation: false
    hostNetwork: false
    hostIPC: false
    hostPID: false
    # Make the root filesystem read-only. This is not compatible with Kong Enterprise <1.5.
    # If you use Kong Enterprise <1.5, this must be set to false.
    readOnlyRootFilesystem: true
priorityClassName: ""
# securityContext for Kong pods.
securityContext:
  runAsUser: 1000
serviceMonitor:
  # Specifies whether ServiceMonitor for Prometheus operator should be created
  enabled: false
  # interval: 10s
  # Specifies namespace, where ServiceMonitor should be installed
  # namespace: monitoring
  # labels:
  #   foo: bar
# -----------------------------------------------------------------------------
# Kong Enterprise parameters
# -----------------------------------------------------------------------------
# Toggle Kong Enterprise features on or off
# RBAC and SMTP configuration have additional options that must all be set together
# Other settings should be added to the "env" settings below
enterprise:
  enabled: false
  # Kong Enterprise license secret name
  # This secret must contain a single 'license' key, containing your base64-encoded license data
  # The license secret is required for all Kong Enterprise deployments
  license_secret: you-must-create-a-kong-license-secret
  vitals:
    enabled: true
  portal:
    enabled: false
  rbac:
    enabled: false
    admin_gui_auth: basic-auth
    # If RBAC is enabled, this Secret must contain an admin_gui_session_conf key
    # The key value must be a secret configuration, following the example at
    # https://docs.konghq.com/enterprise/latest/kong-manager/authentication/sessions
    session_conf_secret: you-must-create-an-rbac-session-conf-secret
    # If admin_gui_auth is not set to basic-auth, provide a secret name which
    # has an admin_gui_auth_conf key containing the plugin config JSON
    admin_gui_auth_conf_secret: you-must-create-an-admin-gui-auth-conf-secret
  # For configuring emails and SMTP, please read through:
  # https://docs.konghq.com/enterprise/latest/developer-portal/configuration/smtp
  # https://docs.konghq.com/enterprise/latest/kong-manager/networking/email
  smtp:
    enabled: false
    portal_emails_from: [email protected]
    portal_emails_reply_to: [email protected]
    admin_emails_from: [email protected]
    admin_emails_reply_to: [email protected]
    smtp_admin_emails: [email protected]
    smtp_host: smtp.example.com
    smtp_port: 587
    smtp_starttls: true
    auth:
      # If your SMTP server does not require authentication, this section can
      # be left as-is. If smtp_username is set to anything other than an empty
      # string, you must create a Secret with an smtp_password key containing
      # your SMTP password and specify its name here.
      smtp_username: ''  # e.g. [email protected]
      smtp_password_secret: you-must-create-an-smtp-password
manager:
  # Enable creating a Kubernetes service for Kong Manager
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Manager service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  http:
    # Enable plaintext HTTP listen for Kong Manager
    enabled: true
    servicePort: 8002
    containerPort: 8002
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for Kong Manager
    enabled: true
    servicePort: 8445
    containerPort: 8445
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
      - http2
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /
  externalIPs: []
portal:
  # Enable creating a Kubernetes service for the Developer Portal
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Portal service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  http:
    # Enable plaintext HTTP listen for the Developer Portal
    enabled: true
    servicePort: 8003
    containerPort: 8003
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for the Developer Portal
    enabled: true
    servicePort: 8446
    containerPort: 8446
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
      - http2
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /
  externalIPs: []
portalapi:
  # Enable creating a Kubernetes service for the Developer Portal API
  enabled: false
  type: NodePort
  # If you want to specify annotations for the Portal API service, uncomment the following
  # line, add additional or adjust as needed, and remove the curly braces after 'annotations:'.
  annotations: {}
  #  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  http:
    # Enable plaintext HTTP listen for the Developer Portal API
    enabled: true
    servicePort: 8004
    containerPort: 8004
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32080
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters: []
  tls:
    # Enable HTTPS listen for the Developer Portal API
    enabled: true
    servicePort: 8447
    containerPort: 8447
    # Set a nodePort which is available if service type is NodePort
    # nodePort: 32443
    # Additional listen parameters, e.g. "reuseport", "backlog=16384"
    parameters:
      - http2
  ingress:
    # Enable/disable exposure using ingress.
    enabled: false
    # TLS secret name.
    # tls: kong-proxy.example.com-tls
    # Ingress hostname
    hostname:
    # Map of ingress annotations.
    annotations: {}
    # Ingress path.
    path: /
  externalIPs: []
When running Kong in DB mode with migration jobs enabled, it is not possible to enable the admission webhook from a previously disabled state. The webhook-cert volume is mounted into the pre-upgrade-migrations job, and that job must run to completion before any other templates are applied (helm.sh/hook: "pre-upgrade"). However, the pod cannot start because the webhook-cert volume can't be mounted: the key pair/secret has not been generated yet, and thus the upgrade can't complete successfully.
A possible solution/workaround: do not mount the webhook-cert volume into the migration jobs, since it isn't needed there.
The Kong pre-upgrade-migrations container has the following error:
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:67: in function </usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:63>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/db/init.lua:358: in function </usr/local/share/lua/5.1/kong/db/init.lua:308>
[C]: in function 'pcall'
/usr/local/share/lua/5.1/kong/concurrency.lua:45: in function 'cluster_mutex'
/usr/local/share/lua/5.1/kong/cmd/utils/migrations.lua:63: in function 'up'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:172: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:88>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:45>
/usr/local/bin/kong:9: in function 'file_gen'
init_worker_by_lua:47: in function <init_worker_by_lua:45>
[C]: in function 'xpcall'
init_worker_by_lua:54: in function <init_worker_by_lua:52>
Run with --v (verbose) or --vv (debug) for more details
I thought kong migrations finish was supposed to be run by the post-upgrade job.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: CustomResourceDefinition, namespace: , name: kongconsumers.configuration.konghq.com
Helm Version: version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.7"}
Kong Version: Tested against 1.4 and 1.3 Enterprise with PostgresDB enabled
During a helm upgrade --install with runMigrations set to true, the migration fails due to the init-migrations job. Helm 3 appears to be failing because Job specs are immutable. This failure occurs even with no changes to values.yaml between upgrades.
Example command:
helm upgrade --install kong-test kong/kong -f ./values.yaml --debug
Example error:
Error: UPGRADE FAILED: cannot patch "kong-test-kong-init-migrations" with kind Job: Job.batch "kong-test-kong-init-migrations" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta... }: field is immutable
I was able to bypass this error locally by adding runInitMigrations: false to the values.yaml file and adding it to the `and` conditions used to create the job in the template.
Reported in helm/charts#19563 for Istio compatibility
With the following values.yaml:
...
env:
  ssl_cert: /etc/secrets/domain.com-wildcard/tls.crt
  ssl_cert_key: /etc/secrets/domain.com-wildcard/tls.key
secretVolumes:
  - domain.com-wildcard
...
and a secret file like this:
apiVersion: v1
data:
  tls.crt: ...
  tls.key: ...
kind: Secret
metadata:
  name: domain.com-wildcard
  namespace: kong
type: kubernetes.io/tls
This is the response from Helm (separated for ease of reading):
λ helm install kong kong/kong -n kong -f values.yaml
Error: Deployment.apps "kong-kong" is invalid: [spec.template.spec.volumes[3].name:
Invalid value: "domain.com-wildcard":
a DNS-1123 label must consist of lower case alphanumeric characters or '-',
and must start and end with an alphanumeric character
(e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'),
spec.template.spec.containers[1].volumeMounts[3].name: Not found: "domain.com-wildcard"]
Am I misunderstanding the way a pre-defined TLS-certificate should be inserted into Kong via the Helm chart?
Edit 1: Changed the previous name representation my-secret to a more accurate format for my case, domain.com-wildcard. That makes the "a DNS-1123 label..." error message make more sense, and possibly makes this not a proper issue.
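Since the secret name is used verbatim as a volume name, it must be a valid DNS-1123 label, which rules out dots. A quick local sanity check using the same regex the API server reported in the error above:

```shell
# Validate a candidate secret/volume name against the DNS-1123 label regex
# from the Deployment validation error.
is_dns1123_label() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_dns1123_label "domain.com-wildcard" || echo "rejected: dots are not allowed"
is_dns1123_label "domain-com-wildcard" && echo "accepted"
```

Renaming the secret to something like domain-com-wildcard (and updating secretVolumes plus the ssl_cert/ssl_cert_key paths to match) avoids the validation error.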
When enabling the ServiceMonitor, the service associated with Kong does not contain the metrics port. I'll make a PR for this issue.
We have some services that are private and some that are public. How would we go about creating two LoadBalancer services with the Helm chart to support that? Istio's Helm chart supports this, but we are switching away to a Linkerd + Kong setup.
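One common approach, sketched here under assumptions (the selector labels and annotation below are illustrative; check the labels the chart actually renders on the proxy pods and your cloud's internal-LB annotation): keep the chart-managed Service for one audience and add a second, manually managed LoadBalancer Service selecting the same proxy pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-internal
  annotations:
    # AWS example; substitute your cloud provider's internal-LB annotation
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: kong   # assumed pod labels; verify with kubectl get pods --show-labels
  ports:
    - name: kong-proxy
      port: 80
      targetPort: 8000
    - name: kong-proxy-tls
      port: 443
      targetPort: 8443
```

An alternative is simply installing the chart twice (two releases) with different proxy service annotations, at the cost of running two Kong deployments.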
The NOTES.txt template may include one or more warnings if performing an upgrade or install with deprecated configuration in values.yaml. Because of limitations in template newline chomping, this can print excess newlines.
Based on review of other charts, we can avoid this by moving string generation into a helper template that builds the complete text with string concatenation rather than templating each warning individually.
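A sketch of the helper-template approach (the helper name and the example warning condition are illustrative, not the chart's actual code; the `$warnings = append` reassignment syntax requires Helm 3):

```
{{/* _helpers.tpl: build all deprecation warnings as one string, then print once */}}
{{- define "kong.deprecationWarnings" -}}
{{- $warnings := list -}}
{{- if .Values.runMigrations -}}
{{- $warnings = append $warnings "WARNING: runMigrations is deprecated; use migrations.preUpgrade and migrations.postUpgrade" -}}
{{- end -}}
{{- $warnings | join "\n" -}}
{{- end -}}
```

NOTES.txt then emits the whole block with a single `{{ include "kong.deprecationWarnings" . }}`, so newline chomping only has to be handled in one place.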
charts/.github/workflows/master.yaml
Line 30 in ba6231d
charts/.github/workflows/non-master.yaml
Line 22 in ba6231d
Shouldn't this be command: lint --config=ct.yaml?
If you use your own DB and set postgresql.enabled: false, then the environment variables KONG_PG_HOST and/or KONG_PG_PORT will not be set. The migration init script will then hang, as it runs an nc command with these environment variables in an `until` loop.
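The hang can be reproduced in isolation. Below is a minimal sketch of a wait-for-db loop like the one described (the chart's real init-container script may differ), with guard clauses added so the unset-variable case fails fast instead of spinning forever:

```shell
# Sketch of a wait-for-db init loop. With KONG_PG_HOST/KONG_PG_PORT empty,
# the plain `until nc ...` loop never succeeds and hangs; the :? guards
# turn that into an immediate, visible error instead.
wait_for_db() {
  : "${KONG_PG_HOST:?KONG_PG_HOST is not set}"
  : "${KONG_PG_PORT:?KONG_PG_PORT is not set}"
  until nc -z "$KONG_PG_HOST" "$KONG_PG_PORT"; do
    echo "waiting for database"
    sleep 1
  done
}
```

The practical fix on the chart side would be to ensure the template always renders those variables when an external database is configured.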
Add support for fullnameOverride in the kong.fullname define in _helpers.tpl.
Hi,
I'm trying to deploy the Kong Enterprise Docker image to my k8s cluster. I want to include the kong-plugin-moesif plugin, but it is causing my Kong deployment to fail with the error in the title. Below are some files I've configured. Am I missing something?
FROM kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition:1.3.0.1-alpine
USER root
RUN luarocks install --server=http://luarocks.org/manifests/moesif kong-plugin-moesif
env:
  plugins: bundled, kong-plugin-moesif
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: kong-plugin-moesif
  labels:
    global: "true"
disabled: false
config:
  application_id: {{ .Values.moesif.applicationId }}
plugin: kong-plugin-moesif
With the following values provided for autoscaling, I receive the error below when attempting to deploy to Google Kubernetes Engine running the latest version. If I disable autoscaling everything works fine, but I would like to have this feature enabled.
Values Yaml:
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
GKE Error:
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"
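The autoscaling/v2beta2 manifest can only apply on clusters that serve that API group. A hedged fallback per the chart's own values comments (whether the template falls back automatically depends on the chart version, so treat this as a sketch): leave `metrics` unset and rely on targetCPUUtilizationPercentage, which should render an older HPA apiVersion.

```yaml
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  # Per the chart comments, used when the cluster does not support autoscaling/v2beta
  targetCPUUtilizationPercentage: 80
  # metrics intentionally omitted so the v2beta2 manifest is not rendered
```

It is also worth confirming which autoscaling API groups the cluster actually serves (e.g. via kubectl api-versions) before choosing a form.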
Helm upgrade recreates the service account, which in turn recreates the secret token for the service account. This breaks current deployments. Symptoms below:
kubectl -n kong get serviceaccount -l app.kubernetes.io/name=kong-app -o jsonpath='{.items[].secrets}'
[map[name:kong-tst-kong-app-token-k7qv2]]
kubectl -n kong get pod kong-tst-kong-app-cc9787b65-fwzrd -o json | jq '.spec.containers[] | select(.name == "ingress-controller").volumeMounts'
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kong-tst-kong-app-token-k7qv2",
    "readOnly": true
  }
]
(Note: Token is the same)
kubectl -n kong get serviceaccount -l app.kubernetes.io/name=kong-app -o jsonpath='{.items[].secrets}'
[map[name:kong-tst-kong-app-token-rrzfh]]
kubectl -n kong get pod kong-tst-kong-app-cc9787b65-fwzrd -o json | jq '.spec.containers[] | select(.name == "ingress-controller").volumeMounts'
[
  {
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
    "name": "kong-tst-kong-app-token-k7qv2",
    "readOnly": true
  }
]
(Note: Tokens are different)
From the logs (after upgrade):
[tiller] 2020/01/31 17:01:46 deleting pre-upgrade hook kong-tst-kong-app for release kong-tst due to "before-hook-creation" policy
[kube] 2020/01/31 17:01:46 Starting delete for "kong-tst-kong-app" ServiceAccount
[kube] 2020/01/31 17:01:46 Waiting for 60 seconds for delete to be completed
[kube] 2020/01/31 17:01:48 building resources from manifest
[kube] 2020/01/31 17:01:48 creating 1 resource(s)
[kube] 2020/01/31 17:01:48 Watching for changes to ServiceAccount kong-tst-kong-app with timeout of 5m0s
[kube] 2020/01/31 17:01:48 Add/Modify event for kong-tst-kong-app: ADDED
E0131 18:09:30.368424 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1beta1.Ingress: Unauthorized
E0131 18:09:31.206518 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.KongCredential: Unauthorized
E0131 18:09:31.371432 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1beta1.Ingress: Unauthorized
E0131 18:09:32.214002 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.KongCredential: Unauthorized
E0131 18:09:32.379662 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1beta1.Ingress: Unauthorized
This is due to a pre hook that creates the service account and then deletes it (it is then created again).
The fix is to remove something introduced by helm/charts@bf12a71. I understand why it was introduced, but it breaks upgrades. My suggestion is to instead manually add the service account and reference it in the values file.
PR incoming...
In order to use the AWS EKS IAM Roles for Service Accounts feature, one must annotate the service account to specify which IAM role the admin wants to grant to that service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
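The chart already exposes annotations for the ingress controller's service account, so (assuming that is the account that needs the role) the annotation can be set from values.yaml rather than by editing the ServiceAccount by hand:

```yaml
ingressController:
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
```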
Looking at the values.yaml for the Enterprise portal, there isn't a way to configure portal_auth_conf when portal_auth: openid-connect. The only option, which is also required, is to provide a session_conf_secret. According to the docs here: https://docs.konghq.com/enterprise/1.3-x/developer-portal/configuration/authentication/sessions/, session_conf cannot be applied to openid-connect.
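A hypothetical shape for such an option, mirroring the existing admin_gui_auth_conf_secret pattern (portal_auth_conf_secret does not exist in the chart today; this is a proposal, not documented behavior):

```yaml
enterprise:
  portal:
    enabled: true
  # Proposed: a Secret whose portal_auth_conf key holds the openid-connect
  # plugin configuration JSON, analogous to admin_gui_auth_conf_secret.
  # portal_auth_conf_secret: you-must-create-a-portal-auth-conf-secret
```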
The changelog doc has no table of contents. Add one so the changes for a specific release can be looked up quickly.
I have been trying to configure the ACME plugin for a while but couldn't succeed.
According to https://github.com/Kong/kong-plugin-acme, the plugin installation requires adding lua_ssl_trusted_certificate to the kong.conf file;
or,
according to this document (https://docs.konghq.com/hub/kong-inc/acme/), it requires adding nginx_proxy_lua_ssl_trusted_certificate to the kong.conf file.
I tried passing both under "env" as mentioned in the chart README, but it didn't work: the Kong gateway pod goes into CrashLoopBackOff.
Can you please add support for the ACME plugin to the Helm chart, or can someone guide me to install it properly?
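For reference, this is roughly how I would expect it to be wired through the chart's env map (a sketch under assumptions: the acme plugin is available in the image, and the CA bundle path shown is the Debian/Ubuntu default and may differ in other images):

```yaml
env:
  plugins: bundled,acme
  # Rendered as KONG_LUA_SSL_TRUSTED_CERTIFICATE; the path below assumes a
  # Debian/Ubuntu-based image.
  lua_ssl_trusted_certificate: /etc/ssl/certs/ca-certificates.crt
```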
When changing something in the helm release values that generally shouldn't cause a restart of the Kong pods, they get restarted anyway when the admission controller is enabled. This appears to happen because the checksum for the admission webhook config is added as an annotation on the Kong pods, and the admission webhook config changes every time because a new CA/cert is being generated each time.
It would be nice to have an option to provide your own CA/cert for the admission webhook to prevent this unnecessary Kong restart. It seems that the cert is dependent on the CN generated from the release/service name which complicates things a bit, but it should still be possible.
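As a sketch of what "provide your own CA/cert" could look like on the user side (assumptions: a release named my-release in namespace default, so the CN my-release-kong-admission.default.svc is illustrative only):

```shell
# Generate a long-lived CA and a webhook server certificate whose CN matches
# the (illustrative) webhook service DNS name, so the chart could consume them
# from a Secret instead of regenerating a new pair on every render.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=kong-admission-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout webhook.key -out webhook.csr \
  -subj "/CN=my-release-kong-admission.default.svc"
openssl x509 -req -in webhook.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 3650 -out webhook.crt
```

The resulting ca.crt/webhook.crt/webhook.key could then be stored in a Secret that the chart references, keeping the checksum annotation stable across renders.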
Hi Kong team,
Please help me with this issue.
After installing Kong on Kubernetes with Helm, I tried to manually assign a cluster IP to the admin service.
My config is below:
helm install mykong kong/kong \
  --set proxy.type=ClusterIP,proxy.clusterIP=10.96.10.1 \
  --set admin.enabled=true,admin.type=ClusterIP,admin.clusterIP=10.96.10.2
proxy.type=ClusterIP,proxy.clusterIP=10.96.10.1 >> This works like a charm: 10.96.10.1 is assigned to the Kong proxy without problems.
The problem starts here: there is no documented option for setting admin.clusterIP.
I tried admin.clusterIP=10.96.10.2 as above, but no luck.
Is there another way to set admin.clusterIP, or might I be missing something?
Thank you in advance
See the scenario described in https://discuss.konghq.com/t/clusterrole-and-clusterrolebinding-creation/6435/2?u=traines
Using --watch-namespace
should allow for Kong deployments that do not create any ClusterRole for their ServiceAccount, instead placing all permissions into the namespaced Role. However, this is not tested and should be coordinated with the controller (not strictly required, but it should probably have its own Role-only example manifest that the chart tracks).
CRDs are always cluster-wide, but are easier to manage separately (as-is, Helm 3 always manages them separately, and only has the ability to create them alongside the release for convenience--the CRDs are not part of the release). The (Cluster)RoleBinding, by comparison, is tied to a specific ServiceAccount, so it needs to be part of the release for the template to bind it correctly.
Proposed changes:
This is probably contingent on Kong/kubernetes-ingress-controller#717
Kong configured as db-less with Kong Ingress Controller enabled.
Kong Chart version: 1.0.0
The ingress controller's logs are full of RBAC errors for both K8s objects and the Kong CRDs:
reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Unauthorized
reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.KongConsumer: Unauthorized
leaderelection.go:324] error retrieving resource lock kong-internal/kong-ingress-controller-leader-kong-kong: Unauthorized
I know there's an existing issue #9 that will fix a number of typos. I believe I've found another one.
In the _helpers.tpl kong.controller-container definition, the election-id argument is configured as:
--election-id=kong-ingress-controller-leader-{{ .Values.ingressController.ingressClass }}
which becomes --election-id=kong-ingress-controller-leader-kong
while the controller-rbac-resources.yaml sets the Role configmap resource name as:
Perhaps a fix for the leader election id could be added to #9? I'm hoping the resolution of all the typos will fix the RBAC issues.
Current default appears to be 1.4:
charts/charts/kong/values.yaml
Line 38 in b1131b6
It is understandable that this might not be top priority for the chart, but it should be done sooner or later.
The feature of installing custom plugins is great.
But all .lua files must live in a single directory because of how ConfigMaps work.
It would be useful to support a tar.gz or .rock archive in the ConfigMap, carrying the full directory structure and dependencies, unpacked at startup time.
Since a ConfigMap is read-only, the plugins could perhaps be merged into an emptyDir.
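A workaround sketch along those lines (every name below is hypothetical; the chart does not generate this today): ship the archive in a ConfigMap's binaryData and let an initContainer unpack it into an emptyDir shared with the Kong container:

```yaml
# Hypothetical pod-spec fragment; names are illustrative.
initContainers:
  - name: unpack-plugins
    image: busybox
    command: ["sh", "-c", "tar -xzf /archive/plugins.tar.gz -C /plugins"]
    volumeMounts:
      - name: plugin-archive   # ConfigMap carrying plugins.tar.gz as binaryData
        mountPath: /archive
      - name: plugins          # writable emptyDir shared with the kong container
        mountPath: /plugins
volumes:
  - name: plugin-archive
    configMap:
      name: kong-plugin-archive
  - name: plugins
    emptyDir: {}
```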
Would someone from Kong be willing to accept/ verify a PR in helm/hub that added this repo to the listing? It would make the chart(s) here available on hub.helm.sh.
Currently, Kong's default deployment contains a securityContext section:
https://github.com/Kong/charts/blob/master/charts/kong/values.yaml#L474
This was due to limitations in Kong that prevented it from running as non-root by default. That is no longer the case, so the section should be removed to reduce friction when deploying the controller on the OpenShift platform.
The documentation notes that one can specify the number of replicas for the ingressController. However, this value is not actually available for configuration, so the ingressController presumably always defaults to 1 instance. It would be nice to be able to configure the number of replicas for the ingressController.
This issue collects proposed breaking changes for the next major release of the Kong Helm chart. It will be updated over time based on the status of the proposals.
If you are a chart user and believe a proposed change would be incompatible with your environment, please let us know in the comments.
Checked items are code complete and expected to arrive in the new version. Unchecked items are as of yet undecided, and may or may not have some WIP code available.
Remove the http/tls configuration blocks (https://github.com/Kong/charts/blob/kong-1.3.1/charts/kong/values.yaml#L111-L127) for services in favor of a generic listener configuration that supports stream listens as well. << stream listens are distinct enough that it doesn't really make sense to consolidate configuration of them. We've consolidated most of the underlying template logic to simplify development, but there's no compelling reason to change the UX here.
Support for service labels, required for custom Prometheus relabel configs: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitorspec
When using the Kong Helm chart's functionality to create an Ingress resource for the Kong proxy, it's not possible to use it with the ALB Ingress controller to configure an SSL redirect at the ALB.
That configuration requires adding an additional path entry to the generated ingress rule, like:
- backend:
    serviceName: ssl-redirect
    servicePort: use-annotation
  path: /*
The serviceName and servicePort values are arbitrary and have no relation to Kong itself. They refer to an annotation on the Ingress resource like:
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_302"}}'
The Helm chart only allows specifying the path pattern and, optionally, a list of hosts to create rules for (each rule with only a single path).
I know this is a bit of a niche case, but it would be nice to have the flexibility to configure this via the Helm chart rather than needing to create a separate Ingress resource outside of Helm.
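For comparison, the separate hand-written Ingress looks roughly like this (service names other than ssl-redirect/use-annotation are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-proxy
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/actions.ssl-redirect: >-
      {"Type": "redirect", "RedirectConfig":
      {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_302"}}
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect    # arbitrary; resolved via the annotation
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: kong-kong-proxy # illustrative proxy service name
              servicePort: 80
```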
I have a k8s cluster with dev, test, and prod environments deployed on it. I want to use one Kong for the dev and test environments and another Kong for prod. How should I deploy this?
I'm not sure if it's a Kong problem or a Helm problem, but for some reason, when I enable the autoscaling feature, the HPA template is rendered using API autoscaling/v1, even though according to kubectl api-versions I have autoscaling/v2beta2 available on my cluster (I'm using AKS).
$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
I was checking the HPA template and the condition is very straightforward:
apiVersion: {{ .Capabilities.APIVersions.Has "autoscaling/v2beta2" | ternary "autoscaling/v2beta2" "autoscaling/v1" }}
It looks like a Helm issue, but I would like to get some feedback from you before opening an issue on the Helm repo.
Helm version: 3.2.0
K8s client version: 1.18
K8s server version: 1.15.10
Hi,
We like very much that the chart supports CRDs and are looking forward to a Service CRD. With such a CRD, a backend application's pipeline could change Kong's configuration so the gateway points to the new backend application version. It would greatly facilitate dependency management between apps and the gateway. Is such a feature already under consideration?
Thank you
If Kong is installed multiple times in a single k8s cluster, it should be possible to install the CRDs as their own release and then install all Kong releases without CRDs.
Associating the CRDs with any one release breaks all the other releases when the release that includes the CRDs is removed.
K8s 1.16 promoted CustomResourceDefinition from v1beta1 to v1
See the 3rd changelog bullet point for details as well as the blog post overview.
Kong should upgrade the CRDs to v1 to take advantage of these new features. I'm not sure what the support policy is for these charts, since this would require k8s 1.16. Perhaps you could use Helm's .Capabilities.APIVersions.Has if older k8s versions should continue to be supported?
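If older clusters should stay supported, the CRD templates could branch on the cluster's advertised API versions, along these lines (a sketch only):

```yaml
{{- if .Capabilities.APIVersions.Has "apiextensions.k8s.io/v1" }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
```

Note that CRDs placed in Helm 3's crds/ directory are not templated, so this gating would only work for CRDs rendered as regular templates.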
Helm 2 is now deprecated and the recommended version is Helm 3.
We need to announce that Helm 2 is deprecated for our chart and update our documentation to use Helm 3 by default.
I installed with:
helm install kong kong/kong --set ingressController.installCRDs=false --set admin.enabled=true
Running kubectl get pods -o wide gives:
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kong-kong-77467b7cc5-vxhpk 2/2 Running 1 27m 10.244.5.102 k8s-slave2 <none> <none>
redis-ha-server-0 2/2 Running 0 59d 10.244.6.15 k8s-slave1 <none> <none>
redis-ha-server-1 2/2 Running 8 34d 10.244.5.29 k8s-slave2 <none> <none>
Running kubectl get svc gives:
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-kong-admin NodePort 10.96.250.92 <none> 8444:32523/TCP 26m
kong-kong-proxy LoadBalancer 10.106.237.95 <pending> 80:30653/TCP,443:31998/TCP 26m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 62d
redis-ha ClusterIP None <none> 6379/TCP,26379/TCP 59d
redis-ha-announce-0 ClusterIP 10.105.83.32 <none> 6379/TCP,26379/TCP 59d
redis-ha-announce-1 ClusterIP 10.101.226.41 <none> 6379/TCP,26379/TCP 59d
The pods seem to be working fine:
kubectl exec -it kong-kong-77467b7cc5-vxhpk /bin/sh
/ $ wget https://127.0.0.1:8444/upstreams --no-check-certificate -q -O -
{"next":null,"data":[{"healthchecks":{"active":{"https_verify_certificate":true,"https_sni":null,"http_path":"\/","timeout":1,"concurrency":10,"healthy":{"http_statuses":[200,302],"interval":0,"successes":0},"unhealthy":{"http_statuses":[429,404,500,501,502,503,504,505],"tcp_failures":0,"timeouts":0,"http_failures":0,"interval":0},"type":"http"},"passive":{"unhealthy":{"http_failures":0,"http_statuses":[429,500,503],"tcp_failures":0,"timeouts":0},"type":"http","healthy":{"successes":0,"http_statuses":[200,201,202,203,204,205,206,207,208,226,300,301,302,303,304,305,306,307,308]}}},"hash_on":"none","id":"6b458355-2b4f-58ad-9717-cc89f496477a","algorithm":"round-robin","name":"kuboard.kube-system.http.svc","host_header":null,"hash_fallback_header":null,"tags":null,"hash_on_cookie":null,"hash_on_header":null,"hash_on_cookie_path":"\/","created_at":1581477500,"hash_fallback":"none","slots":10000}]}
but the admin NodePort and proxy NodePort services are not working:
[root@k8s-master ~]# telnet 127.0.0.1 32523
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@k8s-master ~]# telnet 127.0.0.1 30653
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@k8s-master ~]# telnet 127.0.0.1 31998
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Helm introduced the ability to declare CRDs in a new crds directory (https://helm.sh/docs/topics/chart_best_practices/custom_resource_definitions/). This solves the problem of using Kong as a dependency (subchart) of an umbrella chart while using the CRDs (for example KongPlugin), which is currently not possible (or at the very least very challenging) with Helm 2 charts; see helm/helm#2994.
Implementing the Helm 3 crds directory would be a nice feature enhancement.
Hello, since kong.fullname includes .Release.Name and we can't override fullname, the ingress controller can't connect to svc/RELEASE-kong-proxy deployed in another release.
charts/charts/kong/templates/_helpers.tpl
Line 13 in 0896866
Issue: If admissionWebhook is enabled, the rendered CONTROLLER_ADMISSION_WEBHOOK_LISTEN is invalid.
Steps to reproduce this issue:
helm template kong/kong --generate-name --set ingressController.installCRDs=false --set ingressController.admissionWebhook.enabled=true
The result is:
- name: CONTROLLER_ADMISSION_WEBHOOK_LISTEN
  value: "0.0.0.0:%!d(float64=8080)"
We would like ClusterRole to allow creation of "kongingresses".
Can we add "create" to "kongingresses" in the "ClusterRole"?
For the time being, we created a separate ClusterRole, but it would be good if the chart's ClusterRole provided this permission so that we don't have to maintain a separate one.
I want to try out Kong as an API gateway on my Pi 4 running K8s.
I figured out that the current chart version's default image has no build for the arm64 architecture, but found there is a dedicated Docker image for ARM, so I overrode that.
Still, I was unsuccessful spinning up a pod of arm64v8/kong:ubuntu.
Output on stdout:
standard_init_linux.go:211: exec user process caused "exec format error"
This indicates the platform does not match.
I am not 100% sure this is the right place to ask, since my setup is K8s (the K3s distribution) deployed via Helm, so feel free to redirect!
I installed with:
helm install kong --namespace=kong --skip-crds --set ingressController.installCRDs=false --set image.repository=arm64v8/kong --set image.tag=ubuntu kong/kong
The resulting pod seems to hold the proper (overridden) image (from kubectl describe):
proxy:
Container ID: containerd://211e508357f16194b8a6c003579c26ace13c87408a91396b479a5f52f91d3a19
Image: arm64v8/kong:ubuntu
Image ID: docker.io/arm64v8/kong@sha256:39e4b224efc8b79152c7f2cdc7fd9f89e86dbb514b8257cf2e246bcc41f78a8a
Thanks for your support and the great work on Kong!
Oliver
Remark: this is a crossposting from the docker-kong repo, since I really don't understand to which part (chart, image) the issue is more related to. Linking those issues.
I wonder if it's supported to enable go plugin support via the chart somehow?
Thanks
Despite the migration jobs having the following annotations:
annotations:
  helm.sh/hook: "post-upgrade"
  helm.sh/hook-delete-policy: "before-hook-creation"
none of the migration jobs were deleted after the jobs ran successfully.
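That behavior matches Helm's documented hook semantics: before-hook-creation only deletes the previous hook resource just before a new one is created, so a completed job lingers until the next upgrade. Deleting jobs right after success requires adding hook-succeeded to the policy:

```yaml
annotations:
  helm.sh/hook: "post-upgrade"
  # before-hook-creation alone keeps the finished Job until the next upgrade;
  # hook-succeeded removes it as soon as it completes successfully.
  helm.sh/hook-delete-policy: "before-hook-creation,hook-succeeded"
```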
controller-rbac-resources.yaml line 6 has a typo: .Release.namespace (should be .Release.Namespace).
Hi,
I have my admin/manager values.yaml configured like so:
admin:
  useTLS: false
  enabled: true
  type: NodePort
  ingress:
    enabled: true
    hostname:
      valueFrom:
        secretKeyRef:
          name: kong-admin-hostname
          key: hostname
manager:
  tls:
    enabled: false
  type: NodePort
  ingress:
    enabled: true
    hostname:
      valueFrom:
        secretKeyRef:
          name: kong-manager-hostname
          key: hostname
I'm receiving this error when deploying kong.
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
coalesce.go:199: warning: destination for hostname is a table. Ignoring non-table value
Error: YAML parse error on w2g-kong/charts/kong/templates/migrations-post-upgrade.yaml: error converting YAML to JSON: yaml: line 82: could not find expected ':'
helm.go:76: [debug] error converting YAML to JSON: yaml: line 82: could not find expected ':'
It appears the Helm chart has trouble parsing the hostname value. I was able to confirm this by hardcoding the hostname value, after which the chart deployed successfully.
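For comparison, the shape the chart appears to expect is a plain string (the valueFrom/secretKeyRef construct is meaningful for container env entries, not for ingress hostnames; the hostname below is illustrative):

```yaml
admin:
  ingress:
    enabled: true
    hostname: kong-admin.example.com  # plain string, not a valueFrom block
```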