8gears / n8n-helm-chart

A Kubernetes Helm chart for n8n a Workflow Automation Tool. Easily automate tasks across different services.

Home Page: https://artifacthub.io/packages/helm/open-8gears/n8n

License: Apache License 2.0

helm-chart n8n kubernetes self-hosted workflow workflow-automation

n8n-helm-chart's Introduction

Important

We would like to bring this Helm chart to the next level, in terms of automation, governance and documentation. This can only be achieved with a diverse community. Hence, we are looking for additional maintainers and contributors to join this project. Reach out to us if you are interested in contributing.

n8n Helm Chart for Kubernetes

n8n is an extendable workflow automation tool.

Artifact HUB

The Helm chart source code location is github.com/8gears/n8n-helm-chart

Requirements

Before you start, make sure you have the following tools ready:

  • Helm >= 3.8
  • An external Postgres or MySQL database, or the embedded SQLite (bundled with n8n)
  • Helmfile (Optional)

Configuration

The values.yaml file is divided into an n8n-specific configuration section and a Kubernetes deployment-specific section.

The shown values represent Helm Chart defaults, not the application defaults. In many cases, the Helm Chart defaults are empty. The comments behind the values provide a description and display the application default.

These n8n config options should be attached below the root elements secret: or config: in the values.yaml. (See the typical-values-example section).

You decide what goes into secret and what goes into config. There is no restriction; mix and match as you like.

Installation

Install chart

helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.23.1

N8N Specific Config Section

Every possible n8n config value can be set, even if it is not mentioned in the excerpt below. All application config settings are described in the n8n configuration options documentation. Treat the n8n-provided config documentation as the source of truth; this chart just forwards everything to n8n.

database:
  type:   # Type of database to use - Other possible types ['sqlite', 'mariadb', 'mysqldb', 'postgresdb'] - default: sqlite
  tablePrefix:      # Prefix for table names - default: ''
  postgresdb:
    database:       # PostgresDB Database - default: n8n
    host:           # PostgresDB Host - default: localhost
    password:       # PostgresDB Password - default: ''
    port:           # PostgresDB Port - default: 5432
    user:           # PostgresDB User - default: root
    schema:         # PostgresDB Schema - default: public
    ssl:
      ca:             # SSL certificate authority - default: ''
      cert:           # SSL certificate - default: ''
      key:            # SSL key - default: ''
      rejectUnauthorized:    # If unauthorized SSL connections should be rejected - default: true
  mysqldb:
    database:        # MySQL Database - default: n8n
    host:            # MySQL Host - default: localhost
    password:        # MySQL Password - default: ''
    port:            # MySQL Port - default: 3306
    user:            # MySQL User - default: root
credentials:
  overwrite:
    data:        # Overwrites for credentials - default: "{}"
    endpoint:    # Fetch credentials from API - default: ''

executions:
  process:              # In what process workflows should be executed - possible values [main, own] - default: own
  timeout:              # Max run time (seconds) before stopping the workflow execution - default: -1
  maxTimeout:           # Max execution time (seconds) that can be set for a workflow individually - default: 3600
  saveDataOnError:      # What workflow execution data to save on error - possible values [all , none] - default: all
  saveDataOnSuccess:    # What workflow execution data to save on success - possible values [all , none] - default: all
  saveDataManualExecutions:    # Save data of executions when started manually via editor - default: false
  pruneData:            # Delete data of past executions on a rolling basis - default: false
  pruneDataMaxAge:      # How old (hours) the execution data has to be to get deleted - default: 336
  pruneDataTimeout:     # Timeout (seconds) after execution data has been pruned - default: 3600
generic:
  timezone:       # The timezone to use - default: America/New_York
path:           # Path n8n is deployed to - default: "/"
host:           # Host name n8n can be reached - default: localhost
port:           # HTTP port n8n can be reached - default: 5678
listen_address: # IP address n8n should listen on - default: 0.0.0.0
protocol:       # HTTP Protocol via which n8n can be reached - possible values [http , https] - default: http
ssl_key:        # SSL Key for HTTPS Protocol - default: ''
ssl_cert:       # SSL Cert for HTTPS Protocol - default: ''
security:
  excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: ''
  basicAuth:
    active:     # If basic auth should be activated for editor and REST-API - default: false
    user:       # The name of the basic auth user - default: ''
    password:   # The password of the basic auth user - default: ''
    hash:       # If password for basic auth is hashed - default: false
  jwtAuth:
    active:               # If JWT auth should be activated for editor and REST-API - default: false
    jwtHeader:            # The request header containing a signed JWT - default: ''
    jwtHeaderValuePrefix: # The request header value prefix to strip (optional) default: ''
    jwksUri:              # The URI to fetch JWK Set for JWT authentication - default: ''
    jwtIssuer:            # JWT issuer to expect (optional) - default: ''
    jwtNamespace:         # JWT namespace to expect (optional) -  default: ''
    jwtAllowedTenantKey:  # JWT tenant key name to inspect within JWT namespace (optional) - default: ''
    jwtAllowedTenant:     # JWT tenant to allow (optional) - default: ''
endpoints:
  rest:             # Path for rest endpoint - default: rest
  webhook:          # Path for webhook endpoint - default: webhook
  webhookTest:      # Path for test-webhook endpoint - default: webhook-test
  webhookWaiting:   # Path for waiting-webhook endpoint - default: webhook-waiting
externalHookFiles:  # Files containing external hooks. Multiple files can be separated by colon - default: ''
nodes:
  exclude:          # Nodes not to load - default: "[]"
  errorTriggerType: # Node Type to use as Error Trigger - default: n8n-nodes-base.errorTrigger
# the list goes on...

Values

The values file consists of the n8n-specific sections config and secret, where you paste the n8n config as shown above, followed by the Kubernetes deployment settings.

# The n8n related part of the config
config: # Dict with all n8n config options
#    database:
#      type: postgresdb
#      postgresdb:
#        database: n8n
#        host: localhost
#
# existingSecret and secret are exclusive, with existingSecret taking priority.
# existingSecret: "" # Use an existing Kubernetes secret, e.g. created by hand or by a Vault operator.
secret: # Dict with all n8n config options, unlike config the values here will end up in a secret.
#    database:
#      postgresdb:
#        password: here_db_root_password

##
##
## Common Kubernetes Config Settings
persistence:
  ## If true, use a Persistent Volume Claim, If false, use emptyDir
  ##
  enabled: false
  type: emptyDir # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  ## PVC annotations
  #
  # If you need this annotation, include it in your values.yaml and the pvc.yaml template will add it.
  # This annotation is no longer managed by Helm v3.
  # https://github.com/8gears/n8n-helm-chart/issues/8
  #
  # annotations:
  #   helm.sh/resource-policy: keep
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume size
  ##
  size: 1Gi
  ## Use an existing PVC
  ##
  # existingClaim:

# Set additional environment variables on the Deployment
extraEnv: { }
# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on
#   WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/"

replicaCount: 1

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: [ ]
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: { }

podSecurityContext: { }
# fsGroup: 2000

securityContext: { }
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80
  annotations: { }

ingress:
  enabled: false
  annotations: { }
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: [ ]
  tls: [ ]
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: { }
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: { }

tolerations: [ ]

affinity: { }

scaling:
  enabled: false

  worker:
    count: 2
    concurrency: 2

  webhook:
    enabled: false
    count: 1

  redis:
    host:
    password:

redis:
  enabled: false
  # Other default redis values: https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml

Typical Values Example

A typical example of a config in combination with a secret.

# values.yaml

config:
  database:
    type: postgresdb
    postgresdb:
      host: 192.168.0.52
secret:
  database:
    postgresdb:
      password: 'big secret'

Setup

helm install -f values.yaml -n n8n deploymentname n8n
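
The same works when pulling the chart from the OCI registry shown in the Installation section above (release name and namespace are placeholders):

helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.23.1 -f values.yaml -n n8n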

Scaling

n8n provides a queue mode, where the workload is shared between multiple instances of the same n8n installation.
This spreads the load over multiple instances and provides limited high availability, because the main (controller) instance remains a single point of failure.

With the help of an internal or external Redis server and the excellent BullMQ library, tasks can be distributed across different instances, which can also run on different hosts.

See docs about this Queue-Mode

To enable this mode within this Helm chart, simply set scaling.enabled to true. By default, the chart is then configured to spawn two worker instances.

scaling:
  enabled: true

You can spawn more workers by setting scaling.worker.count to a higher number. It is also possible to point to your own external Redis server.

scaling:
  enabled: true
  redis:
    host: "redis-hostname"
    password: "redis-password-if-set"

If you want to use the internal Redis server, set redis.enabled = true. By default, no Redis server is spawned.

As a last scaling option, it is possible to create dedicated webhook instances, which only process webhooks. If you set scaling.webhook.enabled=true, webhook processing on the main instance is disabled and, by default, a single webhook instance is started, as shown below.
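
For example, to run one dedicated webhook instance alongside the workers:

scaling:
  enabled: true
  webhook:
    enabled: true
    count: 1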

Chart Deployment

  1. Update the Chart.yaml with the new version number for the chart and/or app.
  2. In GitHub, create a new release with the chart version number as the tag and title.

n8n-helm-chart's People

Contributors

abaus-vc, albertollamaso, bamsammich, bartlaarhoven, biclighter81, dantulovsky, davidspek, dolohow, egandro, fk-flip, gmemstr, karitham, kirioxx, kraihn, m600x, markuskepert, rkosegi, sebastiansterk, stuszynski, swarnat, tartanlegrand, tomaszkiewicz, toxsick, vad1mo, victorbjorklund, xeruf

n8n-helm-chart's Issues

When scaling.webhook is enabled webhook-test paths are returning 404

ingress:
  enabled: true
  ...
scaling:
  enabled: true
  webhook:
    enabled: true

I have the configuration above for webhooks. When I deploy n8n and trigger a webhook based on the test URL, I always get 404. When I changed the ingress definition for path /webhook-test/ to point to the main n8n pod, it worked as expected.

My guess is that setting https://github.com/n8n-io/n8n/blob/master/packages/cli/src/config/schema.ts#L718 (N8N_DISABLE_PRODUCTION_MAIN_PROCESS) to true only affects the production URL, so the test URL should still point to the main n8n instance.
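
For reference, the workaround above corresponds to an Ingress rule that routes the test-webhook path to the main n8n service, while production /webhook/ traffic keeps going to the dedicated webhook service (raw Kubernetes Ingress syntax with an assumed service name, not this chart's values format):

# Hypothetical extra path - replace the service name with the one your release creates.
- path: /webhook-test/
  pathType: Prefix
  backend:
    service:
      name: my-n8n          # main n8n service (name depends on your release)
      port:
        number: 80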

Readiness probe failed: Get "[...] connect: connection refused

Hey,

I'm running into the following problem and am out of ideas how to fix it.
My cluster is running k3s and I'm using fluxcd; other charts have worked just fine so far.

19m         Normal    Scheduled              pod/n8n-784b44868b-v98r4     Successfully assigned n8n/n8n-784b44868b-v98r4 to mymachine
19m         Normal    Pulled                 pod/n8n-784b44868b-v98r4     Container image "n8nio/n8n:1.7.1" already present on machine
19m         Normal    Created                pod/n8n-784b44868b-v98r4     Created container n8n
19m         Normal    Started                pod/n8n-784b44868b-v98r4     Started container n8n
19m         Warning   Unhealthy              pod/n8n-784b44868b-v98r4     Readiness probe failed: Get "http://10.42.0.36:5678/healthz": dial tcp 10.42.0.36:5678: connect: connection refused
17m         Normal    WaitForFirstConsumer   persistentvolumeclaim/n8n    waiting for first consumer to be created before binding

So from my perspective, the PVC doesn't get created because of the failed readiness probe? There is also no Ingress to be found.
I tried different variations, with and without custom values supplied.

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: n8n
  namespace: n8n
spec:
  chart:
    spec:
      chart: n8n
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: open-8gears
      version: 0.13.0
  interval: 10m0s
  values:
    config:
      #executions:
      #  process:                # In what process workflows should be executed - possible values [main, own] - default: own
      #  timeout:                # Max run time (seconds) before stopping the workflow execution - default: -1
      #  maxTimeout:            # Max execution time (seconds) that can be set for a workflow individually - default: 3600
      #  saveDataOnError:        # What workflow execution data to save on error - possible values [all , none] - default: all
      #  saveDataOnSuccess:    # What workflow execution data to save on success - possible values [all , none] - default: all
      #  saveDataManualExecutions:    # Save data of executions when started manually via editor - default: false
      #  pruneData:            # Delete data of past executions on a rolling basis - default: false
      #  pruneDataMaxAge:        # How old (hours) the execution data has to be to get deleted - default: 336
      #  pruneDataTimeout:        # Timeout (seconds) after execution data has been pruned - default: 3600
      generic:
        timezone: Europe/Vienna    # The timezone to use - default: America/New_York
      #path:         # Path n8n is deployed to - default: "/"
      host: n8n.custom.domain        # Host name n8n can be reached - default: localhost
      port: 80        # HTTP port n8n can be reached - default: 5678
    ingress:
      enabled: true
      className: traefik
      tls:
       - hosts:
           - n8n.custom.domain
         secretName: custom-tls
      hosts:
        - host: n8n.custom.domain
          paths:
            - path: /
    persistence:
      enabled: true
      accessModes:
        - ReadWriteOnce
      size: 2Gi
    readinessProbe:
      httpGet:
        path: /healthz
        port: http
        initialDelaySeconds: 120
      # periodSeconds: 10
        timeoutSeconds: 30
      # failureThreshold: 6
      # successThreshold: 1

configuration param not declared

I'm using the Helm chart available as of today...

I'm getting the following error during container create:

Loading configuration overwrites from:
- /n8n-config/config.json
- /n8n-secret/secret.json

Error: configuration param 'database.mariadb.host' not declared in the schema
configuration param 'database.mariadb.user' not declared in the schema
configuration param 'database.mariadb.password' not declared in the schema

My values file is:

# Default helm values for n8n.
# Default values within the n8n application can be found under https://github.com/n8n-io/n8n/blob/master/packages/cli/config/index.ts
n8n:
  encryption_key: *******
defaults: 

config: # Dict with all n8n json config options
  database:
    mariadb:
      host: mariadb-galera-k8wide.mariadb-k8wide.svc.kubespray.local
      user: n8n
    type: mariadb

secret: # Dict with all n8n json config options, unlike config the values here will end up in a secret.
  database:
    mariadb:
      password: *******
      user: n8n

# Typical Example of a config in combination with a secret.
# config:
#    database:
#      type: postgresdb
#      postgresdb:
#        host: 192.168.0.52
# secret:
#    database:
#      postgresdb:
#        password: 'big secret'
## ALL possible n8n Values
#
#database:
#  type:             # Type of database to use - Other possible types ['sqlite', 'mariadb', 'mongodb', 'mysqldb', 'postgresdb'] - default: sqlite
#  tablePrefix:      # Prefix for table names - default: ''
#  mongodb:
#    connectionUrl:  # MongoDB Connection URL - default: mongodb://user:password@localhost:27017/database
#  postgresdb:
#    database:       # PostgresDB Database - default: n8n
#    host:           # PostgresDB Host - default: localhost
#    password:        # PostgresDB Password - default: ''
#    port:            # PostgresDB Port - default: 5432
#    user:            # PostgresDB User - default: root
#    schema:            # PostgresDB Schema - default: public
#    ssl:
#      ca:            # SSL certificate authority - default: ''
#      cert:            # SSL certificate - default: ''
#      key:            # SSL key - default: ''
#      rejectUnauthorized:    # If unauthorized SSL connections should be rejected - default: true
#  mysqldb:
#    database:        # MySQL Database - default: n8n
#    host:            # MySQL Host - default: localhost
#    password:        # MySQL Password - default: ''
#    port:            # MySQL Port - default: 3306
#    user:            # MySQL User - default: root
#credentials:
#  overwrite:
#    data:        # Overwrites for credentials - default: "{}"
#    endpoint:    # Fetch credentials from API - default: ''
#
#executions:
#  process:                # In what process workflows should be executed - possible values [main, own] - default: own
#  timeout:                # Max run time (seconds) before stopping the workflow execution - default: -1
#  maxTimeout:            # Max execution time (seconds) that can be set for a workflow individually - default: 3600
#  saveDataOnError:        # What workflow execution data to save on error - possible values [all , none] - default: all
#  saveDataOnSuccess:    # What workflow execution data to save on success - possible values [all , none] - default: all
#  saveDataManualExecutions:    # Save data of executions when started manually via editor - default: false
#  pruneData:            # Delete data of past executions on a rolling basis - default: false
#  pruneDataMaxAge:        # How old (hours) the execution data has to be to get deleted - default: 336
#  pruneDataTimeout:        # Timeout (seconds) after execution data has been pruned - default: 3600
#generic:
#  timezone:     # The timezone to use - default: America/New_York
#path:         # Path n8n is deployed to - default: "/"
#host:         # Host name n8n can be reached - default: localhost
#port:         # HTTP port n8n can be reached - default: 5678
#listen_address: # IP address n8n should listen on - default: 0.0.0.0
#protocol:       # HTTP Protocol via which n8n can be reached - possible values [http , https] - default: http
#ssl_key:        # SSL Key for HTTPS Protocol - default: ''
#ssl_cert:       # SSL Cert for HTTPS Protocol - default: ''
#security:
#  excludeEndpoints: # Additional endpoints to exclude auth checks. Multiple endpoints can be separated by colon - default: ''
#  basicAuth:
#    active:     # If basic auth should be activated for editor and REST-API - default: false
#    user:       # The name of the basic auth user - default: ''
#    password:   # The password of the basic auth user - default: ''
#    hash:       # If password for basic auth is hashed - default: false
#  jwtAuth:
#    active:               # If JWT auth should be activated for editor and REST-API - default: false
#    jwtHeader:            # The request header containing a signed JWT - default: ''
#    jwtHeaderValuePrefix: # The request header value prefix to strip (optional) default: ''
#    jwksUri:              # The URI to fetch JWK Set for JWT authentication - default: ''
#    jwtIssuer:            # JWT issuer to expect (optional) - default: ''
#    jwtNamespace:         # JWT namespace to expect (optional) -  default: ''
#    jwtAllowedTenantKey:  # JWT tenant key name to inspect within JWT namespace (optional) - default: ''
#    jwtAllowedTenant:     # JWT tenant to allow (optional) - default: ''
#endpoints:
#  rest:       # Path for rest endpoint  default: rest
#  webhook:    # Path for webhook endpoint  default: webhook
#  webhookTest: # Path for test-webhook endpoint  default: webhook-test
#externalHookFiles: # Files containing external hooks. Multiple files can be separated by colon - default: ''
#nodes:
#  exclude: # Nodes not to load - default: "[]"
#  errorTriggerType: # Node Type to use as Error Trigger - default: n8n-nodes-base.errorTrigger
# Set additional environment variables on the Deployment
extraEnv: {}
# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on
#   WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/
##
##
##
##
## Common Kubernetes Config Settings
persistence:
  ## If true, use a Persistent Volume Claim, If false, use emptyDir
  ##
  enabled: true
  type: dynamic
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume size
  ##
  size: 10Gi
  ## Use an existing PVC
  ##
  # existingClaim:

replicaCount: 2

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
#   drop:
#   - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: n8n.mydomain.com
      paths: [ / ]
  tls: [ { hosts: [ n8n.mydomain.com ] } ]
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
database:
  mariadb:
    database: n8n
    host: mariadb-galera-k8wide.mariadb-k8wide.svc.kubespray.local
    password: ****
    port: 3306
    user: n8n
  type: mariadb

ConfigMap's config.json looks like:

{
  "database": {
    "mariadb": {
      "host": "mariadb-galera-k8wide.mariadb-k8wide.svc.kubespray.local",
      "user": "n8n"
    },
    "type": "mariadb"
  }
}

So I don't understand why it's complaining that it can't find an entry for mariadb.host or mariadb.user in the schema.

Annotation resource-policy is no longer managed by Helm v3

Hello,

Firstly, thanks for this great repository content and also for helmfile support!
While testing this chart, I saw that the following annotation, which Helm v3 no longer supports, is still present.


The annotation "helm.sh/resource-policy": keep instructs Helm to skip deleting this resource when a helm operation (such as helm uninstall, helm upgrade or helm rollback) would result in its deletion.
However, this resource becomes orphaned. Helm will no longer manage it in any way.
This can lead to problems if using helm install --replace on a release that has already been uninstalled, but has kept resources.

Here is the Helm documentation regarding this topic as well.

I'm using helmfile for deployment, and currently the pvc template can't exclude this annotation.


Sincerely,
Igor

The 'mountPath' of the PVC is not correctly pointing to the '/home/node/.n8n' path.

Hello, sorry if I am misunderstanding something, but when I deploy with Helm and enable persistence with the default SQLite database, the persistent volume is mounted at /root/.n8n, while all data is generated under /home/node/.n8n.

This implies that when I delete the pod, all data is lost. A workaround is to modify the deployment and map it to /home/node/.n8n, like this:
volumeMounts:
  - mountPath: /home/node/.n8n
    name: data

But modifying the deployment in this way results in a loss of Helm deployment capability.

Environment:
kubernetes on k3s
n8nio/n8n Tag: 1.6.1
Chart Version: n8n-0.11.0
Helm Version:"v3.12.3"

The values.yaml that I use:
https://pastebin.com/zYNE1CnF

MySQL errors on fresh install + HPA or replicas > 1 not working?

A green-lit deployment of either 112 or 120 using this Helm chart, with a fresh DB each time, results in:

  1. When saving the first flow I get:
ERROR RESPONSE
TypeError: Cannot read property 'toString' of undefined
at /usr/local/lib/node_modules/n8n/dist/src/Server.js:290:35
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async /usr/local/lib/node_modules/n8n/dist/src/ResponseHelper.js:76:26
  2. Afterwards, I'm unable to save anything else and get this error:
at addChunk (internal/streams/readable.js:309:12)
at readableAddChunk (internal/streams/readable.js:284:9)
at Socket.Readable.push (internal/streams/readable.js:223:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
code: 'ER_DUP_ENTRY',
errno: 1062,
sqlState: '23000',
sqlMessage: "Duplicate entry '0' for key 'PRIMARY'",
query: 'INSERT INTO `workflow_entity`(`id`, `name`, `active`, `nodes`, `connections`, `createdAt`, `updatedAt`, `settings`, `staticData`) VALUES (DEFAULT, ?, ?, ?, ?, ?, ?, ?, DEFAULT)',
parameters: [
'asdf',
0,
'[{"parameters":{},"name":"Start","type":"n8n-nodes-base.start","typeVersion":1,"position":[250,300]}]',
'{}',
2021-05-19T06:05:44.301Z,
2021-05-19T06:05:44.301Z,
'{}'
]
}

The 2nd issue is:
If one attempts to scale above 1 manually or via HPA, or statically sets the number of replicas to 2 in the values.yaml,

any extra containers fail with ReadWriteOnce errors, i.e. the storage PVC is already attached to the first pod.

Changing this to ReadWriteMany would require volumeMode: Block when using something like CephRBD.

Is scaling not functioning here?

Losing sessions when deployed on GKE

First things first, thank you very much for this chart; it made my life quite a lot easier.
I'm fairly new to Kubernetes and it helped a lot to understand everything a bit better.

My problem is that it seems like n8n is losing its session quite often with error messages on the client like:

The connection to https://n8n.xxx.xxx/rest/push?sessionId=0jrmptauh7e was interrupted while the page was loading.

and the server logs show:

The session "0jrmptauh7e" is not registered.

I'm now not sure if it's related to my Terraform/K8S setup or n8n itself.
This is my Terraform config:

resource "helm_release" "n8n" {
  count           = 1
  depends_on      = [kubernetes_namespace.n8n, google_sql_database.n8n, google_sql_user.n8n]
  repository      = "https://8gears.container-registry.com/chartrepo/library"
  chart           = "n8n"
  version         = var.helm_version
  name            = var.release_name
  namespace       = var.namespace
  recreate_pods   = true
  values = [
    "${file("n8n_values.yaml")}"
  ]
  set_sensitive {
    name  = "n8n.encryption_key"
    value = var.n8n_encryption_key
  }
  set {
    name  = "config.database.postgresdb.host"
    value = data.terraform_remote_state.cluster.outputs.database_connection
  }
  set {
    name  = "config.database.postgresdb.user"
    value = var.db_username
  }
  set_sensitive {
    name  = "secret.database.postgresdb.password"
    value = var.db_password
  }
  set {
    name  = "config.security.basicAuth.user"
    value = var.username
  }
  set_sensitive {
    name  = "config.security.basicAuth.password"
    value = var.password
  }
}

resource "google_compute_managed_ssl_certificate" "n8n_ssl" {
  name = "${var.release_name}-ssl"
  managed {
    domains = ["n8n.xxx.xxx"]
  }
}

resource "kubernetes_ingress" "n8n_ingress" {
  depends_on = [google_compute_managed_ssl_certificate.n8n_ssl]
  metadata {
    name = "${var.release_name}-ingress"
    namespace = helm_release.n8n[0].namespace
    annotations = {
	  "ingress.kubernetes.io/compress-enable"       = "false",
      "ingress.gcp.kubernetes.io/pre-shared-cert"   = google_compute_managed_ssl_certificate.n8n_ssl.name
    }
  }
  spec {
    backend {
      service_name = helm_release.n8n[0].name
      service_port = 80
    }
    rule {
      http {
        path {
          backend {
            service_name = helm_release.n8n[0].name
            service_port = 80
          }
        }
      }
    }
  }
}

and this is my values file:

# The n8n related part of the config

config: # Dict with all n8n config options
  protocol: https
  port: 8080
  database:
    type: postgresdb
  security:
    basicAuth:
      active: true
secret: # Dict with all n8n config options, unlike config the values here will end up in a secret.
  database:
    postgresdb:
      password: ""
##
##
## Common Kubernetes Config Settings
persistence:
  ## If true, use a Persistent Volume Claim, If false, use emptyDir
  ##
  enabled: false
  type: emptyDir # what type volume, possible options are [existing, emptyDir, dynamic] dynamic for Dynamic Volume Provisioning, existing for using an existing Claim
  ## Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  ## PVC annotations
  #
  # If you need this annotation include it under values.yml file and pvc.yml template will add it.
  # This is not maintained at Helm v3 anymore.
  # https://github.com/8gears/n8n-helm-chart/issues/8
  #
  # annotations:
  #   helm.sh/resource-policy: keep
  ## Persistent Volume Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## Persistent Volume size
  ##
  size: 1Gi
  ## Use an existing PVC
  ##
  # existingClaim:

# Set additional environment variables on the Deployment
extraEnv:
  VUE_APP_URL_BASE_API: https://n8n.xxx.xxx/
  WEBHOOK_TUNNEL_URL: https://n8n.xxx.xxx/
# Set this if running behind a reverse proxy and the external port is different from the port n8n runs on
#   WEBHOOK_TUNNEL_URL: "https://n8n.myhost.com/

replicaCount: 1

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
# fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000

service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts: []
  # - host: chart-example.local
  #   paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

Thank you in advance for any help.

Unable to use non-semver image tags

Hi!
Due to the various semantic versioning checks for the image tags, you cannot use a non-semver-compliant tag. In particular, I'm trying to use the ai-beta tag.

e.g. this in deployments.yaml is a failure point.

          volumeMounts:
            - name: data
              mountPath: {{ if semverCompare ">=1.0" (.Values.image.tag | default .Chart.AppVersion) }}/home/node/.n8n{{ else }}/root/.n8n{{ end }}

And of course, when attempting to deploy, this fails, pointing out the tag isn't semver compliant. I realise these are non-production tags, so would you be willing to support them?
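
A possible direction (just a sketch of a template change, not something the chart supports today) would be to only run the semver comparison when the tag actually looks like a version, and fall back to the new path otherwise:

# deployment.yaml sketch: skip semverCompare for non-version tags such as "ai-beta"
{{- $tag := .Values.image.tag | default .Chart.AppVersion }}
{{- $newPath := true }}
{{- if regexMatch "^[0-9]+\\." $tag }}
{{- $newPath = semverCompare ">=1.0" $tag }}
{{- end }}
- name: data
  mountPath: {{ if $newPath }}/home/node/.n8n{{ else }}/root/.n8n{{ end }}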

Set `pruneData` default to `true`

This default doesn't seem to come from this chart. Still, I think that false is not a smart default for this chart, as it will eventually cause trouble. Also, there seem to be (good?) defaults for pruneDataMaxAge and pruneDataTimeout.

basic auth in values yaml does not work

Hey! It looks like basic auth from the values.yaml does not work.
It works fine if added via extra envs, but not in the security part of values.yaml.
Is it a bug or not done yet?

cannot run with read only root filesystem because /home/node/.cache directory cannot be created

When using read only root filesystem in the securityContext:

Error: ENOENT: no such file or directory, mkdir '/home/node/.cache'
2024-01-16T17:47:37.516Z | error    | Error: Exiting due to an error. "{ file: 'LoggerProxy.js', function: 'exports.error' }"
2024-01-16T17:47:37.517Z | error    | Error: ENOENT: no such file or directory, mkdir '/home/node/.cache' "{ file: 'LoggerProxy.js', function: 'exports.error' }"

And there is no extraVolumes/extraVolumeMounts in the values to propagate an emptyDir volume to this path to mitigate the issue.

Unable to initialize DB

Hi
This is quite simple. I'm trying to deploy the chart by running just the command in the Readme: helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.20.1 -f n8n-values.yaml. The overridden values are just:

# n8n-values.yaml

config:
  database:
    type: postgresdb
    postgresdb:
      database: n8n
      host: 10.109.205.83
      user: postgres
      password: '<apassword>'

secret:
  database:
    postgresdb:
      password: '<apassword>'

The host address is there because I have deployed the Postgres backend in the cluster, and I've checked that it has connectivity from other services in the cluster.

I'm clarifying the connectivity because the issue I'm running into is as follows (pod logs below):

Loading config overwrites [ '/n8n-config/config.json', '/n8n-secret/secret.json' ]
Last session crashed
Initializing n8n process
Error: There was an error initializing DB
DatabaseError: database "n8n" does not exist

I'm sure I'm missing something but I can't figure out what, so I'd appreciate any hint/suggestion.

Cheers

How to specify port behind proxy?

Hi,

I have an Envoy proxy (Ambassador) in front of n8n. If I set:

host: myhost.com
port: 443 # this is the port of the envoy proxy

then n8n fails to start because it fails to bind to port 443 (since it's not root).

If I leave port as the default, then the external webhook URL is: myhost.com:5678, which doesn't work from the outside.

How can I run n8n on its default port, but have it properly set the webhook URL to myhost.com:443?

Thank you!
Dan
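
One pattern that works with this chart's existing values (a sketch only; which webhook URL variable applies may depend on your n8n version) is to leave n8n on its default port and advertise the external URL through extraEnv, as the values file already hints at:

config:
  host: myhost.com        # leave `port` unset so n8n keeps listening on 5678

extraEnv:
  # External URL as seen by webhook callers; https implies port 443 at the proxy.
  WEBHOOK_TUNNEL_URL: "https://myhost.com/"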

503 Service Unavailable on repository fetch

Hello,

When running:
helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.20.1, I have:

Error: INSTALLATION FAILED: unexpected status from HEAD request to https://8gears.container-registry.com/v2/library/n8n/manifests/0.20.1: 503 Service Unavailable

Previously during the day (2h ago), we had:


Error: INSTALLATION FAILED: failed to do request: Head "https://8gears.container-registry.com/v2/library/n8n/manifests/0.20.1": EOF

Is something wrong with the repository? Is there any official mirror available?

BasicAuth with Hash is not working

Hello

I am trying to set up basic auth with n8n. I am using the hashed option which, looking at the source code, seems to indicate that a bcrypt hash should be set as N8N_BASIC_AUTH_PASSWORD.

When I enable hash:

config:
  security:
    basicAuth:
      active: true
      hash: true

the basic auth prompt in my browser keeps saying that the password is wrong. I generated the bcrypt hash using the following code:

// assumes the bcryptjs (or bcrypt) package is installed
var bcrypt = require('bcryptjs');

// hash the plain-text password from the environment with a random salt
var salt = bcrypt.genSaltSync(10);
console.log(process.env.N8N_PASSWORD);
var hash = bcrypt.hashSync(process.env.N8N_PASSWORD, salt);
console.log(hash);

Any reason why this is not working? It seems to work when I set hash to false and use normal password authentication.

Thanks

I have isolated the webhooks - production is working / test isn't / Chart: 0.23.1

I am using separate workers:

scaling:
  enabled: true
  worker:
    count: 2
    concurrency: 2
  webhook:
    enabled: true
    count: 1

Not working:

curl https://n8n.mycluster/webhook-test/test1
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /webhook-test/test1</pre>
</body>
</html>

Working:

curl https://n8n.mycluster/webhook/test1
{"message":"Workflow executed successfully"}

Also working:

  • when I disable the webhook support

semverCompare when version is in place

We are trying to use the chart with automated pulling of the newest versions and images. We fetch the chart and use helm template to gather the images from the templated chart.
We are getting

helm template n8n n8n-0.12.1.tgz

Error: template: n8n/templates/deployment.yaml:73:31: executing "n8n/templates/deployment.yaml" at <semverCompare ">=1.0" .Values.image.tag>: error calling semverCompare: Invalid Semantic Version

Use --debug flag to render out invalid YAML

Is there a possibility to evaluate the version after it has been fetched?

n8n pod failing to configure - no encryption key

According to values.yaml,

n8n:
  encryption_key: # n8n creates a random encryption key automatically on the first launch and saves it in the ~/.n8n folder. That key is used to encrypt the credentials before they get saved to the database.

I'm using Helm on microk8s. With the default values.yaml file,
$ microk8s helm -f values.yaml install n8n open-8gears/n8n --debug
results in a pod with CreateContainerConfigError, specifically:

  Type     Reason             Age              From               Message
  ----     ------             ----             ----               -------
  Warning  Failed             3s (x3 over 4s)  kubelet            Error: secret "n8n" not found

No .n8n folder is created.
I managed to bypass the issue by typing an encryption key into values.yaml manually.
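
For reference, the manual workaround looks roughly like this in values.yaml (the key below is a placeholder - generate your own long random string, e.g. with openssl rand -hex 24):

n8n:
  encryption_key: "replace-with-a-long-random-string"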

Helm 3.7+ version incompatibility

Hey!
Looks like the latest (0.20.1) version of the chart is not compatible with recent Helm versions due to the changes in Helm 3.7 regarding the OCI image format (see the release notes).

Could you please use a new version of Helm to build the current image of the chart, so it's possible to install n8n with a recent Helm version?

"webhook.enabled" means "webhook disabled"

Hello,

In line 264 of values.yaml (https://github.com/8gears/n8n-helm-chart/blob/master/values.yaml),
.Values.scaling.webhook.enabled is set to false:

  webhook:
    enabled: false
    count: 1

However, in lines 95-97 of _helpers.tpl (https://github.com/8gears/n8n-helm-chart/blob/master/templates/_helpers.tpl),
the environment variable N8N_DISABLE_PRODUCTION_MAIN_PROCESS is set to true if .Values.scaling.webhook.enabled is set to true:

{{- if .Values.scaling.webhook.enabled }}
- name: "N8N_DISABLE_PRODUCTION_MAIN_PROCESS"
  value: "true"
{{ end }}

Description of N8N_DISABLE_PRODUCTION_MAIN_PROCESS:

Disable production webhooks from main process. This helps ensure no HTTP traffic load to main process when using webhook-specific processes.

so seemingly "enabling" webhooks actually disables them for production use. Debugging this caused us hours of headaches.

su-exec: setgroups: Operation not permitted

My n8n pod is crashing with this error:

su-exec: setgroups: Operation not permitted
su-exec: setgroups: Operation not permitted
su-exec: setgroups: Operation not permitted
su-exec: setgroups: Operation not permitted

I think the error is due to the commits made today. I will pin to a commit made beforehand and report back.

Prevent workflow execution on master service

Hello together,
is it somehow possible to prevent the n8n master service from executing workflows, especially if they are triggered manually via "Execute workflow"? We have some workflows with high resource demands, and if one of them fails because of a lack of resources, the master service, which provides the UI, should not crash. I only found the environment variable N8N_DISABLE_PRODUCTION_MAIN_PROCESS, which only prevents webhook execution on the main process but not workflow execution.

Thank you for your help.

autoscaling worker pods

I've been playing with queue mode and autoscaling, but it seems that the chart only autoscales (with HPA) the main pod, and not the worker pods,
even though in the worker deployment there is an if-else that checks whether autoscaling is enabled.
Am I missing something?
What is the correct approach to scaling worker pods based on executions/resources/other metrics?

Thanks!

kubernetes 1.21 wrong type for value

Hello
I've tried everything I can think of, but it doesn't work. Even when I leave the default values it doesn't work, and I get the error:

helm install --namespace=default --timeout=10m0s --values=/home/shell/helm/values-n8n-0.6.0.yaml --version=0.6.0 --wait=true n8n /home/shell/helm/n8n-0.6.0.tgz

Error: INSTALLATION FAILED: template: n8n/templates/deployment.yaml:45:35: executing "n8n/templates/deployment.yaml" at <.Values.config>: wrong type for value; expected map[string]interface {}; got interface {}
My values.yml

config:
  port: 5687
  database:
    tablePrefix: "af-aws"
  host: "srg.asght.com"
  protocol: "https"
  security:
    basicAuth:
      active: true
      user: 'aefaef'        # The name of the basic auth user - default: ''
      password: 'dqg4njzgJ' # The password of the basic auth user - default: ''
    jwtAuth:
      active: true
ingress:
  enabled: true
  hosts:
    - host: "srg.asght.com"
      paths: []
  tls:
    - secretName: "tls-rancher"
      hosts:
        - "srg.asght.com"
persistence:
  enabled: true
  type: "dynamic"
  storageClass: "gp2"
  accessModes:
    - ReadWriteOnce
  size: 5Gi
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: "latest"
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

The rancher version is 2.6.3 and Kube version is 1.21.14

Unable to connect AWS Postgres to n8n

Describe the problem/error/question
Looking to deploy n8n on Internal Kubernetes environment using Helm to simplify the installation.

What is the error message (if any)?
I have deployed an external RDS Postgres DB in AWS but am having issues in connecting the n8n application to the service.

This is how the variables have been set up - it seems to keep defaulting to the internal DB.

database:
  type: postgresdb
  tablePrefix:      # Prefix for table names - default: ''
  postgresdb:
    database:       # PostgresDB Database - default: n8n
    host: <AWSHOST ADDRESS>
    password: <AWSHOST PASSWORD>
    port:           # PostgresDB Port - default: 5432
    user: <AWSHOST USERNAME>
    schema:         # PostgresDB Schema - default: public
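
One thing worth checking (a guess based on this chart's README, not a confirmed diagnosis): the chart only forwards settings that are nested under the config: and secret: root elements of values.yaml, so a top-level database: block is ignored. Nested, it would look roughly like this:

config:
  database:
    type: postgresdb
    postgresdb:
      database: n8n
      host: <AWSHOST ADDRESS>
      user: <AWSHOST USERNAME>
      port: 5432
secret:
  database:
    postgresdb:
      password: <AWSHOST PASSWORD>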

Permission denied when using persistent volumes

chart version = 0.20.1
when using

persistence:
  enabled: true
  type: dynamic

Deploying this for the first time, I get this error:

Error: EACCES: permission denied, open '/home/node/.n8n/config'
    Code: EACCES

Running on EKS
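
A common workaround for this class of error (a sketch only, not verified against this chart) is to make the mounted volume writable for the non-root node user (uid/gid 1000 in the n8n image) via the pod security context:

podSecurityContext:
  fsGroup: 1000        # group ownership applied to the mounted volume

securityContext:
  runAsUser: 1000      # run as the image's node user
  runAsNonRoot: true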

Error installing helm chart: deployment.yaml wrong type for value

Hello,
I was trying to install the helm chart and got the following error:

$  helm -f values.yaml install n8n 8gears/n8n
Error: template: n8n/templates/deployment.yaml:43:35: executing "n8n/templates/deployment.yaml" at <.Values.config>: wrong type for value; expected map[string]interface {}; got interface {}

The error refers to this line of code: https://github.com/8gears/n8n-helm-chart/blob/master/templates/deployment.yaml#L43

I'm not setting the port in my values.yaml, so it's left commented out.

While I investigate a fix (and perhaps open a PR), I thought I'd report here first.

Happy to provide additional info if needed.
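
One possible cause of this particular template error (an assumption, since the full values.yaml isn't shown here): a key such as config: left without a value parses as nil instead of a map, which breaks the template call. Keeping empty maps explicit avoids that:

# A bare `config:` is parsed as null; use explicit empty maps
# for sections you don't want to configure.
config: {}
secret: {}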

Service annotations not being written into n8n/templates/service.yaml

I have defined some values in a values.yaml file for the Service like so (I am deploying on GKE and want to associate a BackendConfig with the Service)

service:
  type: ClusterIP
  port: 80
  annotations: # THIS ISNT BEING WRITTEN INTO THE TEMPLATED KUBERNETES CONFIG....
    # Backend Config created along side the ingress
    cloud.google.com/backend-config: '{"ports":{"http":"whitelist-cloudflare-only-via-security-policies", "https":"whitelist-cloudflare-only-via-security-policies"}}'

However, when I use the --dry-run and --debug flags, what I see is:

---
# Source: n8n/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: n8n
  labels:
    helm.sh/chart: n8n-0.5.0
    app.kubernetes.io/name: n8n
    app.kubernetes.io/instance: n8n
    app.kubernetes.io/version: "0.111.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: n8n
    app.kubernetes.io/instance: n8n

Which suggests that the annotations part of the config referenced here is not being written into the template.

The question

Is my syntax wrong? The readme suggests annotations is just an object. Or is there something that needs to be changed in the template itself?
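
For comparison, the usual pattern for rendering such annotations in a chart's Service template looks like this (a sketch of the common Helm convention, not this chart's current code; the fullname helper name is an assumption):

# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: {{ include "n8n.fullname" . }}
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}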

White Labelling

Hello,

With the n8n Helm chart, can I change the colors and logo?

helm repo add error

I want to add this helm chart via

helm repo add n8n https://github.com/8gears/n8n-helm-chart 

But I got this error:

Error: looks like "https://github.com/8gears/n8n-helm-chart" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/8gears/n8n-helm-chart/index.yaml : 404 Not Found

I don't know what the index.yaml is, but I can see some "professional" charts (such as bitnami and grafana) follow this template.
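
For what it's worth, the Installation section of the README pulls the chart from an OCI registry instead, which does not need helm repo add or an index.yaml at all:

helm install my-n8n oci://8gears.container-registry.com/library/n8n --version 0.23.1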

404 - failed to fetch

I just upgraded my helm release to latest and the CI pipeline is failing because of
Error: looks like "https://8gears.container-registry.com/chartrepo/library/" is not a valid chart repository or cannot be reached: failed to fetch https://8gears.container-registry.com/chartrepo/library/index.yaml : 404 Not Found

Did something change recently?

Allow use of existing secrets for Postgres authentication

Currently, any secrets you want the n8n deployment to use have to be placed explicitly in the values.yaml file; it is not possible to use existing secrets to provide credentials, for example for the Postgres DB password. However, this may be useful in certain scenarios.

Allowing to set additional environment variables, similar to what extraEnv does, but with a valueFrom/secretKeyRef instead of a fixed value would be one way to allow this.
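
For illustration, this is what the requested capability amounts to in plain Kubernetes Deployment syntax (the secret name and key are made-up examples; the chart does not expose this today):

env:
  - name: DB_POSTGRESDB_PASSWORD           # n8n's Postgres password variable
    valueFrom:
      secretKeyRef:
        name: my-existing-postgres-secret  # example pre-existing secret
        key: password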

Pod does not start

values.yaml:

config:
  database:
    type: postgresdb
    postgresdb:
      host: n8n-db
      user: n8n
  executions:
    pruneData: true
  generic:
    timezone: Europe/Warsaw
  host: CUT
  protocol: https
  userManagement:
    emails:
      smtp:
        host: <CUT>
        port: 587
        auth:
          user: <CUT>
        sender: "n8n <CUT>"
n8n:
  encryption_key: <CUT>
secret:
  database:
    postgresdb:
      password: <CUT>
  userManagement:
    emails:
      smtp:
        auth:
          pass: <CUT>
image:
  tag: 1.0.5
ingress:
  enabled: true
  annotations:
    acme.cert-manager.io/http01-ingress-class: traefik
    cert-manager.io/cluster-issuer: letsencrypt
  hosts:
    - host: CUT
      paths: [/]
  tls:
    - secretName: n8n-cert
      hosts:
        - CUT
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1024Mi

> kubectl logs n8n...
Loading config overwrites [ '/n8n-config/config.json', '/n8n-secret/secret.json' ]
UserSettings were generated and saved to: /home/node/.n8n/config

> kubectl describe n8n...
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned n8n/n8n to k1
  Normal   Pulled     11m (x4 over 12m)     kubelet            Container image "n8nio/n8n:1.0.5" already present on machine
  Normal   Created    11m (x4 over 12m)     kubelet            Created container n8n
  Normal   Started    11m (x4 over 12m)     kubelet            Started container n8n
  Warning  Unhealthy  11m (x8 over 12m)     kubelet            Readiness probe failed: Get "http://10.42.0.10:5678/healthz": dial tcp 10.42.0.10:5678: connect: connection refused
  Warning  BackOff    2m42s (x49 over 12m)  kubelet            Back-off restarting failed container
