
helm-charts's Introduction

t3n Helm Charts

License: MIT | standard-readme compliant

A chart repository for the Kubernetes package manager Helm.

Project Status

As of PR #75 we dropped support for Helm v2 and switched to Helm chart apiVersion v2. We also switched our labels to the Kubernetes Recommended Labels. A detailed introduction on how to migrate without downtime can be found here.

This project is still under active development, so you might run into issues. If you do, please don't be shy about letting us know, or better yet, contribute a fix or feature. We will also add more charts over time, so keep an eye on this repository.

Table of Contents

  • Background
  • Install
  • Usage
  • Contributing
  • License

Background

This repository contains charts to deploy Neos and Flow applications with Kubernetes Helm. Charts are curated application definitions for Kubernetes Helm. For more information about installing and using Helm, see its README.md. To get a quick introduction to charts, see this chart guide.

Install

To add the t3n charts to your local client, run helm repo add:

$ helm repo add t3n https://storage.googleapis.com/t3n-helm-charts
"t3n" has been added to your repositories

Usage

You can then run helm search repo t3n to see the available charts. To install a chart, run helm install t3n/<chart>, as shown below.
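For example (with Helm v3 a release name is required; the chart name here is only an illustration):

$ helm search repo t3n
$ helm install my-mosquitto t3n/mosquitto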

For more information on using Helm, refer to Helm's documentation.

Contributing

PRs accepted. The CI pipeline uses Helm v3.

Small note: If editing the Readme, please conform to the standard-readme specification.

License

MIT

helm-charts's People

Contributors

alarlecke, cesarempathy, coldfire84, cryptk, das-nagnag, desaintmartin, francesco995, helmecke, jnmcfly, johannessteu, kishorviswanathan, lausser, mcfedr, mschmidt291, proteus1121, ruckc, st0rmingbr4in, stuart-c-moore, travis-amp, trinhpham, wojtre


helm-charts's Issues

[Feature Request]: Key to be retrieved from a Secret in Kubernetes

Is your feature request related to a problem? Please describe.

Currently, the option to deploy SnipeIT involves writing the key to a file in a git repository (if you want to keep all your Helm charts fully reproducible in case of an emergency).

Describe the solution you'd like

I'd like the app to pick up the key from a Kubernetes Secret, which can then be used in conjunction with ExternalSecretOperator so passwords remain secure.

Alternatives you've considered

Currently I'm hosting it in a private git submodule, but this is far from ideal: the config is spread out, it's less secure, and almost all modern software uses Secrets to import this kind of config into pods.

Snipe-IT: Fresh install on Kubernetes (AKS) leads to HTTP 500

Hi,

I have tried to install version 3.3.0 of the Helm chart on Kubernetes, but it leads to an HTTP 500 when I try to run the setup.
I'm able to open the pre-flight check and it tells me that everything is fine, but when I try to run the next step, it results in this:

Module ssl disabled.
To activate the new configuration, you need to run:
service apache2 restart
Migrating: 2015_02_26_091228_add_accessories_user_table

Illuminate\Database\QueryException : SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'accessories_users' already exists (SQL: create table accessories_users (id int unsigned not null auto_increment primary key, user_id int null, accessory_id int null, assigned_to int null, created_at timestamp null, updated_at timestamp null) default character set utf8mb4 collate 'utf8mb4_unicode_ci' engine = InnoDB)

at /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:669
665| // If an exception occurs when attempting to run a query, we'll format the error
666| // message to include the bindings with SQL, which will make this exception a
667| // lot more helpful to the developer instead of just the database's errors.
668| catch (Exception $e) {

669| throw new QueryException(
670| $query, $this->prepareBindings($bindings), $e
671| );
672| }
673|

Exception trace:

1 Doctrine\DBAL\Driver\PDO\Exception::("SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'accessories_users' already exists")
/var/www/html/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDO/Exception.php:18

2 Doctrine\DBAL\Driver\PDO\Exception::new()
/var/www/html/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:114

Please use the argument -v to see more details.
Configuration cache cleared!
Configuration cache cleared!
Configuration cached successfully!

Environment:

  • Azure AKS
  • Azure MySQL-Flexible (also tested with no external Database)

Any ideas what is wrong here?

Cheers,
David

Cannot configure busybox image repository

There is currently no way to configure the image or image repository for Busybox, which is used as an init container in deployment.yaml. This makes the chart unusable in an air-gapped environment.
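Not part of the original report, but a hypothetical values.yaml shape that would address this (these keys are a proposal, not existing chart options):

initContainer:
  image:
    repository: registry.example.com/mirror/busybox   # hypothetical key: internal mirror for air-gapped clusters
    tag: "1.36"
    pullPolicy: IfNotPresent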

Bug: Catch-22 when migrating to Kubernetes cluster

There's an issue with restoring a Snipe-IT instance in a Kubernetes cluster using the snipeit helm chart. According to the documentation, moving a Snipe-IT instance from one server to another is a matter of copying files from the backup to the new server instance, setting the environment variable BACKUP_ENV to true before you start migrating.

The BACKUP_ENV variable prevents the installer from running when accessing the site before migrating, resulting instead in the page erroring out until you've migrated all the files.

This, however, also causes the /health endpoint to throw PHP exceptions, which means the Kubernetes controller will judge the pod unhealthy and kill it before you have a chance to migrate everything, leaving the pod in an endless CrashLoopBackOff state. Unfortunately, the only way to break out of the endless restarts is to actually copy the files, mainly the .env file, to the proper location in the pod container.

Currently, there's no way to avoid this without building your own custom container image that copies the .env file in at build time, and then making the chart pull that image instead.

There are probably multiple ways of solving this, but one would be to allow the user to specify their own .env file in values.yaml, as sketched below.
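As a sketch of that proposal (the key name existingEnvSecret is hypothetical, not an existing chart option):

# hypothetical values.yaml addition
existingEnvSecret: snipeit-env   # Secret whose contents are mounted as /var/www/html/.env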

Fresh installation with external mysql gives error: failed to open stream: Permission denied

Hi, I'm doing a fresh install using an external MySQL (mysql Ver 14.14 Distrib 5.7.32-35, for Linux (x86_64) using 6.2), but I get the error below:

file_put_contents(/var/www/html/storage/framework/sessions/LBHi7b2PtLRv7xc5Z12QKB3bNQJnk6g2x4I5t50l): failed to open stream: Permission denied

I realised the pods are running as the root user and the docker user isn't able to access the /var/www/html/storage/framework/sessions directory.
Running chown -R docker storage/framework/ inside the pod fixed the issue.

Am I missing anything here, or do I need to run this command with every fresh install?
Thanks!

snipe-it cannot connect to mysql on fresh install

Doing a fresh install of t3n/snipe-it, it cannot connect to mysql:

D'oh! Looks like we can't connect to your database. Please update your database settings in your .env file. Your database says: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client (SQL: select 2 + 2)

(on /setup landing page)

I have tried playing around with a few things (including adding values

mysql:
  args:
    - "--default-authentication-plugin=mysql_native_password"

), but no luck.

I've tried examining the t3n/mysql chart, and can't figure anything out to make this work.
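Not from the original thread, but for context: MySQL error 2054 usually means a MySQL 8 server is offering caching_sha2_password, which older PHP clients do not support. A manual workaround is to switch the authentication plugin for the app account (a sketch; the user name and password are placeholders, run against the MySQL server):

ALTER USER 'snipeit'@'%' IDENTIFIED WITH mysql_native_password BY '<password>';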

Failed to set port name

Using the name tcp-mqtt, the Helm chart still recognizes it as mqtt only.
values.yaml: (screenshot)

argocd: (screenshot)

Helm install failed: template: unifi-video/templates/service-udp.yaml:15:6: executing "unifi-video/templates/service-udp.yaml" at <(.Values.service.loadBalancerIP) and eq .Values.service.type "LoadBalancer">: can't give argument to non-function .Values.service.loadBalancerIP

Hello!

I just tried to install unifi-video using your Helm chart. Thanks for making it!

I do however have one problem with it: it won't install :) I am getting the following error:
Helm install failed: template: unifi-video/templates/service-udp.yaml:15:6: executing "unifi-video/templates/service-udp.yaml" at <(.Values.service.loadBalancerIP) and eq .Values.service.type "LoadBalancer">: can't give argument to non-function .Values.service.loadBalancerIP

This is the config I used:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: unifi-video
  namespace: unifi
spec:
  interval: 5m
  chart:
    spec:
      chart: unifi-video
      version: 1.0.0
      sourceRef:
        kind: HelmRepository
        name: t3n-charts
        namespace: flux-system
      interval: 5m
  values:
    image:
      repository: pducharme/unifi-video-controller
      tag: 3.10.10
      pullPolicy: IfNotPresent

    persistence:
      config:
        enabled: true
        mountPath: /config
        storageClass: vsphere
        accessMode: ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        size: 5Gi

    service:
      type: LoadBalancer
      loadBalancerIP: 192.168.1.201
      
    securityContext:
      allowPrivilegeEscalation: true
      privileged: true
      capabilities:
        add: ["SYS_ADMIN", "DAC_READ_SEARCH"]

Did I do anything wrong, or might this be a bug?
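For what it's worth, the error message points at Go template parsing: in (.Values.service.loadBalancerIP) and eq ..., the parenthesized value is treated as a function being given arguments. A sketch of how such a condition is normally written (my guess at the fix, not the chart's actual code):

{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}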

helm install succeeds but the Snipe-IT UI is distorted with the latest Docker image version

I used the latest Snipe-IT Helm chart and installed version 5.3.0. Later I tried to upgrade from v5.3.0 to v6.0.8. If I directly update the Docker image in the Helm chart and run helm upgrade, the UI comes out distorted. I don't see any documentation for upgrading to the latest version on k8s.


When I try to log out, it redirects me to my app URL with a 503 error.

For upgrading I simply edited the deployment.yaml file and updated it to the image version I wanted.

In between upgrading from v5.3.0 to v6.0.8 I upgraded to v5.3.1 (worked), v5.3.2 (worked), v5.3.4 (worked), v5.3.10 (issue) .... v6.0.8 (issue).

Please help to figure out and fix this. @helmecke, can you please help?

SnipeIT Helm installer failing out-of-the-box

Hi, I tried to install the aforementioned chart, but hit a lot of bumps in the road. I hope you can clarify what I did wrong, or failing that, that this can be a good basis for improving the otherwise quite handy chart.

Our cluster:

  • managed by Rancher (Docker installation), v2.4.8
  • kubernetes v18.8
  • docker v19.3.12
  • all worker nodes have 8 vCPU and 16 GB of RAM
  • Helm v3.4.1 is used for the installation

First, my values.yaml:

image:
  repository: snipe/snipe-it
  tag: v5.0.7
  pullPolicy: IfNotPresent

resources:
  requests:
    cpu: 0.2m
    memory: 1G

config:
  snipeit:
    env: production
    debug: false
    url: "http://snipeit.mycompany.net"
    key: <some-key-to-evade-snipeit-comin-after-us>
    timezone: Europe/Berlin
    locale: en
    envConfig: 
      APP_URL: http://snipeit.mycompany.net
      MYSQL_PORT_3306_TCP_ADDR: snipeit-mysql.snipeit.svc.cluster.local
      MYSQL_PORT_3306_TCP_PORT: 3306
      MYSQL_USER: snipeit
      MYSQL_PASSWORD: <some-secure-snipeit-password>

mysql:
  enabled: true
  mysqlUser: snipeit
  mysqlPassword: <some-secure-snipeit-password>
  mysqlDatabase: snipeit
  mysqlRootPassword: <some-secure-root-password>

  persistence:
    enabled: true
    storageClass: longhorn-retain
    accessMode: ReadWriteOnce
    size: 2Gi

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  storageClass: longhorn-retain
  size: 2Gi

  www:
    mountPath: /var/lib/snipeit
    subPath: www
  sessions:
    mountPath: /var/www/html/storage/framework/sessions
    subPath: sessions

ingress:
  enabled: true
  path: /
  hosts:
    - snipeit.mycompany.net

Any not mentioned value is left at its original value. The command I used to deploy this in its own namespace:
helm install snipeit t3n/snipeit -f values.yaml

After this I spent some hours debugging, finally pinning our problems down to these points:

  1. It may be our PV provisioner (Longhorn), but permissions on the /storage folder cause readiness checks to fail, and eventually a CrashLoopBackOff.
    As a quick-and-dirty fix, a kubectl exec and a chmod -R 777 did the trick.
  2. The MySQL container is v8.0, so it comes with the default authentication plugin caching_sha2_password, which makes it impossible for anything PHP-based to connect.
    After changing it to mysql_native_password for the snipeit account, the problem went away.
    2b. To do that, you need the MySQL root password, which is not documented but can be read from a secret (a sketch follows after this list). Because I could not log in with that, I used the mysqlRootPassword value from the t3n/mysql chart. It works beautifully, kudos for that!
  3. The default DB name contains a hyphen.
    Probably a no-brainer for most people, but when you use mysql to modify it, it MUST be wrapped in backticks, which can be a tough one for someone just getting to know Helm.
  4. The GUI setup died 3 times at the database setup part.
    From the logs it looks like the oauth_authentication_token table is created twice. Strangely, when I ran php artisan migrate:fresh from the container, it went through without a hassle, the GUI acknowledged the existing DB, and I could finish the setup.

Although we managed to get it up and running, I would like some insight into how this could be avoided, or whether these issues are general and we should just fork and create a PR for them ourselves.
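A sketch of reading the root password mentioned in point 2b (the secret and key names follow the classic stable mysql chart conventions and may differ in your release):

$ kubectl get secret snipeit-mysql -o jsonpath='{.data.mysql-root-password}' | base64 -d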

Following the instructions on how to use the matomo/stable helm chart displays a missing salt file error.

Following the instructions located at: https://artifacthub.io/packages/helm/t3n/matomo?modal=install results in the following error message:

Error: execution error at (matomo/templates/deployment.yaml:59:24): .Values.salt is not set.

The salt appears to be set to blank at: https://github.com/t3n/helm-charts/blob/master/matomo/values.yaml
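Assuming the template reads .Values.salt directly (as the error message suggests), one workaround is to set it at install time; the openssl call is just one way to generate a random value:

$ helm install matomo t3n/matomo --set salt=$(openssl rand -hex 32)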


unms upgrade to 0.1.4

First of all, thanks for your unms chart! I got it working with unms 0.13.3, which is pinned as the default in values.yaml.
But this version is already 8 months old and I'd like to use a newer one. However, merely changing the image tag to the latest release (0.14.4) breaks with the following error in the main unms pod:

$ node ./cli/migrate.js
warning Cannot find a suitable global folder. Tried these: "/usr/local, /home/app/.yarn"
Migration failed: { error: password authentication failed for user "unms"
    at Connection.parseE (/home/app/unms/node_modules/pg/lib/connection.js:555:11)
    at Connection.parseMessage (/home/app/unms/node_modules/pg/lib/connection.js:380:19)
    at Socket.<anonymous> (/home/app/unms/node_modules/pg/lib/connection.js:120:22)
    at Socket.emit (events.js:189:13)
    at addChunk (_stream_readable.js:284:12)
    at readableAddChunk (_stream_readable.js:265:11)
    at Socket.Readable.push (_stream_readable.js:220:10)
    at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)

I also tried removing the helm chart completely and then redeploying with 0.14.4, to rule out db migration errors, but without success; this is clearly an authentication issue.

Do you have any experience with upgrading to or running a newer version?

Support for TLS certificates in Mosquitto

Currently, the mosquitto Helm chart doesn't seem to support providing secrets for the certificates, keys, and so on that are described in the "Certificate based SSL/TLS Support" section of the configuration docs.

Are there any plans to support this?

Maybe it would be a good idea to support an arbitrary number of secrets that can be mounted into the file system, so that they can be referenced in the configuration file (see the sketch below). This would help when configuring multiple listeners that need different certificates. See https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod for a starting point.
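A hypothetical values.yaml shape for that proposal (these keys are not existing chart options), together with the standard mosquitto.conf directives that would reference the mounted files:

extraSecretMounts:
  - name: broker-certs
    secretName: mosquitto-broker-tls   # hypothetical: Secret containing ca.crt/tls.crt/tls.key
    mountPath: /mosquitto/certs

listener 8883
cafile /mosquitto/certs/ca.crt
certfile /mosquitto/certs/tls.crt
keyfile /mosquitto/certs/tls.key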

Websocket connection option!!

Team,

I could not find an option in the mosquitto config of the Helm chart for enabling a WebSocket connection to the MQTT server.
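For reference, mosquitto itself only needs two standard mosquitto.conf directives to enable a WebSockets listener; the open question is how to pass them through the chart's config:

listener 9001
protocol websockets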

Advertisement: External-Service Operator

Hi,

While searching through similar Helm charts I found yours. Maybe my Helm chart is also interesting for you: https://github.com/CrowdfoxGmbH/cfcharts/tree/master/charts/external-service-operator

It deploys an operator that not only creates the Endpoints and Service resources like you do, but is also capable of health-checking them.

Please take a look. If you need help or support, create an Issue at our charts repository or directly here: https://github.com/CrowdfoxGmbH/external-service-operator

Greetings from Germany,
Alwin

Snipeit: Readiness/Liveness probe failed: Get http://10.33.135.142:80/login: dial tcp 10.33.135.142:80: connect: connection refused

Hi Team,

I cloned the latest version of the snipeit chart and modified values.yaml as below:

image:
  repository: snipe/snipe-it
  tag: v4.9.4
  pullPolicy: IfNotPresent

service:
  type: NodePort
  http:
    port: 80
  https:
    port: 443

config:
  snipeit:
    env: production
    debug: true
    url: https://myurl.local
    key: "base64:MU48gIU3bWN7z0Z1U/SXYUIiZSJxp3MuHxNPPS4haW4="
    timezone: Etc/GMT
    locale: en_GB

mysql:
  enabled: true
  mysqlUser: snipeit
  mysqlPassword: "snipeit"
  mysqlDatabase: db-snipeit

  persistence:
    enabled: false
persistence:
  enabled: false
mysql-backup:
  enabled: false

ingress:
  enabled: true
  annotations:
    <Own annotations>
  path: /
  hosts:
    - myurl.local
  tls:
    - secretName: snipeit-tls-secret
      hosts:
        - myurl.local

But after running helm install, the pod doesn't come up and gives the errors below:

25m Warning Unhealthy pod/snipeit-gb4-65486dfd78-25zk9 Readiness probe failed: Get http://10.33.135.142:80/login: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
25m Warning Unhealthy pod/snipeit-gb4-65486dfd78-25zk9 Readiness probe failed: Get http://10.33.135.142:80/login: dial tcp 10.33.135.142:80: connect: connection refused
25m Warning Unhealthy pod/snipeit-gb4-65486dfd78-25zk9 Liveness probe failed: Get http://10.33.135.142:80/login: dial tcp 10.33.135.142:80: connect: connection refused

I also got the following error for the pod:

Module ssl disabled.
To activate the new configuration, you need to run:
service apache2 restart
2020-09-01 22:11:56,294 CRIT Supervisor running as root (no user in config file)
2020-09-01 22:11:56,297 INFO supervisord started with pid 1
2020-09-01 22:11:57,300 INFO spawned: 'exit_on_any_fatal' with pid 21
2020-09-01 22:11:57,302 INFO spawned: 'apache' with pid 22
2020-09-01 22:11:57,388 INFO spawned: 'run_schedule' with pid 23
2020-09-01 22:11:58,488 INFO success: exit_on_any_fatal entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-09-01 22:11:58,488 INFO success: apache entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-09-01 22:11:58,488 INFO success: run_schedule entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.33.135.18. Set the 'ServerName' directive globally to suppress this message
No scheduled commands are ready to run.

I'm not able to understand why supervisor is killing the process.

Also, I'd like to know: is config.snipeit.url the same as ingress.hosts? Please help!

Upgrading snipeit fails with "Error: failed to parse values.yaml"

I successfully installed this some months ago with much the same YAML. I am now trying to make some changes but am receiving the following errors:

Error: failed to parse values.yaml: error converting YAML to JSON: yaml: line 85: did not find expected ',' or '}'
Error: failed to parse values.yaml: error converting YAML to JSON: yaml: line 93: did not find expected node content

I believe this is being caused by the ingress definition in values.yaml, but it was just copied and pasted from the default values.yaml with a few tweaks:

ingress:
  enabled: true
  annotations: {
    acme.cert-manager.io/http01-edit-in-place: "true"   # line 85
    cert-manager.io/cluster-issuer: "letsencrypt-production"
  }
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /*
  hosts:
    - assets.masterst.art
  tls: [                                                            #line 93
    - secretName: tls-snipeit
      hosts:
        - assets.masterst.art
  ]

Any help is appreciated.
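Not part of the original issue, but the errors match the mixed flow/block style: inside { } and [ ], YAML expects comma-separated flow syntax, so the block-style entries fail to parse. A block-style rewrite that parses cleanly:

ingress:
  enabled: true
  annotations:
    acme.cert-manager.io/http01-edit-in-place: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
  path: /*
  hosts:
    - assets.masterst.art
  tls:
    - secretName: tls-snipeit
      hosts:
        - assets.masterst.art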

Snipe-IT chart does not work with helm v3.2.1

Hello friends,

I am using the following command:

helm install snipeit t3n/snipeit --set Secret.data.APP_KEY="base64:supersecretkey"

and I get the following result:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Secret.data.APP_KEY

This appears to be related to the chart not being ready for v3.2.x. I'm new to Helm, so I haven't had success trying to fix this myself yet.

Any suggestions on how I can troubleshoot this?

tarek : )

Cannot set up Snipe-IT - 500 SERVER ERROR

We have installed Snipe-IT on our K8s cluster (1.26.6). We have activated the internal MySQL for this.

At the beginning, MySQL states that a login with root is not possible (because of the password). After a short time, however, this status recovers. I doubt, though, that this is how it should be.

When everything works, you call the URL and click Next: Create Database Tables; it takes a very long time, the snipeit pod is restarted (which is why the connection is briefly interrupted), and then you get a 500 | SERVER ERROR.

What are we doing wrong? Our values.yaml looks like this:

config:
  snipeit:
    url: https://asset.example.com
    key: "base64:xxxx"

mysql:
  enabled: true

ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  hosts:
    - asset.example.com
  tls:
    - secretName: snipeit-internal-tls
      hosts:
        - asset.example.com

And this is what the web UI looks like at the beginning:

(screenshot: snipe-it-issue)

If we set image.tag to the current version v6.1.2, all options within the web UI are green, but after a certain time we get an Error 500 again.

Permission denied for laravel.log for snipeit

We are trying to install Snipe-IT with Helm on our k8s cluster, but we are getting the following error:
The stream or file "/var/www/html/storage/logs/laravel.log" could not be opened in append mode: failed to open stream: Permission denied

Our values.yaml :

replicaCount: 1

config:
  snipeit:
    env: production
    url: example.at
    timezone: Europe/Vienna
    locale: en
    envConfig:
      MAIL_HOST: mail.example.at
      MAIL_PORT: 587
      MAIL_USERNAME: [email protected]
      MAIL_FROM_ADDR: [email protected]
      MAIL_FROM_NAME: Snipe-IT

mysql:
  enabled: true
  mysqlUser: example
  mysqlDatabase: example
  persistence:
    enabled: true
    storageClass: "ceph-filesystem"
    accessMode: ReadWriteOnce
    size: 2Gi

persistence:
  enabled: true
  storageClass: "ceph-filesystem"
  accessMode: ReadWriteOnce
  size: 2Gi
  annotations: { "helm.sh/resource-policy": keep }

revisionHistoryLimit: 4

service:
  type: ClusterIP

ingress:
  enabled: false

Our Chart.yaml

...
dependencies:
  - name: snipeit
    version: 3.4.0
    repository: https://storage.googleapis.com/t3n-helm-charts

One workaround is to run chmod 777 storage/logs/laravel.log on a fresh install, but after that it is almost impossible to do.
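A common Kubernetes pattern for this class of permission problem is setting an fsGroup in the pod securityContext, assuming the chart exposes one (the key name and gid below are assumptions, not confirmed chart options):

podSecurityContext:
  fsGroup: 33   # gid of www-data in Debian-based images; adjust to the snipe-it image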

Migration failed: { SequelizeConnectionError: password authentication failed for user "postgres"

root@tho-kub04:~# kubectl logs unms-746fdc98d9-zmwtn -n unms
Running docker-entrypoint yarn start
Running UNMS container as root
Creating service user app (uid=1000)
Creating directories and setting permissions
creating /home/app/unms/supportinfo
setting permissions on /home/app/unms/supportinfo
creating /home/app/unms/data/cert
setting permissions on /home/app/unms/data/cert
creating /home/app/unms/data/images
setting permissions on /home/app/unms/data/images
creating /home/app/unms/data/firmwares
setting permissions on /home/app/unms/data/firmwares
creating /home/app/unms/data/logs
setting permissions on /home/app/unms/data/logs
creating /home/app/unms/data/config-backups
setting permissions on /home/app/unms/data/config-backups
creating /home/app/unms/data/unms-backups
setting permissions on /home/app/unms/data/unms-backups
creating /home/app/unms/data/import
setting permissions on /home/app/unms/data/import
creating /home/app/unms/data/update
setting permissions on /home/app/unms/data/update
Linking /home/app/unms/public/site-images -> /home/app/unms/data/images
Linking /home/app/unms/public/firmwares -> /home/app/unms/data/firmwares
Stepping down from root: su-exec "/usr/local/bin/docker-entrypoint.sh" "yarn start"
Running docker-entrypoint yarn start
Waiting for database containers
psql: fe_sendauth: no password supplied
Background append only file rewriting started
Exec yarn start
yarn run v1.9.4
warning Skipping preferred cache folder "/home/app/.cache/yarn" because it is not writable.
warning Selected the next writable cache folder in the list, will be "/tmp/.yarn-cache-1000".
$ npm run backup:apply && npm run migrate && node --max_old_space_size=2048 index.js
warning Cannot find a suitable global folder. Tried these: "/usr/local, /home/app/.yarn"

> [email protected] backup:apply /home/app/unms
> node ./cli/apply-backup

UNMS BACKUP start
There is no UNMS backup found
UNMS BACKUP finished

┌───────────────────────────────────────────────────────┐
│                npm update check failed                │
│          Try running with sudo or get access          │
│         to the local update config store via          │
│ sudo chown -R $USER:$(id -gn $USER) /home/app/.config │
└───────────────────────────────────────────────────────┘

> [email protected] migrate /home/app/unms
> node ./cli/migrate.js

Migration failed: { SequelizeConnectionError: password authentication failed for user "postgres"
    at /home/app/unms/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:110:20
    at Connection.<anonymous> (/home/app/unms/node_modules/pg/lib/client.js:185:5)
    at Connection.emit (events.js:182:13)
    at Socket.<anonymous> (/home/app/unms/node_modules/pg/lib/connection.js:121:12)
    at Socket.emit (events.js:182:13)
    at addChunk (_stream_readable.js:283:12)
    at readableAddChunk (_stream_readable.js:264:11)
    at Socket.Readable.push (_stream_readable.js:219:10)
    at TCP.onread (net.js:639:20)
  name: 'SequelizeConnectionError',
  message: 'password authentication failed for user "postgres"',
  parent:
   { error: password authentication failed for user "postgres"
       at Connection.parseE (/home/app/unms/node_modules/pg/lib/connection.js:554:11)
       at Connection.parseMessage (/home/app/unms/node_modules/pg/lib/connection.js:381:17)
       at Socket.<anonymous> (/home/app/unms/node_modules/pg/lib/connection.js:117:22)
       at Socket.emit (events.js:182:13)
       at addChunk (_stream_readable.js:283:12)
       at readableAddChunk (_stream_readable.js:264:11)
       at Socket.Readable.push (_stream_readable.js:219:10)
       at TCP.onread (net.js:639:20)
     name: 'error',
     length: 104,
     severity: 'FATAL',
     code: '28P01',
     detail: undefined,
     hint: undefined,
     position: undefined,
     internalPosition: undefined,
     internalQuery: undefined,
     where: undefined,
     schema: undefined,
     table: undefined,
     column: undefined,
     dataType: undefined,
     constraint: undefined,
     file: 'auth.c',
     line: '337',
     routine: 'auth_failed' },
  original:
   { error: password authentication failed for user "postgres"
       at Connection.parseE (/home/app/unms/node_modules/pg/lib/connection.js:554:11)
       at Connection.parseMessage (/home/app/unms/node_modules/pg/lib/connection.js:381:17)
       at Socket.<anonymous> (/home/app/unms/node_modules/pg/lib/connection.js:117:22)
       at Socket.emit (events.js:182:13)
       at addChunk (_stream_readable.js:283:12)
       at readableAddChunk (_stream_readable.js:264:11)
       at Socket.Readable.push (_stream_readable.js:219:10)
       at TCP.onread (net.js:639:20)
     name: 'error',
     length: 104,
     severity: 'FATAL',
     code: '28P01',
     detail: undefined,
     hint: undefined,
     position: undefined,
     internalPosition: undefined,
     internalQuery: undefined,
     where: undefined,
     schema: undefined,
     table: undefined,
     column: undefined,
     dataType: undefined,
     constraint: undefined,
     file: 'auth.c',
     line: '337',
     routine: 'auth_failed' } }
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] migrate: `node ./cli/migrate.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] migrate script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I'm not exactly sure what I'm missing here. I haven't made any modifications to your values.yaml other than a few changes to the PVCs.

root@tho-kub04:~# kubectl get pods -n unms
NAME                          READY   STATUS    RESTARTS   AGE
unms-746fdc98d9-zmwtn         0/1     Error     5          3m52s
unms-nginx-7666565dc9-dg9wx   1/1     Running   0          14m
unms-postgresql-0             1/1     Running   0          3m39s
unms-rabbitmq-ha-0            1/1     Running   0          67m
unms-rabbitmq-ha-1            1/1     Running   0          68m
unms-rabbitmq-ha-2            1/1     Running   0          67m
unms-redis-master-0           1/1     Running   0          68m

values.yaml:

replicaCount: 1
revisionHistoryLimit: 0

## PodDisruptionBudget
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
# maxUnavailable: 1

image:
  repository: ubnt/unms
  tag: 0.13.3
  pullPolicy: IfNotPresent

  nginx:
    repository: ubnt/unms-nginx
    tag: 0.13.3
    pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  annotations: {}

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: traefik
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - unms.internal.example.com
  #tls:
  #  - secretName: unms-tls
  #    hosts:
  #      - example.local

# Leave default to use rabbitmq of this chart
rabbitmq-ha:
  host: unms-rabbitmq-ha
  port: "5672"
  rabbitmqUsername: guest
  rabbitmqPassword: guest
  rabbitmqAuth:
    enabled: true
    config: |
      auth_mechanisms.1 = PLAIN
      auth_mechanisms.2 = AMQPLAIN
  persistence:
    enabled: true
    persistence:
      storageClassName: "managed-nfs-storage"
      accessMode: ReadWriteOnce
      size: 5Gi
  prometheus:
    operator:
      enabled: false

# Leave default to use redis of this chart
redis:
  host: unms-redis-master
  port: "6379"
  cluster:
    enabled: false
  usePassword: false
  master:
    persistence:
      enabled: true
      storageClass: "managed-nfs-storage"
      accessMode: ReadWriteOnce
      size: 5Gi


# Leave default to use postgresql of this chart
postgresql:
  host: unms-postgresql
  port: "5432"
  postgresPassword: Password123@
  postgresDatabase: unms
  persistence:
    storageClass: "managed-nfs-storage"
    accessMode: ReadWriteOnce
    size: 5Gi
    resourcePolicy: nil

persistence:
  enabled: true
  annotations: {}
  accessMode: ReadWriteOnce
  existingClaim: ""
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClassName: "managed-nfs-storage"
  size: 5Gi

resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  #requests:
  #  cpu: 100m
  #  memory: 128Mi

mysql-backup:
  enabled: false

Snipeit: Exception "couldn't save customfieldset"

Getting an exception, "couldn't save customfieldset", when using an external database; it works fine when I use the internal MySQL.

This is what I have in the database:

mysql> select * from custom_fields;
+----+-------------+-----------------------------------------------------------------------------------------------------+---------+------------+------------+
| id | name        | format                                                                                              | element | created_at | updated_at |
+----+-------------+-----------------------------------------------------------------------------------------------------+---------+------------+------------+
|  1 | MAC Address | regex:/^[a-fA-F0-9]{2}:[a-fA-F0-9]{2}:[a-fA-F0-9]{2}:[a-fA-F0-9]{2}:[a-fA-F0-9]{2}:[a-fA-F0-9]{2}$/ | text    | NULL       | NULL       |
+----+-------------+-----------------------------------------------------------------------------------------------------+---------+------------+------------+
1 row in set (0.00 sec)

mysql> select * from custom_field_custom_fieldset;
Empty set (0.00 sec)

mysql> select * from custom_fieldsets;
+----+------------------------+
| id | name                   |
+----+------------------------+
|  1 | Asset with MAC Address |
+----+------------------------+
1 row in set (0.00 sec)

mysql> select * from custom_field_custom_fieldset;
Empty set (0.00 sec)

mysql> desc custom_field_custom_fieldset;
+--------------------+------------+------+-----+---------+-------+
| Field              | Type       | Null | Key | Default | Extra |
+--------------------+------------+------+-----+---------+-------+
| custom_field_id    | int(11)    | NO   |     | NULL    |       |
| custom_fieldset_id | int(11)    | NO   |     | NULL    |       |
| order              | int(11)    | NO   |     | NULL    |       |
| required           | tinyint(1) | NO   |     | NULL    |       |
+--------------------+------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

[2021-02-10 16:35:25] production.ERROR: couldn't save customfieldset {"exception":"[object] (Exception(code: 0): couldn't save customfieldset at /var/www/html/database/migrations/2015_09_22_003413_migrate_mac_address.php:20)#75 /var/www/html/app/Http/Middleware/ReferrerPolicyHeader.php(17): Illuminate\Routing\Pipeline->Illuminate\Routing\{closure}(Object(Illuminate\Http\Request))
#76 /var/www/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(149): App\Http\Middleware\ReferrerPolicyHeader->handle(Object(Illuminate\Http\Request), Object(Closure))


Please help!

[mosquitto] ExternalTrafficPolicy can only be set on NodePort and LoadBalancer service

Hello,

PR #154 introduced the following bug in K8s 1.21:

Service is invalid: spec.externalTrafficPolicy: Invalid value: "Cluster": ExternalTrafficPolicy can only be set on NodePort and LoadBalancer service

The parameter externalTrafficPolicy cannot be used if type: ClusterIP.

A quick fix could be:

{{- if ne .Values.service.type "ClusterIP" }}
  externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}

What do you think of it?
