mattermost / mattermost-helm

Mattermost Helm charts for Kubernetes

License: Apache License 2.0

Makefile 4.68% Smarty 74.19% Shell 6.05% Dockerfile 0.90% Mustache 14.18%
mattermost mattermost-helm chart helm

mattermost-helm's Introduction

Mattermost Helm Charts

This repository collects a set of Helm charts curated by Mattermost.

See the individual chart directories for installation instructions for each chart.

Usage

Helm must be installed and initialized to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

$ helm repo add mattermost https://helm.mattermost.com
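
Once the repo is added, a chart can be installed in the usual way. A minimal sketch, assuming Helm 3 (the release name and values file here are illustrative):

$ helm repo update
$ helm install mattermost mattermost/mattermost-team-edition -f values.yaml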

Contributing

We welcome contributions. Please refer to our contribution guidelines for details.

Local Development

Requirements

  1. Install GNU make.
  2. Install Docker.
  3. Install Kind.

Verify Changes

To verify changes and run the linter, execute the command below:

make lint

Testing

To run the chart tests locally, execute the command below:

make test

mattermost-helm's People

Contributors

agusl88, angeloskyratzakos, astraldawn, chapa, coreyhulen, cpanato, crspeller, d-wierdsma, dbpolito, erezo9, flipenergy, gabrieljackson, grundleborg, jasonblais, jonathanwiemers, jseiser, jwilander, khos2ow, mattermod, mjnagel, mustdiechik, patatman, phoinixgrr, ricosega, spirosoik, stafot, streamer45, stylianosrigas, szymongib, unified-ci-app[bot]


mattermost-helm's Issues

Chart not working with Kubernetes 1.16

I am trying to deploy mattermost-team-edition, but it is not working due to the old MySQL chart version:

helm install --name mattermost -f values.yaml mattermost/mattermost-team-edition --debug --dry-run
[debug] Created tunnel using local port: '49544'

[debug] SERVER: "127.0.0.1:49544"

[debug] Original chart version: ""
[debug] Fetched mattermost/mattermost-team-edition to /Users/shahbour/.helm/cache/archive/mattermost-team-edition-3.7.0.tgz

[debug] CHART PATH: /Users/shahbour/.helm/cache/archive/mattermost-team-edition-3.7.0.tgz

Error: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

If I disable MySQL, it does work.

MySQL has been fixed upstream, so I think we should point to a higher chart version.
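
As a workaround until the chart is updated, the bundled MySQL subchart can be disabled and an external database used instead. A sketch, assuming the externalDB values documented in this chart's values.yaml (connection string placeholders left as in the chart comments):

helm install --name mattermost mattermost/mattermost-team-edition \
  --set mysql.enabled=false \
  --set externalDB.enabled=true \
  --set externalDB.externalDriverType=mysql \
  --set externalDB.externalConnectionString='<USERNAME>:<PASSWORD>@tcp(<HOST>:3306)/<DATABASE_NAME>?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s'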

Crash Loop Back-off on OpenShift 3.11

Hi,

after I install Mattermost, the MySQL pod crash-loops constantly.

The only and last message in the logs of the MySQL container is:

Initializing database

No securityContext set

The deployment does not set the securityContext. Can the securityContext be added to the deployment(s)?
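
For illustration, a minimal sketch of what such a stanza could look like on the pod spec; the IDs below are placeholders, not values the chart defines:

```yaml
# Hypothetical pod-level securityContext for the Mattermost deployment.
securityContext:
  runAsUser: 2000    # placeholder UID
  runAsGroup: 2000   # placeholder GID
  fsGroup: 2000      # placeholder; makes mounted volumes group-writable for the pod
```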

configJSON is not working as per readme

Setting the SiteURL via configJSON as described in the README doesn't work:

configJSON:
  ServiceSettings:
    SiteURL: "https://mattermost.example.com"

The workaround is to set it via extraEnvVars with MM_SERVICESETTINGS_SITEURL.
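
For reference, the workaround expressed in values.yaml form (the env var name is taken from the report above):

```yaml
extraEnvVars:
  - name: MM_SERVICESETTINGS_SITEURL
    value: "https://mattermost.example.com"
```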

[MM-TE] Conflict between values.yaml and dbsecret (MM_CONFIG) when using postgres

The comment in values.yaml says to add postgres:// to externalConnectionString when using postgres.
https://github.com/mattermost/mattermost-helm/blob/master/charts/mattermost-team-edition/values.yaml#L93

However, this is ignored in the dbsecret settings.
https://github.com/mattermost/mattermost-helm/blob/master/charts/mattermost-team-edition/templates/secret-mattermost-dbsecret.yaml#L15

As a result, dbsecret (MM_CONFIG) is incorrect.
postgres://postgres://username:password...

I think this is the cause of the problem described in this comment:
#231 (comment)
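
Until the template is fixed, one workaround implied by the above is to leave the scheme out of the value, since the dbsecret template prepends postgres:// itself. A sketch, not verified against every chart version:

```yaml
externalDB:
  enabled: true
  externalDriverType: "postgres"
  ## Scheme omitted on purpose: secret-mattermost-dbsecret.yaml adds "postgres://" itself.
  externalConnectionString: "<USERNAME>:<PASSWORD>@<HOST>:5432/<DATABASE_NAME>?sslmode=disable&connect_timeout=10"
```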

[mattermost-team-edition] read only file system on start up

While installing the mattermost-team-edition Helm chart on my GKE cluster for the first time, I got a read-only file system error without modifying anything.

Error message:

{"level":"error","ts":1563295457.9649801,"caller":"app/config.go:54","msg":"Failed to update config","error":"failed to persist: failed to write file: open /mattermost/config/config.json: read-only file system","errorVerbose":"open /mattermost/config/config.json: read-only file system

My values.yaml file:

## Image tag
image:
  tag: 5.12.1

## Persistence for mattermost
persistence:
  data:
    enabled: true
    size: 20Gi
    storageClass: standard
    accessMode: ReadWriteOnce

## Ingress params to access the cluster from outside
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: XXX
  hosts:
    - XXX
  tls:
    - secretName: XXX-tls
      hosts:
        - XXX

## MySQL params
mysql:
  enabled: true
  mysqlRootPassword: "Password"
  mysqlUser: "USer"
  mysqlPassword: "Password"
  mysqlDatabase: db_name

  repository: mysql
  tag: 5.7
  imagePullPolicy: IfNotPresent

  persistence:
    enabled: true
    storageClass: standard
    accessMode: ReadWriteOnce
    size: 10Gi

## Config file for mattermost
configJSON: {
  "ServiceSettings": {
    "SiteURL": "XXXX",
    "GoogleDeveloperKey": "XXXXX",
    "EnablePostUsernameOverride": true,
    "EnablePostIconOverride": true,
    "EnableLinkPreviews": true
  },
  "TeamSettings": {
    "SiteName": "XXX",
    "EnableCustomBrand": true
  },
  "EmailSettings": {
    "SendEmailNotifications": true,
    "RequireEmailVerification": true,
    "SMTPUsername": "XXX",
    "SMTPPassword": "XXX",
    "EnableSMTPAuth": true,
    "SMTPServer": "XXX",
    "SMTPPort": "587",
    "ConnectionSecurity": "STARTTLS",
    "EnableEmailBatching": true
  },
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true
  }
}

where to put http-snippet?

Looking at the values.yaml, it's not entirely clear where we should put this code:

        http-snippet: |
          proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

I tried to look around, but it seems pretty unique. Should this go under annotations or in the configuration-snippet block? Why is it not prefixed with nginx.ingress.kubernetes.io/?
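
Per the note in the chart's values.yaml, the http-snippet does not go into this chart at all; it belongs in the ingress-nginx controller's own ConfigMap. A sketch, assuming a standard ingress-nginx install (the ConfigMap name and namespace depend on how the controller was deployed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # varies by installation
  namespace: ingress-nginx         # varies by installation
data:
  http-snippet: |
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
```

The proxy_cache directives themselves stay in the chart's ingress annotations under nginx.ingress.kubernetes.io/configuration-snippet, as shown in the commented values.yaml defaults.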

[focalboard] cannot parse 'port' as int

Hi,

@stafot

Today I tried to install Focalboard using the Helm chart.

I'll report some issues.

  1. pvc.yaml has a problem. After I edited it as below, it works (please refer to other tools' charts): remove the storageClass syntax from metadata and declare it in spec instead.

...
spec:
  {{- if .Values.persistence.storageClass }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
  {{- end }}

  2. The v0.7.0 Docker image does not exist, so I used v0.6.7.

  3. After installing, it does not work:

$ k -n focalboard-stage logs focalboard-65897d7798-tlpqm 
2021/06/21 12:03:03 Focalboard Server
2021/06/21 12:03:03 Version: 0.6.7
2021/06/21 12:03:03 Edition: linux
2021/06/21 12:03:03 Build Number: dev
2021/06/21 12:03:03 Build Date: Mon May 24 17:25:23 UTC 2021
2021/06/21 12:03:03 Build Hash: f1b8d88d6badeffadb259653bc4f9560710e79b6
2021/06/21 12:03:03 Unable to read the config file: 1 error(s) decoding:

* cannot parse 'port' as int: strconv.ParseInt: parsing "tcp://10.96.31.43:80": invalid syntax

  4. When I install PostgreSQL following https://www.focalboard.com/download/personal-edition/ubuntu/#install-postgresql-recommended, how do I add/edit /opt/focalboard/config.json? I think we need a ConfigMap for this.

Multiple replicas

Some folders (plugins, for example) are not defined as volumes.

The chart does not work with multiple replicas.

Cannot make changes in System Console due to read only file system

Hi all!
I deployed Mattermost on my GKE cluster with PVCs enabled (RWO access mode) via the Helm chart, using the following values.yaml:

# Default values for mattermost-team-edition.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  repository: mattermost/mattermost-team-edition
  tag: 5.11.0
  imagePullPolicy: IfNotPresent

initContainerImage:
  repository: appropriate/curl
  tag: latest
  imagePullPolicy: IfNotPresent

## How many old ReplicaSets for Mattermost Deployment you want to retain
revisionHistoryLimit: 1

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
## ref: https://docs.gitlab.com/ee/install/requirements.html#storage
##
persistence:
  ## This volume persists generated data from users, like images, attachments...
  ##
  data:
    enabled: true
    size: 10Gi
    ## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
    ## Default: volume.alpha.kubernetes.io/storage-class: default
    ##
    # storageClass:
    accessMode: ReadWriteOnce
  # existingClaim: ""

service:
  type: ClusterIP
  externalPort: 8065
  internalPort: 8065

ingress:
  enabled: false
  path: /
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # certmanager.k8s.io/issuer:  your-issuer
    # nginx.ingress.kubernetes.io/proxy-body-size: 50m
    # nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    # nginx.ingress.kubernetes.io/proxy-buffering: "on"
    # nginx.ingress.kubernetes.io/configuration-snippet: |
    #   proxy_cache mattermost_cache;
    #   proxy_cache_revalidate on;
    #   proxy_cache_min_uses 2;
    #   proxy_cache_use_stale timeout;
    #   proxy_cache_lock on;
    #### To use the nginx cache you will need to set an http-snippet in the ingress-nginx configmap
    #### http-snippet: |
    ####     proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
  hosts:
    - mattermost.example.com
  tls:
    # - secretName: mattermost.example.com-tls
    #   hosts:
    #     - mattermost.example.com


## If use this please disable the mysql chart by setting mysql.enable to false
externalDB:
  enabled: false

  ## postgres or mysql
  externalDriverType: ""

  ## postgres:  "postgres://<USERNAME>:<PASSWORD>@<HOST>:5432/<DATABASE_NAME>?sslmode=disable&connect_timeout=10"
  ## mysql:     "<USERNAME>:<PASSWORD>@tcp(<HOST>:3306)/<DATABASE_NAME>?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
  externalConnectionString: ""

mysql:
  enabled: true
  mysqlRootPassword: "MyPassword"
  mysqlUser: "MyUser"
  mysqlPassword: "MyPassword"
  mysqlDatabase: mattermost

  repository: mysql
  tag: 5.7
  imagePullPolicy: IfNotPresent

  persistence:
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""
    accessMode: ReadWriteOnce
    size: 10Gi
  # existingClaim: ""

## Additional pod annotations
extraPodAnnotations: {}

## Additional env vars
extraEnvVars: []
  # This is an example of extra env vars when using with the deployment with GitLab Helm Charts
  # - name: POSTGRES_PASSWORD_GITLAB
  #   valueFrom:
  #     secretKeyRef:
  #       # NOTE: Needs to be manually created
  #       # kubectl create secret generic gitlab-postgresql-password --namespace <NAMESPACE> --from-literal postgres-password=<PASSWORD>
  #       name: gitlab-postgresql-password
  #       key: postgres-password
  # - name: POSTGRES_USER_GITLAB
  #   value: gitlab
  # - name: POSTGRES_HOST_GITLAB
  #   value: gitlab-postgresql
  # - name: POSTGRES_PORT_GITLAB
  #   value: "5432"
  # - name: POSTGRES_DB_NAME_MATTERMOST
  #   value: mm5
  # - name: MM_SQLSETTINGS_DRIVERNAME
  #   value: "postgres"
  # - name: MM_SQLSETTINGS_DATASOURCE
  #   value: postgres://$(POSTGRES_USER_GITLAB):$(POSTGRES_PASSWORD_GITLAB)@$(POSTGRES_HOST_GITLAB):$(POSTGRES_PORT_GITLAB)/$(POSTGRES_DB_NAME_MATTERMOST)?sslmode=disable&connect_timeout=10

## Additional init containers
extraInitContainers: []
  # This is an example of extra Init Container when using with the deployment with GitLab Helm Charts
  # - name: bootstrap-database
  #   image: "postgres:9.6-alpine"
  #   imagePullPolicy: IfNotPresent
  #   env:
  #     - name: POSTGRES_PASSWORD_GITLAB
  #       valueFrom:
  #         secretKeyRef:
  #           name: gitlab-postgresql-password
  #           key: postgres-password
  #     - name: POSTGRES_USER_GITLAB
  #       value: gitlab
  #     - name: POSTGRES_HOST_GITLAB
  #       value: gitlab-postgresql
  #     - name: POSTGRES_PORT_GITLAB
  #       value: "5432"
  #     - name: POSTGRES_DB_NAME_MATTERMOST
  #       value: mm5
  #   command:
  #     - sh
  #     - "-c"
  #     - |
  #       if PGPASSWORD=$POSTGRES_PASSWORD_GITLAB psql -h $POSTGRES_HOST_GITLAB -p $POSTGRES_PORT_GITLAB -U $POSTGRES_USER_GITLAB -lqt | cut -d \| -f 1 | grep -qw $POSTGRES_DB_NAME_MATTERMOST; then
  #       echo "database already exist, exiting initContainer"
  #       exit 0
  #       else
  #       echo "Database does not exist. creating...."
  #       PGPASSWORD=$POSTGRES_PASSWORD_GITLAB createdb -h $POSTGRES_HOST_GITLAB -p $POSTGRES_PORT_GITLAB -U $POSTGRES_USER_GITLAB $POSTGRES_DB_NAME_MATTERMOST
  #       echo "Done"
  #       fi

# NOTE: These acts as the default values for the config.json file read by the
# mattermost server itself. You can override the configJSON object just like any
# Helm template value. Since it is an object, the object you provide will merge
# with these defaults. Also note that this is YAML, so you can choose to use
# either JSON or YAML as JSON is a subset of YAML. No matter what you choose,
# the config.json file that will be generated will be correctly JSON formatted.
configJSON: {
  "ServiceSettings": {
    "SiteURL": "",
    "LicenseFileLocation": "",
    "ListenAddress": ":8065",
    "ConnectionSecurity": "",
    "TLSCertFile": "",
    "TLSKeyFile": "",
    "UseLetsEncrypt": false,
    "LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
    "Forward80To443": false,
    "ReadTimeout": 300,
    "WriteTimeout": 300,
    "MaximumLoginAttempts": 10,
    "GoroutineHealthThreshold": -1,
    "GoogleDeveloperKey": "",
    "EnableOAuthServiceProvider": false,
    "EnableIncomingWebhooks": true,
    "EnableOutgoingWebhooks": true,
    "EnableCommands": true,
    "EnableOnlyAdminIntegrations": false,
    "EnablePostUsernameOverride": false,
    "EnablePostIconOverride": false,
    "EnableLinkPreviews": false,
    "EnableTesting": false,
    "EnableDeveloper": false,
    "EnableSecurityFixAlert": true,
    "EnableInsecureOutgoingConnections": false,
    "EnableMultifactorAuthentication": false,
    "EnforceMultifactorAuthentication": false,
    "AllowCorsFrom": "",
    "SessionLengthWebInDays": 30,
    "SessionLengthMobileInDays": 30,
    "SessionLengthSSOInDays": 30,
    "SessionCacheInMinutes": 10,
    "WebsocketSecurePort": 443,
    "WebsocketPort": 80,
    "WebserverMode": "gzip",
    "EnableCustomEmoji": false,
    "RestrictCustomEmojiCreation": "all",
    "RestrictPostDelete": "all",
    "AllowEditPost": "always",
    "PostEditTimeLimit": 300,
    "TimeBetweenUserTypingUpdatesMilliseconds": 5000,
    "EnablePostSearch": true,
    "EnableUserTypingMessages": true,
    "EnableUserStatuses": true,
    "ClusterLogTimeoutMilliseconds": 2000
  },
  "TeamSettings": {
    "SiteName": "Mattermost",
    "MaxUsersPerTeam": 50000,
    "EnableTeamCreation": true,
    "EnableUserCreation": true,
    "EnableOpenServer": true,
    "RestrictCreationToDomains": "",
    "EnableCustomBrand": false,
    "CustomBrandText": "",
    "CustomDescriptionText": "",
    "RestrictDirectMessage": "any",
    "RestrictTeamInvite": "all",
    "RestrictPublicChannelManagement": "all",
    "RestrictPrivateChannelManagement": "all",
    "RestrictPublicChannelCreation": "all",
    "RestrictPrivateChannelCreation": "all",
    "RestrictPublicChannelDeletion": "all",
    "RestrictPrivateChannelDeletion": "all",
    "RestrictPrivateChannelManageMembers": "all",
    "UserStatusAwayTimeout": 300,
    "MaxChannelsPerTeam": 50000,
    "MaxNotificationsPerChannel": 1000
  },
  "SqlSettings": {
    "DriverName": "",
    "DataSource": "",
    "DataSourceReplicas": [],
    "DataSourceSearchReplicas": [],
    "MaxIdleConns": 20,
    "MaxOpenConns": 35,
    "Trace": false,
    "AtRestEncryptKey": "",
    "QueryTimeout": 30
  },
  "LogSettings": {
    "EnableConsole": true,
    "ConsoleLevel": "INFO",
    "EnableFile": true,
    "FileLevel": "INFO",
    "FileFormat": "",
    "FileLocation": "",
    "EnableWebhookDebugging": true,
    "EnableDiagnostics": true
  },
  "PasswordSettings": {
    "MinimumLength": 5,
    "Lowercase": false,
    "Number": false,
    "Uppercase": false,
    "Symbol": false
  },
  "FileSettings": {
    "EnableFileAttachments": true,
    "MaxFileSize": 52428800,
    "DriverName": "local",
    "Directory": "./data/",
    "EnablePublicLink": false,
    "PublicLinkSalt": "",
    "ThumbnailWidth": 120,
    "ThumbnailHeight": 100,
    "PreviewWidth": 1024,
    "PreviewHeight": 0,
    "ProfileWidth": 128,
    "ProfileHeight": 128,
    "InitialFont": "luximbi.ttf",
    "AmazonS3AccessKeyId": "",
    "AmazonS3SecretAccessKey": "",
    "AmazonS3Bucket": "",
    "AmazonS3Region": "",
    "AmazonS3Endpoint": "s3.amazonaws.com",
    "AmazonS3SSL": false,
    "AmazonS3SignV2": false
  },
  "EmailSettings": {
    "EnableSignUpWithEmail": true,
    "EnableSignInWithEmail": true,
    "EnableSignInWithUsername": true,
    "SendEmailNotifications": false,
    "RequireEmailVerification": false,
    "FeedbackName": "",
    "FeedbackEmail": "",
    "FeedbackOrganization": "",
    "SMTPUsername": "",
    "SMTPPassword": "",
    "EnableSMTPAuth": "",
    "SMTPServer": "",
    "SMTPPort": "",
    "ConnectionSecurity": "",
    "InviteSalt": "",
    "SendPushNotifications": true,
    "PushNotificationServer": "https://push-test.mattermost.com",
    "PushNotificationContents": "generic",
    "EnableEmailBatching": false,
    "EmailBatchingBufferSize": 256,
    "EmailBatchingInterval": 30,
    "SkipServerCertificateVerification": false
  },
  "RateLimitSettings": {
    "Enable": false,
    "PerSec": 10,
    "MaxBurst": 100,
    "MemoryStoreSize": 10000,
    "VaryByRemoteAddr": true,
    "VaryByHeader": ""
  },
  "PrivacySettings": {
    "ShowEmailAddress": true,
    "ShowFullName": true
  },
  "SupportSettings": {
    "TermsOfServiceLink": "https://about.mattermost.com/default-terms/",
    "PrivacyPolicyLink": "https://about.mattermost.com/default-privacy-policy/",
    "AboutLink": "https://about.mattermost.com/default-about/",
    "HelpLink": "https://about.mattermost.com/default-help/",
    "ReportAProblemLink": "https://about.mattermost.com/default-report-a-problem/",
    "SupportEmail": "[email protected]"
  },
  "AnnouncementSettings": {
    "EnableBanner": false,
    "BannerText": "",
    "BannerColor": "#f2a93b",
    "BannerTextColor": "#333333",
    "AllowBannerDismissal": true
  },
  "GitLabSettings": {
    "Enable": false,
    "Secret": "",
    "Id": "",
    "Scope": "",
    "AuthEndpoint": "",
    "TokenEndpoint": "",
    "UserApiEndpoint": ""
  },
  "LocalizationSettings": {
    "DefaultServerLocale": "en",
    "DefaultClientLocale": "en",
    "AvailableLocales": ""
  },
  "NativeAppSettings": {
    "AppDownloadLink": "https://about.mattermost.com/downloads/",
    "AndroidAppDownloadLink": "https://about.mattermost.com/mattermost-android-app/",
    "IosAppDownloadLink": "https://about.mattermost.com/mattermost-ios-app/"
  },
  "AnalyticsSettings": {
    "MaxUsersForStatistics": 2500
  },
  "WebrtcSettings": {
    "Enable": false,
    "GatewayWebsocketUrl": "",
    "GatewayAdminUrl": "",
    "GatewayAdminSecret": "",
    "StunURI": "",
    "TurnURI": "",
    "TurnUsername": "",
    "TurnSharedKey": ""
  },
  "DisplaySettings": {
    "CustomUrlSchemes": [],
    "ExperimentalTimezone": true
  },
  "TimezoneSettings": {
    "SupportedTimezonesPath": "timezones.json"
  },
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true,
    "Directory": "./plugins",
    "ClientDirectory": "./client/plugins",
    "Plugins": {},
    "PluginStates": {}
  }
}

Since the ingress part seems to be broken as well, I wrote my own ingress.yaml, but that can be discussed in another issue.

When I try to make changes in the System Console I get the following errors in the logs:

"level":"error","ts":1559820160.7593083,"caller":"mlog/log.go:172","msg":"An error occurred saving the configuration","path":"/api/v4/config","request_id":"gz4jwiwafp8gxp9duyp5kezory","ip_addr":"X.X.X.X","user_id":"riwxbd8713nazbu44kjeeec6ay","method":"PUT","err_where":"saveConfig","http_code":500,"err_details":"failed to persist: failed to write file: open /mattermost/config/config.json: read-only file system"}

This is pretty odd because the PVC's access mode is RWO. What's the problem here?

Helm Chart certmanager-issuer requirements/testing

This suggestion was provided in this MR: https://gitlab.com/gitlab-org/charts/gitlab/merge_requests/767#63c75b578053b4cfeed7df755c9eafb8110770e8_0_31.

"This will assume that you have installed a certmanager-issuer in the same namespace, but if this is a helm chart standalone from the gitlab helm chart, you probably havn't. So to make things work some additional steps are required.

(Optional but smooth) Setup some defaults for gitlabs certmanager.ingressShim
Install a ClusterIssuer
Add suitable annotation

  1. (Optional) Setup default values for GitLab's certmanager Helm chart dependency. This is how I configured smooth defaults for my certmanager; these are values provided to my GitLab Helm chart:

certmanager:
  ingressShim:
    defaultIssuerName: "letsencrypt-prod"
    defaultIssuerKind: "ClusterIssuer"
    defaultACMEChallengeType: "http01"

  2. Installed cluster-wide issuers manually (prod / staging):

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-acme-key
    http01: {}

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging-acme-key
    http01: {}

  3. Configured your ingresses for various charts with certmanager-compatible annotations.
See: https://docs.cert-manager.io/en/latest/tasks/issuing-certificates/ingress-shim.html#supported-annotations
(SEE: https://github.com/helm/charts/blob/master/stable/mattermost-team-edition/values.yaml#L39)

ingress:
  enabled: true
  annotations:
    ## If you have configured certmanager with default values, this will do
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: gitlab-nginx

    ## Without certmanager being configured with default values, you need to specify what issuer you want to use
    # kubernetes.io/ingress.class: gitlab-nginx
    # certmanager.k8s.io/cluster-issuer: letsencrypt-prod

  hosts:
    - mattermost.example.com
  tls:
    - secretName: mattermost.example.com-tls
      hosts:
        - mattermost.example.com

The current documentation is working as expected, but it would be helpful to have these steps tested against the process to ascertain whether they improve the process substantially and require documentation changes or if following them would have a negative impact on future versions. Any other feedback on the comments/proposed changes would be welcome.

Supported MM Versions

What are the supported versions of Mattermost in this Helm Chart?
The default is 5.13, but this is already outdated.
Can I use the most recent Docker Image of MM?

[MM EE] Deprecated and removed apiVersions

As of right now, the Enterprise Edition chart only uses Ingress apiVersion extensions/v1beta1 or networking.k8s.io/v1beta1:
https://github.com/mattermost/mattermost-helm/blob/master/charts/mattermost-enterprise-edition/templates/_helpers.tpl#L49-#L55

Additionally, inside the mattermost-elasticsearch subchart, DaemonSet apiVersion extensions/v1beta1 is being used, when that was removed back in Kubernetes 1.16:
https://github.com/mattermost/mattermost-helm/blob/master/charts/mattermost-enterprise-edition/charts/mattermost-elasticsearch/templates/ds-master.yaml#L2

We are new users trying to install this on Kubernetes 1.19 and can't install this with elasticsearch enabled until the DaemonSet apiVersion is updated. The Ingress one is not a problem for now but will be for anyone wanting to use Kubernetes 1.22, when networking.k8s.io/v1beta1 will be removed.
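
For reference, the stable replacements (DaemonSet moved to apps/v1, available since Kubernetes 1.9; Ingress moved to networking.k8s.io/v1 as of Kubernetes 1.19):

```yaml
apiVersion: apps/v1
kind: DaemonSet
# ...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
# ...
```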

File Permissions / User ID with enhanced security OpenShift 3.11

Hi,

I experimented a bit and got the Mattermost application to start fine. However, there are no write permissions on the PVC and in the container, which should be fixed: logs don't work and plugin uploads don't work. I'll try to provide details; here is the first set.

$ ls -lisa /mattermost/logs
total 0
805544752 0 drwxr-xr-x. 2 root root  6 Mar 15  2019 .
539336916 0 drwxr-xr-x. 1 root root 20 Mar 15  2019 ..
$ ls -lisa
total 216
539336916   0 drwxr-xr-x. 1 root root     20 Mar 15  2019 .
 23910439   0 drwxr-xr-x. 1 root root     35 Dec 18 00:49 ..
539336917   4 -rw-r--r--. 1 root root   1239 Mar 15  2019 MIT-COMPILED-LICENSE.md
539336918 192 -rw-r--r--. 1 root root 193796 Mar 15  2019 NOTICE.txt
539336919   8 -rw-r--r--. 1 root root   5291 Mar 15  2019 README.md
805544732   0 drwxr-xr-x. 2 root root     40 Mar 15  2019 bin
 23908977   8 drwxr-xr-x. 6 root root   4096 Mar 15  2019 client
805544746   0 drwxr-xr-x. 1 root root     25 Dec 18 00:49 config
    24654   0 drwxrwxrwx. 3 root root     19 Dec 18 00:49 data
287374129   0 drwxr-xr-x. 2 root root     44 Mar 15  2019 fonts
539337173   0 drwxr-xr-x. 2 root root    255 Mar 15  2019 i18n
805544752   0 drwxr-xr-x. 2 root root      6 Mar 15  2019 logs
 23910430   0 drwxr-xr-x. 2 root root     56 Mar 15  2019 prepackaged_plugins
287374132   4 drwxr-xr-x. 2 root root   4096 Mar 15  2019 templates
$ chmod 777 logs
chmod: changing permissions of 'logs': Operation not permitted
$ whoami
whoami: cannot find name for user ID 1000400000
$ touch data/test
{"level":"error","ts":1576632693.6850626,"caller":"web/context.go:52","msg":"Plugins have been disabled. Please check your logs for details.","path":"/api/v4/plugins/statuses","request_id":"4w3x8kkd3td4pgs7zzy54cn5fo","ip_addr":"90.187.22.29","user_id":"3wdg3x5msbfoiffgorncqnp8xy","method":"GET","err_where":"GetPluginStatuses","http_code":501,"err_details":""}
--
  | 2019-12-18 01:31:33.68514844 +0000 UTC m=+2537.717978344 write error: can't open new logfile: open /mattermost/logs/mattermost.log: permission denied
  | {"level":"info","ts":1576632752.072719,"caller":"scheduler/worker.go:78","msg":"Worker: Job is complete","worker":"Plugins","job_id":"yfa3efnb1p8wjj8trqshefkqfc"}
  | 2019-12-18 01:32:32.074378491 +0000 UTC m=+2596.107208426 write error: can't open new logfile: open /mattermost/logs/mattermost.log: permission denied
  | {"level":"error","ts":1576632792.2408912,"caller":"web/context.go:52","msg":"Plugins have been disabled. Please check your logs for details.","path":"/api/v4/plugins","request_id":"33kkg3gpcpyfzxnm4ywajdsq8a","ip_addr":"90.187.22.29","user_id":"3wdg3x5msbfoiffgorncqnp8xy","method":"POST","err_where":"installPlugin","http_code":501,"err_details":""}
  | 2019-12-18 01:33:12.241014569 +0000 UTC m=+2636.273844479 write error: can't open new logfile: open /mattermost/logs/mattermost.log: permission denied
  | {"level":"info","ts":1576632812.084993,"caller":"scheduler/worker.go:78","msg":"Worker: Job is complete","worker":"Plugins","job_id":"k9rxwctbmtnhmxj4eofzz7ud7o"}
  | 2019-12-18 01:33:32.085486769 +0000 UTC m=+2656.118316733 write error: can't open new logfile: open /mattermost/logs/mattermost.log: permission denied
  | {"level":"info","ts":1576632872.088391,"caller":"scheduler/worker.go:78","msg":"Worker: Job is complete","worker":"Plugins","job_id":"8xtzjmjw6tdojkk1aeeo7ftm1w"}

No Documentation on Using S3

There is nothing indicating this chart can be used directly with S3. I've merged in changes to give the service account the ability to handle IAM access, but I'm still not able to find enough documentation to actually configure this to store its data in an S3 bucket.

Our existing MM installation runs on EC2, with the /data folder on S3 and the DB on RDS; we want to migrate to MM on Kubernetes and keep using what we already have.

Issues upgrading to MM-TE 5 chart

Hello, I have followed the instructions when upgrading from the MM-TE 4 chart to 5, but I've gotten stuck:

cp config/config.json /tmp
~ $ ./bin/mattermost config migrate /tmp/config-mm.json "mysql://mmuser:mmuser123@tcp(mm-te-mysql:3306)/mattermost?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"

/tmp/config-mm.json seems to be a generic config file, not the one I am actually using. Is this a typo in the upgrade instructions? Should it not be ./bin/mattermost config migrate /tmp/config.json ...?

I'm now getting (on Mattermost 5.29.0 with the 5.0 chart):

Error: failed to load configuration: failed to initialize: failed to create Configurations table: dial tcp: lookup postgres on 10.152.183.10:53: no such host

Notably, my database server's DNS name is not postgres.

Any advice on how to proceed would be much appreciated! 🙂

EDIT: Just to clarify, I also use postgres, like @xtermi2.

[mattermost-team-edition] Allow extra pod annotations

Hi,

My use case is to back up Mattermost with Velero/Restic. This works if the pod has an annotation like this one:

backup.velero.io/backup-volumes: "mattermost-data"

The chart doesn't yet allow adding such an annotation.

I propose to add a new parameter in values.yaml:

## Additional pod annotations
extraPodAnnotations: {}
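
With such a parameter in place, the Velero annotation from above would be supplied like this:

```yaml
extraPodAnnotations:
  backup.velero.io/backup-volumes: "mattermost-data"
```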

Passwords not stored in secrets

Secrets should be storable in existing Kubernetes secrets so that they don't have to be passed directly to helm. This is possible for certificates, but not for:

  • External DB password
  • SMTP user password
  • Licence file

This is important when using tools like helmfile, because the configuration values are then checked into source control, so there must be an alternative way of handling secret values. Referencing existing secrets is a flexible way to do this.
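
As a stop-gap, some secret values can already be pulled from pre-created Kubernetes secrets via the chart's extraEnvVars, reusing the secretKeyRef pattern shown in the default values.yaml. A sketch with hypothetical secret and key names:

```yaml
extraEnvVars:
  - name: MM_SQLSETTINGS_DATASOURCE
    valueFrom:
      secretKeyRef:
        name: mattermost-db        # hypothetical pre-created secret
        key: connection-string     # hypothetical key
```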

how can we use plugins?

Since System Console settings are read-only, would you please provide an example of enabling plugins (e.g. Jira)?
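
Assuming the MM_<SECTION>_<SETTING> environment-variable convention used elsewhere in these issues (e.g. MM_SERVICESETTINGS_SITEURL), plugin support could be enabled through extraEnvVars. A sketch:

```yaml
extraEnvVars:
  - name: MM_PLUGINSETTINGS_ENABLE
    value: "true"
  - name: MM_PLUGINSETTINGS_ENABLEUPLOADS
    value: "true"
```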

Profile uploads will fail due to permissions issue on /mattermost/data

Hello,

The PVC is mounted with root permissions rather than those of the mattermost user. As a consequence, profile picture uploads will fail, as the server can't create /mattermost/data/users.

kubectl exec -ti zooming-saola-mattermost-team-edition-7f6c9ff6dd-sw99s -- ls -al /mattermost/ |grep data
drwxr-xr-x    3 root     root          4096 Sep 19 20:11 data

How can I fix this permission issue? I can't chown the files, and sudo is not present either.

Can't start pod

Using NFS for file storage. It used to work; now it doesn't. What user and group does the NFS share need to be set to?

Error: failed to load configuration: failed to create store: unable to load on store creation: failed to persist: failed to write file: open /mattermost/config/config.json: permission denied

Unable to update to newest version: Problem with file storage settings

After an update from v5.13.2 to v5.17.1 (mattermost-team-edition, official Helm chart Kubernetes deployment), our Mattermost instance throws the following error while starting:

{"level":"error","ts":1575923795.6970422,"caller":"app/server_app_adapters.go:134","msg":"Problem with file storage settings: TestFileConnection: api.file.test_connection.local.connection.app_error, WriteFile: api.file.write_file_locally.writing.app_error, open data/testfile: permission denied"}

The instance goes online nevertheless, but all media files in all channels are now missing.
A rollback fixes this problem.

What else do we have to consider to be able to update our instance to the newest available version?

Unable to connect to Mattermost server using mobile App


I migrated Mattermost from a bare-metal server to a Kubernetes cluster. Everything works again, except for the mobile app (neither Android nor iPhone works).

I get the error:
> Enter a valid email or username and/or password

But if I use the same credentials to log in via browser (on mobile or desktop), it works.
I think it boils down to the same issue described here: https://forum.mattermost.org/t/unable-to-connect-to-mattermost-server-using-android-phone/4894

Now I cannot use the tools suggested there, as my setup is inside a private network. I do use TLS encryption, though it is terminated by the nginx ingress, so not by Mattermost itself.

Is the chart with its default configuration tested with mobile apps?

The Helm chart also notes that in order to use the nginx cache I need some snippet; could this be responsible for the login issues?

My Helm chart values look like this:
```yaml
# Default values for mattermost-team-edition.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  repository: mattermost/mattermost-team-edition
  tag: 5.27.0
  imagePullPolicy: IfNotPresent

initContainerImage:
  repository: appropriate/curl
  tag: latest
  imagePullPolicy: IfNotPresent

## How many old ReplicaSets for Mattermost Deployment you want to retain
revisionHistoryLimit: 1

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
## ref: https://docs.gitlab.com/ee/install/requirements.html#storage
##
persistence:
  ## This volume persists generated data from users, like images, attachments...
  ##
  data:
    enabled: true
    size: 10Gi
    ## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
    ## Default: volume.alpha.kubernetes.io/storage-class: default
    ##
    # storageClass:
    accessMode: ReadWriteOnce
  # existingClaim: ""
  plugins:
    enabled: true
    size: 1Gi
    ## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
    ## Default: volume.alpha.kubernetes.io/storage-class: default
    ##
    # storageClass:
    accessMode: ReadWriteOnce
  # existingClaim: ""

service:
  type: ClusterIP
  externalPort: 8065
  internalPort: 8065
  annotations: {}
  # loadBalancerIP:
  loadBalancerSourceRanges: []

ingress:
  enabled: true
  path: /
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx-internal
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
#     nginx.ingress.kubernetes.io/configuration-snippet: |
#       proxy_cache mattermost_cache;
#       proxy_cache_revalidate on;
#       proxy_cache_min_uses 2;
#       proxy_cache_use_stale timeout;
#       proxy_cache_lock on;
    #### To use the nginx cache you will need to set an http-snippet in the ingress-nginx configmap
    #### http-snippet: |
    ####     proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
  hosts:
    - mattermost.internal.example.com
  tls:
    - secretName: mattermost.internal.example.com-tls
      hosts:
       - mattermost.internal.example.com

route:
  enabled: false

## If use this please disable the mysql chart by setting mysql.enable to false
externalDB:
  enabled: false

  ## postgres or mysql
  externalDriverType: ""

  ## postgres:  "postgres://<USERNAME>:<PASSWORD>@<HOST>:5432/<DATABASE_NAME>?sslmode=disable&connect_timeout=10"
  ## mysql:     "<USERNAME>:<PASSWORD>@tcp(<HOST>:3306)/<DATABASE_NAME>?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
  externalConnectionString: ""

mysql:
  enabled: true
  mysqlRootPassword: ""
  mysqlUser: ""
  mysqlPassword: ""
  mysqlDatabase: mattermost

  repository: mysql
  imageTag: '8.0.18'
  testFramework:
    enabled: false

  persistence:
    enabled: true
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    storageClass: ""
    accessMode: ReadWriteOnce
    size: 10Gi
  # existingClaim: ""

## Additional pod annotations
extraPodAnnotations: {}

## Additional env vars
extraEnvVars: []
  # This is an example of extra env vars when using with the deployment with GitLab Helm Charts
  # - name: POSTGRES_PASSWORD_GITLAB
  #   valueFrom:
  #     secretKeyRef:
  #       # NOTE: Needs to be manually created
  #       # kubectl create secret generic gitlab-postgresql-password --namespace <NAMESPACE> --from-literal postgres-password=<PASSWORD>
  #       name: gitlab-postgresql-password
  #       key: postgres-password
  # - name: POSTGRES_USER_GITLAB
  #   value: gitlab
  # - name: POSTGRES_HOST_GITLAB
  #   value: gitlab-postgresql
  # - name: POSTGRES_PORT_GITLAB
  #   value: "5432"
  # - name: POSTGRES_DB_NAME_MATTERMOST
  #   value: mm5
  # - name: MM_SQLSETTINGS_DRIVERNAME
  #   value: "postgres"
  # - name: MM_SQLSETTINGS_DATASOURCE
  #   value: postgres://$(POSTGRES_USER_GITLAB):$(POSTGRES_PASSWORD_GITLAB)@$(POSTGRES_HOST_GITLAB):$(POSTGRES_PORT_GITLAB)/$(POSTGRES_DB_NAME_MATTERMOST)?sslmode=disable&connect_timeout=10

## Additional init containers
extraInitContainers: []
  # This is an example of extra Init Container when using with the deployment with GitLab Helm Charts
  # - name: bootstrap-database
  #   image: "postgres:9.6-alpine"
  #   imagePullPolicy: IfNotPresent
  #   env:
  #     - name: POSTGRES_PASSWORD_GITLAB
  #       valueFrom:
  #         secretKeyRef:
  #           name: gitlab-postgresql-password
  #           key: postgres-password
  #     - name: POSTGRES_USER_GITLAB
  #       value: gitlab
  #     - name: POSTGRES_HOST_GITLAB
  #       value: gitlab-postgresql
  #     - name: POSTGRES_PORT_GITLAB
  #       value: "5432"
  #     - name: POSTGRES_DB_NAME_MATTERMOST
  #       value: mm5
  #   command:
  #     - sh
  #     - "-c"
  #     - |
  #       if PGPASSWORD=$POSTGRES_PASSWORD_GITLAB psql -h $POSTGRES_HOST_GITLAB -p $POSTGRES_PORT_GITLAB -U $POSTGRES_USER_GITLAB -lqt | cut -d \| -f 1 | grep -qw $POSTGRES_DB_NAME_MATTERMOST; then
  #       echo "database already exist, exiting initContainer"
  #       exit 0
  #       else
  #       echo "Database does not exist. creating...."
  #       PGPASSWORD=$POSTGRES_PASSWORD_GITLAB createdb -h $POSTGRES_HOST_GITLAB -p $POSTGRES_PORT_GITLAB -U $POSTGRES_USER_GITLAB $POSTGRES_DB_NAME_MATTERMOST
  #       echo "Done"
  #       fi

# Add additional volumes and mounts, for example to add SAML keys in the app or other files the app server may need to access
extraVolumes: []
  # - hostPath:
  #     path: /var/log
  #   name: varlog
extraVolumeMounts: []
  # - name: varlog
  #   mountPath: /host/var/log
  #   readOnly: true

configJSON: {
  "ServiceSettings": {
    "SiteURL": "https://mattermost.internal.example.com",
    "WebsocketURL": "",
    "LicenseFileLocation": "",
    "ListenAddress": ":8065",
    "ConnectionSecurity": "",
    "TLSCertFile": "",
    "TLSKeyFile": "",
    "UseLetsEncrypt": false,
    "LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
    "Forward80To443": false,
    "ReadTimeout": 300,
    "WriteTimeout": 300,
    "MaximumLoginAttempts": 10,
    "GoroutineHealthThreshold": -1,
    "GoogleDeveloperKey": "",
    "EnableOAuthServiceProvider": false,
    "EnableIncomingWebhooks": true,
    "EnableOutgoingWebhooks": true,
    "EnableCommands": true,
    "EnableOnlyAdminIntegrations": true,
    "EnablePostUsernameOverride": false,
    "EnablePostIconOverride": false,
    "EnableBotAccountCreation": false,
    "EnableUserAccessTokens": false,
    "EnableLinkPreviews": true,
    "EnableTesting": false,
    "EnableDeveloper": false,
    "EnableSecurityFixAlert": true,
    "EnableInsecureOutgoingConnections": false,
    "AllowedUntrustedInternalConnections": "",
    "EnableMultifactorAuthentication": false,
    "EnforceMultifactorAuthentication": false,
    "AllowCorsFrom": "",
    "AllowCookiesForSubdomains": false,
    "SessionLengthWebInDays": 30,
    "SessionLengthMobileInDays": 30,
    "SessionLengthSSOInDays": 30,
    "SessionCacheInMinutes": 10,
    "SessionIdleTimeoutInMinutes": 0,
    "WebsocketSecurePort": 443,
    "WebsocketPort": 80,
    "WebserverMode": "gzip",
    "EnableCustomEmoji": true,
    "EnableEmojiPicker": true,
    "EnableGifPicker": true,
    "GfycatApiKey": "",
    "GfycatApiSecret": "",
    "RestrictCustomEmojiCreation": "all",
    "RestrictPostDelete": "all",
    "AllowEditPost": "always",
    "PostEditTimeLimit": -1,
    "TimeBetweenUserTypingUpdatesMilliseconds": 5000,
    "EnablePostSearch": true,
    "EnableUserTypingMessages": true,
    "EnableChannelViewedMessages": true,
    "EnableUserStatuses": true,
    "ExperimentalEnableAuthenticationTransfer": true,
    "ClusterLogTimeoutMilliseconds": 2000,
    "CloseUnusedDirectMessages": false,
    "EnablePreviewFeatures": true,
    "EnableTutorial": true,
    "ExperimentalEnableDefaultChannelLeaveJoinMessages": true,
    "ExperimentalGroupUnreadChannels": "disabled",
    "ImageProxyType": "",
    "ImageProxyURL": "",
    "ImageProxyOptions": "",
    "EnableAPITeamDeletion": false,
    "ExperimentalEnableHardenedMode": false,
    "ExperimentalLimitClientConfig": false,
    "EnableEmailInvitations": false
  },
  "TeamSettings": {
    "SiteName": "Mattermost",
    "MaxUsersPerTeam": 50,
    "EnableTeamCreation": true,
    "EnableUserCreation": true,
    "EnableOpenServer": false,
    "EnableUserDeactivation": false,
    "RestrictCreationToDomains": "",
    "EnableCustomBrand": false,
    "CustomBrandText": "",
    "CustomDescriptionText": "",
    "RestrictDirectMessage": "any",
    "RestrictTeamInvite": "all",
    "RestrictPublicChannelManagement": "all",
    "RestrictPrivateChannelManagement": "all",
    "RestrictPublicChannelCreation": "all",
    "RestrictPrivateChannelCreation": "all",
    "RestrictPublicChannelDeletion": "all",
    "RestrictPrivateChannelDeletion": "all",
    "RestrictPrivateChannelManageMembers": "all",
    "UserStatusAwayTimeout": 300,
    "MaxChannelsPerTeam": 2000,
    "MaxNotificationsPerChannel": 1000,
    "EnableConfirmNotificationsToChannel": true,
    "TeammateNameDisplay": "full_name",
#    "TeammateNameDisplay": "username",
  },
  "SqlSettings": {
    "DriverName": "",
    "DataSource": "",
    "DataSourceReplicas": [],
    "DataSourceSearchReplicas": [],
    "MaxIdleConns": 20,
    "MaxOpenConns": 35,
    "Trace": false,
    "AtRestEncryptKey": "",
    "QueryTimeout": 30
  },
  "LogSettings": {
    "EnableConsole": false,
    "ConsoleLevel": "INFO",
    "ConsoleJson": true,
    "EnableFile": true,
    "FileLevel": "INFO",
    "FileJson": true,
    "FileLocation": "",
    "EnableWebhookDebugging": true,
    "EnableDiagnostics": true
  },
  "PasswordSettings": {
    "MinimumLength": 5,
    "Lowercase": false,
    "Number": false,
    "Uppercase": false,
    "Symbol": false
  },
  "FileSettings": {
    "EnableFileAttachments": true,
    "EnableMobileUpload": true,
    "EnableMobileDownload": true,
    "MaxFileSize": 52428800,
    "DriverName": "local",
    "Directory": "./data/",
    "EnablePublicLink": false,
    "PublicLinkSalt": "",
    "ThumbnailWidth": 120,
    "ThumbnailHeight": 100,
    "PreviewWidth": 1024,
    "PreviewHeight": 0,
    "ProfileWidth": 128,
    "ProfileHeight": 128,
    "InitialFont": "luximbi.ttf",
    "AmazonS3AccessKeyId": "",
    "AmazonS3SecretAccessKey": "",
    "AmazonS3Bucket": "",
    "AmazonS3Region": "",
    "AmazonS3Endpoint": "s3.amazonaws.com",
    "AmazonS3SSL": false,
    "AmazonS3SignV2": false,
    "AmazonS3SSE": false,
    "AmazonS3Trace": false
  },
  "EmailSettings": {
    "EnableSignUpWithEmail": true,
    "EnableSignInWithEmail": true,
    "EnableSignInWithUsername": true,
    "SendEmailNotifications": false,
    "UseChannelInEmailNotifications": false,
    "RequireEmailVerification": false,
    "FeedbackName": "",
    "FeedbackEmail": "",
    "FeedbackOrganization": "",
    "SMTPUsername": "",
    "SMTPPassword": "",
    "EnableSMTPAuth": false,
    "SMTPServer": "",
    "SMTPPort": "",
    "ConnectionSecurity": "",
    "InviteSalt": "",
    "SendPushNotifications": true,
    "PushNotificationServer": "https://push-test.mattermost.com",
    "PushNotificationContents": "generic",
    "EnableEmailBatching": false,
    "EmailBatchingBufferSize": 256,
    "EmailBatchingInterval": 30,
    "EnablePreviewModeBanner": false,
    "SkipServerCertificateVerification": false,
    "EmailNotificationContentsType": "full",
    "LoginButtonColor": "",
    "LoginButtonBorderColor": "",
    "LoginButtonTextColor": ""
  },
  "RateLimitSettings": {
    "Enable": true,
    "PerSec": 10,
    "MaxBurst": 100,
    "MemoryStoreSize": 10000,
    "VaryByRemoteAddr": true,
    "VaryByUser": false,
    "VaryByHeader": ""
  },
  "PrivacySettings": {
    "ShowEmailAddress": true,
    "ShowFullName": true
  },
  "SupportSettings": {
    "TermsOfServiceLink": "https://about.mattermost.com/default-terms/",
    "PrivacyPolicyLink": "https://about.mattermost.com/default-privacy-policy/",
    "AboutLink": "https://about.mattermost.com/default-about/",
    "HelpLink": "https://about.mattermost.com/default-help/",
    "ReportAProblemLink": "https://about.mattermost.com/default-report-a-problem/",
    "SupportEmail": "[email protected]"
  },
  "AnnouncementSettings": {
    "EnableBanner": false,
    "BannerText": "",
    "BannerColor": "#f2a93b",
    "BannerTextColor": "#333333",
    "AllowBannerDismissal": true
  },
  "GitLabSettings": {
    "Enable": false,
    "Secret": "",
    "Id": "",
    "Scope": "",
    "AuthEndpoint": "",
    "TokenEndpoint": "",
    "UserApiEndpoint": ""
  },
  "LocalizationSettings": {
    "DefaultServerLocale": "en",
    "DefaultClientLocale": "en",
    "AvailableLocales": ""
  },
  "NativeAppSettings": {
    "AppDownloadLink": "https://about.mattermost.com/downloads/",
    "AndroidAppDownloadLink": "https://about.mattermost.com/mattermost-android-app/",
    "IosAppDownloadLink": "https://about.mattermost.com/mattermost-ios-app/"
  },
  "AnalyticsSettings": {
    "MaxUsersForStatistics": 2500
  },
  "WebrtcSettings": {
    "Enable": false,
    "GatewayWebsocketUrl": "",
    "GatewayAdminUrl": "",
    "GatewayAdminSecret": "",
    "StunURI": "",
    "TurnURI": "",
    "TurnUsername": "",
    "TurnSharedKey": ""
  },
  "DisplaySettings": {
    "CustomUrlSchemes": [],
    "ExperimentalTimezone": true
  },
  "TimezoneSettings": {
    "SupportedTimezonesPath": "timezones.json"
  },
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true,
    "Directory": "./plugins",
    "ClientDirectory": "./client/plugins",
    "Plugins": {},
    "PluginStates": {}
  }
}
```

Helm Chart not GitOps ready

Hi
we are using ArgoCD to sync everything with our clusters.

The current version of the Helm chart is not usable for this because of the random value generation in the ConfigMap: the resources on the cluster are always out of sync with what Helm generates.

Solution:
Remove the randAlphaNum from config.tpl, or provide a way to override it with a static string through the values file.
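
A minimal sketch of the second option, assuming the template currently calls randAlphaNum directly (the value name configSalt is illustrative):

```yaml
# In config.tpl: prefer a static value from values.yaml, fall back to the old random behavior.
{{ .Values.configSalt | default (randAlphaNum 16) }}
```

With configSalt pinned in the values file, repeated renders become deterministic and ArgoCD stops reporting drift.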

Unable to upload media due to permissions error, fixed by restarting

I have a very simple deployment with nothing related to volumes altered in my values.yaml

After updating to a version just below 5.24.0, I had an issue where anything that required uploading media errored with:
Encountered an error creating the directory for the new file.

I don't remember the version, but it was only a couple of days ago.

After looking in the logs, I saw it was a permission error. I checked the permissions on /mattermost in the pod, and they were correctly set to the mattermost user. I also checked that the drive wasn't full.

I don't know what happened, but after upgrading to 5.24.0 it works again. I think it might've just been the restart that did it.

Difference between Team and Enterprise edition

Hi there,

I'm wondering why there is such a big difference between the Team Edition and Enterprise Edition charts. It also looks like the Enterprise Edition chart is still using the Helm v1 chart spec. Is there any plan to upgrade to v2? If so, would it be brought more in line with the Team Edition chart?

Mattermost Enterprise Edition fails to upgrade because of minio Storage

Hi there,

Upgrade fails when trying to upgrade Mattermost Enterprise Edition with the following command:

helm upgrade mattermost-dev -f mattermost/my-values.yaml charts/mattermost-enterprise-edition/ --debug --wait

The minio storage is not rolled out cleanly. It takes too long, so helm --wait fails, and sometimes minio just gets a second pod stuck in ContainerCreating.

Output of the upgrade command:

wait.go:224: [debug] Deployment is not ready: test/mattermost-dev-minio. 0 out of 1 expected pods are ready
upgrade.go:291: [debug] warning: Upgrade "mattermost-dev" failed: timed out waiting for the condition
Error: UPGRADE FAILED: timed out waiting for the condition
helm.go:75: [debug] timed out waiting for the condition
UPGRADE FAILED
main.newUpgradeCmd.func1
        /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:138
github.com/spf13/cobra.(*Command).execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
github.com/spf13/cobra.(*Command).Execute
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main
        /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
        /usr/local/go/src/runtime/proc.go:203
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357
$ kubectl get pods
NAME                                                              READY   STATUS              RESTARTS   AGE
mattermost-dev-db-0                                              1/1     Running             0          31m
mattermost-dev-mattermost-enterprise-edition-644564d6bf-58pb6    2/2     Running             0          9m1s
mattermost-dev-mattermost-enterprise-edition-644564d6bf-6q2nm    2/2     Running             0          8m18s
mattermost-dev-mattermost-enterprise-edition-jobserver-cfwgqlv   1/1     Running             0          8m54s
mattermost-dev-minio-596767d659-hcsw4                            0/1     ContainerCreating   0          9m2s
mattermost-dev-minio-865b88d7dd-xp6h8                            1/1     Running             0          11m

The chart then has status failed:

NAME            NAMESPACE       REVISION        UPDATED                                       STATUS   CHART                                   APP VERSION
mattermost-dev test             6               2020-07-01 14:14:35.331376305 +0200 CEST      failed   mattermost-enterprise-edition-1.1.1     5.1
$ kubectl describe pod mattermost-dev-minio-596767d659-hcsw4
...

Events:
  Type     Reason              Age                  From                              Message
  ----     ------              ----                 ----                              -------
  Normal   Scheduled           <unknown>            default-scheduler                 Successfully assigned test/mattermost-dev-minio-596767d659-hcsw4 to 5-21-282-889-2-2362799d
  Warning  FailedAttachVolume  58m                  attachdetach-controller           Multi-Attach error for volume "pvc-df136f22-50a9-4774-8c04-6f3193809170" Volume is already used by pod(s) mattermost-dev-minio-865b88d7dd-xp6h8, mattermost-dev-minio-5df98d454-gs7hn
  Warning  FailedMount         6m39s (x4 over 38m)  kubelet, 5-21-282-889-2-2362799d  Unable to attach or mount volumes: unmounted volumes=[export], unattached volumes=[mattermost-dev-minio-token-p2sw4 export]: timed out waiting for the condition
  Warning  FailedMount         2m7s (x21 over 56m)  kubelet, 5-21-282-889-2-2362799d  Unable to attach or mount volumes: unmounted volumes=[export], unattached volumes=[export mattermost-dev-minio-token-p2sw4]: timed out waiting for the condition

Edit: This is the information about the PVC:

$ kubectl get pvc | grep minio
mattermost-dev-minio              Bound    pvc-df136f22-50a9-4774-8c04-6f3193809170   10Gi       RWO            standard       96m
$ kubectl describe pvc mattermost-dev-minio
Name:          mattermost-dev-minio
Namespace:     test
StorageClass:  standard
Status:        Bound
Volume:        pvc-df136f22-50a9-4774-8c04-6f3193809170
Labels:        app=minio
               chart=minio-5.0.23
               heritage=Helm
               release=mattermost-dev
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
               volume.kubernetes.io/selected-node: 5-21-282-888-2-236278b0
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    mattermost-dev-minio-596767d659-hcsw4
               mattermost-dev-minio-865b88d7dd-xp6h8
Events:        <none>

Please, let me know if any further information is needed.
And one more question: why does the Enterprise Helm chart not include the option of using PVCs for data and plugins like the Team Edition, instead forcing the use of minio?

Thank you. Great work with Mattermost.

Add service account use in mattermost teams helm chart

Opening an issue for something I talked about in another issue:

Thanks for the feedback. I want to use a service account to avoid providing any credential from environment and benefit from the AWS role mechanism.
I found this issue that talks about that. I'll continue the discussion there.

Originally posted by @rrey in #168 (comment)

I'll submit a PR to add this capability.

Can't deploy newer version of app with a persistent volume

When attempting to deploy a newer version of Focalboard, for example 0.9.0 (https://hub.docker.com/layers/mattermost/focalboard/0.9.0/images/sha256-31078df7a3c891c5ce7e24dd946e247bcb12fd7a4c8d3ab42b5ebcb956a3519a?context=explore), the following error occurs. However, when using the same values with the 0.6.7 version, it works without issues. There were a lot of changes in the upstream project around the creation of the Docker image between releases, and I think this might be the issue, but I am not 100% sure.

2021-09-24T18:17:51.875390024Z 2021/09/24 18:17:51 {ServerRoot:http://localhost:8000 Port:8000 DBType:sqlite3 DBConfigString:/data/focalboard.db DBTablePrefix: UseSSL:false SecureCookie:false WebPath:./pack FilesDriver:local FilesS3Config:{AccessKeyID: SecretAccessKey: Bucket: PathPrefix: Region: Endpoint: SSL:false SignV2:false SSE:false Trace:false} FilesPath:/data/files Telemetry:true TelemetryID: PrometheusAddress: WebhookUpdate:[] Secret: SessionExpireTime:2592000 SessionRefreshTime:18000 LocalOnly:false EnableLocalMode:true LocalModeSocketLocation:/var/tmp/focalboard_local.socket EnablePublicSharedBoards:false AuthMode:native LoggingCfgFile: LoggingCfgJSON: AuditCfgFile: AuditCfgJSON:}
2021-09-24T18:17:51.875816944Z �[36minfo�[0m  [2021-09-24 18:17:51.875 Z] FocalBoard Server                        �[36mcaller�[0m="main/main.go:80" �[36mversion�[0m=0.9.1 �[36medition�[0m=linux �[36mbuild_number�[0m=dev �[36mbuild_date�[0m=n/a �[36mbuild_hash�[0m=58da537274b5c3e71f67125014ff10c477ae188c
2021-09-24T18:17:51.875830442Z �[31merror�[0m [2021-09-24 18:17:51.875 Z] Database Ping failed                     �[31mcaller�[0m="server/server.go:219" �[31merror�[0m="unable to open database file: no such file or directory"
2021-09-24T18:17:51.875835244Z fatal [2021-09-24 18:17:51.875 Z] server.NewStore ERROR                    caller="main/main.go:165" error="unable to open database file: no such file or directory"
2021-09-24T18:17:51.875838794Z   main.main
2021-09-24T18:17:51.875841759Z       /go/src/focalboard/server/main/main.go:165
2021-09-24T18:17:51.875844723Z   runtime.main
2021-09-24T18:17:51.875847552Z       /usr/local/go/src/runtime/proc.go:225
2021-09-24T18:17:51.875854069Z   runtime.goexit
2021-09-24T18:17:51.875856931Z       /usr/local/go/src/runtime/asm_amd64.s:1371

Is anyone else having this issue?
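One plausible cause, assuming the 0.9.x images dropped root (the upstream Dockerfile changes mentioned above include a switch to a non-root user): the persistent volume mounted at /data is owned by root, so sqlite cannot create /data/focalboard.db. A hedged sketch of a pod-level securityContext that hands the mounted volume to the container's group; the group ID is an assumption, so check the image's Dockerfile for the real one, and patch the deployment directly if the chart does not expose such a value:

securityContext:
  fsGroup: 2000   # kubelet chowns the mounted /data volume to this group on attach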

Liveness probe and readiness probe fail

I deployed the Team Edition with Argo CD (Helm), chart version 3.19.0, on IBM Cloud. All resources were provisioned and the database is up, but the server never starts.

Events:
  Type     Reason            Age                   From                    Message
  ----     ------            ----                  ----                    -------
  Warning  FailedScheduling  18m (x4 over 19m)     default-scheduler       0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         18m                   default-scheduler       Successfully assigned mattermost/mattermost-mattermost-team-edition-5c56645b99-fj65m to 10.150.188.36
  Normal   Pulled            17m                   kubelet, 10.150.188.36  Container image "appropriate/curl:latest" already present on machine
  Normal   Created           17m                   kubelet, 10.150.188.36  Created container init-mysql
  Normal   Started           17m                   kubelet, 10.150.188.36  Started container init-mysql
  Normal   Pulled            12m (x2 over 14m)     kubelet, 10.150.188.36  Container image "mattermost/mattermost-team-edition:5.29.0" already present on machine
  Normal   Created           12m (x2 over 14m)     kubelet, 10.150.188.36  Created container mattermost-team-edition
  Normal   Started           12m (x2 over 14m)     kubelet, 10.150.188.36  Started container mattermost-team-edition
  Warning  Unhealthy         12m (x3 over 13m)     kubelet, 10.150.188.36  Liveness probe failed: Get http://172.30.3.19:8065/api/v4/system/ping: dial tcp 172.30.3.19:8065: connect: connection refused
  Normal   Killing           12m                   kubelet, 10.150.188.36  Container mattermost-team-edition failed liveness probe, will be restarted
  Warning  Unhealthy         2m16s (x44 over 14m)  kubelet, 10.150.188.36  Readiness probe failed: Get http://172.30.3.19:8065/api/v4/system/ping: dial tcp 172.30.3.19:8065: connect: connection refused

Values are as follow:

        persistence:
          data:
            storageClass: "ibmc-block-silver"
          plugins:
            storageClass: "ibmc-block-silver"
        mysql:
          mysqlRootPassword: "change-me-please"
          mysqlUser: "administrator"
          mysqlPassword: "super-secret"
          persistence:
            storageClass: "ibmc-block-silver"
        ingress:
          enabled: true
          annotations:
            cert-manager.io/cluster-issuer: letsencrypt-prod
            kubernetes.io/ingress.class: nginx
            kubernetes.io/tls-acme: "true"
          hosts:
            - chat.example.com
          tls:
            - secretName: chat.example.com-secret
              hosts:
                - chat.example.com
        configJSON:
          ServiceSettings:
            SiteURL: "https://chat.example.com"
            EnableCustomEmoji: true
            EnableLinkPreviews: true
            SessionLengthWebInDays: 15
            SessionLengthMobileInDays: 15
          TeamSettings:
            SiteName: "Team Chat"
          PasswordSettings:
            MinimumLength: 8
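Connection refused on the ping endpoint usually means the server process exited before it could bind port 8065, so the failing probes are a symptom rather than the cause; the app container's log should show the real error. A couple of hedged diagnostic commands (the release and service names below are inferred from the events above and may differ in your install):

kubectl -n mattermost logs deploy/mattermost-mattermost-team-edition -c mattermost-team-edition -f

kubectl -n mattermost run mysql-probe --rm -it --image=mysql:8 --restart=Never -- \
  mysql -h mattermost-mysql -u administrator -p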

/mattermost/data empty after new install

Hi,
I installed the chart with a persistent volume on Azure AKS.
I logged into the container and created some directories: mkdir /mattermost/data/pics
But after a fresh install the directory is gone.

$ ls -al /mattermost/data/
total 32
drwxrwsr-x 4 root mattermo 4096 Nov 25 13:29 .
drwxr-sr-x 1 mattermo mattermo 4096 Nov 25 13:35 ..
drwxrws--- 2 root mattermo 16384 Nov 25 13:28 lost+found
drwxr-s--- 3 mattermo mattermo 4096 Nov 25 13:29 users

How is that possible?

  volumeMounts:
    - mountPath: /mattermost/config/config.json
      name: config-json
      subPath: config.json
    - mountPath: /mattermost/data
      name: mattermost-data
    - mountPath: /mattermost/plugins
      name: mattermost-plugins
  resources: null
  volumes:
    - name: config-json
      secret:
        secretName: mm-mattermost-team-edition-config-json
    - name: mattermost-data
      persistentVolumeClaim:
        claimName: mm-mattermost-team-edition
    - name: mattermost-plugins
      persistentVolumeClaim:
        claimName: mm-mattermost-team-edition-plugins
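What is described is consistent with the PVC being deleted on uninstall and freshly provisioned on the next install (with the default Delete reclaim policy, the underlying Azure disk goes with it). A hedged way to keep data across installs is to create the claim yourself, outside the Helm release, and point the chart at it via persistence.data.existingClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mattermost-data        # created once with kubectl, never owned by the release
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi

and in the chart values:

persistence:
  data:
    enabled: true
    existingClaim: mattermost-data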

plugins permission issue

~/plugins $ ls -lha
total 52K
drwxr-xr-x   11 root     root        4.0K Nov  4 00:08 .
drwxr-sr-x    1 mattermo mattermo      40 Nov  4 02:23 ..
drwxr--r--    3 root     root        4.0K Nov  4 00:08 com.mattermost.aws-sns
drwxr--r--    4 root     root        4.0K Nov  4 00:08 com.mattermost.custom-attributes
drwxr--r--    5 root     root        4.0K Nov  4 00:08 com.mattermost.nps
drwxr--r--    3 root     root        4.0K Nov  4 00:08 com.mattermost.welcomebot
drwxr--r--    5 root     root        4.0K Nov  4 00:08 github
drwxr--r--    4 root     root        4.0K Nov  4 00:08 jira
drwx------    2 root     root       16.0K Nov  3 22:15 lost+found
drwxr--r--    3 root     root        4.0K Nov  4 00:08 mattermost-autolink
drwxr--r--    4 root     root        4.0K Nov  4 00:08 zoom
~/plugins $ ls -lha ../prepackaged_plugins/
total 98M
drwxr-xr-x    2 mattermo mattermo    4.0K Oct 30 17:02 .
drwxr-sr-x    1 mattermo mattermo      40 Nov  4 02:23 ..
-rw-r--r--    1 mattermo mattermo    8.5M Oct 30 16:44 mattermost-plugin-antivirus-v0.1.1.tar.gz
-rw-r--r--    1 mattermo mattermo    8.5M Oct 30 16:44 mattermost-plugin-autolink-v1.1.1.tar.gz
-rw-r--r--    1 mattermo mattermo    8.8M Oct 30 16:44 mattermost-plugin-aws-SNS-v1.0.2.tar.gz
-rw-r--r--    1 mattermo mattermo    8.8M Oct 30 16:44 mattermost-plugin-custom-attributes-v1.0.2.tar.gz
-rw-r--r--    1 mattermo mattermo    9.6M Oct 30 16:44 mattermost-plugin-github-v0.11.0.tar.gz
-rw-r--r--    1 mattermo mattermo    9.3M Oct 30 16:44 mattermost-plugin-gitlab-v1.0.1.tar.gz
-rw-r--r--    1 mattermo mattermo    9.0M Oct 30 16:44 mattermost-plugin-jenkins-v1.0.0.tar.gz
-rw-r--r--    1 mattermo mattermo    9.7M Oct 30 16:44 mattermost-plugin-jira-v2.2.2.tar.gz
-rw-r--r--    1 mattermo mattermo    8.7M Oct 30 16:44 mattermost-plugin-nps-v1.0.3.tar.gz
-rw-r--r--    1 mattermo mattermo    8.5M Oct 30 16:44 mattermost-plugin-welcomebot-v1.1.1.tar.gz
-rw-r--r--    1 mattermo mattermo    8.7M Oct 30 16:44 mattermost-plugin-zoom-v1.1.1.tar.gz
│ {"level":"info","ts":1572833405.6760542,"caller":"mlog/log.go:166","msg":"Starting up plugins"}
│ {"level":"info","ts":1572833405.6761217,"caller":"app/plugin.go:199","msg":"Syncing plugins from the file store"}
│ {"level":"info","ts":1572833405.7092931,"caller":"app/plugin.go:234","msg":"Found no files in plugins file store"}
│ {"level":"error","ts":1572833405.8930361,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, mkdir plugins/antivirus: permission denied","path":"/mattermost/prepackaged_pl
│ ugins/mattermost-plugin-antivirus-v0.1.1.tar.gz"}
│ {"level":"error","ts":1572833406.0656123,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-autolink-v1.1.1.tar.gz"}
│ {"level":"error","ts":1572833406.2460847,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-aws-SNS-v1.0.2.tar.gz"}
│ {"level":"error","ts":1572833406.4339068,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-custom-attributes-v1.0.2.tar.gz"}
│ {"level":"error","ts":1572833406.6397548,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-github-v0.11.0.tar.gz"}
│ {"level":"error","ts":1572833406.8434503,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, mkdir plugins/com.github.manland.mattermost-plugin-gitlab: permission denied",
│ "path":"/mattermost/prepackaged_plugins/mattermost-plugin-gitlab-v1.0.1.tar.gz"}
│ {"level":"error","ts":1572833407.0257208,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, mkdir plugins/jenkins: permission denied","path":"/mattermost/prepackaged_plug
│ ins/mattermost-plugin-jenkins-v1.0.0.tar.gz"}
│ {"level":"error","ts":1572833407.2281287,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-jira-v2.2.2.tar.gz"}
│ {"level":"error","ts":1572833407.4050324,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-nps-v1.0.3.tar.gz"}
│ {"level":"error","ts":1572833407.5880146,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost
│ -plugin-welcomebot-v1.1.1.tar.gz"}
│ {"level":"error","ts":1572833407.764443,"caller":"app/plugin.go:170","msg":"Failed to unpack prepackaged plugin","error":"installPluginLocally: app.plugin.mvdir.app_error, destination already exists","path":"/mattermost/prepackaged_plugins/mattermost-
│ plugin-zoom-v1.1.1.tar.gz"}
│ {"level":"info","ts":1572833407.7646933,"caller":"app/server.go:216","msg":"Current version is 5.16.0 (5.16.2/Wed Oct 30 16:41:39 UTC 2019/6890713b8afcbc5185a27dd8c7389219eb6e1957/none)"}
│ {"level":"info","ts":1572833407.764718,"caller":"app/server.go:217","msg":"Enterprise Enabled: false"}
│ {"level":"info","ts":1572833407.7647355,"caller":"app/server.go:219","msg":"Current working directory is /mattermost"}
│ {"level":"info","ts":1572833407.7647529,"caller":"app/server.go:220","msg":"Loaded config","source":"file:///mattermost/config/config.json"}
    persistence:
      data:
        enabled: true
        size: 100Gi
        storageClass: gp2
        accessMode: ReadWriteOnce
      plugins:
        enabled: true
        size: 1Gi
        storageClass: gp2
        accessMode: ReadWriteOnce

We should probably add support for securityContext in the chart, as outlined here: https://github.com/mattermost/mattermost-docker/blob/master/contrib/kubernetes/README.md#optional-steps. A sketch follows below.
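Until the chart exposes it, a minimal sketch of that securityContext, assuming the image's mattermost user is UID/GID 2000 (check the image if unsure); fsGroup makes the kubelet chown the mounted volumes to the mattermost group, so the prepackaged plugins can be unpacked into /mattermost/plugins:

securityContext:
  runAsUser: 2000
  runAsGroup: 2000
  fsGroup: 2000    # mounted volumes become group-writable for the mattermost user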

Add an example of how to use a PVC to store backups

You could provide docs on using a PVC to store backups:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pv-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

and in mysql-dump-scheduledjob.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mysqldump
spec:
  schedule: "25 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never   # required for Job pods; not in the original snippet
          volumes:
            - name: backup-pv-storage
              persistentVolumeClaim:
                claimName: backup-pv-claim
          containers:
            - name: mysqldump
              image: deitch/mysql-backup:latest
              volumeMounts:
                - mountPath: "/data"
                  name: backup-pv-storage
              env:
                - name: RUN_ONCE
                  value: "true"
                - name: DB_DUMP_TARGET
                  value: "/data"

Team Edition Chart version 3.6.1?

Hey folks! I'm not sure this is the right place for this, so feel free to close this issue and redirect me if so.

I'm just getting started with Mattermost team edition within a hobby kubernetes cluster.

It looks like the Team Edition chart got bumped to 3.6.1 (and Mattermost 5.13.2) when the Enterprise Edition chart got updated (and Docker Hub has had the latest image for a while), but https://helm.mattermost.com/index.yaml didn't get updated.

$ helm search -l mattermost/mattermost-team-edition
NAME                              	CHART VERSION	APP VERSION	DESCRIPTION
mattermost/mattermost-team-edition	3.6.0        	5.13.0     	Mattermost Team Edition server.
mattermost/mattermost-team-edition	3.5.1        	5.12.4     	Mattermost Team Edition server.
mattermost/mattermost-team-edition	3.4.1        	5.11.0     	Mattermost Team Edition server.
mattermost/mattermost-team-edition	3.4.0        	5.11.0     	Mattermost Team Edition server.
mattermost/mattermost-team-edition	3.3.0        	5.10.0     	Mattermost Team Edition server.
mattermost/mattermost-team-edition	3.2.0        	5.10.0     	Mattermost Team Edition server.

I'm guessing that was just an oversight in the publication process, but I wanted to raise it in case something was missed somewhere.

Thanks!

PVC and PV are not re-used when redeploying/updating MM

Chart version: 5.4.0
K8s version: 1.21.4

When redeploying or updating, the Helm chart creates a new PV and a new PVC instead of re-using the old ones.

$ kubectl get pv
pvc-00563e91-ddde-40f7-9f14-d394324ec8b3   10Gi       RWO            Retain           Released   mattermost/mattermost-mattermost-team-edition           ebs-sc                  9d
pvc-1580f883-c152-410e-9b1c-dbe5faae1082   10Gi       RWO            Retain           Bound      mattermost/mattermost-mattermost-team-edition           ebs-sc                  15m

and after a further redeploy:

pvc-1580f883-c152-410e-9b1c-dbe5faae1082   10Gi       RWO            Retain           Released   mattermost/mattermost-mattermost-team-edition           ebs-sc                  112m

A workaround is to set persistence.data.existingClaim to the name of a PVC that claims an existing PV containing the data.
However, this requires the PV and PVC to already exist, and I assume it won't work on the first deployment, where neither exists yet.

Also this looks like a bug to me as I'd expect the chart to re-use the previous PVC.

More specifically, I am now using the following Terraform setup, which works across updates:

  set {
    name  = "persistence.data.existingClaim"
    value = "pvc-mattermost-data"
  }
resource "kubernetes_persistent_volume_claim" "pvc-mattermost-data" {
  metadata {
    name      = "pvc-mattermost-data"
    namespace = "mattermost"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
    volume_name        = "pvc-00563e91-ddde-40f7-9f14-d394324ec8b3"
    storage_class_name = "ebs-sc"
  }
}

The same also applies to the PVC for plugins.

This might also relate to #200 and #251.

All attachments are gone in Mattermost Enterprise Edition after Mattermost upgrade or restart

Steps to reproduce loss of attachments after an upgrade

  1. Install: helm install mm-test mattermost/mattermost-enterprise-edition --set mattermostApp.image.tag=5.23
  2. Create a user, team, and add the enterprise license.
  3. Send a picture or a file to a channel, for example Town Square.
  4. Upgrade: helm upgrade mm-test mattermost/mattermost-enterprise-edition --set mattermostApp.image.tag=5.24
  5. No preview of the picture/file is shown in Mattermost anymore, and the picture or file can't be accessed or downloaded.

Logs

{"level":"debug","ts":1593636437.517199,"caller":"mlog/log.go:163","msg":"Encountered an error opening a reader from local server file storage.","path":"/api/v4/files/nhybdpiwrirsikzs79td8z5zdo/preview","request_id":"xf9xkd58fjnyp8u5gbx66nxjfh","ip_addr":"127.0.0.1","user_id":"8hx56j9npfgxtegqttzadc5fgr","method":"GET","err_where":"Reader","http_code":404,"err_details":"open data/20200701/teams/noteam/channels/nagxx84xrjgxuchzeourem4qca/users/8hx56j9npfgxtegqttzadc5fgr/nhybdpiwrirsikzs79td8z5zdo/test_preview.jpg: no such file or directory"}
{"level":"debug","ts":1593636437.6167703,"caller":"web/handlers.go:85","msg":"Received HTTP request","method":"POST","url":"/api/v4/channels/members/me/view","request_id":"wzqgrradm3rrd84rmq9kebonxe"}

The same issue can be triggered by just restarting the deployment:

$ kubectl scale --replicas=0 deployment/mm-test-mattermost-enterprise-edition
$ kubectl scale --replicas=2 deployment/mm-test-mattermost-enterprise-edition

The behavior of minio together with the Enterprise chart does not seem to work cleanly (or is it just me?), as I indicated here:
#161
Even the latest version of minio sometimes throws this error:

Events:
  Type     Reason              Age        From                     Message
  ----     ------              ----       ----                     -------
  Normal   Scheduled           <unknown>  default-scheduler        Successfully assigned default/mm-test-minio-78f98fb456-5xxpn to pool-fvngga374-3xwki
  Warning  FailedAttachVolume  98s        attachdetach-controller  Multi-Attach error for volume "pvc-c4d45a29-c443-4872-adc6-354202ba2f13" Volume is already used by pod(s) mm-test-minio-7c457bdc9d-l4gr4

Could you please take a look at it, or should we open a ticket directly with support? Thank you.
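For context, the 404 in the log above says the pods were reading attachments from "local server file storage", i.e. each replica's own data directory, which is neither shared between the two replicas nor preserved across restarts. A hedged sketch of pointing the file store at the bundled minio instead, using Mattermost's generic MM_* environment overrides (the service name and bucket below are assumptions for illustration):

MM_FILESETTINGS_DRIVERNAME: amazons3
MM_FILESETTINGS_AMAZONS3ENDPOINT: mm-test-minio:9000   # assumed in-cluster minio service
MM_FILESETTINGS_AMAZONS3BUCKET: mattermost             # assumed bucket name
MM_FILESETTINGS_AMAZONS3SSL: "false"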

mattermost-push-proxy ApplePushCertPrivate path in push-config.tpl does not match the deployment helm template

Chart name: mattermost-push-proxy
Chart version: 0.5.0
Application version: 5.22.4

Currently push-config.tpl defines the Apple Push certificate file as

"ApplePushCertPrivate": "/certs/apple-push-cert.pem"

However, in the template deployment.yaml the certificate secret is mounted at /mattermost-push-proxy/certs/apple-push-cert.pem, causing the Pod to crash loop because of the missing file:

{{- if .Values.applePushSettings.apple.privateCert }}
  - mountPath: /mattermost-push-proxy/certs/apple-push-cert.pem
    name: apple-push-cert
    subPath: apple-push-cert.pem
{{- end }}

Installing the Helm chart with an Apple certificate results in a crash loop with the error:

ERR: 2021/07/06 11:19:25 logger.go:77: Failed to load the apple pem cert err=open /certs/apple-push-cert.pem: no such file or directory for type=apple
panic: Failed to load the apple pem cert err=open /certs/apple-push-cert.pem: no such file or directory for type=apple

Either push-config.tpl should be updated to match the deployment template, or the path should be made customizable via values.yaml.
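The first option is the one-line fix: make the template agree with the path the deployment actually mounts. A sketch of the corrected push-config.tpl entry, taken directly from the mount path above:

"ApplePushCertPrivate": "/mattermost-push-proxy/certs/apple-push-cert.pem"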

Allow additional persistence mounts

We have a local CA that needs to be imported into the deployment in order for the Jira integration to work. We got around this in Jenkins by simply defining an additional mount in /etc/ssl/certs/ with the persistence.mounts and persistence.volumes options.

Without this setup we get the output below when doing a /jira connect; let me know if there is a better way to do this. A sketch of the requested values follows the Jenkins reference below.

failed to get a connect link: Post https://jira/plugins/servlet/oauth/request-token: x509: certificate signed by unknown authority

For reference on Jenkins implementation- https://github.com/helm/charts/blob/master/stable/jenkins/templates/jenkins-master-deployment.yaml#L233
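A sketch of what the equivalent could look like in this chart, with hypothetical key names that mirror the Jenkins pattern (the chart does not expose them today), mounting a CA bundle from a ConfigMap into the system trust path:

persistence:
  volumes:
    - name: internal-ca
      configMap:
        name: internal-ca            # assumed ConfigMap holding the CA under key ca.crt
  mounts:
    - name: internal-ca
      mountPath: /etc/ssl/certs/internal-ca.pem
      subPath: ca.crt
      readOnly: true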

Helm V3 support

For various reasons, the use of Helm v2 is not recommended for new installations, especially for security reasons.

I saw on other issue that the support of Helm V3 was not planned before the end of the year ( #132 (comment) ).

It is also mentioned in this issue that "Helm v3 can happily install APIVersion v1 charts, the only issue is with CRDs but we don't use that".
This no longer seems to be the case on recent versions of Kubernetes and Helm v3.

I am getting the following error:

Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"

where to put http-snippet?

Looking at the values.yaml, it's not entirely clear where we should put this code:

        http-snippet: |
          proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
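For what it's worth, http-snippet is not a value of this chart at all: it is a ConfigMap option of the NGINX ingress controller. Assuming you deploy the controller from its own chart (ingress-nginx / nginx-ingress), the snippet goes under controller.config in that chart's values:

controller:
  config:
    http-snippet: |
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;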

Helm chart values incorrectly rendered.

Problem:
The Helm chart renders incorrectly if tolerations are passed.
Input:

"tolerations": [
        {
            "key": "app",
            "operator": "Equal",
            "value": "mattermost",
            "effect": "NoExecute",
        },
    ],

Error:

failed to generate YAML for specified Helm chart: failed to create chart from template: YAML parse error on mattermost-team-edition/templates/deployment.yaml: error converting YAML to JSON: yaml: line 34: did not find expected key

Solution:
Remove the leading spaces before the toYaml template lines so that the indent filter alone controls the indentation. Tested locally.

      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.affinity }}
      affinity:
{{ toYaml .Values.affinity | indent 8 }}
      {{- end }}
      {{- if .Values.tolerations }}
      tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
      {{- end }}
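An equivalent and arguably more idiomatic form (a sketch, not the chart's current code) uses with plus nindent, which emits the newline itself and therefore tolerates any template indentation:

      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}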

[mattermost-team-edition] read only file system ONLY FOR >=5.30

When I run Mattermost images >= 5.30, Mattermost won't start, due to: Error: failed to load configuration: failed to create store: unable to load on store creation: failed to persist: failed to write file: open /mattermost/config/config.json: read-only file system

image:
  repository: mattermost/mattermost-team-edition
  tag: 5.31.0-rc2
  imagePullPolicy: IfNotPresent

When I switch to an older version like 5.29.1, Mattermost starts fine.
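A hedged workaround until the chart handles this (the names below are illustrative deployment fields, not chart values): the error shows that 5.30+ tries to write config.json back on startup, while the chart mounts it from a Secret, which is read-only. Copying the rendered file into a writable emptyDir and mounting that at /mattermost/config sidesteps the error:

initContainers:
  - name: copy-config
    image: busybox
    command: ["sh", "-c", "cp /podconfig/config.json /mattermost/config/config.json"]
    volumeMounts:
      - name: config-json          # the existing Secret-backed volume
        mountPath: /podconfig
      - name: writable-config
        mountPath: /mattermost/config
containers:
  - name: mattermost-team-edition
    volumeMounts:
      - name: writable-config     # replaces the read-only subPath mount
        mountPath: /mattermost/config
volumes:
  - name: writable-config
    emptyDir: {}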
