atlassian / data-center-helm-charts

Helm charts for Atlassian's Data Center products

Home Page: https://atlassian.github.io/data-center-helm-charts/

License: Apache License 2.0

atlassian confluence jira bitbucket data-center helm-charts kubernetes helm crowd clipper

data-center-helm-charts's Introduction

Atlassian Data Center Helm Charts


This project contains Helm charts for installing Atlassian's Jira Data Center, Confluence Data Center, Bitbucket Data Center and Bamboo Data Center on Kubernetes.

Use the charts to install and operate Data Center products within a Kubernetes cluster of your choice. The cluster can be a managed environment, such as Amazon EKS, Azure Kubernetes Service, or Google Kubernetes Engine, or a custom on-premise system.

Get started

Get started right now using our documentation.

We provide extensive documentation to support our Helm charts. This includes prerequisites, set up, installation, examples, and more.

Support disclaimer

We don’t officially support the functionality described in the examples or the documented platforms. You should use them for reference only.

Feedback

If you find an issue, raise a ticket. If you have general feedback or questions regarding the charts, use the Atlassian Community Kubernetes space.

Contributions

Contributions are welcome. Find out how to contribute.

License

Copyright (c) [2020] to [2021] Atlassian and others. Apache 2.0 licensed, see license file.

data-center-helm-charts's People

Contributors

alxwi, apawelczyk-atlassian, badgersow, bianchi2, bkwiatek-atlassian, bordenit, dependabot[bot], eduardoalvarenga, errcode1202, fredrikand, github-actions[bot], grawert, hickeyma, janfuhrer, jjeongatl, kcichy-atlassian, kennymacleod, l0wl3vel, louiszschaler, nanux, nghazali, pathob, pbruski, sylus, t0bl, tan-ro, tarka, uohndecadisde, wkritzinger-atlassian, yzha645


data-center-helm-charts's Issues

[Bug] - Readiness probe fails, database driver unknown

Suggestion

Hi,

I'm running into some issues deploying the chart and would appreciate some pointers for troubleshooting.
My readiness probes are failing, with logs containing the following

2022-04-08 09:52:18,765+0000 JIRA-Bootstrap FATAL      [c.a.jira.startup.JiraStartupLogger] Driver for the database Unknown not found. Ensure it is installed in the 'lib' directory.
2022-04-08 09:52:18,987+0000 JIRA-Bootstrap INFO      [c.a.jira.startup.JiraStartupLogger] Running Jira startup checks.
2022-04-08 09:52:18,987+0000 JIRA-Bootstrap FATAL      [c.a.jira.startup.JiraStartupLogger] Startup check failed. Jira will be locked.

I haven't been able to figure out why the database registers as Unknown, the config and the resulting env vars should be correct.

        database:
          type: postgres72
          driver: org.postgresql.Driver
      ATL_DB_TYPE:                   postgres72
      ATL_DB_DRIVER:               org.postgresql.Driver

Thanks in advance

Product

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Plug-in caches and jira home lock file issue

Recommend adding an option (and a default init container) that removes the plug-in caches and the lock file before the container starts. In the 267 iterations I tried, containers didn't start up reliably without removing the caches and lock file, so for now I've built the removal into a self-healing script, but the chart should proactively remove those files on startup.
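A workaround like the one described could be expressed as an init container in the pod spec. This is only a sketch, not part of the chart; the paths below are the usual Jira home lock file and plugin cache locations, but verify them against your deployment:

```yaml
initContainers:
  - name: clear-plugin-caches
    image: alpine:3.16
    command:
      - sh
      - -c
      - |
        # Remove the startup lock file and plugin caches left over from a
        # previous run. Paths assume the default JIRA_HOME of
        # /var/atlassian/application-data/jira.
        rm -f  /var/atlassian/application-data/jira/.jira-home.lock
        rm -rf /var/atlassian/application-data/jira/plugins/.bundled-plugins \
               /var/atlassian/application-data/jira/plugins/.osgi-plugins
    volumeMounts:
      - name: local-home
        mountPath: /var/atlassian/application-data/jira
```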

[Suggestion] - Ability to provide license via secret

Suggestion

Currently, it isn't possible to provide the license in a secret or in any programmatic way.

This means that for every fresh installation, manual intervention is necessary and that an engineer will need access to the license key.

I'm not familiar enough with Jira to provide a pull request that makes this a reality, but would be happy to do so if you could give me a pointer on how the license would need to be provided (env, file).

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

Jira can't start in clustering mode in OpenShift with EBS local-home storage

The problem was discovered in OpenShift but can potentially be reproduced in any K8s environment where a Jira pod runs as an unprivileged user and local home is persisted.

In an OpenShift cluster on AWS, I am using gp2 storage class. A volume is provisioned dynamically. When Jira container starts it executes an entrypoint to generate cluster.properties. It fails to chown it:

INFO:root:Generating /var/atlassian/application-data/jira/cluster.properties from template cluster.properties.j2
Traceback (most recent call last):
  File "/entrypoint.py", line 23, in <module>
    gen_cfg('cluster.properties.j2', f'{JIRA_HOME}/cluster.properties',
  File "/entrypoint_helpers.py", line 64, in gen_cfg
    set_perms(target, user, group, mode)
  File "/entrypoint_helpers.py", line 34, in set_perms
    shutil.chown(path, user=user, group=group)
  File "/usr/lib/python3.8/shutil.py", line 1296, in chown
    os.chown(path, _user, _group)
PermissionError: [Errno 1] Operation not permitted: '/var/atlassian/application-data/jira/cluster.properties'

The Jira container runs as an unprivileged user with an OpenShift-generated id. This user can write to the mounted directory. There's no reason why chown is needed in this particular case.

Is it possible to make it optional?
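For what it's worth, a values.yaml excerpt and a rendered StatefulSet elsewhere in this issue list show a setPermissions value and a SET_PERMISSIONS environment variable. If the entrypoint in your chart/image version honours that flag, a toggle along these lines may skip the chown (treat the key as an assumption and check your chart version's values.yaml):

```yaml
jira:
  # If supported by your chart/image version, skip the chown/chmod performed
  # by the entrypoint so the container can run under an arbitrary
  # OpenShift-assigned UID.
  setPermissions: false
```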

Additional variables do not seem to work properly

Here is an example ConfigMap that does work:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jira-config
  labels:
    app: jira
data:
  JVM_MINIMUM_MEMORY: 2048m
  JVM_MAXIMUM_MEMORY: 4096m
  ATL_JDBC_USER: "{{ jiradbUsername }}"
  ATL_JDBC_PASSWORD: "{{ jiradbPassword }}"
  ATL_DB_DRIVER: org.postgresql.Driver
  ATL_JDBC_URL: jdbc:postgresql://postgres-jira:5432/jiradb
  CLUSTERED: "true"
  ATL_DB_TYPE: postgres72
  JIRA_SHARED_HOME: /var/atlassian/application-data/jira/shared

I tried adding the ATL_TOMCAT_CONTEXTPATH variable using the Helm chart and I think it may put it in the wrong place. You could possibly group all the container variable options under data in a ConfigMap, like the one above.
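For comparison, recent chart versions expose an additionalEnvironmentVariables list (verify against your chart version's values.yaml); with it, the context path would be injected into the container roughly like this (the path value is hypothetical):

```yaml
jira:
  additionalEnvironmentVariables:
    # Sets the Tomcat context path for the Jira container.
    - name: ATL_TOMCAT_CONTEXTPATH
      value: "/jira"   # hypothetical context path
```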

Can we use the helm chart for production setup?

Hi Team,
We are planning to set up Jira and Confluence on AWS EKS. As per the readme file, the charts are experimental and unsupported, but version 0.5.0 has a verified check. Can we use this version to set up the apps on AWS EKS in production?

"Base URL mismatch" on redirect

Suggestion

Hello,

I deployed Atlassian Data Center Bitbucket into a Kubernetes cluster. All Helm tests passed. The application is running, and with one exception ("Base URL mismatch", see below) reports no issues. Pre-deployment I changed the service "bitbucket" type to "NodePort". The application is behind a load balancer, which forwards traffic from port 7990 to the service NodePort. The behavior we are observing is that the redirect eventually drops the port we have configured the load balancer to use (7990), after which we receive a HTTP status code 404 (if we also have a listener on port 80):

$ curl -Lv <URL>:7990/dashboard
* About to connect() to <URL> port 7990 (#0)
* Trying <IP>...
* Connected to <URL> (<IP>) port 7990 (#0)
> GET /dashboard HTTP/1.1
> User-Agent: curl/7.29.0
> Host: <URL>:7990
> Accept: */*
>
< HTTP/1.1 302
< X-AREQUESTID: @55C3QLx1044x132x0
< x-xss-protection: 1; mode=block
< x-frame-options: SAMEORIGIN
< x-content-type-options: nosniff
< Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Cache-Control: no-cache
< Cache-Control: no-store
< Location: <URL>/login?nextUrl=%2Fdashboard
< Content-Language: en-US
< Content-Length: 0
< Date: Tue, 19 Jul 2022 17:24:50 GMT
<
* Connection #0 to host <URL> left intact
* Issue another request to this URL: '<URL>/login?nextUrl=%2Fdashboard'
* Found bundle for host <URL>: 0x1f67ff0
* About to connect() to <URL> port 80 (#1)
* Trying <IP>...
* Connected to <URL> (<IP>) port 80 (#1)
> GET /login?nextUrl=%2Fdashboard HTTP/1.1
> User-Agent: curl/7.29.0
> Host: <URL>
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Tue, 19 Jul 2022 17:24:50 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #1 to host <URL> left intact

or the connection times out (if we don't also have a listener on port 80). This happens even if I exec into the pod and attempt to 'curl -Lv http://localhost:7990'.

There doesn't appear to be a way to set the application service port in the UI, but we configured Bitbucket to serve on port 7990 on deployment.

Attempts to reach "/", "/login", or "/dashboard" (for example) on deployment drop the port on redirect, but I can copy-and-paste the URL into a web browser, then add the port number and log in. Once I get past the initial login, the port number is retained when clicking on items on the dashboard (for example). Attempts to change the "Base URL" to include the port number in the "Administration" > "Server settings" dialogue result in the application informing me that there's a "Base URL mismatch".

In addition, the URL behind the "Bitbucket" logo in the upper left does not include the port, even if the base URL has been updated to include it. And after updating the base URL and saving the change the link in the upper left still refers to the base URL before the update, even if the page is refreshed.

I would like to know if these URLs are hard-coded where they shouldn't be.
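One thing worth checking while debugging: the Atlassian Bitbucket image derives its base URL from Tomcat proxy settings (SERVER_PROXY_NAME, SERVER_PROXY_PORT, SERVER_SCHEME). If the chart leaves them unset, an override along these lines could make redirects keep the non-standard port; the additionalEnvironmentVariables key and the hostname are assumptions, so check your chart version:

```yaml
bitbucket:
  additionalEnvironmentVariables:
    # Tell Tomcat which external host/port/scheme the load balancer serves,
    # so generated redirects retain port 7990.
    - name: SERVER_PROXY_NAME
      value: bitbucket.example.com   # hypothetical external hostname
    - name: SERVER_PROXY_PORT
      value: "7990"
    - name: SERVER_SCHEME
      value: http
```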

Product

Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

jira pod runs as root

The main Jira/Tomcat pod runs by default as the root user (id 0). This is bad security practice.

Interested in Crowd chart?

Hey guys,

we need a Crowd chart, are currently working on one, and would be interested in upstreaming it. The initial version my colleagues created is based on your Confluence chart. Any chance of getting this merged if we create a pull request?

Thanks,
Patrick

Ingress Cleanup

The ingress could be cleaned up in the following ways:

  1. The path in the ingress could be automatically defaulted to the jira.service.contextpath.
  2. The ATL_PROXY_PORT and ATL_PROXY_NAME could perhaps be added in the statefulset, where ATL_PROXY_PORT defaults to 443 and ATL_PROXY_NAME defaults to the ingress.host value.

I noticed that in server.xml, proxyPort and proxyName are currently empty with the standard values.yaml implementation.
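Concretely, the suggested defaulting could be rendered in the StatefulSet template roughly like this (illustrative Helm template code, not the chart's actual source):

```yaml
env:
  # Default the Tomcat proxy settings from the ingress configuration, as
  # suggested above.
  - name: ATL_PROXY_NAME
    value: {{ .Values.ingress.host | quote }}
  - name: ATL_PROXY_PORT
    value: "443"
```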

[Suggestion] - Allow the option to add annotations for Standalone synchrony service

Suggestion

We have to add annotations on the services to enable session affinity and create a health check (backend configuration). At present, we can add annotations through values.yaml only on confluence service. For standalone synchrony service, we have to add annotations after the application is deployed.

It would be great if there is an option to add annotations on Synchrony service in values.yaml as we can manage everything through Helm.

Product

Confluence

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Allow smtp configuration via secret / chart

Suggestion

It isn't possible to configure SMTP credentials via the helm chart.
Again, this requires manual interaction and knowledge of the secret values, which we'd like to avoid.

Again, happy to provide a PR if given a pointer on how to provide these credentials to Jira.

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Bug] - Unable to control some StatefulSet configurations with values.yaml

Bug

Hello, I am trying to activate clustering via the Helm chart and my values.yaml file, but am not having any luck. It seems like the clustering value (and others, see below) does not result in the proper values being set in the StatefulSet object:

My (partial) values.yaml:

cert:
  environment: dev
  secret-name: dev-certs

jira:
  clustering:
    enabled: true
  replicaCount: 1
  setPermissions: false
  securityContext: {}
  containerSecurityContext: {}
  image:
    repository: registry01.my.corp/sre/jira-software 
    pullPolicy: IfNotPresent
    tag: ""
<...snip...>

Chart.yaml (since we are using atlassian/data-center-helm-charts as a dependent Chart):

apiVersion: v2
name: sd-jira
description: A chart for installing Jira on K8s
type: application
version: 0.1.1
dependencies:
  - name: cert
    version: "1.1.0"
    repository: "http://chartmuseum.my.corp/"  
  - name: jira
    version: ~1.3.0
    repository: https://atlassian.github.io/data-center-helm-charts/

When I look at the output of helm template, I do not see the clustering value set on the StatefulSet. Also notice that the setPermissions value from values.yaml above is not honored (SET_PERMISSIONS is still "true"):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sd-jira
  labels:
    helm.sh/chart: jira-1.3.0
    app.kubernetes.io/name: jira
    app.kubernetes.io/instance: sd-jira
    app.kubernetes.io/version: "8.20.7"
    app.kubernetes.io/managed-by: Helm
    
spec:
  replicas: 1
  serviceName: sd-jira
  selector:
    matchLabels:
      app.kubernetes.io/name: jira
      app.kubernetes.io/instance: sd-jira
  template:
    metadata:
      annotations:
        checksum/config-jvm: 223f26e47250d997193c133d041f8b49427c3a68177aadc5861025f97dcc4d50
        
      labels:
        app.kubernetes.io/name: jira
        app.kubernetes.io/instance: sd-jira
        
    spec:
      serviceAccountName: svc-sd-jira
      terminationGracePeriodSeconds: 30
      securityContext:
        
        
        fsGroup: 2001
      initContainers:
        
      containers:
        - name: jira
          image: "registry01.my.corp/sre/jira-software:8.20.7"
          imagePullPolicy: IfNotPresent
          env:
            
            - name: ATL_TOMCAT_SCHEME
              value: "https"
            - name: ATL_TOMCAT_SECURE
              value: "true"
            
            
            
            - name: ATL_PROXY_NAME
              value: "jira.my.corp.dev"
            - name: ATL_PROXY_PORT
              value: "443"
            
            
            - name: ATL_DB_TYPE
              value: "mysql57"
            
            
            - name: ATL_DB_DRIVER
              value: "com.mysql.jdbc.Driver"
            
            
            - name: ATL_JDBC_URL
              value: "jdbc:mysql://address=(protocol=tcp)(host=jira-db)(port=3306)/jirak8s?sessionVariables=default_storage_engine=InnoDB"
            
            
            - name: ATL_JDBC_USER
              valueFrom:
                secretKeyRef:
                  name: jira-db-secret
                  key: USERNAME
            - name: ATL_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: jira-db-secret
                  key: PASSWORD
            
            
            
            
            - name: SET_PERMISSIONS
              value: "true"
            - name: JIRA_SHARED_HOME
              value: "/var/atlassian/application-data/shared-home"
            - name: JVM_SUPPORT_RECOMMENDED_ARGS
              valueFrom:
                configMapKeyRef:
                  key: additional_jvm_args
                  name: sd-jira-jvm-config
            - name: JVM_MINIMUM_MEMORY
              valueFrom:
                configMapKeyRef:
                  key: min_heap
                  name: sd-jira-jvm-config
            - name: JVM_MAXIMUM_MEMORY
              valueFrom:
                configMapKeyRef:
                  key: max_heap
                  name: sd-jira-jvm-config
            - name: JVM_RESERVED_CODE_CACHE_SIZE
              valueFrom:
                configMapKeyRef:
                  key: reserved_code_cache
                  name: sd-jira-jvm-config
            
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: ehcache
              containerPort: 40001 
              protocol: TCP 
            - name: ehcacheobject
              containerPort: 40011
              protocol: TCP
            
          readinessProbe:
            httpGet:
              port: 8080
              path: /status
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 30
          resources:
            requests:
              cpu: "2"
              memory: 2G
          volumeMounts:
            
            - name: local-home
              mountPath: "/var/atlassian/application-data/jira"
            - name: local-home
              mountPath: "/opt/atlassian/jira/logs"
              subPath: "log"
            - name: shared-home
              mountPath: "/var/atlassian/application-data/shared-home"
            
            
            
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "/shutdown-wait.sh"]
        
        
        
      volumes:
        
        
        - name: shared-home
          persistentVolumeClaim:
            claimName: jira-shared-home
        
        
        
        
        
  
  
  volumeClaimTemplates:
  - metadata:
      name: local-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "portworx-pso-fb-v3"
      resources:
        requests:
          storage: 8Gi

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

Managed Synchrony in Confluence

Currently, you have two options for Synchrony in the Confluence helm chart:

  1. Deploy with Synchrony disabled, which sets -Dsynchrony.btf.disabled=true
  2. Or deploy with Synchrony enabled, which deploys a Statefulset, Service, entry point, etc. for a stand-alone Synchrony

I would like to add another option to provide a Synchrony managed by Confluence.

To do this, I would add a "synchrony.managed" boolean to the values. If managed is set to false (and synchrony.enabled is true), it deploys the stand-alone Synchrony. If managed is set to true (and synchrony.enabled is true). It would deploy the following:

  • Same service as stand-alone
  • JVM option of -Dsynchrony.btf.disabled=false
  • Additional exposed ports on the Confluence stateful set for Synchrony's service port and Hazelcast port. This will require a port name change to avoid conflicts.
  • Additional labels added to stateful set for Confluence so service will select it
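A sketch of how the proposed values interface might look (the managed key does not exist yet; it is the flag being proposed here):

```yaml
synchrony:
  enabled: true
  # Proposed new flag: false keeps today's stand-alone Synchrony StatefulSet,
  # true deploys a Synchrony managed by Confluence instead.
  managed: true
```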

Driver for the database MySQL 5.7 not found

It appears my database configuration isn't being acknowledged.

Below is my error log.

2021-08-14 13:12:04,624+0000 JIRA-Bootstrap FATAL [c.a.jira.startup.JiraStartupLogger] Driver for the database MySQL 5.7 not found. Ensure it is installed in the 'lib' directory.

Below is the values.yaml for the database.

database:
      type: mysql8
      url: jdbc:mysql://mysql/my_database
      driver: com.mysql.jdbc.Driver
      credentials:
        secretName: mysql-creds
        usernameSecretKey: username
        passwordSecretKey: password

How to reproduce.

  1. Helm install

[Suggestion] - Allow setting TLS host(s)

Suggestion

Dear Atlassian charts team,

we would like to be able to set the TLS host(s) separately from the normal (rules) hosts to make it easier to work with wildcard certificates (issued e.g. with Letsencrypt) which is especially useful on pre-prod environments.

Example known from many other charts:

  ingress:
    enabled: true
    apiVersion: ...
    hostName: service.staging.example.com
    tls:
    - hosts:
      - "*.staging.example.com"
      secretName: tls-secret
    ...

Thank you
Patrick

Product

Jira, Confluence, Bitbucket, Other

Code of Conduct

  • I agree to follow this project's Code of Conduct

Login infinite loop if there is more than 1 pod [istio]

@kennymacleod @bkwiatek-atlassian ,
I am facing login issues when I scale the pods to >1. This works when there is only one pod.
I found the resolution in Atlassian docs.
But how can I incorporate the change in helm through values.yaml ?

The resolution is to add -DjvmRoute=<node id> to JVM_EXTRA_ARGS in setenv.sh.

Resolution
Verify that node name(s) are consistent across the configuration elements. You will need to have the same node name defined in:

  • /cluster.properties file at jira.node.id
  • /bin/setenv.sh under JVM_EXTRA_ARGS (-DjvmRoute=)
  • Your load balancer configuration (example for Apache: BalancerMember http://: route=nodename)

https://confluence.atlassian.com/jirakb/keep-being-redirected-to-the-login-page-after-providing-the-right-credentials-on-jira-data-center-610435649.html
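If your chart version exposes an additionalJvmArgs list in values.yaml (verify this against your version), the flag can be passed like so. Note that jvmRoute must be unique per node, so a static value only verifies the mechanism with a single pod; a per-pod value would need something like the Downward API:

```yaml
jira:
  additionalJvmArgs:
    # Must match the node id and the route name in the load balancer config.
    # A static value like this only works for a single replica.
    - -DjvmRoute=jira-0
```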

Old releases (e.g. 0.5.0) got removed from Helm repo

Hey guys,

we noticed that some of the old releases (e.g. 0.5.0) got removed from the index.yaml in the repository. I know this project is in a very early stage, but we are also trying to adapt your new charts as soon and as early as possible. Hence two questions: Is there any particular reason why this happened and should we expect this more often?

Thanks for your answers.

Patrick

NFS-FIXER pod starts before PVC created

Hi,

I ran into something rather unusual when using nfsPermissionFixer:

helm install -n $n jira --values values.yaml data-center-helm-charts/jira --debug

I noticed that the nfs-fixer would time out waiting for the condition

install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /root/data-center-helm-charts/src/main/charts/jira
client.go:268: [debug] Starting delete for "jira-nfs-fixer" Job
client.go:297: [debug] jobs.batch "jira-nfs-fixer" not found
client.go:122: [debug] creating 1 resource(s)
client.go:477: [debug] Watching for changes to Job jira-nfs-fixer with timeout of 5m0s
client.go:505: [debug] Add/Modify event for jira-nfs-fixer: ADDED
client.go:544: [debug] jira-nfs-fixer: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:505: [debug] Add/Modify event for jira-nfs-fixer: MODIFIED
client.go:544: [debug] jira-nfs-fixer: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
helm.go:81: [debug] failed pre-install: timed out waiting for the condition

After scratching my head until I was bald, I noticed I could alter the pre-install value in nfs-permission-fixer.yaml

Before:

  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"

After:

  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"

This appears to have resolved one of my problems.

The second problem was the actual command run by the nfs-fixer. If others are using an EFS-backed PVC, I found that the following command in your values.yaml doesn't prevent Jira from starting:

command: "chown 2001: /shared-home"

Hopefully, this may be of help to others encountering the same!

Unable to create directory for deployment

Below is the error log.
14-Aug-2021 13:32:21.183 SEVERE [Catalina-startStop-1] org.apache.catalina.startup.HostConfig.beforeStart Unable to create directory for deployment: [/opt/atlassian/jira/conf/Catalina/localhost]

I found the solution in your documentation found in the below link.
https://confluence.atlassian.com/jirakb/jira-server-throws-unable-to-create-directory-for-deployment-error-on-startup-389781040.html

Your chart doesn't make it easy to implement the solution. I think just running the command below would be enough:
chown -R jira:jira /opt/atlassian/jira/conf/Catalina/localhost

How to reproduce.

  1. Helm install

Can you make the chart more flexible please?

Mirror mode desired

Hi,

I have deployed the chart, but can't figure out how to get the server in mirror mode. Any help would be appreciated.

Best,
Friedrich

Make ingress path configurable

Currently, the ingress path for all applications is set to "/". Could this be changed to a configurable value?

rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
        - path: "/"    =>    {{ $.Values.ingress.path }}
          backend:
            serviceName: {{ include "jira.fullname" $ }}
            servicePort: {{ $.Values.jira.service.port }}
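With the change applied, the template might read as follows, defaulting to "/" so existing installations keep working (sketch only, not the chart's actual source):

```yaml
rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
        # Configurable path, falling back to the current hard-coded default.
        - path: {{ $.Values.ingress.path | default "/" | quote }}
          backend:
            serviceName: {{ include "jira.fullname" $ }}
            servicePort: {{ $.Values.jira.service.port }}
```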

[Suggestion] - Custom Docker image repository and tags for nfsPermissionFixer and Fluentd

Suggestion

Please update the Helm charts to support specifying the repository for the nfsPermissionFixer Docker image. In addition, please update the Fluentd values to support specifying the image repository independently from the image tag. These changes will enable deploying the charts in air gapped environments and allow overriding the Docker repository while preserving the default image tag.
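A possible shape for the requested values; every key name and value below is an illustrative proposal, not the chart's current schema:

```yaml
volumes:
  sharedHome:
    nfsPermissionFixer:
      # Proposed: repository overridable for air-gapped registries,
      # independent of the tag.
      imageRepository: registry.internal.example/mirror/alpine  # hypothetical
      imageTag: latest
fluentd:
  # Proposed: split the currently combined image reference into repository
  # and tag so only the repository needs overriding.
  imageRepo: registry.internal.example/mirror/fluentd  # hypothetical
  imageTag: v1.11.5                                    # placeholder tag
```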

Product

Jira, Confluence, Bitbucket, Other

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Deploy into k8s only bitbucket solution in HA

Suggestion

We want to use a chart or some complete solution to deploy only bitbucket into kubernetes cluster (EKS for example) with HA. The database could be managed externally, Jira and elasticsearch are running and ready on our company.

Thanks

Product

Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Allow the ability to set more than one host on the ingress object

Suggestion

We have multiple URLs our users access Jira with, due to legacy setups. Currently we have to manually add another host to the ingress object, but it would be nice to include it in values.yaml so we can manage everything from there.

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Automatically roll StatefulSet if changes occur in ConfigMaps

Suggestion

Problem

The helm charts ship with at least one ConfigMap for each application, e.g., https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/templates/config-jvm.yaml which can be updated by providing the respective values in values.yaml. However, the StatefulSet will not perform an update as it is not modified.

Details

An administrator might want to change the JVM memory options. A change to jvm.maxHeap, jvm.minHeap and jvm.reservedCodeCache will not roll the StatefulSet, because only the ConfigMap receives an update. A change to requests and limits of a container will roll the StatefulSet, because the StatefulSet gets updated. This seems like an inconsistent behavior, given that the StatefulSet will always roll if container requests and limits get updated.

The helm documentation shows an example how ConfigMaps can force updates: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments

Suggestion

Add the respective annotations such that changes to ConfigMaps perform updates to the StatefulSet

I can provide the necessary changes in a PR if wanted.
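The technique from the linked Helm documentation, applied to the JVM ConfigMap, would add an annotation like this to the StatefulSet's pod template (illustrative sketch):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Render a hash of the ConfigMap template into the pod template, so
        # any change to the ConfigMap also changes the StatefulSet and
        # triggers a rolling update.
        checksum/config-jvm: {{ include (print $.Template.BasePath "/config-jvm.yaml") . | sha256sum }}
```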

Product

Jira, Confluence, Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

Unable to scale stateful sets replica to 2

Hi,

I am currently using the helm chart by Atlassian to deploy my Confluence Data Center in a Kubernetes cluster.
I am able to do this successfully with 1 replica.

However, upon scaling with the command "kubectl scale statefulset --replicas=2", my initial first pod throws an error - Caused by: com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
Please refer to attached file for the full error log.
errorlog.txt

However, my second replica pod runs as per normal.

May I seek your assistance on this?

Thanks!

[Suggestion] - Should Bitbucket Smart Mirror have a livenessProbe?

Suggestion

I was wondering why the Helm chart does not have an option for setting up a livenessProbe for the Bitbucket Smart Mirror. There is already an open port used by the readinessProbe. Is there no need for a livenessProbe, or should this be added in the future?

My other concern is the Security Context of the Smart Mirror, since it runs as the root user in order to change the volume permissions. You also can't change the allowPrivilegeEscalation value in the values file, unfortunately. I think this should be configurable more securely if possible.

Product

Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

Jira doesn't use db host from dbconfig.xml

Describe the bug
Jira seems to be using the pod's own IP address instead of the hostname defined in dbconfig.xml.
I created a MySQL database using Helm. I have a Kubernetes service called mysql which exposes traffic on the MySQL pod.

Below is my dbconfig.xml:

<?xml version="1.0" encoding="UTF-8"?>

<jira-database-config>
  <name>defaultDS</name>
  <delegator-name>default</delegator-name>

  <schema-name>public</schema-name>
  <database-type>mysql8</database-type>
  <jdbc-datasource>
    <url>jdbc:mysql://mysql:3306/myDatabase?useUnicode=true&amp;characterEncoding=UTF8&amp;sessionVariables=default_storage_engine=InnoDB</url>
    <username>admin</username>
    <password>admin123</password>
    <driver-class>com.mysql.cj.jdbc.Driver</driver-class>

    <pool-min-size>20</pool-min-size>
    <pool-max-size>100</pool-max-size>
    <pool-min-idle>10</pool-min-idle>
    <pool-max-idle>20</pool-max-idle>

    <pool-max-wait>30000</pool-max-wait>
    <validation-query>select 1</validation-query>
    <time-between-eviction-runs-millis>30000</time-between-eviction-runs-millis>
    <min-evictable-idle-time-millis>5000</min-evictable-idle-time-millis>
    <pool-remove-abandoned>true</pool-remove-abandoned>
    <pool-remove-abandoned-timeout>300</pool-remove-abandoned-timeout>
    <pool-test-while-idle>true</pool-test-while-idle>
    <pool-test-on-borrow>false</pool-test-on-borrow>
  </jdbc-datasource>
</jira-database-config>

Below is the error log.

Caused by: java.sql.SQLSyntaxErrorException: SELECT command denied to user 'admin'@'10.1.1.163' for table 'propertyentry'
        at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
        at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
        at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1003)
        at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
        at com.querydsl.sql.AbstractSQLQuery.fetch(AbstractSQLQuery.java:446)

I've tried the fqdn below and got the same results.

mysql.NAMESPACE.svc.cluster.local:3306
mysql-0.mysql.NAMESPACE.svc.cluster.local:3306

I've also tried the below drivers.

com.mysql.jdbc.Driver
com.mysql.cj.jdbc.Driver

Steps to Reproduce

  1. Configure Jira via helm with image atlassian/jira-software:8.18.1-jdk11
  2. Configure mysql via helm from bitnami

Expected Behaviour
Jira connects to the mysql database.
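For what it's worth, a `SELECT command denied` error means the connection and authentication succeeded and the `admin` user simply lacks table privileges on `myDatabase`. A minimal sketch of the grants Jira would need, assuming the Bitnami chart's default pod name `mysql-0` and root credentials (both assumptions):

```shell
# Grant the Jira user full access to its schema; pod name and root password are assumptions.
kubectl exec -it mysql-0 -- mysql -uroot -p"$MYSQL_ROOT_PASSWORD" <<'SQL'
GRANT ALL PRIVILEGES ON myDatabase.* TO 'admin'@'%';
FLUSH PRIVILEGES;
SQL
```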

Session Affinity Recommendation

If you are going to load test this configuration, consider setting sessionAffinity on the Service to ClientIP. For some reason, even with session affinity configured on the ingress, the backend node switches under users and they then have issues with their sessions.

https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-userspace

A use case I have experienced:
Log out of Jira and sit at the logout page for 3 minutes. Click the "Log in again" button and see if (a) the node at the bottom of the page switches, or (b) you get a login error because your x-ausername switches to "anonymous" (usually related to the node switching), presumably because that node knows nothing about you. Setting this affinity on the Service seems to result in fewer errors with this use case. Perhaps you will have similar results.

"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."
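Applied to this chart's Service, the setting quoted above would look roughly like the following. This is a sketch only — the Atlassian charts don't currently expose these fields, so it would have to be applied via a patch or post-renderer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jira
spec:
  sessionAffinity: ClientIP            # default is None
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800            # default; 3 hours
  selector:
    app.kubernetes.io/name: jira       # selector/ports assumed from a typical chart
  ports:
    - port: 80
      targetPort: 8080
```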

Shared home emptyDir prevents k8s maintenance, autoscaling, etc.

Suggestion

I want to run a single node of Bitbucket. I don't need shared home and I'm not going to use it anyway.

The chart only seems to offer two options for shared-home: emptyDir or persistentVolumeClaim. Even if the volume is never going to be mounted, it is still created.

This is causing some issues later, as k8s autoscaler won't evict a pod that has local storage.

It looks like a better solution could be:

  • Support persistentVolumeClaim for multi-node deployments.
  • Do not create the shared-home volume at all for single-node deployments. What's the point of an unused emptyDir?

If I'm missing something and there's still some value in that emptyDir, perhaps it would be possible to add a switch that lets us avoid creating this volume?
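A hypothetical values.yaml switch for this could look like the following. Note that the `enabled` key under sharedHome is an assumption, not an existing chart option:

```yaml
# Hypothetical: 'volumes.sharedHome.enabled' does not exist in the chart today.
volumes:
  sharedHome:
    enabled: false   # skip creating the emptyDir/PVC entirely on single-node deployments
```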

Product

Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Always write Jira's dbconfig.xml when starting a new Pod

Suggestion

Problem

As a user, it is possible to end up with incorrect information in dbconfig.xml. As a result, the Jira pod either will not start (no database connection possible) or will start but show the Jira Setup dialog.

There are a few ways to run into an inconsistent state when following the recommended configuration of having a persistent local-home directory.

  1. Deploy via Helm without database configuration in values.yaml. Configure Jira. Spawn a second node. The second node will start with an empty dbconfig.xml and will show the Jira Setup dialog again.
  2. Deploy via Helm and set the database url/credentials in the values.yaml. Try to switch to a new database by changing the database configuration in Helm's values.yaml. Trigger an upgrade of the release via Helm. Recreated Pods will still connect to the old database, because dbconfig.xml is not overwritten.

A persistent local-home directory is recommended in the documentation:

Whilst the data that is stored in local-home can generally be regenerated (e.g. from the database), this can be a very expensive process that sometimes requires manual intervention.

Current Workaround

Currently, up to two additional steps have to be performed to get back to a consistent state:

  • Configure the database in Helm's values.yaml
  • Delete the PersistentVolumeClaim which is used for the local-home of a Pod.

Suggestion

The root cause lies in Jira's container image, specifically the entrypoint Python script at https://bitbucket.org/atlassian-docker/docker-atlassian-jira/src/master/entrypoint.py. The dbconfig.xml file is intentionally not overwritten.

A solution could be to regenerate dbconfig.xml whenever the database is configured in values.yaml. This could be done in the container image (e.g., always regenerate when the environment variables are set), or the StatefulSet could contain an additional initContainer which deletes the existing dbconfig.xml beforehand.
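A rough sketch of the initContainer variant, assuming the chart's usual local-home mount path for Jira:

```yaml
initContainers:
  - name: reset-dbconfig
    image: alpine:3
    # Remove any stale dbconfig.xml so the entrypoint regenerates it from the env vars.
    command: ["sh", "-c", "rm -f /var/atlassian/application-data/jira/dbconfig.xml"]
    volumeMounts:
      - name: local-home
        mountPath: /var/atlassian/application-data/jira
```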

This problem could also affect Confluence or Bitbucket, but I did not test it.

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

Can't change namespace for nfs

Suggestion

Hello,

the provisioning script tries to create cluster-scoped SecurityContextConstraints, and for obvious DevSecOps considerations our account is limited to namespace-wide definitions of SecurityContextConstraints; see the following error:

│ Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource SecurityContextConstraints "jira-nfs-server" **in namespace ""**: securitycontextconstraints.security.openshift.io "jira-nfs-server" is forbidden: User "XXXXX" cannot get resource "securitycontextconstraints" in API group "security.openshift.io" at the cluster scope
│
│   with module.jira[0].module.nfs.helm_release.nfs,
│   on modules\kubernetes\nfs\helm.tf line 1, in resource "helm_release" "nfs":
│    1: resource "helm_release" "nfs" {

We tried passing the namespace name via values.yaml in the Helm chart as a normal key/value pair, but were unsuccessful. Could you please tell us how we could achieve this?

Thank you very much in advance!

Product

Jira

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bitbucket readiness probe does not respect context path

Bug

I want to deploy Bitbucket on a context path like "/bitbucket".

I'm setting service.contextPath=/bitbucket. It updates SERVER_CONTEXT_PATH in the YAML (which is great), but the readiness probe still points at "/status". As a result, the pod is never "ready". I suspect the tests may have the same issue.

To get this to work in my setup, I'm currently using the following patch:

kubectl patch statefulset bitbucket --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/httpGet/path", "value": "/bitbucket/status"}]'

kubectl delete pod bitbucket-0

It looks like it could be a bug in the chart.
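If it is, the fix in the StatefulSet template would presumably be to prefix the probe path with the configured context path, along these lines (a sketch only — the exact values key and template layout in the chart may differ):

```yaml
readinessProbe:
  httpGet:
    port: 7990
    path: "{{ .Values.bitbucket.service.contextPath | default "" }}/status"
```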

Product

Bitbucket

Code of Conduct

  • I agree to follow this project's Code of Conduct

Synchrony Ingress Path Incorrect

The ingress path for synchrony is incorrect.

- path: "{{ trimSuffix "/" .Values.ingress.path }}/synchrony"

This should not rely on .Values.ingress.path, as it nests the Synchrony path inside the Confluence contextPath and/or puts Synchrony at the same base path, which users usually have to authenticate to via SSO. I fixed it manually by changing this path to /synchrony in the ingress, since I already have a context path for Confluence.

The path in the ingress should be changed to /synchrony if Confluence already uses a contextPath. Ideally, though, the Synchrony ingress should be broken down the same way the Confluence ingress is, with both a host and a path as parameters. The ingress requirements also differ depending on whether or not the user uses a contextPath with Confluence.

Use Case 1 - User uses a contextPath with Confluence

  • The same ingress can be used for both Confluence and Synchrony by putting them at different contextPaths. For example, Confluence would route to {{ .Values.ingress.host }}/confluence and Synchrony would route to {{ .Values.ingress.host }}/synchrony.

Use Case 2 - User does not use a contextPath with Confluence

  • The same ingress can't be used for both Confluence and Synchrony; a second ingress with a separate host is needed for Synchrony's ingress to avoid SSO collisions with the base URL. Synchrony would route to {{ .Values.synchrony.ingress.host }} and Confluence would route to {{ .Values.ingress.host }}, where they have no chance of colliding.

This also creates a conditional requirement: a second ingress specifically for Synchrony when a contextPath is not used.

I can work on a PR if you think creating a conditional for second ingress and breaking down synchrony into host and path options is feasible.
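A sketch of what the broken-down values could look like for the two cases (the `synchrony.ingress` block is hypothetical):

```yaml
# Use case 1: shared host, separate context paths
ingress:
  host: example.com
  path: /confluence
synchrony:
  ingress:
    host: example.com
    path: /synchrony

# Use case 2: separate hosts, no context path
# ingress:
#   host: confluence.example.com
# synchrony:
#   ingress:
#     host: synchrony.example.com
```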

Make securityContext more configurable on pod and container level

Currently, the Helm charts only allow setting securityContext.fsGroup, while it's common (and often mandatory) to use runAsUser and runAsGroup too. Please add those to the StatefulSet templates and make them configurable in values.yaml.

Perhaps it makes sense to make it 100% flexible, i.e. let a user define the entire securityContext at both the pod and container level. A container securityContext might require things like

        securityContext:
          readOnlyRootFilesystem: true

which can be enforced by a PodSecurityPolicy. So it would be good to be able to configure them in values.yaml.
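For reference, fully configurable pod- and container-level security contexts in values.yaml could look roughly like this (the key names here are hypothetical, not existing chart options):

```yaml
# Hypothetical values.yaml keys
podSecurityContext:
  fsGroup: 2001
  runAsUser: 2001
  runAsGroup: 2001
containerSecurityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
```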

Allow path based routing and setting path type for ingress resource.

Describe the bug
Can't connect to the application when using path-based routing. It's not possible to set the path type for the ingress resource; it has a fixed value of Prefix. Could the chart be updated so this value can be provided by the user? At the moment I can't connect to my Jira application from the browser.

Existing Behaviour
I'm interested in doing path-based routing with Jira and Confluence. At the moment I'm not able to connect to the application using the ingress resource host path: https://my-host.com/jira

Steps to Reproduce

  1. Deploy Jira with ingress resource

Expected Behaviour
Connect to application using the ingress resource host path.

Below is what my ingress resource looks like.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: jira
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/proxy-body-size: 250m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
  name: jira
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - my-host.com
    secretName: my-host.com
  rules:
  - host: my-host.com
    http:
      paths:
      - backend:
          service:
            name: jira
            port:
              number: 80
        path: /jira
        pathType: Prefix

Deployed a cluster, scaling past 2 nodes failed, now can't start any

Just deployed this with following values:

  • clustering.enabled: true
  • postgres72 db config
  • ingress true with nginx/https/tlssecretname set appropriately
  • volumes.*.persistentVolumeClaim: true

This seemed to work great initially; I was able to log in and run through the initial setup. After that I tried scaling up to 2 nodes, which also worked. Increasing to 3, the 3rd node would not start (similar errors to those mentioned below).

I then tried scaling to 0 nodes and scaling back up to 1 node, and now the single node is throwing RMICachePeer listener errors and null pointer exceptions. I can't seem to start anything now.

jira-pod0.log

Thanks!

Crowd nfsPermissionFixer Job runs before PVC created causing error (Run via Terraform)

I have a weird issue with the Crowd chart: setting the nfsPermissionFixer value to true (thus enabling the k8s Job object) makes the Helm install fail, because the Job runs before the PVC is created. This keeps the Job's pod in a pending state.

The event on the pending job pod is:

Warning FailedScheduling 9m13s default-scheduler persistentvolumeclaim "crowd-shared-home" not found

According to this, the PersistentVolumeClaim manifest should be applied before the Job. This doesn't seem to be the case for the Crowd Helm chart, or is it just me?
This isn't an issue with other apps such as Jira. Maybe it's because their nfsPermissionFixer runs in the StatefulSet's init container rather than as a separate Job manifest?

Running via Terraform.

values used:

volumes:
  localHome:
    persistentVolumeClaim:
      create: true
      storageClassName: "aws-ebs"
  sharedHome:
    persistentVolumeClaim:
      create: true
      storageClassName: "aws-efs"
nfsPermissionFixer:
  enabled: true

Also, the storage classes already exist and are backing EBS/EFS volumes for Jira.
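One possible workaround sketch, if the Job template can be patched or forked: annotate it as a post-install hook so it only runs after the rest of the release (including the PVC) has been created. The hook weight value is an arbitrary example:

```yaml
# Added to the nfsPermissionFixer Job's metadata (hypothetical patch)
metadata:
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "5"
```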

Make it possible to enable debug logging for selected packages

Currently, there's no way to provide your own log4j.properties. Even though the log level can be changed at runtime, it would be great to be able to define log levels for selected packages.

Perhaps it's worth adding a ConfigMap which is mounted as a file into WEB-INF/classes? Even though the charts allow defining additional volumes and volume mounts, one needs to create a ConfigMap with log4j.properties, mount it into the right path in the container, and reference the ConfigMap in an additional volume, which is obviously some work.
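A sketch of the manual workaround described above; the ConfigMap name, logger package, and Jira install path are all assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jira-log4j
data:
  log4j.properties: |
    log4j.logger.com.example.plugin = DEBUG

# Then, in values.yaml (keys per the charts' additional-volume support):
# additionalVolumes:
#   - name: log4j
#     configMap:
#       name: jira-log4j
# additionalVolumeMounts:
#   - name: log4j
#     mountPath: /opt/atlassian/jira/atlassian-jira/WEB-INF/classes/log4j.properties
#     subPath: log4j.properties
```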

Standalone Synchrony Pod run as root and securityContext does not exist for Synchrony

Suggestion

The standalone Synchrony pods for Confluence run as root, while the Confluence pods can run as a dedicated user. Output from exec'ing into the pod:

root@confluence-synchrony-0:~# ps axu
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   6972  3304 ?        Ss   Mar14   0:00 bash /scripts/start-synchrony.sh
root           6  0.9  3.3 8172900 1116324 ?     Sl   Mar14  59:30 java -Xms1g -Xmx2g -Xss2048k -XX:ActiveProcessorCount=2 -classpath /opt/atlassian/confluence/confluence/WEB-
root       13133  0.0  0.0   7236  4004 pts/0    Ss   20:11   0:00 /bin/bash
root       13178  0.0  0.0   8896  3308 pts/0    R+   20:24   0:00 ps axu

There is no securityContext option available for standalone Synchrony in values.yaml. Could you please advise?

Product

Confluence

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Making service sessionAffinity configurable

Suggestion

Hi All,

I have just started work on a PR to make service sessionAffinity configurable, but skimming through previous PRs and issues I cannot see this being discussed anywhere.

I'm having issues keeping a session alive on Jira, and I imagine I'll run into the same issue on the other Atlassian apps too.

I'm using an AWS ALB (through the AWS Load Balancer Controller) with sticky sessions configured against the JSESSIONID on the target groups; however, I still have session issues where it logs me out on refresh due to routing to a different pod. I found that configuring sessionAffinity on the Service fixes this. My question is: am I doing something wrong in needing sessionAffinity where others are not?

Any help would be appreciated. If I am correct and sessionAffinity is needed, I will finish the PR and send it through.

Cheers.

Product

Jira, Confluence, Bitbucket, Other

Code of Conduct

  • I agree to follow this project's Code of Conduct

[Suggestion] - Library chart which contains common logic between Atlassian charts

Suggestion

There is duplication of logic between the charts. An example of this is the fullname definition, which is duplicated in each of the charts. See the Bitbucket and Jira examples below:

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "bitbucket.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jira.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

These definitions are essentially the same logic, which could be defined once in a Helm library chart and shared between the Atlassian charts.

Some of the benefits of introducing a common or library chart are:

  • Code re-use and keeping charts DRY
  • Minimizing errors
  • Maintenance improvements

Note: The common logic can be extracted from the charts and added to the common chart bit by bit, avoiding a "big bang" approach.
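With a library chart (say, `common`), the duplicated definition shown above could collapse into a single shared template that each product chart invokes, roughly:

```yaml
# common/templates/_helpers.tpl (library chart)
{{- define "common.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

# jira/templates/_helpers.tpl (product chart, with common as a dependency)
{{- define "jira.fullname" -}}
{{- include "common.fullname" . -}}
{{- end }}
```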

Product

Jira, Confluence, Bitbucket, Other

Code of Conduct

  • I agree to follow this project's Code of Conduct

False warning on missing volumes

With volumes.localHome.persistentVolumeClaim.create: true and volumes.sharedHome.persistentVolumeClaim.create: false, but volumes.sharedHome.persistentVolumeClaim.claimName: jira-shared-home set, I'm getting this warning:

#################################################################################
######              WARNING: Persistent volume is not used!!!               #####
######            Data will be lost when the pod is terminated.             #####
#################################################################################

which isn't true, since both local and shared home are persisted. I just don't let Helm create the shared-home PVC; I pre-created it myself with the right storage class and annotations.
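The warning condition presumably only checks the `create` flags; it would also need to account for a pre-existing claim being referenced, roughly like this (a sketch — the actual template in the chart's NOTES.txt may differ):

```yaml
{{- if and (not .Values.volumes.localHome.persistentVolumeClaim.create)
           (not .Values.volumes.sharedHome.persistentVolumeClaim.create)
           (not .Values.volumes.sharedHome.persistentVolumeClaim.claimName) }}
{{/* ... emit the "Persistent volume is not used" warning banner here ... */}}
{{- end }}
```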
