
activemq-artemis-helm's People

Contributors

bigga94, denis111, fernandofederico1984, japzio, krancour, vromero


activemq-artemis-helm's Issues

Can't find Chart activemq-artemis

Hello,

When executing

C:\repository\charts>helm install --name jms-service stable/activemq-artemis
Error: failed to download "stable/activemq-artemis"

I also can't find it in the list of charts returned by helm search.

https://vromero.github.io/activemq-artemis-helm/index.yaml is missing

A helm repo update fails with a 404.

Steps to reproduce

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "activemq-artemis" chart repository (https://vromero.github.io/activemq-artemis-helm/):
    failed to fetch https://vromero.github.io/activemq-artemis-helm/index.yaml : 404 Not Found

(shortened for brevity)
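A quick way to confirm whether the chart repository index is really gone (URLs and repo name are taken from the error above; helm 2 syntax to match the commands in these issues):

# Check whether the repository index is reachable at all
curl -I https://vromero.github.io/activemq-artemis-helm/index.yaml

# If it is, (re-)add the repo under a name of your choice and refresh it
helm repo add activemq-artemis https://vromero.github.io/activemq-artemis-helm
helm repo update
helm search activemq-artemis

Note that the install command in the first issue references stable/activemq-artemis; the error there suggests the chart is not in the stable repo, so the chart reference has to use whatever name this repo was added under.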

Slave failing to connect

Hello, I am trying to install the Artemis chart in a private Kubernetes cluster, but the slave node is failing:
Readiness probe failed: dial tcp 10.244.21.14:61616: connect: connection refused

I do not see anything wrong; the pod is running but not ready (0/1).
User removed.
User added successfully.
Merging input with '/var/lib/artemis/etc-override/broker-10.xml'
Merging input with '/var/lib/artemis/etc-override/broker-11.xml'
Calculating performance journal ...
100000
(Apache ActiveMQ Artemis ASCII art banner)
Apache ActiveMQ Artemis 2.6.2
2018-09-18 11:40:39,548 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server
2018-09-18 11:40:39,674 INFO [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2018-09-18 11:40:39,692 INFO [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/journal to /var/lib/artemis/data/journal/oldreplica.1
2018-09-18 11:40:39,733 INFO [org.apache.activemq.artemis.core.server] AMQ221012: Using AIO Journal
2018-09-18 11:40:39,927 INFO [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 4,202,692,608
2018-09-18 11:40:40,080 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
2018-09-18 11:40:40,080 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
2018-09-18 11:40:40,081 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
2018-09-18 11:40:40,081 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-mqtt-protocol]. Adding protocol support for: MQTT
2018-09-18 11:40:40,082 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-openwire-protocol]. Adding protocol support for: OPENWIRE
2018-09-18 11:40:40,090 INFO [org.apache.activemq.artemis.core.server] AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
2018-09-18 11:40:40,345 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2018-09-18 11:40:40,446 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2018-09-18 11:40:40,570 INFO [org.apache.activemq.artemis.core.server] AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.6.2 [null] started, waiting live to fail before it gets active
2018-09-18 11:40:41,006 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2018-09-18 11:40:41,033 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2018-09-18 11:40:41,036 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to hawtio 1.5.5 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2018-09-18 11:40:41,039 INFO [io.hawt.jmx.UploadManager] Using file upload directory: /var/lib/artemis/tmp/uploads
2018-09-18 11:40:41,064 INFO [io.hawt.web.AuthenticationFilter] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2018-09-18 11:40:41,113 INFO [io.hawt.web.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/var/lib/artemis/etc/jolokia-access.xml]
2018-09-18 11:40:41,152 INFO [io.hawt.web.RBACMBeanInvoker] Using MBean [hawtio:type=security,area=jmx,rank=0,name=HawtioDummyJMXSecurity] for role based access control
2018-09-18 11:40:41,361 INFO [io.hawt.system.ProxyWhitelist] Initial proxy whitelist: [localhost, 127.0.0.1, 10.244.21.14, activemq-artemis-activemq-artemis-slave-0.activemq-artemis-activemq-artemis-slave.kube-system.svc.cluster.local]
2018-09-18 11:40:41,727 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://0.0.0.0:8161
2018-09-18 11:40:41,727 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://0.0.0.0:8161/console/jolokia
2018-09-18 11:40:41,727 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://0.0.0.0:8161/console
2018-09-18 11:40:46,586 INFO [org.apache.activemq.artemis.core.server] AMQ221024: Backup server ActiveMQServerImpl::serverUUID=cb9170da-bb34-11e8-a42b-0a580af4140d is synchronized with live-server.
2018-09-18 11:40:46,609 INFO [org.apache.activemq.artemis.core.server] AMQ221031: backup announced

PersistentVolume permissions

I had issues with permissions on the data folder when running this chart, same issue as #28.

My fix was to add this to the pod spec. It's surprising that others haven't hit this issue with the chart.

securityContext:
  fsGroup: 1000

Production ready? review 2020

Hi @vromero ,

Coming back to the production-readiness topic in
#14

Are all of these questions still open, or have you managed to sort some of them out?
I'm mostly interested in the Prometheus metrics.

  1. Haven't decided whether to generate the config or use KUBEPING.
  2. Artemis can't handle dynamic cluster sizes (a cluster of static size has to be formed at start); I have no idea what to do about this.
  3. Haven't completed the Prometheus integration; a messaging broker without metrics/alarms is more of a problem than a solution.
  4. Not sure what to do about load balancing. Today the slave is not-ready, but not-ready pods mess with things like helm install --wait or with StatefulSets of replicas > 1. No idea yet what to do about this (see the sketch below).
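For item 4, one Kubernetes-level option that is sometimes used for this kind of intentionally-not-ready replica (not something the chart necessarily does today, just a sketch) is a headless service that publishes pod addresses regardless of readiness, so broker-to-broker discovery keeps working while the slave legitimately reports not-ready:

apiVersion: v1
kind: Service
metadata:
  name: activemq-artemis-internal   # hypothetical name, for illustration only
spec:
  clusterIP: None                   # headless, used for discovery rather than load balancing
  publishNotReadyAddresses: true    # DNS entries exist even for pods that are not ready
  selector:
    app: activemq-artemis           # assumed label
  ports:
  - name: core
    port: 61616

This only helps discovery and DNS; it does not change how helm install --wait interprets readiness.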

AMQ222092: Connection to the backup node failed, removing replication now: ActiveMQRemoteDisconnectException[errorType=REMOTE_DISCONNECT message=null]

I'm getting the exception below when I deploy the Helm chart with replicas = 1:

2019-10-29 17:52:12,372 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.2 [si-activemq-ha-activemq-artemis-master-0, nodeID=260a4bed-fa74-11e9-8f9c-32d5856911e9]
2019-10-29 17:52:16,675 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2019-10-29 17:52:18,073 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2019-10-29 17:52:21,160 WARN [org.apache.activemq.artemis.core.server] AMQ222092: Connection to the backup node failed, removing replication now: ActiveMQRemoteDisconnectException[errorType=REMOTE_DISCONNECT message=null]
at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl.connectionDestroyed(RemotingServiceImpl.java:542) [artemis-server-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$Listener.connectionDestroyed(NettyAcceptor.java:829) [artemis-server-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.lambda$channelInactive$0(ActiveMQChannelHandler.java:83) [artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.6.2.jar:2.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_212]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.6.2.jar:2.6.2]

2019-10-29 17:52:22,779 WARN [org.apache.activemq.artemis.core.server] AMQ222251: Unable to start replication: java.lang.NullPointerException
at org.apache.activemq.artemis.core.journal.impl.JournalFilesRepository.closeFile(JournalFilesRepository.java:481) [artemis-journal-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.moveNextFile(JournalImpl.java:3019) [artemis-journal-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.journal.impl.JournalImpl.forceMoveNextFile(JournalImpl.java:2299) [artemis-journal-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.prepareJournalForCopy(JournalStorageManager.java:536) [artemis-server-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.startReplication(JournalStorageManager.java:597) [artemis-server-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.server.impl.SharedNothingLiveActivation$2.run(SharedNothingLiveActivation.java:178) [artemis-server-2.6.2.jar:2.6.2]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212]

2019-10-29 17:52:23,276 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.2 [260a4bed-fa74-11e9-8f9c-32d5856911e9] stopped, uptime 17.686 seconds

Could you please let me know what the issue might be?

Production ready?

Hi, thank you for this chart. Would you say it is ready to be used in a production environment?

Github Actions for automatic Release

Currently, an old chart version is published on gh-pages.
This could be fixed by using GitHub Actions, for example.
I have done that and could help with that, if desired.
See example.
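For reference, a minimal sketch of such a workflow using the community helm/chart-releaser-action (branch name, action versions, and the chart directory layout are assumptions and would need to be adapted to this repository):

name: Release Charts
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0                 # chart-releaser needs history to detect new chart versions
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: .                  # would need to point at the chart's directory in this repo
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"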

Management Console login fails

Hello, I deployed your Helm chart on our Kubernetes cluster and added an ingress on top of it in order to access http://activemq-artemis.devel.fdp.eit.zone/console/login
The only problem is that the specified username/password does not log in correctly: after filling in the login fields, the management console opens for a second and then goes back to the login page.
On the network tab I can see some forbidden calls to http://activemq-artemis.devel.fdp.eit.zone/console/refresh

Do you have any ideas?

Timeout while handshaking has occurred

I have installed the chart with 2 replicas and a static cluster configuration.
First I fixed a domain-naming issue so that a bridge was established.

Bridge ClusterConnectionBridge@3c966cfd [name=$.artemis.internal.sf.habroker-jms.477161f3-fdde-11e8-999f-0242c0022412, queue=QueueImpl[name=$.artemis.internal.sf.habroker-jms.477161f3-fdde-11e8-999f-0242c0022412, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=3504f3fc-fd53-11e8-9c53-0242c0022412], temp=false]@7994e50b targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@3c966cfd [name=$.artemis.internal.sf.habroker-jms.477161f3-fdde-11e8-999f-0242c0022412, queue=QueueImpl[name=$.artemis.internal.sf.habroker-jms.477161f3-fdde-11e8-999f-0242c0022412, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=3504f3fc-fd53-11e8-9c53-0242c0022412], temp=false]@7994e50b targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=habroker-jms-master-1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=habroker-jms-master-1-habroker-jms-master-pmh-depl-svc-kube-local], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1788364006[nodeUUID=3504f3fc-fd53-11e8-9c53-0242c0022412, connector=TransportConfiguration(name=habroker-jms-slave-1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=habroker-jms-slave-1-habroker-jms-slave-pmh-depl-svc-kube-local, address=jms, server=ActiveMQServerImpl::serverUUID=3504f3fc-fd53-11e8-9c53-0242c0022412])) [initialConnectors=[TransportConfiguration(name=habroker-jms-master-1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=habroker-jms-master-1-habroker-jms-master-pmh-depl-svc-kube-local], discoveryGroupConfiguration=null]] is connected

But the logs from the master nodes contain errors:

2018-12-12 07:51:53,968 ERROR [org.apache.activemq.artemis.core.server] AMQ224088: Timeout (10 seconds) while handshaking has occurred.

  • Is there any way to debug and troubleshoot this?
  • Which broker settings are responsible for the handshake process? (See the sketch below.)
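On the second question: the 10-second limit in AMQ224088 corresponds to the acceptor's handshake-timeout setting, and the warning is also commonly triggered by plain TCP health checks or port probes that open a connection without ever speaking the Artemis protocol. A hedged broker.xml sketch (acceptor name and protocol list are illustrative):

<acceptors>
  <!-- raise the handshake timeout (seconds), or set it to 0 to disable the check,
       if slow links or non-Artemis TCP probes keep tripping AMQ224088 -->
  <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP;handshake-timeout=30</acceptor>
</acceptors>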

Office Hours?

I'm struggling to make up my mind on this chart and the clustering model. I'd like to meet with anyone who understands K8s and Artemis well. At the very least @DanSalt.

Anyone interested, please answer with a timezone, or email me at victor.romero and then the famous Google mail server dot com, so I can figure out a good time.

activemq-artemis ingress controller does not work

Hi @vromero,
I'm trying to install your activemq-artemis-helm on an RKE cluster. Everything works fine, but I couldn't deploy the main service as a LoadBalancer, so I tried to use an ingress controller, and I have a problem:
when I log in, after a few seconds I am redirected to the login page again.
If I port-forward the svc, everything works fine.
What could the problem be?
Here is my ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: user-t82ds
    field.cattle.io/ingressState: '{"YXJ0ZW1pcy1hY3RpdmVtcS9kZWZhdWx0L2FydGVtaXMtbWFzdGVyLnNtYXJ0cm9hZC1jY2wtZGV2LmNvcnAvLy84MTYx":""}'
    field.cattle.io/publicEndpoints: '[{"addresses":["10.64.20.23","10.64.20.24","10.64.20.25","10.64.20.26","10.64.20.27"],"port":80,"protocol":"HTTP","serviceName":"default:activemq-activemq-artemis","ingressName":"default:artemis-activemq","hostname":"artemis-master.smartroad-ccl-dev.corp","path":"/","allNodes":false}]'
  creationTimestamp: "2020-11-04T11:07:05Z"
  generation: 1
  labels:
    cattle.io/creator: norman
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2020-11-04T11:07:57Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/creatorId: {}
          f:field.cattle.io/ingressState: {}
          f:field.cattle.io/publicEndpoints: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
      f:spec:
        f:rules: {}
    manager: rancher
    operation: Update
    time: "2020-11-04T11:07:57Z"
  name: artemis-activemq
  namespace: default
  resourceVersion: "3808148"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/artemis-activemq
  uid: 978535ba-7002-41ac-a5d1-87bdb96274fd
spec:
  rules:
  - host: artemis-master.smartroad-ccl-dev.corp
    http:
      paths:
      - backend:
          serviceName: activemq-activemq-artemis
          servicePort: 8161
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.64.20.23
    - ip: 10.64.20.24
    - ip: 10.64.20.25
    - ip: 10.64.20.26
    - ip: 10.64.20.27

thanks for the support
Cristian
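Since the console works through a port-forward (always the same pod) but not through the ingress, one thing worth trying, purely as a guess rather than a confirmed fix, is cookie-based session affinity on the nginx ingress so the hawtio session always lands on the same broker pod:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "artemis-console"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"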

HA configuration with enabled persistence

Hi Victor,
It is a great chart! Have you tested it with persistence enabled?
I tried to configure it for HA with persistence enabled, but I ran into the following issue:
I configured my cluster with one master and one slave server. I realized that only one PVC instance is created for all replicas (masters and slaves), which leads to conflicts in the journals and replicated data.
Master logs:

2018-07-05 12:55:19,109 INFO  [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2018-07-05 12:55:20,311 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/bindings to /var/lib/artemis/data/bindings/oldreplica.1
2018-07-05 12:55:20,322 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/journal to /var/lib/artemis/data/journal/oldreplica.2
2018-07-05 12:55:20,376 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/paging to /var/lib/artemis/data/paging/oldreplica.1

Slave logs:

2018-07-05 13:11:16,209 INFO  [org.apache.activemq.artemis.core.server] AMQ221000: backup Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)
2018-07-05 13:11:16,240 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/bindings to /var/lib/artemis/data/bindings/oldreplica.2
2018-07-05 13:11:16,257 INFO  [org.apache.activemq.artemis.core.server] AMQ221055: There were too many old replicated folders upon startup, removing /var/lib/artemis/data/journal/oldreplica.1
2018-07-05 13:11:16,266 INFO  [org.apache.activemq.artemis.core.server] AMQ222162: Moving data directory /var/lib/artemis/data/journal to /var/lib/artemis/data/journal/oldreplica.3

In the logs we can see that the slave broker deletes the master broker's data: "There were too many old replicated folders upon startup, removing /var/lib/artemis/data/journal/oldreplica.1".

The documentation says:

Data Replication
When using replication, the live and the backup servers do not share the same data directories, all data synchronization is done over the network. Therefore all (persistent) data received by the live server will be duplicated to the backup. (https://activemq.apache.org/artemis/docs/2.4.0/ha.html)

Warning
Once a cluster node has been configured it is common to simply copy that configuration to other nodes to produce a symmetric cluster. However, care must be taken when copying the Apache ActiveMQ Artemis files. Do not copy the Apache ActiveMQ Artemis data (i.e. the bindings, journal, and large-messages directories) from one node to another. When a node is started for the first time and initializes its journal files it also persists a special identifier to the journal directory. This id must be unique among nodes in the cluster or the cluster will not form properly. (https://activemq.apache.org/artemis/docs/2.4.0/clusters.html)

Maybe I am missing something in the configuration? Or should I adjust the chart to create separate PVCs for the master and slave nodes in the cluster? Another possible workaround is to use the shared-store ha-policy. (See the sketch below.)

Best regards,
Hanna
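If the chart were adjusted as suggested, the usual StatefulSet mechanism for giving every replica (master and slave alike) its own volume is a volumeClaimTemplates section rather than a single shared PVC; a minimal sketch (names and sizes are illustrative, not the chart's actual values):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: activemq-artemis-master          # illustrative
spec:
  serviceName: activemq-artemis-master
  replicas: 1
  # ...
  volumeClaimTemplates:
  - metadata:
      name: data                          # one PVC per pod, e.g. data-activemq-artemis-master-0
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi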

Messages arriving on the wrong master never consumed?

The part I do not understand about the provided setup is the following (which might of course be because I am new to Artemis):
I register my consumer with the service ...-activemq-artemis, which routes me to one of the 2 master brokers (k8s replicas). I do the same with my producer, and there I might end up on the other master. As far as I can see, there is no setting in the broker config for redistributing messages between the 2 masters. So I end up with messages that are never consumed, right?
One solution that comes to mind would be to have the replicated masters redistribute, or maybe to register consumers at master-0 as well as at master-1. But how can this be achieved with 2 master brokers accessible only through one service? (See the sketch below.)
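For the redistribution part: Artemis can forward messages from a node that has no consumers to one that does, but only when redistribution is enabled per address in broker.xml; a sketch of the documented setting (the catch-all match is illustrative, not the chart's default):

<address-settings>
  <address-setting match="#">
    <!-- a value >= 0 enables redistribution; 0 means redistribute as soon as
         the local queue has no consumers (default is -1, i.e. disabled) -->
    <redistribution-delay>0</redistribution-delay>
  </address-setting>
</address-settings>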

BUG: Missing security context for master and slave when using persistence mode.

Hi,
When using persistent storage, Artemis is not able to write to its data path and raises:

AMQ222141: Node Manager can not open file /var/lib/artemis/data/journal/server.lock: java.io.IOException: No such file or directory

This is due to the fact that in the current Helm chart Artemis runs as its own user called "artemis", which doesn't have the proper permissions to write to the requested PVC.

To solve this issue I added a securityContext to both
activemq-artemis/templates/master-statefulset.yaml
and
activemq-artemis/templates/slave-statefulset.yaml

  securityContext:
    fsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true

This solves the issue, and both master and slave are able to write their data to the given PVC.

Slave Readiness Probes failing

Hi,

I'm trying to deploy the ActiveMQ Artemis Helm chart, but the slave's readiness probe fails.

(Screenshot attached: "Screen Shot 2019-10-01 at 5 20 48 PM")

How can I find the root cause? (See the sketch below.)
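To dig into a failing readiness probe, the usual first steps are the probe events and the broker log; for example (release and pod names assumed to match the earlier slave issue):

# The pod's events show the exact probe failure (connection refused, timeout, ...)
kubectl describe pod activemq-artemis-activemq-artemis-slave-0

# The broker log shows whether the slave is simply waiting for the live server,
# which is normal for a backup and keeps a readiness probe on 61616 failing
kubectl logs activemq-artemis-activemq-artemis-slave-0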

Issue when installing the Helm chart: permission problems in etc

I've got:

Permission denied: ../etc/broker.xml
Permission denied: ../etc/broker.xml
sed: couldn't open temporary file ../etc/sed1d0l85: Permission denied

when installing the chart and running the pod. I see that the user in the Dockerfile is fixed, whereas my OpenShift runs the pod with a random user.

Do you think I need to update the Dockerfile to add:

RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory

Or do you have another solution?

LoadBalancer service pending

Hi,
I am trying to install your Artemis Helm chart on a Rancher cluster.
After installing it, the service remains in a pending state.

Has anyone experienced the same issue?

Thanks,
Matteo.

Enable Prometheus metrics

Hi,
I'm trying to enable the Prometheus metrics, but even though I set them in values.yaml no metrics are exposed.
Could you tell me whether I have to set other parameters?

prometheus:
  # Prometheus JMX Exporter: exposes the majority of Kafkas metrics
  jmx:
    # Interval at which Prometheus scrapes metrics, note: only used by Prometheus Operator
    interval: 10s

    # Timeout at which Prometheus timeouts scrape run, note: only used by Prometheus Operator
    scrapeTimeout: 10s

    # Port jmx-exporter exposes Prometheus format metrics to scrape
    port: 5556

  operator:
    # Are you using Prometheus Operator?
    enabled: true

    serviceMonitor:
      # Namespace Prometheus is installed in
      namespace: monitoring

      # Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
      # [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
      # [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
      #selector:
      #  prometheus: kube-prometheus

Thanks
Cristian
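One thing to double-check with the Prometheus Operator: a ServiceMonitor is only scraped if it matches the operator's serviceMonitorSelector, so the commented-out selector block in the values above usually has to be filled in with whatever label your Prometheus instance actually selects on; a hedged example reusing the commented default:

prometheus:
  operator:
    enabled: true
    serviceMonitor:
      namespace: monitoring
      selector:
        prometheus: kube-prometheus   # must match the serviceMonitorSelector of your Prometheus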

cluster config in separate core node

Hi, very useful chart. I am using Artemis in a single-node configuration for now (planning for a cluster later), but I noticed that the generated cluster configuration is not correct.

After some investigation I found this change in ActiveMQ Artemis 2.0:
apache/activemq-artemis@0006627

which adds xsi:schemaLocation="urn:activemq:core " to the core node when the broker config is generated for the first time.

This causes configure-cluster.sh to add a separate "core" node in broker.xml, which is ignored by Artemis. The original schemaLocation also contains a blank space at the end, which is probably unintended.

This can also break any other xml/xsl broker config added through override mechanism.

I have tried fixing the XML by adding the schema location, but in this case the broker refuses to start, pointing to a missing connector-ref.

In this case broker.xml contains
<core:connector-ref xmlns:core="urn:activemq:core">activemq-0</core:connector-ref>

so probably the explicit "core:connector-ref" is not recognised properly?

Not sure what the proper bugfix should be in this case.

Connection fail

pod/active-mq-activemq-artemis-slave-0 0/1 Running 0 4m
pod/active-mq-activemq-artemis-slave-1 0/1 Running 0 4m

(error screenshot attached)

Broken Icon

There is still a broken link to the old icon after #39.
This prevents some tools, e.g. Rancher, from using the chart.

I think there is an open merge request that could fix this: #31 (haven't actually validated it, fixed it otherwise).

Remove Slave Nodes.

I came across an article by Red Hat where they talk about Artemis in the OpenShift/Kubernetes world.

The article is located here: https://developers.redhat.com/blog/2020/01/10/architecting-messaging-solutions-with-apache-activemq-artemis/

A few of the sections make it quite clear that having a slave node for HA in Kubernetes is not needed, as HA is achieved by K8s itself.

In the orchestration section:

For example, there is no master/slave failover (so no hot backup broker present). Instead, there is a single pod per broker instance that is health monitored and restarted by Kubernetes, which ensures broker HA.

In the Broker section

On Kubernetes, broker HA is achieved through health checks and container restarts. On-premise, the broker HA is achieved through master/slave (shared store or replication).

I have removed it from the Helm chart and am busy testing; I will create a pull request if you are keen, and we can simplify the cluster model a bit more.

Getting Validation Error while helm install

Please help with the error below:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(StatefulSet.spec.template.spec.initContainers[0]): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Container, ValidationError(StatefulSet.spec.template.spec): unknown field "strategy" in io.k8s.api.core.v1.PodSpec]
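Both rejected fields are valid Kubernetes settings that have ended up at the wrong level of the manifest: imagePullSecrets belongs to the pod spec (not to an individual container), and a StatefulSet has no strategy field, it uses updateStrategy. A sketch of where they would normally sit (image and secret names are placeholders):

apiVersion: apps/v1
kind: StatefulSet
spec:
  updateStrategy:                # StatefulSets use updateStrategy, not strategy
    type: RollingUpdate
  template:
    spec:
      imagePullSecrets:          # pod-level, not per-container
      - name: my-registry-secret
      containers:
      - name: activemq-artemis
        image: example/activemq-artemis:latest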

Loadbalancing with external load balancer

I have two masters and two slaves, and I am trying to use an external network load balancer; however, it does not seem to work between the two masters. I changed service.yaml to use 'type: NodePort' instead of 'LoadBalancer', but everything else is the same. Why does the console work on 8161 with master-0 but not seem to balance to master-1?

Allow to specify a ConfigMap for config-override

In some cases we would like to modify the default config. This is exposed by the Docker image using the snippets concept. Currently this is not exposed in the chart (unless I've missed it).

Would it be possible to allow an optional ConfigMap to be specified for the config-override volume? (See the sketch below.)
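Grounded only in what the logs elsewhere on this page show (the image merges anything found under /var/lib/artemis/etc-override), one possible shape for this, if the chart exposed it, would be mounting a user-supplied ConfigMap at that path; a hypothetical sketch:

# values.yaml (hypothetical key)
configOverride:
  existingConfigMap: my-broker-overrides

# rendered into the pod template
volumes:
- name: config-override
  configMap:
    name: my-broker-overrides
containers:
- name: activemq-artemis
  volumeMounts:
  - name: config-override
    mountPath: /var/lib/artemis/etc-override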

Can't install chart

When I try to follow the steps, the following error is returned:
helm install --name my-release activemq-artemis/activemq/artemis
Error: Failed to fetch https://vromero.github.io/charts/activemq-artemis-0.0.1.tgz : 404 Not Found

I can validate that index.yaml is found by doing:
helm repo list
activemq-artemis https://vromero.github.io/activemq-artemis-helm/

Is there something I've missed, or has the zip been removed?
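Given the repo name shown by helm repo list and the chart name in the 404 URL (activemq-artemis-0.0.1.tgz), the chart reference in the install command looks malformed (activemq-artemis/activemq/artemis has an extra slash); with that repo added, the usual form is repo-name/chart-name, for example (helm 2 syntax as in the issue):

helm repo update
helm install --name my-release activemq-artemis/activemq-artemis

Whether the 0.0.1 package itself is still published is a separate question; if the .tgz really was removed from gh-pages, the install will keep failing even with a corrected reference.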

After failover, master and slave both alive

Scenario
Restart one master node (kubectl delete pod xxx) to simulate a service interruption.

Expected behaviour
The slave becomes active immediately, and when the master is back up (restarted by k8s) and synchronized, there is still only one active ActiveMQ Artemis instance for that master/slave pair.

Actual behaviour
The slave becomes active immediately (✔️), but after k8s restarts the master pod, it, too, is considered active (❌), at least from the perspective of k8s (1/1 pods). The consequence is that k8s would route requests to both master and slave (via the service DNS).

Additional information
I haven't really tested much beyond this observation. I don't know if the master node would have actually responded to requests. But I find it a bit weird that the system doesn't return to the original state after a failover.

The Artemis HA documentation suggests using <allow-failback>true</allow-failback> on the slave and <check-for-live-server>true</check-for-live-server> on the master. I must confess I don't understand why the chart explicitly configures the opposite, but my experience with Artemis is very limited so far. (See the sketch below.)
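For reference, the two settings mentioned above live under the ha-policy section of broker.xml; a sketch of what enabling fail-back would look like, using the documented Artemis syntax rather than whatever this chart currently templates:

<!-- master broker -->
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- slave broker -->
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>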
