
mq-helm's Introduction

IBM MQ Sample Helm Chart

This repository provides a Helm chart to deploy an IBM® MQ container built from the IBM MQ Container GitHub repository, and has been verified against the 9.3.5 branch.

Pre-reqs

Prior to using the Helm chart you will need to install two dependencies:

  1. Helm version 3
  2. Kubectl

You will also need a Kubernetes environment for testing; this could be a private cloud environment or a deployment on a public cloud such as IBM Cloud, AWS, Azure or Google Cloud.

The repository includes two directories:

  • ibm-mq: the Helm chart for IBM MQ
  • samples: a number of deployment samples

Issues and contributions

For issues relating specifically to the Helm chart, please use the GitHub issue tracker. If you do submit a Pull Request related to this Helm chart, please indicate in the Pull Request that you accept and agree to be bound by the terms of the Developer's Certificate of Origin.

License

The code and scripts are licensed under the Apache License 2.0.

This Helm chart defaults to deploying the free-to-use, non-warranted IBM MQ Advanced for Developers container for development use only, with the option to customize it to use other container images.

When deploying IBM MQ for production or non-production use into a Kubernetes environment, you can license based on the resource limits specified on the container by using the IBM License Service. The IBM License Service is deployed into the Kubernetes Cluster and tracks usage based on Kubernetes Pod annotations. How this can be defined within the Helm chart is described here. To understand how to deploy the IBM License Service please review here.
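For illustration, a minimal sketch of the kind of pod annotations the IBM License Service reads; the productID value below is a placeholder rather than a real identifier, and the correct values for your entitlement should be taken from the IBM licensing documentation:

    metadata:
      annotations:
        productName: "IBM MQ Advanced"
        productID: "<product-id-from-ibm-licensing-docs>"   # placeholder value
        productMetric: "VIRTUAL_PROCESSOR_CORE"             # example metric
        productChargedContainers: "All"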

This chart includes the capability to deploy IBM MQ Native HA. When used for production and non-production this feature is available to customers with entitlement to IBM MQ Advanced.

Copyright

© Copyright IBM Corporation 2021

mq-helm's People

Contributors

callumpjackson, jgrzybowski, lorellalou


mq-helm's Issues

Extending size of persistent storage in AKS

Hi

In AKS it is possible to extend the size of the persistent volumes just by changing the PVC value of “resources.requests.storage”.
After a little while persistent volumes are extended with the new size.
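For context, a minimal sketch of the fragment of the live PVC that gets edited in that case (the claim name is illustrative):

    # Fragment of the PersistentVolumeClaim edited in place on AKS
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: qm-nativeha-s1-ibm-mq-0   # illustrative claim name
    spec:
      resources:
        requests:
          storage: 12Gi               # increased from the original size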

When I try to use the Helm chart to update the PVC through volumeClaimTemplates, using the override:
persistence:
  qmPVC:
    enable: true
    size: 12Gi

I get the following error:

"Error: UPGRADE FAILED: cannot patch "nativeha-s1-ibm-mq" with kind StatefulSet: StatefulSet.apps "nativeha-s1-ibm-mq" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden"

It seems to be a restriction with helm that volumeClaimTemplates can’t be changed?
If I do not use the Helm chart to maintain the PVCs, the Helm chart will be out of sync with the live values. Will that be a problem?
Can I circumvent the problem somehow?

Rgds
John Barbesgaard

AMQ3230E rrcE_SSL_BAD_KEYFILE_LABEL (407) on AKS

Hi - I'm setting up a Native HA queue manager on AKS. The pods fail during startup due to the following error, and I can't figure out what is going wrong.

I'm using chart v4.0.0 and MQ v9.3.1.1 from the developer image.

AMQ3230E: Native HA network connection attempt from network address '172.20.88.197' failed. [CommentInsert1(172.20.88.197), CommentInsert2(secureapphelm), CommentInsert3(rrcE_SSL_BAD_KEYFILE_LABEL (407) (????) (ha) ())]

Any help appreciated
Rgds
John Barbesgaard

Restoring queue configuration

How can we retain or restore queue configuration created over time, when the Native HA pods' replica count is scaled down to 0 and then scaled back up to the original value?

Support topologySpreadConstraints

Hi,

In order to guarantee that a replica doesn't get scheduled in the same zone or on the same node as another, we need to be able to specify topologySpreadConstraints.

I am happy to contribute back this change to the project.
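For reference, a minimal sketch of the kind of constraint being requested, as it would appear in the pod template (the label selector value reuses this chart's app.kubernetes.io/name label; everything else is illustrative):

    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: ibm-mq
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: ibm-mq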

Is this the new repo for MQ?

Hello!

I'm wondering what happened to the ibm-mqadvanced-server-dev chart. Is this the new replacement for that chart? I don't see that repo on GitHub anymore. Wondering if, going forward, this is the canonical and supported Helm chart for IBM MQ on K8s. Thanks!!

Modify Log stanza of the qm.ini file

Hello,

We have been trying to customize the log stanza when deploying a qmgr through the helm chart by adding the following in this section

    Log:
       LogPrimaryFiles=100
       LogSecondaryFiles=250
       LogFilePages=65535
       LogBufferPages=4096

But it seems that those values are not being applied to the configuration within the statefulset, and when we review the qm.ini file on this path /var/mqm/qmgrs/<QMGR>/qm.ini only the default values are set.

Our environment is EKS and we are using Helm to deploy it.
It would be helpful if you could provide the right way to add the log stanza configuration during the deployment.
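For illustration, a minimal sketch of a ConfigMap carrying such an INI fragment; the ConfigMap name is hypothetical, and the chart values key used to mount it is an assumption here rather than something confirmed from the chart:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: qmgr-log-ini            # hypothetical name
    data:
      log.ini: |
        Log:
          LogPrimaryFiles=100
          LogSecondaryFiles=250
          LogFilePages=65535
          LogBufferPages=4096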

Add loadBalancerIP to service-loadbalancer template

We have a use case where we are using an internal static IP for our ibm-mq loadbalancer. Setting this requires a loadBalancerIP parameter along with an annotation. The current service-loadbalancer template does not have this and others may benefit, so could it be added or would you accept a PR with the following addition?

spec:
  loadBalancerIP: {{ .Values.route.loadBalancer.loadBalancerIP }}
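For illustration, a hedged sketch of a matching values-file entry; the loadBalancerIP key mirrors the template snippet above, while the annotations block and the Azure-internal annotation are assumptions shown only as one possible example:

    route:
      loadBalancer:
        loadBalancerIP: 10.240.0.25   # hypothetical internal static IP
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"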

Thank you!

Enhance this helm chart to support readOnlyRootFileSystem settings on container security context

Setting readOnlyRootFileSystem to true for containers is a best practice from a security perspective.

According to the documentation in the mq-container repo, the container supports running with a read-only root filesystem, but the Helm chart doesn't support this configuration.
https://github.com/ibm-messaging/mq-container/blob/master/docs/usage.md#running-with-a-read-only-root-filesystem

Here are a few reasons why we might want to do this:

Immutability: By making the root filesystem read-only, you ensure that the application's environment remains the same as when you deployed it. This can help prevent issues caused by changes to the filesystem.

Preventing Malware Persistence: If a container becomes compromised (e.g., an attacker manages to run a malicious script), a read-only filesystem can prevent the malware from writing files to the filesystem and gaining persistence.

Enforcing Good Application Design: Applications running in containers should be designed to be stateless and to write any persistent data to a separate storage volume, not to the container's filesystem. A read-only root filesystem enforces this design principle.

Reducing the Attack Surface: A read-only filesystem can limit the capabilities of an attacker by preventing them from writing or modifying files on the container's filesystem.

I would like to create a PR to enhance this helm chart.

The basic idea is to create two emptyDir volumes for the /run and /tmp folders and mount them when readOnlyRootFileSystem is set to true.
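A minimal sketch of what that could look like in the generated pod spec; the qmgr container name matches this chart, while the volume names and the values switch that would enable this are assumptions:

    containers:
      - name: qmgr
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
          - name: run                 # writable scratch space for /run
            mountPath: /run
          - name: tmp                 # writable scratch space for /tmp
            mountPath: /tmp
    volumes:
      - name: run
        emptyDir: {}
      - name: tmp
        emptyDir: {}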

Error while Sending the Message

Hi @callumpjackson

While sending a message I am getting the error below. Can you please help?
W0812 12:47:31.053301 1125 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Starting amqsphac secureapphelm
sendMessage.sh: line 27: /opt/mqm/samp/bin/amqsphac: No such file or directory
akhil1_aggarwal@cs-469377201213-default:~/mq-helm-main/samples/GoogleKubernetesEngine/test$

nativeha failover -> messages lost

Hello,

I'm testing out a nativeha setup. To achieve this I am deploying a cluster on GKE with nativeha enabled, but multi-instance disabled.

I then used the supplied test scripts to send a few messages (sendMessage.sh), which indeed sends some messages to my active pod, ibm-mq-0.

I then kill the sendMessage.sh script, and in turn kill the ibm-mq-0 pod.
ibm-mq-1 becomes active after a few seconds.

If I run the getMessages.sh script, I get no messages at all. It seems the messages weren't replicated.

QMNAME(ibmmq)                                             STATUS(Running) DEFAULT(yes) STANDBY(Not permitted) INSTNAME(Installation1) INSTPATH(/opt/mqm) INSTVER(9.3.1.0) ROLE(Active) INSTANCE(ibm-mq-1) INSYNC(yes) QUORUM(3/3)
 INSTANCE(ibm-mq-1) ROLE(Active) REPLADDR(ibm-mq-replica-1) CONNACTV(yes) INSYNC(yes) BACKLOG(0) CONNINST(yes) ALTDATE(2023-02-16) ALTTIME(22.23.35)
 INSTANCE(ibm-mq-2) ROLE(Replica) REPLADDR(ibm-mq-replica-2) CONNACTV(yes) INSYNC(yes) BACKLOG(0) CONNINST(yes) ALTDATE(2023-02-16) ALTTIME(22.23.35)
 INSTANCE(ibm-mq-0) ROLE(Replica) REPLADDR(ibm-mq-replica-0) CONNACTV(yes) INSYNC(yes) BACKLOG(0) CONNINST(yes) ALTDATE(2023-02-16) ALTTIME(22.23.35)

dspmq command seems to indicate the cluster is running fine. BACKLOG(0) never changes as far as I can see, but my messages got lost. If I fail back to ibm-mq-0 they are still not there.

Am I doing something wrong in my tests?

Addition of existing Security groups to EKS cluster

We have modified the stacks according to our requirements to launch worker nodes in private subnets and include boot nodes. Now we are facing issues with the boot node accessing the EKS cluster API endpoint. We have fixed this by manually adding the security group to EKS, allowing either the entire VPC CIDR or the IP of the boot node alone. We need to add this security group within the eksctl commands as shown below.

We tried adding the value below, but it failed with the association:

clusterSecurityGroup="sg-1234567"

Resources:
  BootNodeProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref RoleName
      Path: /
  BootNode:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          Required:
            - StackPropertiesFile
        StackPropertiesFile:
          commands:
            01_create_opt_ibm:
              command: mkdir -p /opt/ibm
              test: test ! -d /opt/ibm
          files:
            /opt/ibm/cluster.yaml:
              mode: '000755'
              owner: root
              group: root
              content:
                !Sub
                  - |
                    apiVersion: eksctl.io/v1alpha5
                    kind: ClusterConfig
                    metadata:
                      name: ${ClusterName}
                      region: ${AWS::Region}
                    vpc:
                      subnets:
                        private:
                          ${AZ1}: { id: ${PrivateSubnet1ID} }
                          ${AZ2}: { id: ${PrivateSubnet2ID} }
                          ${AZ3}: { id: ${PrivateSubnet3ID} }
                      clusterEndpoints:
                        publicAccess: true
                        privateAccess: true
                    managedNodeGroups:
                      - name: ng-1-workers
                        instanceType: m5.large
                        desiredCapacity: 3
                        privateNetworking: true
                  - AZ1: !Ref AvailabilityZone1
                    AZ2: !Ref AvailabilityZone2
                    AZ3: !Ref AvailabilityZone3
                    PrivateSubnet1ID: !Ref PrivateSubnet1ID
                    PrivateSubnet2ID: !Ref PrivateSubnet2ID
                    PrivateSubnet3ID: !Ref PrivateSubnet3ID
                    ClusterName: !Ref EKSClusterName

Thanks,
Murali

How multiple trust store certificates are recognised?

Hi, I am trying to understand how the multiple truststore certificates are getting recognised.
Based on the mq-container docs at https://github.com/ibm-messaging/mq-container/blob/master/docs/usage.md ,
the way it is designed is the following:

  • /etc/mqm/pki/trust/ - for certificates with only the public key

Example:

  • /etc/mqm/pki/trust/0/tls.crt
  • /etc/mqm/pki/trust/1/tls.crt

As per the helm value file and the helm documentation, it is:
pki:
  trust:
    - name: default
      secret:
        secretName: appsecret
        items:
          - app.crt

If I add multiple items (.crt files),
pki:
  trust:
    - name: default
      secret:
        secretName: appsecret
        items:
          - app1.crt
          - app2.crt
          - app3.crt

they will be placed under a single index - '0' in the MQ container file system, say,
/etc/mqm/pki/trust/0/app1.crt
/etc/mqm/pki/trust/0/app2.crt
/etc/mqm/pki/trust/0/app3.crt

Does that approach still work, or is MQ designed to expect something like this instead?
/etc/mqm/pki/trust/0/app1.crt
/etc/mqm/pki/trust/1/app2.crt
/etc/mqm/pki/trust/2/app3.crt

How to create groups?

I can create users without any issue, but when I try to create a group I get the following message:

Unknown user/group

The provided name for the given type is not defined on the system. Make sure that the name is defined and that it matches the type of entity

If I use one of the groups under /etc/group then the group can be created, but this file is locked, and the groupadd command inside the running pod gives me the following message:

groupadd: Permission denied.
groupadd: cannot lock /etc/group; try again later.

I need to add groups with specific permissions; how can I achieve this?

How to specify replica container productID

When I build my own container using the Helm chart from Git, and this container is a Native HA instance, how do I provide different productIDs in the annotations for the active instance and the two standby instances respectively?
I assume I need to do so, since there are different licenses for the two.

rgds
John Barbesgaard

Multiple mqsc Config Maps not working

When using multiple CMs in the Helm chart only the last CM is used to map to a volume

mqscConfigMaps:
  - name: mqsc-configmap
    items:
      - cvp.mqsc
      - ping.mqsc
  - name: mqsc-tmp-configmap
    items:
      - temp.mqsc

This results in only the last ConfigMap being mapped to a volume (screenshot omitted).

is it possible to provide trust store public keys via ConfigMap?

After reading the readme, I found that it is only possible to provide trust store public keys via a Secret:

pki:
  trust:
    - name: default
      secret:
        secretName: appsecret
        items:
          - app.crt

Would it be possible to support providing trust certificates via a ConfigMap? In our OCP cluster, the service CA is provided as a ConfigMap.

Readiness probe failed:

hi,

I am using the latest Helm chart to create a Native HA MQ deployment, but I am getting a readiness probe failure.

Please help to resolve the issue and let me know if you need more info.

Below is the output of "describe pod" and the logs of the pod.

Describe -

Name: mq-poc1-ibm-mq-0
Namespace: mq
Priority: 0
Node: ip-
Start Time: Thu, 16 Dec 2021 00:38:31 +0100
Labels: app.kubernetes.io/instance=mq-poc1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibm-mq
app.kubernetes.io/version=9.2.4.0
controller-revision-hash=mq-poc1-ibm-mq-fcdc6dd87
helm.sh/chart=ibm-mq-1.0.0
statefulSetName=mq-poc1-ibm-mq
statefulset.kubernetes.io/pod-name=mq-poc1-ibm-mq-0
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP:
IPs:
IP:
Controlled By: StatefulSet/mq-poc1-ibm-mq
Containers:
qmgr:
Container ID: docker://e7a1f1a200ccf98b9b6ac5a2b4dbae0d9a8576ef6e62f5f5fde4391fb7ab11b5
Image: ibmcom/mq:9.2.4.0-r1
Image ID: docker-pullable://ibmcom/mq@sha256:7590ea14750ecba7bd24b758dc9978d2280e880fcda6b4a996068966dea8c61d
Ports: 1414/TCP, 9443/TCP, 9157/TCP, 9414/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Thu, 16 Dec 2021 00:38:41 +0100
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 2
memory: 2Gi
Liveness: exec [chkmqhealthy] delay=0s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [chkmqready] delay=0s timeout=3s period=5s #success=1 #failure=1
Startup: exec [chkmqstarted] delay=0s timeout=5s period=5s #success=1 #failure=24

Environment:
LICENSE: accept
MQ_QMGR_NAME: mqpoc1
MQ_NATIVE_HA: true
AMQ_CLOUD_PAK: true
MQ_NATIVE_HA_INSTANCE_0_NAME: mq-poc1-ibm-mq-0
MQ_NATIVE_HA_INSTANCE_0_REPLICATION_ADDRESS: mq-poc1-ibm-mq-replica-0(9414)
MQ_NATIVE_HA_INSTANCE_1_NAME: mq-poc1-ibm-mq-1
MQ_NATIVE_HA_INSTANCE_1_REPLICATION_ADDRESS: mq-poc1-ibm-mq-replica-1(9414)
MQ_NATIVE_HA_INSTANCE_2_NAME: mq-poc1-ibm-mq-2
MQ_NATIVE_HA_INSTANCE_2_REPLICATION_ADDRESS: mq-poc1-ibm-mq-replica-2(9414)
LOG_FORMAT: basic
MQ_ENABLE_METRICS: true
DEBUG: false
MQ_ENABLE_TRACE_CRTMQDIR: false
MQ_ENABLE_TRACE_CRTMQM: false
MQ_EPHEMERAL_PREFIX: /run/mqm
MQ_GRACE_PERIOD: 29
Mounts:
/etc/mqm/mq.ini from ini-cm-helmsecurepoc1 (ro,path="mq.ini")
/etc/mqm/mq.mqsc from mqsc-cm-helmsecurepoc1 (ro,path="mq.mqsc")
/etc/mqm/pki/keys/ibmwebspheremqqmgr1 from ibmwebspheremqqmgr1 (ro)
/mnt/mqm from qm (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2s9z2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
qm:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: qm-mq-poc1-ibm-mq-0
ReadOnly: false
ibmwebspheremqqmgr1:
Type: Secret (a volume populated by a Secret)
SecretName: qmgr1secret
Optional: false
mqsc-cm-helmsecurepoc1:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: helmsecurepoc1
Optional: false
ini-cm-helmsecurepoc1:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: helmsecurepoc1
Optional: false
kube-api-access-2s9z2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Normal Scheduled 56s default-scheduler Successfully assigned mq/mq-poc1-ibm-mq-0 to ip-**.eu-west-1.compute.internal
Normal SuccessfulAttachVolume 50s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c5c0ccaa-6251-46fb-9407-6a21a9035cd2"
Normal Pulled 46s kubelet Container image "ibmcom/mq:9.2.4.0-r1" already present on machine
Normal Created 45s kubelet Created container qmgr
Normal Started 45s kubelet Started container qmgr
Warning Unhealthy 36s (x2 over 41s) kubelet Startup probe failed:
Warning Unhealthy 5s (x5 over 25s) kubelet Readiness probe failed:


Logs of the pod -

2021-12-15T23:38:44.180Z CPU architecture: amd64
2021-12-15T23:38:44.180Z Linux kernel version: 5.4.149-73.259.amzn2.x86_64
2021-12-15T23:38:44.180Z Container runtime: kube
2021-12-15T23:38:44.180Z Base image: Red Hat Enterprise Linux 8.5 (Ootpa)
2021-12-15T23:38:44.180Z Running as user ID 1001 with primary group 0, and supplementary groups 1000
2021-12-15T23:38:44.180Z Capabilities: none
2021-12-15T23:38:44.180Z seccomp enforcing mode: disabled
2021-12-15T23:38:44.180Z Process security attributes: none
2021-12-15T23:38:44.181Z Detected 'ext4' volume mounted to /mnt/mqm
2021-12-15T23:38:48.878Z Using queue manager name: mqpoc1
2021-12-15T23:38:48.892Z Created directory structure under /var/mqm
2021-12-15T23:38:48.892Z Image created: 2021-11-12T16:26:21+00:00
2021-12-15T23:38:48.892Z Image tag: ibm-mqadvanced-server-dev:9.2.4.0-r1.20211112161954.1f6d37a-amd64
2021-12-15T23:38:48.933Z MQ version: 9.2.4.0
2021-12-15T23:38:48.933Z MQ level: p924-L211105.DE
2021-12-15T23:38:48.933Z MQ license: Developer
2021-12-15T23:38:51.147Z Creating queue manager mqpoc1
2021-12-15T23:38:51.148Z Starting web server
2021-12-15T23:38:51.164Z Detected existing queue manager mqpoc1
2021-12-15T23:38:51.183Z Removing existing ServiceComponent configuration
2021-12-15T23:38:51.184Z Starting queue manager
2021-12-15T23:38:51.203Z AMQ6206I: Command strmqm was issued. [CommentInsert1(strmqm), CommentInsert2(strmqm -x mqpoc1)]
2021-12-15T23:38:51.275Z Initializing MQ Advanced for Developers custom authentication service
2021-12-15T23:38:51.275Z mqhtpass: MQStart options=Primary qmgr=mqpoc1
2021-12-15T23:38:51.350Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1
2021-12-15T23:38:51.215Z AMQ5775I: Successfully applied automatic configuration INI definitions. [CommentInsert1(INI)]
2021-12-15T23:38:51.395Z AMQ5051I: The queue manager task 'LOG-FORMAT' has started. [ArithInsert2(1), CommentInsert1(LOG-FORMAT)]
2021-12-15T23:38:51.397Z AMQ5051I: The queue manager task 'LOGGER-IO' has started. [ArithInsert2(1), CommentInsert1(LOGGER-IO)]
2021-12-15T23:38:51.410Z AMQ5051I: The queue manager task 'NATIVE-HA' has started. [ArithInsert2(1), CommentInsert1(NATIVE-HA)]
2021-12-15T23:38:51.501Z AMQ7814I: IBM MQ queue manager running as replica instance 'mq-poc1-ibm-mq-1'. [CommentInsert2(mq-poc1-ibm-mq-1), CommentInsert3(mqpoc1)]
2021-12-15T23:38:51.516Z AMQ3208E: Native HA network connection to 'mq-poc1-ibm-mq-0' could not be established. [CommentInsert1(mq-poc1-ibm-mq-0), CommentInsert2(mq-poc1-ibm-mq-replica-0(9414)), CommentInsert3(rrcE_HOST_NOT_AVAILABLE - Remote host not available, retry later. (111) (0x6F) (mq-poc1-ibm-mq-replica-0 (9414)) (TCP/IP) (????))]
2021-12-15T23:38:51.522Z AMQ3211I: Native HA outbound connection established to 'mq-poc1-ibm-mq-2'. [CommentInsert1(mq-poc1-ibm-mq-2), CommentInsert2(mq-poc1-ibm-mq-replica-2(9414))]
2021-12-15T23:38:51.523Z AMQ3235I: Native HA instance 'mq-poc1-ibm-mq-1' is not connected to enough other instances to start the process of selecting the active instance. [ArithInsert2(1), CommentInsert1(mq-poc1-ibm-mq-1), CommentInsert2(mqpoc1), CommentInsert3(Full)]
2021-12-15T23:38:51.529Z AMQ3213I: Native HA inbound connection accepted from 'mq-poc1-ibm-mq-2'. [CommentInsert1(mq-poc1-ibm-mq-2), CommentInsert2(10.32.0.3)]
2021-12-15T23:38:51.543Z AMQ3215I: The local Native HA instance 'mq-poc1-ibm-mq-1' is now the active instance of queue manager 'mqpoc1'. [ArithInsert1(2), CommentInsert1(mq-poc1-ibm-mq-1), CommentInsert2(mqpoc1)]
2021-12-15T23:38:51.551Z AMQ7816I: IBM MQ queue manager 'mqpoc1' active instance 'mq-poc1-ibm-mq-1' has a quorum of synchronised replicas available. [CommentInsert2(mq-poc1-ibm-mq-1), CommentInsert3(mqpoc1)]
2021-12-15T23:38:51.663Z AMQ7229I: 207 log records accessed on queue manager 'mqpoc1' during the log replay phase. [ArithInsert1(207), CommentInsert1(mqpoc1)]
2021-12-15T23:38:51.664Z AMQ7230I: Log replay for queue manager 'mqpoc1' complete. [ArithInsert1(207), CommentInsert1(mqpoc1)]
2021-12-15T23:38:51.664Z AMQ5051I: The queue manager task 'CHECKPOINT' has started. [ArithInsert2(1), CommentInsert1(CHECKPOINT)]
2021-12-15T23:38:51.666Z AMQ7231I: 0 log records accessed on queue manager 'mqpoc1' during the recovery phase. [CommentInsert1(mqpoc1)]
2021-12-15T23:38:51.667Z AMQ7232I: Transaction manager state recovered for queue manager 'mqpoc1'. [CommentInsert1(mqpoc1)]
2021-12-15T23:38:51.997Z Started replica queue manager
2021-12-15T23:38:52.022Z Starting metrics gathering
2021-12-15T23:38:51.762Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 �����⌂
2021-12-15T23:38:51.775Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 `�Ti�⌂
2021-12-15T23:38:51.798Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 �
2021-12-15T23:38:51.849Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1
2021-12-15T23:38:51.851Z mqhtpass: mqhtpass_authenticate_user without CSP user set. effectiveuid=mqm env=0, callertype=1, type=0, accttoken=102380180 applidentitydata=102380212
2021-12-15T23:38:51.857Z mqhtpass: mqhtpass_authenticate_user without CSP user set. effectiveuid=mqm env=3, callertype=1, type=0, accttoken=83583636 applidentitydata=83583668
2021-12-15T23:38:51.936Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 л���⌂
2021-12-15T23:38:51.938Z mqhtpass: Terminating secondary
2021-12-15T23:38:51.969Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 ��N��⌂
2021-12-15T23:38:51.971Z mqhtpass: Terminating secondary
2021-12-15T23:38:52.014Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1 @�k(�⌂
2021-12-15T23:38:52.027Z mqhtpass: mqhtpass_authenticate_user without CSP user set. effectiveuid=mqm env=0, callertype=1, type=0, accttoken=81986196 applidentitydata=81986228
2021-12-15T23:38:52.034Z mqhtpass: MQStart options=Secondary qmgr=mqpoc1
2021-12-15T23:38:51.702Z AMQ7467I: The oldest log file required to start queue manager mqpoc1 is S0000000.LOG. [CommentInsert1(mqpoc1), CommentInsert2(S0000000.LOG)]
2021-12-15T23:38:51.702Z AMQ7468I: The oldest log file required to perform media recovery of queue manager mqpoc1 is S0000000.LOG. [CommentInsert1(mqpoc1), CommentInsert2(S0000000.LOG)]
2021-12-15T23:38:51.702Z AMQ7233I: 0 out of 0 in-flight transactions resolved for queue manager 'mqpoc1'. [CommentInsert1(mqpoc1)]
2021-12-15T23:38:51.710Z AMQ7467I: The oldest log file required to start queue manager mqpoc1 is S0000000.LOG. [CommentInsert1(mqpoc1), CommentInsert2(S0000000.LOG)]
2021-12-15T23:38:51.710Z AMQ7468I: The oldest log file required to perform media recovery of queue manager mqpoc1 is S0000000.LOG. [CommentInsert1(mqpoc1), CommentInsert2(S0000000.LOG)]
2021-12-15T23:38:51.743Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(3), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.743Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(1), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.744Z AMQ5037I: The queue manager task 'ERROR-LOG' has started. [ArithInsert2(1), CommentInsert1(ERROR-LOG)]
2021-12-15T23:38:51.744Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(2), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.744Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(4), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.744Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(5), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.745Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(6), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.745Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(7), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.747Z AMQ5037I: The queue manager task 'APP-SIGNAL' has started. [ArithInsert2(8), CommentInsert1(APP-SIGNAL)]
2021-12-15T23:38:51.783Z AMQ8003I: IBM MQ queue manager 'mqpoc1' started using V9.2.4.0. [CommentInsert1(9.2.4.0), CommentInsert3(mqpoc1)]
2021-12-15T23:38:51.801Z AMQ5051I: The queue manager task 'DUR-SUBS-MGR' has started. [ArithInsert2(1), CommentInsert1(DUR-SUBS-MGR)]
2021-12-15T23:38:51.801Z AMQ9410I: Repository manager started.
2021-12-15T23:38:51.810Z AMQ5051I: The queue manager task 'TOPIC-TREE' has started. [ArithInsert2(1), CommentInsert1(TOPIC-TREE)]
2021-12-15T23:38:51.815Z AMQ5051I: The queue manager task 'IQM-COMMS-MANAGER' has started. [ArithInsert2(1), CommentInsert1(IQM-COMMS-MANAGER)]
2021-12-15T23:38:51.821Z AMQ5024I: The command server has started. ProcessId(246). [ArithInsert1(246), CommentInsert1(SYSTEM.CMDSERVER.1)]
2021-12-15T23:38:51.824Z AMQ5022I: The channel initiator has started. ProcessId(247). [ArithInsert1(247), CommentInsert1(SYSTEM.CHANNEL.INITQ)]
2021-12-15T23:38:51.829Z AMQ5051I: The queue manager task 'AUTOCONFIG' has started. [ArithInsert2(1), CommentInsert1(AUTOCONFIG)]
2021-12-15T23:38:51.852Z AMQ8942I: Starting to process automatic MQSC configuration script.
2021-12-15T23:38:51.859Z AMQ8024I: IBM MQ channel initiator started. [CommentInsert1(SYSTEM.CHANNEL.INITQ)]
2021-12-15T23:38:51.980Z AMQ8940E: An automatic MQSC command was not successful. [ArithInsert1(2), ArithInsert2(4001), CommentInsert1(define ql(MQPOC) usage(xmitq) trigger trigdata(MQPOC1.MQPOC) initq(SYSTEM.CHANNEL.INITQ)), CommentInsert2(AMQ8150E: IBM MQ object already exists.)]
2021-12-15T23:38:51.981Z AMQ8940E: An automatic MQSC command was not successful. [ArithInsert1(2), ArithInsert2(4092), CommentInsert1(define chl(MQPOC1.MQPOC) chltype(sdr) conname('mq-poc-ibm-mq-server(1414)') xmitq(MQPOC) SSLCIPH('TLS_RSA_WITH_AES_256_CBC_SHA')), CommentInsert2(AMQ8242E: SSLCIPH definition wrong.)]
2021-12-15T23:38:51.982Z AMQ8940E: An automatic MQSC command was not successful. [ArithInsert1(2), ArithInsert2(4001), CommentInsert1(define chl(MQPOC.MQPOC1) chltype(rcvr) SSLCIPH('ANY_TLS12_OR_HIGHER')), CommentInsert2(AMQ8150E: IBM MQ object already exists.)]
2021-12-15T23:38:51.983Z AMQ8939I: Automatic MQSC configuration script has completed, and contained 31 command(s), of which 3 had errors. [ArithInsert1(31), ArithInsert2(3), CommentInsert1(0)]
2021-12-15T23:38:51.984Z AMQ5037I: The queue manager task 'STATISTICS' has started. [ArithInsert2(1), CommentInsert1(STATISTICS)]
2021-12-15T23:38:51.984Z AMQ5037I: The queue manager task 'MARKINTSCAN' has started. [ArithInsert2(1), CommentInsert1(MARKINTSCAN)]
2021-12-15T23:38:51.985Z AMQ5037I: The queue manager task 'DEFERRED_DELIVERY' has started. [ArithInsert2(1), CommentInsert1(DEFERRED_DELIVERY)]
2021-12-15T23:38:51.985Z AMQ5037I: The queue manager task 'DEFERRED-MSG' has started. [ArithInsert2(1), CommentInsert1(DEFERRED-MSG)]
2021-12-15T23:38:51.985Z AMQ9722W: Plain text communication is enabled.
2021-12-15T23:38:51.986Z AMQ5026I: The listener 'SYSTEM.LISTENER.TCP.1' has started. ProcessId(281). [ArithInsert1(281), CommentInsert1(SYSTEM.LISTENER.TCP.1)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'MEDIA-IMAGES' has started. [ArithInsert2(1), CommentInsert1(MEDIA-IMAGES)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'RESOURCE_MONITOR' has started. [ArithInsert2(1), CommentInsert1(RESOURCE_MONITOR)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'ACTVTRC' has started. [ArithInsert2(1), CommentInsert1(ACTVTRC)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'LOGGEREV' has started. [ArithInsert2(1), CommentInsert1(LOGGEREV)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'EXPIRER' has started. [ArithInsert2(1), CommentInsert1(EXPIRER)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'Q-DELETION' has started. [ArithInsert2(1), CommentInsert1(Q-DELETION)]
2021-12-15T23:38:51.989Z AMQ5051I: The queue manager task 'PRESERVED-Q' has started. [ArithInsert2(1), CommentInsert1(PRESERVED-Q)]
2021-12-15T23:38:51.990Z AMQ5051I: The queue manager task 'ASYNCQ' has started. [ArithInsert2(1), CommentInsert1(ASYNCQ)]
2021-12-15T23:38:51.994Z AMQ5051I: The queue manager task 'MULTICAST' has started. [ArithInsert2(1), CommentInsert1(MULTICAST)]
2021-12-15T23:38:51.997Z AMQ5052I: The queue manager task 'QPUBSUB-CTRLR' has started. [ArithInsert2(1), CommentInsert1(QPUBSUB-CTRLR)]
2021-12-15T23:38:51.998Z AMQ5052I: The queue manager task 'QPUBSUB-QUEUE-NLCACHE' has started. [ArithInsert2(1), CommentInsert1(QPUBSUB-QUEUE-NLCACHE)]
2021-12-15T23:38:51.998Z AMQ5052I: The queue manager task 'QPUBSUB-SUBPT-NLCACHE' has started. [ArithInsert2(1), CommentInsert1(QPUBSUB-SUBPT-NLCACHE)]
2021-12-15T23:38:52.000Z AMQ5052I: The queue manager task 'PUBSUB-DAEMON' has started. [ArithInsert2(1), CommentInsert1(PUBSUB-DAEMON)]
2021-12-15T23:38:52.000Z AMQ5975I: 'IBM MQ Distributed Pub/Sub Controller' has started. [CommentInsert1(IBM MQ Distributed Pub/Sub Controller)]
2021-12-15T23:38:52.004Z AMQ5975I: 'IBM MQ Distributed Pub/Sub Fan Out Task' has started. [CommentInsert1(IBM MQ Distributed Pub/Sub Fan Out Task)]
2021-12-15T23:38:52.004Z AMQ5975I: 'IBM MQ Distributed Pub/Sub Command Task' has started. [CommentInsert1(IBM MQ Distributed Pub/Sub Command Task)]
2021-12-15T23:38:52.005Z AMQ5975I: 'IBM MQ Distributed Pub/Sub Publish Task' has started. [CommentInsert1(IBM MQ Distributed Pub/Sub Publish Task)]
2021-12-15T23:38:52.025Z AMQ5806I: Queued Publish/Subscribe Daemon started for queue manager mqpoc1. [CommentInsert1(mqpoc1)]
2021-12-15T23:38:52.061Z AMQ3213I: Native HA inbound connection accepted from 'mq-poc1-ibm-mq-0'. [CommentInsert1(mq-poc1-ibm-mq-0), CommentInsert2(10.38.0.1)]
2021-12-15T23:38:52.078Z AMQ3211I: Native HA outbound connection established to 'mq-poc1-ibm-mq-0'. [CommentInsert1(mq-poc1-ibm-mq-0), CommentInsert2(mq-poc1-ibm-mq-replica-0(9414))]
2021-12-15T23:39:20.616Z Started web server
2021-12-15T23:39:21.989Z AMQ5041I: The queue manager task 'AUTOCONFIG' has ended. [CommentInsert1(AUTOCONFIG)]

Running MQ on AKS with Istio, "TLS passthrough sends all traffic on 443 to MQ backend"

We are setting up MQ on AKS and we use Istio to manage TLS and traffic routing. Since MQ is expecting HTTPS traffic, we are not terminating TLS at the Istio gateway level and are using TLS mode PASSTHROUGH to send traffic through as-is. For this we also match port 443 of the gateway to the app service in the VirtualService configuration, which now sends all traffic arriving at the gateway on 443, including traffic meant for other services, to the MQ service. I understand that since Istio is not terminating TLS it won't be able to route the traffic. Other than creating another host and using it for MQ, is there another way we can handle this?
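For context, a hedged sketch of the SNI-based passthrough routing pattern being described; hostnames, resource names and the backend service name are placeholders, and this still relies on the MQ client presenting a distinct SNI host:

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: mq-gateway                # placeholder
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 443
            name: tls-mq
            protocol: TLS
          tls:
            mode: PASSTHROUGH         # do not terminate TLS at the gateway
          hosts:
            - "mq.example.com"        # placeholder host
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: mq-vs                     # placeholder
    spec:
      hosts:
        - "mq.example.com"
      gateways:
        - mq-gateway
      tls:
        - match:
            - port: 443
              sniHosts:
                - "mq.example.com"    # route by SNI instead of HTTP host
          route:
            - destination:
                host: qmgr-ibm-mq     # placeholder MQ service name
                port:
                  number: 1414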

Adding container policy

Hi
I want to set apparmor policy in annotations.
This policy must be defined per container, like:

annotations:
  container.apparmor.security.beta.kubernetes.io/container-name: runtime/default

This is simple enough for a Deployment, but I'm unsure how to set this for a StatefulSet that creates 3 individual MQ containers. Can you help?
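For reference, a minimal sketch of where the annotation would sit in the StatefulSet so that every pod it creates inherits it; the qmgr container name matches this chart, but how to inject pod-template annotations through the chart values is an assumption:

    spec:
      template:
        metadata:
          annotations:
            container.apparmor.security.beta.kubernetes.io/qmgr: runtime/default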

rgds
John B

Variables for port numbers

Is there a reason that port numbers are not customizable in the chart, e.g. in the load balancer?
If not, can I provide a pull request adding such a feature?

Native HA is unavailable - AMQ5708E - Error 93 for v7.0.1

Dear all,

using the helm chart v7.0.1 we are getting the following error:

2023-07-19T10:47:17.950Z Using queue manager name: P01M01G
2023-07-19T10:47:17.950Z CPU architecture: amd64
2023-07-19T10:47:17.950Z Linux kernel version: 5.15.0-1035-azure
2023-07-19T10:47:17.950Z Base image: Red Hat Enterprise Linux 8.8 (Ootpa)
2023-07-19T10:47:17.950Z Running as user ID 1001 with primary group 0, and supplementary groups 0,2000
2023-07-19T10:47:17.950Z Capabilities: none
2023-07-19T10:47:17.950Z seccomp enforcing mode: disabled
2023-07-19T10:47:17.950Z Process security attributes: cri-containerd.apparmor.d (enforce)
2023-07-19T10:47:17.951Z Detected 'ext4' volume mounted to /mnt/mqm-data
2023-07-19T10:47:17.951Z Detected 'ext4' volume mounted to /mnt/mqm-log
2023-07-19T10:47:17.951Z Detected 'ext4' volume mounted to /mnt/mqm
2023-07-19T10:47:17.958Z Created directory structure under /var/mqm
2023-07-19T10:47:17.958Z Image created: 2023-05-25T14:16:50+00:00
2023-07-19T10:47:17.958Z Image tag: ibm-mqadvanced-server:9.3.0.5-r3.20230525140420.7d14e60-amd64
2023-07-19T10:47:17.977Z MQ version: 9.3.0.5
2023-07-19T10:47:17.977Z MQ level: p930-005-230413
2023-07-19T10:47:17.977Z MQ license: Production
2023-07-19T10:47:23.215Z Creating queue manager P01M01G
2023-07-19T10:47:23.241Z Error 93 creating queue manager: AMQ5708E: Native HA is unavailable.

2023-07-19T10:47:23.241Z /opt/mqm/bin/crtmqm: exit status 93

I just reverted back to v7.0.0. With this version the pods do come up.

Thank you,
Uwe

How to preserve client IP address when using Nginx LB

(screenshot: DISPLAY CONN output showing the load balancer IP in CONNAME)

Hi, we are testing this deployment, but we have identified that client IP addresses are not preserved even though we enabled proxy protocol in the NGINX controller.

NGINX configmap settings:

data:
  allow-snippet-annotations: "true"
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"

NGINX LoadBalancer Service:

externalTrafficPolicy: Local
allocateLoadBalancerNodePorts: false

QMGR Service settings:

metadata:
  name: qmtest-ibm-mq-qm
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  type: NodePort

This configuration preserves the client IP address in other Kubernetes-based applications that we have, but only with the MQ container does client IP preservation not work: "DISPLAY CONN(*) CHANNEL CONNAME" still shows the K8S load balancer IPs in the CONNAME section, as seen in the screenshot above.

Do you have an example of how to configure the MQ services to preserve client IPs when connecting through a load balancer?

Web Console is not available for mq advanced server container

Hi team, my deployment works fine with the MQ Advanced for Developers container (9.3.4.0-r1), including the web console. When I changed the container to the MQ Advanced server image (cp.icr.io/cp/ibm-mqadvanced-server:9.3.4.0-r1), the web console did not start (no corresponding logs). Could you please advise if I am missing something?

Queue's default persistence in Native HA mode

Hello,

I've noticed a strange thing. When I modify DEFPSIST on one of the pre-created queues, for example DEV.QUEUE.1, after a failover it is set back to NO ... But if I create a new queue, perform a couple of failovers, then modify DEFPSIST to YES and perform a failover again, it is kept (as expected).

What is different about the pre-created queues in the container?

Thank you

Classic Load Balancer is getting created

When we deploy the solution, only a Classic Load Balancer is created. Is there any way we can update the chart to create an NLB when deploying the solution?

Do we need to deploy the aws-load-balancer-controller and modify values.yaml accordingly?
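For context, a hedged sketch of the service annotation that typically switches AWS to provisioning an NLB with the legacy in-tree provider (newer aws-load-balancer-controller setups use their own annotation set, and whether this chart exposes a way to set service annotations is an assumption):

    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"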

Error sending and getting messages

I set up a new queue manager and installed the client on my Linux machine. Trying to send a message I get this error:

Starting amqsphac secureapphelm
Sample AMQSPHAC start
MQCONNX ended with reason code 2393
Sample AMQSPHAC end

Do I need a cert on my machine?

is the Helm Chart production grade?

Hi,

Is the Helm chart a supported way to run IBM MQ in production, or should we stay with the Operator?
While the Operator can bring some benefits, it ties us to a specific version of OpenShift (an even-numbered release, 4.10, 4.12) and breaks our intention of keeping the cluster upgraded very often.
The question comes up because I usually see in the IBM docs that the Operator is the preferred way.

But I guess Cloud Pak / entitlement keys are mandatory even if I'm using the Helm chart.

Queue Manager Redeploy after Helm Chart Upgrade

Hi Callum,

As discussed today: I recently switched the MQ Helm chart in our repository from version 3.0.0 to the latest version, 5.0.0. After the push to main, ArgoCD picked it up but the deployment did not succeed after several tries. The only thing that got it deployed was deleting the queue manager and redeploying it from scratch.

The error message I got in ArgoCD:
"StatefulSet.apps "t01n01k-ibm-mq" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden."

Thank you,
Uwe

Changing MQ queue manager name and how to set privileges for containers running in pod.

  1. How to change the default MQ queue manager name from "secureapphelm" to something else. We should make this configurable instead of hardcoding it.

  2. How to install any packages that we might need on the container itself. For instance, we might want to install git or jq command line tool to run some configuration scripts. Do we have access to the root user on the container itself to do this?

  3. The current solution uses the IBM MQ Advanced for Developers container image. How do we change it to use the IBM MQ Advanced production image? Does this require a complete cluster rebuild or can the image be switched without a rebuild?

Explanation section of the MQ log is missing

Hi Team,
In MQ logging section,
If we set log.format=json, we get all the details of the logs in JSON, but we are missing the Explanation section. Can you please add a code snippet to also get the Explanation section of the log event?
Thanks
Vijay

Dynamic configurations to update mqsc without restarting the POD

Hi,

We are able to create queues, channels, topics etc., but every time we need to restart the pod before the change is activated.
Below are the base Helm values we used; this works:
queueManager:
  mqscConfigMaps:
    - name: mq-tst-config-qm
      items:
        - queues.mqsc

We would like to add or remove queues with zero downtime. What is the best practice to do that?

Platform Kubernetes.
Client Version: v1.26.9
Kustomize Version: v4.5.7
Server Version: v1.27.11

Sample for VMware Tanzu Kubernetes Grid

Currently there are samples for deployment on OpenShift, AWS and Azure.
Unfortunately we only have TKGi on premises.

Would it be possible to provide sample charts for the TKG Kubernetes Distribution?

Installing Helm charts on OpenShift with NFS storage class

Hello,

I'm trying to install helm charts with NFS storage class using this command:

helm install nativeha ./mq_helm/mq-helm/charts/ibm-mq --set license=accept,persistence.qmPVC.storageClassName=nfs-client,queueManager.name=QM1,security.context.supplementalGroups=65534,web.enable=true

I am specifying 65534 because I ran into permissions problems and read that this was a solution, but I get this error:

Error: INSTALLATION FAILED: template: ibm-mq/templates/stateful-set.yaml_keep:114:35: executing "ibm-mq/templates/stateful-set.yaml_keep" at <.Values.security.context.supplementalGroups>: range can't iterate over 65534

Can you please advise?
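For context, the template error suggests the chart iterates over supplementalGroups as a list; a minimal values-file sketch of that shape, assuming this is the structure the template expects:

    security:
      context:
        supplementalGroups:
          - 65534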

thank you.

Multiinstance Qmgr - Standby Qmgr not existing

Hi Callum,

We tried to deploy a multi-instance qmgr on Azure. We used the included sample configuration and the deployment works fine; however, the second pod does not seem to have a qmgr installed, and failover testing did not really work either. Is this the way it is intended to be?

Please see here:

> kubectl get pods -n t01n03a
NAME               READY   STATUS    RESTARTS   AGE
t01n03a-ibm-mq-0   1/1     Running   0          11m
t01n03a-ibm-mq-1   0/1     Running   0          38s

> kubectl exec --stdin --tty t01n03a-ibm-mq-0 --namespace t01n03a -- /bin/bash
bash-4.4$ runmqsc
5724-H72 (C) Copyright IBM Corp. 1994, 2024.
Starting MQSC for queue manager T01N03A.


No MQSC commands read.
bash-4.4$ exit
exit

> kubectl exec --stdin --tty t01n03a-ibm-mq-1 --namespace t01n03a -- /bin/bash
bash-4.4$ runmqsc
5724-H72 (C) Copyright IBM Corp. 1994, 2024.
AMQ8146E: IBM MQ queue manager not available.

No MQSC commands read.
bash-4.4$ exit
exit
command terminated with exit code 20

> kubectl delete pod t01n03a-ibm-mq-0 -n t01n03a
pod "t01n03a-ibm-mq-0" deleted
> kubectl get pods -n t01n03a
NAME               READY   STATUS    RESTARTS   AGE
t01n03a-ibm-mq-0   0/1     Running   0          45s
t01n03a-ibm-mq-1   1/1     Running   0          6m34s

> kubectl exec --stdin --tty t01n03a-ibm-mq-0 --namespace t01n03a -- /bin/bash
bash-4.4$ runmqsc
5724-H72 (C) Copyright IBM Corp. 1994, 2024.
AMQ8478E: Standby queue manager.

No MQSC commands read.
bash-4.4$ exit
exit
command terminated with exit code 20

> kubectl exec --stdin --tty t01n03a-ibm-mq-1 --namespace t01n03a -- /bin/bash
bash-4.4$ runmqsc
5724-H72 (C) Copyright IBM Corp. 1994, 2024.
AMQ8146E: IBM MQ queue manager not available.

No MQSC commands read.
bash-4.4$ exit
exit
command terminated with exit code 20

> kubectl get pvc -n t01n03a
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-t01n03a-ibm-mq     Bound    pvc-385e0594-a6a8-4ccc-b298-5c6730cdc94d   32Gi       RWX            mq-azurefile   24m
log-t01n03a-ibm-mq      Bound    pvc-6695dc64-168d-479f-8480-2517748e3ddb   32Gi       RWX            mq-azurefile   24m
qmgr-t01n03a-ibm-mq-0   Bound    pvc-82eb47b3-a736-451d-8a23-a4f342e96b67   32Gi       RWO            managed        24m
qmgr-t01n03a-ibm-mq-1   Bound    pvc-e8dcf5cb-8430-4461-a2bc-0a2f44605e1a   32Gi       RWO            managed        23m

Thank you,
Uwe

Add seccompProfile type to values

Hi,
there is an error deploying this chart in OpenShift 4.11.25.
W0208 14:36:11.941652 1433 warnings.go:70] would violate PodSecurity "restricted:v1.24": seccompProfile (pod or container "qmgr" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

It would be excellent to have this in values.yaml:

securityContext:
  seccompProfile:
    type: RuntimeDefault

Complex if statement in statefulset v8.0.0

Hi Callum,

I was able to deploy our Azure cloud QMgr with Helm chart v8.0.0, since we need more control over starting and configuring the web server.

However, it does not work with the quite complex if statement currently checked in at line 284 of stateful-set.yaml:
{{- if or .Values.web.enable (eq .Values.web.enable false)}}
With this statement I do not get it deployed.
I changed it to: {{- if .Values.web.enable }}
and then it works.

Another question: we have seen that the web server inside the active pod has to be started by exec'ing into the container and starting it manually using strmqweb.

The new parameters supported in values.yaml:
web:
  enabled:
  manualConfig:
    configMap:
      name:
    secret:
      name:

Does this really start the web server process inside the pods if set to true?
I tried to use a postStart hook; this works for the active pod but causes trouble with the non-active pods.

Thank you,
Uwe

Webconsole Read Only User

Hello,

We are trying to add an additional web console read-only user with the role "MQWebAdminRO". We were able to add it manually under
/mnt/mqm/data/web/installations/Installation1/servers/mqweb/ by replacing the file mqwebuser.xml, but the change does not persist across a pod restart or a failover in Native HA, even though we have persistent storage defined. The following is the content of the XML file:

<?xml version="1.0" encoding="UTF-8"?>
<server>
    <featureManager>
        <feature>appSecurity-2.0</feature>
        <feature>basicAuthenticationMQ-1.0</feature>
    </featureManager>
    <enterpriseApplication id="com.ibm.mq.console">
        <application-bnd>
            <security-role name="MQWebAdmin">
                <group name="MQWebUI" realm="defaultRealm"/>
            </security-role>
            <security-role name="MQWebAdminRO">
                <group name="MQWebMessaging" realm="defaultRealm"/>
            </security-role>
        </application-bnd>
    </enterpriseApplication>
    <enterpriseApplication id="com.ibm.mq.rest">
        <application-bnd>
            <security-role name="MQWebAdmin">
                <group name="MQWebUI" realm="defaultRealm"/>
            </security-role>
            <security-role name="MQWebUser">
                <group name="MQWebMessaging" realm="defaultRealm"/>
            </security-role>
        </application-bnd>
    </enterpriseApplication>
    <basicRegistry id="basic" realm="defaultRealm">
        <user name="admin" password="${env.MQ_ADMIN_PASSWORD}"/>
        <!-- The app user will always get a default password of "passw0rd",
             even if you don't set the environment variable.
             See `webserver.go` -->
        <user name="app" password="${env.MQ_APP_PASSWORD}"/>
        <group name="MQWebUI">
            <member name="admin"/>
        </group>
        <group name="MQWebMessaging">
            <member name="app"/>
        </group>
    </basicRegistry>
    <variable name="httpHost" value="*"/>
    <variable name="managementMode" value="externallyprovisioned"/>
    <variable name="mqConsoleRemoteSupportEnabled" value="false"/>
    <variable name="mqConsoleEnableUnsafeInline" value="true"/>
    <jndiEntry jndiName="mqConsoleDefaultCCDTHostname" value="${env.MQ_CONSOLE_DEFAULT_CCDT_HOSTNAME}"/>
    <jndiEntry jndiName="mqConsoleDefaultCCDTPort" value="${env.MQ_CONSOLE_DEFAULT_CCDT_PORT}"/>
    <include location="tls.xml"/>
</server>

Is there a way to add a web console read-only user by specifying it in the values file for the chart?

Regards

what is the process to update the application license?

Currently we are using the developer license and will soon be updating the license.
Could you please let us know if updating the license would require a cluster rebuild?
What is the process to update the license without any downtime?

Unable to configure mq multi instance on Azure AKS

Hi Team,

I'm trying to spin up multi-instance MQ in AKS. When I run the install.sh script the pod does not come up, and the events show that it is unable to mount or attach volumes. But when I check the relevant PVCs, the status shows as Bound. I'm not sure what exactly is causing this error and am looking for assistance.

Thanks in Advance

Pod Events:

Unable to attach or mount volumes: unmounted volumes=[qm data log], unattached volumes=[default mqsc-cm-helmsecure ini-cm-helmsecure qm data log kube-api-access-sg6pr trust0]: timed out waiting for the condition

Unable to attach or mount volumes: unmounted volumes=[data log qm], unattached volumes=[data log kube-api-access-sg6pr trust0 default mqsc-cm-helmsecure ini-cm-helmsecure qm]: timed out waiting for the condition

Unable to attach or mount volumes: unmounted volumes=[qm data log], unattached volumes=[qm data log kube-api-access-sg6pr trust0 default mqsc-cm-helmsecure ini-cm-helmsecure]: timed out waiting for the condition

MountVolume.MountDevice failed for volume "pvc-68153b01-3341-4b98-a9b3-97da5f92fea1" : rpc error: code = Internal desc = volume(rg-aksnodes-rbs-dip-nonprd-e2-01#f999eab35c0084287808317#pvcn-68153b01-3341-4b98-a9b3-97da5f92fea1###multi-mq) mount f999eab35c0084287808317.file.core.windows.net:/f999eab35c0084287808317/pvcn-68153b01-3341-4b98-a9b3-97da5f92fea1 on /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/4db2af01f85980738f013d75431e57c42e780c51163016cafe11251f668ca304/globalmount failed with mount failed: exit status 32 Mounting command: mount Mounting arguments: -t nfs -o rw,vers=4,minorversion=1,sec=sys f999eab35c0084287808317.file.core.windows.net:/f999eab35c0084287808317/pvcn-68153b01-3341-4b98-a9b3-97da5f92fea1 /var/lib/kubelet/plugins/kubernetes.io/csi/file.csi.azure.com/4db2af01f85980738f013d75431e57c42e780c51163016cafe11251f668ca304/globalmount Output: mount.nfs: access denied by server while mounting f999eab35c0084287808317.file.core.windows.net:/f999eab35c0084287808317/pvcn-68153b01-3341-4b98-a9b3-97da5f92fea1

(combined from similar events): Unable to attach or mount volumes: unmounted volumes=[log qm data], unattached volumes=[log kube-api-access-sg6pr trust0 default mqsc-cm-helmsecure ini-cm-helmsecure qm data]: timed out waiting for the condition

Connecting with MQ Explorer via SSL

Hi,
I have been trying to use the same Helm chart example for OpenShiftNativeHA.
In the end I have 2 routes: a web route and a qm route.

Connecting to the cluster using a NodePort (non-SSL) works fine.

Now I want to try out the connection via SSL, but I always get SSL errors and I don't reach the MQ logs (yet).

I got "application.jks" file into a folder and configure MQ Explorer to use "application.jks".

qm: mqdev
chanel: MTLSQMCHL
mq route: https://mqdev-ibm-mq-qm-mqdev.apps
mq web route: https://mqdev-ibm-mq-web-mqdev.apps

Queue manager mqdev is not available for client connection due to an SSL configuration error. (AMQ4199)
Queue manager mqdev is not available for client connection due to an SSL configuration error. (AMQ4199)
Severity: 30 (Severe Error)
Explanation: The user is trying to connect to a remote queue manager using a secure connection.
Response: Check the SSL configuration of the target queue manager and the local SSL trust store.

{
  "channel": [
    {
      "name": "MTLSQMCHL",
      "clientConnection": {
        "connection": [
          {
            "host": "mqdev-ibm-mq-qm-mqdev.apps",
            "port": 443
          }
        ],
        "queueManager": "mqdev"
      },
      "transmissionSecurity": {
        "cipherSpecification": "ANY_TLS12_OR_HIGHER"
      },
      "type": "clientConnection"
    }
  ]
}

And again, I'm not seeing any logs. Only if I go to a browser and open https://mqdev-ibm-mq-qm-mqdev.apps do I see:

The data received from host '10.131.0.2' on channel '????' is not valid. [CommentInsert1(10.131.0.2), CommentInsert2(TCP/IP), CommentInsert3(????)]
2023-02-23T14:58:20.900Z AMQ9999E: Channel '????' to host '10.131.0.2' ended abnormally. [CommentInsert1(????), CommentInsert2(688), CommentInsert3(10.131.0.2)]

Error in Executing install.sh

Hi @callumpjackson

While trying to install MQ onto the pods, install.sh is giving a "no such file or directory" error:
akhil1_aggarwal@cs-469377201213-default:~/GoogleKubernetesEngine/deploy$ bash install.sh
cat: ../../genericresources/createcerts/server.key: No such file or directory
cat: ../../genericresources/createcerts/server.crt: No such file or directory
cat: ../../genericresources/createcerts/application.crt: No such file or directory

Can you please advise if something more needs to be done here?

Running MQ HA on more than 3 replicas

Hi Callum,

Since we are not able to use a PDB, I simulated this by extending the cluster to a total of 5 nodes. After this I scaled up the MQ replicas from 3 to 5. Looking at it, this does not seem to work. Can you confirm that the whole MQ HA setup only works with 3 replicas, or could we drive it with e.g. 5 replicas? If it worked, I could set max-surge to 1 so that only 1 node is drained during a cluster upgrade, which would ensure that 1 active MQ replica is always available.

Currently a cluster upgrade with 3 pods and max-surge set to 1 (so a total of 4 nodes during the upgrade, draining only 1 node at a time) does not work. There will be a pending MQ pod and one in ContainerCreating; the pending one stays pending, leaving only one pod running, and that one cannot become active because of the quorum. As a result I observed a downtime of around 2 minutes or more, several times during a cluster upgrade.

Thank you,
Uwe

Not Able to Log In Using the MQ Console

Hi @callumpjackson,

I was able to launch the MQ console on GKE, but I do not know what the username and password would be.
Could you help with where and how to retrieve the username and password?

Error:
AMQ9660E: SSL key repository: password incorrect or, stash file absent or unusable. [ArithInsert1(408), CommentInsert1(MTLSQMCHL), CommentInsert2(gsk_environment_init)]

Thanks in advance.
