Redpanda Helm Chart Specification


Version: 5.8.11 Type: application AppVersion: v24.1.8

This page describes the official Redpanda Helm Chart. In particular, this page describes the contents of the chart’s values.yaml file. Each of the settings is listed and described on this page, along with any default values.

For instructions on how to install and use the chart, including how to override and customize the chart’s values, refer to the deployment documentation.


Autogenerated from chart metadata using helm-docs v1.13.1

Source Code

Requirements

Kubernetes: ^1.21.0-0

Repository Name Version
https://charts.redpanda.com connectors >=0.1.2 <1.0
https://charts.redpanda.com console >=0.5 <1.0

Settings

Affinity constraints for scheduling Pods. You can override this for StatefulSets and Jobs. For details, see the Kubernetes documentation.

Default: {}

Audit logging for a Redpanda cluster. For audit logging to work, SASL must be enabled and at least one Kafka listener must support SASL authentication. Note that this feature is available only in Redpanda versions >= v23.3.0.

Default:

{"clientMaxBufferSize":16777216,"enabled":false,"enabledEventTypes":null,"excludedPrincipals":null,"excludedTopics":null,"listener":"internal","partitions":12,"queueDrainIntervalMs":500,"queueMaxBufferSizePerShard":1048576,"replicationFactor":null}

Defines the buffer size, in bytes, allocated by the internal audit client for audit messages.

Default: 16777216

Enable or disable audit logging. For production clusters, we suggest enabling it. However, audit logging works only if you also enable SASL and configure a listener with SASL authentication.

Default: false

Event types that should be captured by audit logs. The default is [admin, authenticate, management].

Default: nil

List of principals to exclude from auditing. Default is null.

Default: nil

List of topics to exclude from auditing. Default is null.

Default: nil

Name of the Kafka listener to use for audit logging. Note that it must have authenticationMethod set to sasl. For external listeners, use the external listener name, such as default.

Default: "internal"

Integer value defining the number of partitions used by a newly created audit topic.

Default: 12

Frequency, in milliseconds, at which per-shard audit logs are batched and sent to the client for writing to the audit log.

Default: 500

Defines the maximum amount of memory used (in bytes) by the audit buffer in each shard.

Default: 1048576

Defines the replication factor for a newly created audit log topic. This configuration applies only to the audit log topic and may differ from the cluster or other topic configurations. It cannot be altered for existing audit log topics. Setting this value is optional. If a value is not provided, Redpanda uses the internal_topic_replication_factor cluster config value.

Default: nil

Authentication settings. For details, see the SASL documentation.

Default:

{"sasl":{"enabled":false,"mechanism":"SCRAM-SHA-512","secretRef":"redpanda-users","users":[]}}

Enable SASL authentication. If you enable SASL authentication, you must provide a Secret in auth.sasl.secretRef.

Default: false

The authentication mechanism to use for the superuser. Options are SCRAM-SHA-256 and SCRAM-SHA-512.

Default: "SCRAM-SHA-512"

A Secret that contains your superuser credentials. For details, see the SASL documentation.

Default: "redpanda-users"

Optional list of superusers. These superusers will be created in the Secret whose name is defined in auth.sasl.secretRef. If this list is empty, the Secret in auth.sasl.secretRef must already exist in the cluster before you deploy the chart. Uncomment the sample list if you wish to try adding sample SASL users, or override it with your own.

Default: []
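
A sketch of enabling SASL with one superuser defined in-chart (the user fields follow the chart's sample users list; the password is a placeholder):

auth:
  sasl:
    enabled: true
    mechanism: SCRAM-SHA-512
    secretRef: redpanda-users
    users:
      - name: admin
        password: change-me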

Default Kubernetes cluster domain.

Default: "cluster.local"

Additional labels to add to all Kubernetes objects. For example, my.k8s.service: redpanda.

Default: {}

This section contains various settings supported by Redpanda that may not work correctly in a Kubernetes cluster. Changing these settings comes with some risk. Use these settings to customize various Redpanda configurations that are not covered in other sections. These values have no impact on the configuration or behavior of the Kubernetes objects deployed by Helm, and therefore should not be modified for the purpose of configuring those objects. Instead, these settings get passed directly to the Redpanda binary at startup. For descriptions of these properties, see the configuration documentation.

Default:

{"cluster":{"default_topic_replications":3},"node":{"crash_loop_limit":5},"pandaproxy_client":{},"rpk":{},"schema_registry_client":{},"tunable":{"compacted_log_segment_size":67108864,"group_topic_partitions":16,"kafka_batch_max_bytes":1048576,"kafka_connection_rate_limit":1000,"log_segment_size":134217728,"log_segment_size_max":268435456,"log_segment_size_min":16777216,"max_compacted_log_segment_size":536870912,"topic_partitions_per_shard":1000}}

Node (broker) properties. See the property reference documentation.

Default: {"crash_loop_limit":5}

Crash loop limit. A limit on the number of consecutive times a broker can crash within one hour before its crash-tracking logic is reset. This limit prevents a broker from getting stuck in an infinite cycle of crashes. The crash-tracking logic is reset when any of the following occurs:
• One hour elapses since the last crash.
• The node configuration file, redpanda.yaml, is updated via the config.cluster, config.node, or config.tunable objects.
• The startup_log file in the broker's data_directory is manually deleted.
Defaults to 5. REF: https://docs.redpanda.com/current/reference/node-properties/#crash_loop_limit

Default: 5

Tunable cluster properties.

Default:

{"compacted_log_segment_size":67108864,"group_topic_partitions":16,"kafka_batch_max_bytes":1048576,"kafka_connection_rate_limit":1000,"log_segment_size":134217728,"log_segment_size_max":268435456,"log_segment_size_min":16777216,"max_compacted_log_segment_size":536870912,"topic_partitions_per_shard":1000}

See the property reference documentation.

Default: 67108864

See the property reference documentation.

Default: 16

See the property reference documentation.

Default: 1048576

See the property reference documentation.

Default: 1000

See the property reference documentation.

Default: 134217728

See the property reference documentation.

Default: 268435456

See the property reference documentation.

Default: 16777216

See the property reference documentation.

Default: 536870912

See the property reference documentation.

Default: 1000

Redpanda Managed Connectors settings For a reference of configuration settings, see the Redpanda Connectors documentation.

Default:

{"deployment":{"create":false},"enabled":false,"test":{"create":false}}

Redpanda Console settings. For a reference of configuration settings, see the Redpanda Console documentation.

Default:

{"config":{},"configmap":{"create":false},"deployment":{"create":false},"enabled":true,"secret":{"create":false}}

Enterprise (optional) For details, see the License documentation.

Default:

{"license":"","licenseSecretRef":{}}

license (optional).

Default: ""

Secret name and key where the license key is stored.

Default: {}

External access settings. For details, see the Networking and Connectivity documentation.

Default:

{"enabled":true,"service":{"enabled":true},"type":"NodePort"}

Enable external access for each Service. You can toggle external access for each listener in listeners.<service name>.external.<listener-name>.enabled.

Default: true

Service settings that allow you to manage the creation of an external Kubernetes Service object.

Default: {"enabled":true}

If set to false, the chart does not create the external Service. You can still configure your cluster for external access without creating the supporting Service (NodePort/LoadBalancer). Set this to false if you would rather manage your own Service.

Default: true

External access type. Only NodePort and LoadBalancer are supported. If undefined, then advertised listeners will be configured in Redpanda, but the helm chart will not create a Service. You must create a Service manually. Warning: If you use LoadBalancers, you will likely experience higher latency and increased packet loss. NodePort is recommended in cases where latency is a priority.

Default: "NodePort"

Override redpanda.fullname template.

Default: ""

Redpanda Docker image settings.

Default:

{"pullPolicy":"IfNotPresent","repository":"docker.redpanda.com/redpandadata/redpanda","tag":""}

The imagePullPolicy. If image.tag is 'latest', the default is Always.

Default: "IfNotPresent"

Docker repository from which to pull the Redpanda Docker image.

Default:

"docker.redpanda.com/redpandadata/redpanda"

The Redpanda version. See Docker Hub for all stable versions and all unstable versions.

Default: Chart.appVersion.

Pull secrets may be used to provide credentials to image repositories. See the Kubernetes documentation.

Default: []

DEPRECATED Enterprise license key (optional). For details, see the License documentation.

Default: ""

DEPRECATED Secret name and secret key where the license key is stored.

Default: {}

Listener settings. Override global settings configured above for individual listeners. For details, see the listeners documentation.

Default:

{"admin":{"external":{"default":{"advertisedPorts":[31644],"port":9645,"tls":{"cert":"external"}}},"port":9644,"tls":{"cert":"default","requireClientAuth":false}},"http":{"authenticationMethod":null,"enabled":true,"external":{"default":{"advertisedPorts":[30082],"authenticationMethod":null,"port":8083,"tls":{"cert":"external","requireClientAuth":false}}},"kafkaEndpoint":"default","port":8082,"tls":{"cert":"default","requireClientAuth":false}},"kafka":{"authenticationMethod":null,"external":{"default":{"advertisedPorts":[31092],"authenticationMethod":null,"port":9094,"tls":{"cert":"external"}}},"port":9093,"tls":{"cert":"default","requireClientAuth":false}},"rpc":{"port":33145,"tls":{"cert":"default","requireClientAuth":false}},"schemaRegistry":{"authenticationMethod":null,"enabled":true,"external":{"default":{"advertisedPorts":[30081],"authenticationMethod":null,"port":8084,"tls":{"cert":"external","requireClientAuth":false}}},"kafkaEndpoint":"default","port":8081,"tls":{"cert":"default","requireClientAuth":false}}}

Admin API listener (only one).

Default:

{"external":{"default":{"advertisedPorts":[31644],"port":9645,"tls":{"cert":"external"}}},"port":9644,"tls":{"cert":"default","requireClientAuth":false}}

Optional external access settings.

Default:

{"default":{"advertisedPorts":[31644],"port":9645,"tls":{"cert":"external"}}}

Name of the external listener.

Default:

{"advertisedPorts":[31644],"port":9645,"tls":{"cert":"external"}}

The port advertised to this listener's external clients. List one port if you want to use the same port for each broker (would be the case when using NodePort service). Otherwise, list the port you want to use for each broker in order of StatefulSet replicas. If undefined, listeners.admin.port is used.

Default: {"cert":"external"}

The port for both internal and external connections to the Admin API.

Default: 9644

Optional TLS section (required if global TLS is enabled)

Default:

{"cert":"default","requireClientAuth":false}

Name of the Certificate used for TLS (must match a Certificate name that is registered in tls.certs).

Default: "default"

If true, the truststore file for this listener is included in the ConfigMap.

Default: false

HTTP API listeners (aka PandaProxy).

Default:

{"authenticationMethod":null,"enabled":true,"external":{"default":{"advertisedPorts":[30082],"authenticationMethod":null,"port":8083,"tls":{"cert":"external","requireClientAuth":false}}},"kafkaEndpoint":"default","port":8082,"tls":{"cert":"default","requireClientAuth":false}}

Kafka API listeners.

Default:

{"authenticationMethod":null,"external":{"default":{"advertisedPorts":[31092],"authenticationMethod":null,"port":9094,"tls":{"cert":"external"}}},"port":9093,"tls":{"cert":"default","requireClientAuth":false}}

The port advertised to this listener's external clients. If undefined, listeners.kafka.external.default.port is used.

Default: [31092]

The port used for external client connections.

Default: 9094

The port for internal client connections.

Default: 9093
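
As an illustration, overriding the Kafka listener's internal port and its default external listener (the keys mirror the defaults shown above):

listeners:
  kafka:
    port: 9093
    external:
      default:
        port: 9094
        advertisedPorts:
          - 31092
        tls:
          cert: external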

RPC listener (this is never externally accessible).

Default:

{"port":33145,"tls":{"cert":"default","requireClientAuth":false}}

Schema registry listeners.

Default:

{"authenticationMethod":null,"enabled":true,"external":{"default":{"advertisedPorts":[30081],"authenticationMethod":null,"port":8084,"tls":{"cert":"external","requireClientAuth":false}}},"kafkaEndpoint":"default","port":8081,"tls":{"cert":"default","requireClientAuth":false}}

Log-level settings.

Default:

{"logLevel":"info","usageStats":{"enabled":true}}

Log level. Valid values (from least to most verbose) are: warn, info, debug, and trace.

Default: "info"

Send usage statistics back to Redpanda Data. For details, see the stats reporting documentation.

Default: {"enabled":true}

Monitoring. This will create a ServiceMonitor that can be used by Prometheus-Operator or VictoriaMetrics-Operator to scrape the metrics.

Default:

{"enabled":false,"labels":{},"scrapeInterval":"30s"}

Override redpanda.name template.

Default: ""

Node selection constraints for scheduling Pods. You can override this for StatefulSets. For details, see the Kubernetes documentation.

Default: {}

Default: {}

Default: true

Default: {}

Default: true

Rack Awareness settings. For details, see the Rack Awareness documentation.

Default:

{"enabled":false,"nodeAnnotation":"topology.kubernetes.io/zone"}

When running in multiple racks or availability zones, use a Kubernetes Node annotation value as the Redpanda rack value. Enabling this requires running with a service account with "get" Node permissions. To have the Helm chart configure these permissions, set serviceAccount.create=true and rbac.enabled=true.

Default: false

The common well-known annotation to use as the rack ID. Override this only if you use a custom Node annotation.

Default:

"topology.kubernetes.io/zone"

Role Based Access Control.

Default:

{"annotations":{},"enabled":false}

Annotations to add to the rbac resources.

Default: {}

Enable for features that need extra privileges. If you use the Redpanda Operator, you must deploy it with the --set rbac.createRPKBundleCRs=true flag to give it the required ClusterRoles.

Default: false

Pod resource management. This section simplifies resource allocation by providing a single location where resources are defined. Helm sets these resource values within the statefulset.yaml and configmap.yaml templates. The default values are for a development environment. Production-level values and other considerations are documented, where those values are different from the default. For details, see the Pod resources documentation.

Default:

{"cpu":{"cores":1},"memory":{"container":{"max":"2.5Gi"}}}

CPU resources. For details, see the Pod resources documentation.

Default: {"cores":1}

Redpanda makes use of a thread per core model. For details, see this blog. For this reason, Redpanda should only be given full cores. Note: You can increase cores, but decreasing cores is not currently supported. See the GitHub issue. This setting is equivalent to --smp, resources.requests.cpu, and resources.limits.cpu. For production, use 4 or greater. To maximize efficiency, use the static CPU manager policy by specifying an even integer for CPU resource requests and limits. This policy gives the Pods running Redpanda brokers access to exclusive CPUs on the node. See https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy.

Default: 1

Memory resources. For details, see the Pod resources documentation.

Default:

{"container":{"max":"2.5Gi"}}

Memory locking can be enabled with enable_memory_locking (default false); for production, set it to true. It is recommended to have at least 2Gi of memory per core for the Redpanda binary. This memory is taken from the total memory given to each container. The Helm chart allocates 80% of the container's memory to Redpanda, leaving the rest for the Seastar subsystem (reserveMemory) and other container processes. So at least 2.5Gi per core is recommended in order to ensure Redpanda has a full 2Gi. These values affect the --memory and --reserve-memory flags passed to Redpanda and the memory requests/limits in the StatefulSet. Valid suffixes: k, M, G, T, P, Ki, Mi, Gi, Ti, Pi. To create Guaranteed Pod QoS for Redpanda brokers, provide both container max and min values for the container. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a memory limit and a memory request. * For every container in the Pod, the memory limit must equal the memory request.

Default: {"max":"2.5Gi"}

Maximum memory for each Redpanda broker. Equivalent to resources.limits.memory. For production, use 10Gi or greater.

Default: "2.5Gi"

Service account management.

Default:

{"annotations":{},"create":false,"name":""}

Annotations to add to the service account.

Default: {}

Specifies whether a service account should be created.

Default: false

The name of the service account to use. If not set and serviceAccount.create is true, a name is generated using the redpanda.fullname template.

Default: ""

Additional flags to pass to Redpanda.

Default: []

Additional labels to be added to statefulset label selector. For example, my.k8s.service: redpanda.

Default: {}

DEPRECATED Please use statefulset.podTemplate.annotations. Annotations are used only for Statefulset.spec.template.metadata.annotations. The StatefulSet does not have any dedicated annotation.

Default: {}

Default: 1

Default: ""

Default: ""

Default: "busybox"

Default: "latest"

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request.

Default: {}

Default: ""

Default: false

Default: "xfs"

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request.

Default: {}

In environments where root is not allowed, you cannot change the ownership of files and directories. Enable setDataDirOwnership when using default minikube cluster configuration.

Default: false

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request.

Default: {}

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request.

Default: {}

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request.

Default: {}

Default: 3

Default: 10

Default: 10

Node selection constraints for scheduling Pods of this StatefulSet. These constraints override the global nodeSelector value. For details, see the Kubernetes documentation.

Default: {}

Inter-Pod Affinity rules for scheduling Pods of this StatefulSet. For details, see the Kubernetes documentation.

Default: {}

Anti-affinity rules for scheduling Pods of this StatefulSet. For details, see the Kubernetes documentation. You may either edit the default settings for anti-affinity rules, or specify new anti-affinity rules to use instead of the defaults.

Default:

{"custom":{},"topologyKey":"kubernetes.io/hostname","type":"hard","weight":100}

Change podAntiAffinity.type to custom and provide your own podAntiAffinity rules here.

Default: {}

The topologyKey to be used. Can be used to spread across different nodes, AZs, regions etc.

Default: "kubernetes.io/hostname"

Valid anti-affinity types are soft, hard, or custom. Use custom if you want to supply your own anti-affinity rules in the podAntiAffinity.custom object.

Default: "hard"

Weight for soft anti-affinity rules. Does not apply to other anti-affinity types.

Default: 100
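
For example, replacing the default rule with a custom one (the custom block follows the Kubernetes PodAntiAffinity API; the label selector is illustrative):

statefulset:
  podAntiAffinity:
    type: custom
    custom:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: topology.kubernetes.io/zone
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: redpanda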

Additional annotations to apply to the Pods of this StatefulSet.

Default: {}

Additional labels to apply to the Pods of this StatefulSet.

Default: {}

A subset of Kubernetes' PodSpec type that will be merged into the redpanda StatefulSet via a strategic merge patch.

Default: {"containers":[]}

PriorityClassName given to Pods of this StatefulSet. For details, see the Kubernetes documentation.

Default: ""

Default: 3

Default: 1

Default: 10

Default: 1

Number of Redpanda brokers (Redpanda Data recommends setting this to the number of worker nodes in the cluster)

Default: 3

Default: 101

Default: "OnRootMismatch"

Default: 101

Default: true

Default: ""

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a memory limit and a memory request. * For every container in the Pod, the memory limit must equal the memory request. * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request. To maximize efficiency, use the static CPU manager policy by specifying an even integer for CPU resource requests and limits. This policy gives the Pods running Redpanda brokers access to exclusive CPUs on the node. For details, see https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy

Default: {}

Default: {}

Default: true

Default: false

Default: ":8085"

Default:

"docker.redpanda.com/redpandadata/redpanda-operator"

Default: "v2.1.10-23.2.18"

Default: ":9082"

To create Guaranteed Pods for Redpanda brokers, provide both requests and limits for CPU and memory. For details, see https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed * Every container in the Pod must have a CPU limit and a CPU request. * For every container in the Pod, the CPU limit must equal the CPU request. To maximize efficiency, use the static CPU manager policy by specifying an even integer for CPU resource requests and limits. This policy gives the Pods running Redpanda brokers access to exclusive CPUs on the node. For details, see https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy

Default: {}

Default: "all"

Default: {}

Adjust the period for your probes to meet your needs. For details, see the Kubernetes documentation.

Default:

{"failureThreshold":120,"initialDelaySeconds":1,"periodSeconds":10}

Termination grace period, in seconds, is the time required to execute the preStop hook, which puts the particular Redpanda Pod (process/container) into maintenance mode. Before settling on a particular value, put Redpanda under load and perform a rolling upgrade or rolling restart. The value needs to accommodate two processes:
• The preStop hook puts Redpanda into maintenance mode.
• After the preStop hook, Redpanda needs to handle the SIGTERM signal gracefully.
Both processes are executed sequentially, and the preStop hook has a hard deadline in the middle of terminationGracePeriodSeconds. REF: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution and https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination

Default: 90

Taints to be tolerated by Pods of this StatefulSet. These tolerations override the global tolerations value. For details, see the Kubernetes documentation.

Default: []

Default: 1

Default:

"topology.kubernetes.io/zone"

Default: "ScheduleAnyway"

Default: "RollingUpdate"

Persistence settings. For details, see the storage documentation.

Default:

{"hostPath":"","persistentVolume":{"annotations":{},"enabled":true,"labels":{},"nameOverwrite":"","size":"20Gi","storageClass":""},"tiered":{"config":{"cloud_storage_access_key":"","cloud_storage_api_endpoint":"","cloud_storage_azure_container":null,"cloud_storage_azure_managed_identity_id":null,"cloud_storage_azure_shared_key":null,"cloud_storage_azure_storage_account":null,"cloud_storage_bucket":"","cloud_storage_cache_size":5368709120,"cloud_storage_credentials_source":"config_file","cloud_storage_enable_remote_read":true,"cloud_storage_enable_remote_write":true,"cloud_storage_enabled":false,"cloud_storage_region":"","cloud_storage_secret_key":""},"credentialsSecretRef":{"accessKey":{"configurationKey":"cloud_storage_access_key"},"secretKey":{"configurationKey":"cloud_storage_secret_key"}},"hostPath":"","mountType":"emptyDir","persistentVolume":{"annotations":{},"labels":{},"storageClass":""}}}

Absolute path on the host to store Redpanda's data. If unspecified, then an emptyDir volume is used. If specified but persistentVolume.enabled is true, storage.hostPath has no effect.

Default: ""

If persistentVolume.enabled is true, a PersistentVolumeClaim is created and used to store Redpanda's data. Otherwise, storage.hostPath is used.

Default:

{"annotations":{},"enabled":true,"labels":{},"nameOverwrite":"","size":"20Gi","storageClass":""}

Additional annotations to apply to the created PersistentVolumeClaims.

Default: {}

Additional labels to apply to the created PersistentVolumeClaims.

Default: {}

Option to change the volume claim template name for the Tiered Storage persistent volume when tiered.mountType is set to persistentVolume.

Default: ""

To disable dynamic provisioning, set to "-". If undefined or empty (default), then no storageClassName spec is set, and the default dynamic provisioner is chosen (gp2 on AWS, standard on GKE, AWS & OpenStack).

Default: ""

Tiered Storage settings. Requires enterprise.license or enterprise.licenseSecretRef. For details, see the Tiered Storage documentation.

Default:

{"cloud_storage_access_key":"","cloud_storage_api_endpoint":"","cloud_storage_azure_container":null,"cloud_storage_azure_managed_identity_id":null,"cloud_storage_azure_shared_key":null,"cloud_storage_azure_storage_account":null,"cloud_storage_bucket":"","cloud_storage_cache_size":5368709120,"cloud_storage_credentials_source":"config_file","cloud_storage_enable_remote_read":true,"cloud_storage_enable_remote_write":true,"cloud_storage_enabled":false,"cloud_storage_region":"","cloud_storage_secret_key":""}

AWS or GCP access key (required for AWS and GCP authentication with access keys). See the property reference documentation.

Default: ""

AWS or GCP API endpoint.
• For AWS, this can be left blank as it is generated automatically using the bucket and region. For example, <bucket>.s3.<region>.amazonaws.com.
• For GCP, use storage.googleapis.com.
See the property reference documentation.

Default: ""

Name of the Azure container to use with Tiered Storage (required for ABS/ADLS). Note that the container must belong to the account specified by cloud_storage_azure_storage_account. See the property reference documentation.

Default: nil

Shared key to be used for Azure Shared Key authentication with the Azure storage account specified by cloud_storage_azure_storage_account. Note that the key should be base64 encoded. See the property reference documentation.

Default: nil

Name of the Azure storage account to use with Tiered Storage (required for ABS/ADLS). See the property reference documentation.

Default: nil

AWS or GCP bucket name used for Tiered Storage (required for AWS and GCP). See the property reference documentation.

Default: ""

Maximum size of the disk cache used by Tiered Storage. Redpanda's own default for this property is 20 GiB; this chart sets it to 5368709120 bytes (5 GiB). See the property reference documentation.

Default: 5368709120

Source of credentials used to connect to cloud services (required for AWS and GCP authentication with IAM roles). Valid values:
• config_file
• aws_instance_metadata
• sts
• gcp_instance_metadata
• azure_aks_oidc_federation
• azure_vm_instance_metadata
See the property reference documentation.

Default: "config_file"

Cluster level default remote read configuration for new topics. See the property reference documentation.

Default: true

Cluster level default remote write configuration for new topics. See the property reference documentation.

Default: true

Global flag that enables Tiered Storage if a license key is provided. See the property reference documentation.

Default: false

AWS or GCP region for where the bucket used for Tiered Storage is located (required for AWS and GCP). See the property reference documentation.

Default: ""

AWS or GCP secret key (required for AWS and GCP authentication with access keys). See the property reference documentation.

Default: ""

Absolute path on the host to store Redpanda's Tiered Storage cache.

Default: ""

Additional annotations to apply to the created PersistentVolumeClaims.

Default: {}

Additional labels to apply to the created PersistentVolumeClaims.

Default: {}

To disable dynamic provisioning, set to "-". If undefined or empty (default), then no storageClassName spec is set, and the default dynamic provisioner is chosen (gp2 on AWS, standard on GKE, AWS & OpenStack).

Default: ""

Default: true

TLS settings. For details, see the TLS documentation.

Default:

{"certs":{"default":{"caEnabled":true},"external":{"caEnabled":true}},"enabled":true}

List all Certificates here, then you can reference a specific Certificate's name in each listener's listeners.<listener name>.tls.cert setting.

Default:

{"default":{"caEnabled":true},"external":{"caEnabled":true}}

This key is the Certificate name. To apply the Certificate to a specific listener, reference the Certificate's name in listeners.<listener-name>.tls.cert.

Default: {"caEnabled":true}

Set the caEnabled flag to true only for Certificates that are not authenticated using public authorities.

Default: true

Example external TLS configuration. Uncomment this and set the right key for the listeners that require it, and also enable the TLS setting for those listeners.

Default: {"caEnabled":true}

Set the caEnabled flag to true only for Certificates that are not authenticated using public authorities.

Default: true

Enable TLS globally for all listeners. Each listener must include a Certificate name in its <listener>.tls object. To allow you to enable TLS for individual listeners, Certificates in auth.tls.certs are always loaded, even if tls.enabled is false. See listeners.<listener-name>.tls.enabled.

Default: true
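
A small sketch tying the Certificates above to a listener, as described in the listeners section:

tls:
  enabled: true
  certs:
    default:
      caEnabled: true
    external:
      caEnabled: true
listeners:
  kafka:
    tls:
      cert: default
      requireClientAuth: false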

Taints to be tolerated by Pods. You can override this for StatefulSets. For details, see the Kubernetes documentation.

Default: []

Redpanda tuning settings. Each is set to its default value in Redpanda.

Default: {"tune_aio_events":true}

Increase the maximum number of outstanding asynchronous IO operations if the current value is below a certain threshold. This allows Redpanda to make as many simultaneous IO requests as possible, increasing throughput. When this option is enabled, Helm creates a privileged container. If your security profile does not allow this, you can disable this container by setting tune_aio_events to false. For more details, see the tuning documentation.

Default: true

helm-charts's Issues

Ensure all tests run and pass after refactor

When TLS is disabled, all associated non-TLS tests run and pass successfully. When TLS is enabled, the kafka test passes but there is an issue with the pandaproxy/REST TLS test. The associated section of the ConfigMap:

    pandaproxy:
      pandaproxy_api:
      - address: 0.0.0.0
        name: internal
        port: 8082
      pandaproxy_api_tls:
      - name: internal
        enabled: true
        require_client_auth: true
        cert_file: /etc/tls/certs/cert1/tls.crt
        key_file: /etc/tls/certs/cert1/tls.key
        truststore_file: /etc/tls/certs/cert1/ca.crt
    schemaregistry:
      schemaregistry_api:
      - address: 0.0.0.0
        name: schema-registry
        port: 8081
      schemaregistry_api_tls:
      - name: schema-registry
        enabled: true
        require_client_auth: false
        cert_file: /etc/tls/certs/cert1/tls.crt
        key_file: /etc/tls/certs/cert1/tls.key

Running the same command as in the test container:

> k exec -it -n helm-test redpanda-0 -- curl -m 3 --ssl-reqd --cacert /etc/tls/certs/cert1/ca.crt https://redpanda-0.redpanda.helm-test.svc.cluster.local.:8082/brokers
Defaulted container "redpanda" out of: redpanda, redpanda-configurator (init)
curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0
command terminated with exit code 56

Setting --memory=(pod request memory) is too aggressive

Currently we set the --memory argument of redpanda to use 100.00% of the pod request memory, with no bytes left over.

However, this --memory parameter doesn't set an absolute cap on the rp process memory use (as seen by cgroups): it only sizes the primary memory map for the seastar allocator, and the redpanda process will use a bit more than that: for file pages and page tables, at least. These are added to the anonymous map size, which will eventually grow to almost exactly the --memory value.

In practice this causes the rp process to be OOM killed by the kernel as it exceeds its cgroup budget.

So in addition to the larger buffer we need outside the container (for kernel memory, sidecars, etc.), we need some buffer inside the container. The page tables are only about 0.2% of the total memory. The file pages seem more variable, but let's say they won't exceed 200 MB.

So can we use a --memory argument which is less than the request amount R by 0.002 * R + 200 MiB?
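
A minimal sketch of that arithmetic as a Helm template expression (the variable names, example request size, and the 200 MiB constant are assumptions taken from this discussion, not existing chart code):

{{- /* hypothetical: keep 0.2% of the request plus ~200 MiB free inside the container */ -}}
{{- $request := 10737418240 -}}  {{- /* resources.requests.memory in bytes (10 GiB) */ -}}
{{- $buffer := add (int64 (mulf $request 0.002)) 209715200 -}}
{{- $memory := sub $request $buffer -}}
--memory={{ $memory }}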

`serviceAccount` and `fsGroup` connection

Look into how serviceAccount and fsGroup are related.

# TODO how does creating a service account impact fsGroup?
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
statefulset:
  # When using persistent storage the volume will be mounted as root. In order for redpanda to use the volume
  # we must set the fsGroup to the uid of redpanda, which is 101
  podSecurityContext:
    fsGroup: 101
    # runAsNonRoot: true
    # runAsUser: 1000

`auth.tls.certs` remaining tasks

# Authentication
auth:
  #
  # TLS configuration
  tls:
    # Enable global TLS, which turns on TLS by default for all listeners
    # If enabled, each listener must set a certificate name (ie. listeners.<group>.<listener>.tls.cert)
    # If disabled, listeners may still enable TLS individually (see listeners.<group>.<listener>.tls.enabled)
    enabled: false
    # list all certificates below, then reference a certificate's name in each listener (see listeners.<listener name>.tls.cert)
    # TODO consider switching certs to list
    certs:
      # This is the certificate name that is used to associate the certificate with a listener
      # See listeners.<listener group>.tls.cert for more information
      # TODO add custom cert support: https://github.com/redpanda-data/helm-charts/pull/51
      kafka:
        # issuerRef:
        #   name: redpanda-cert1-selfsigned-issuer
        # The caEnabled flag determines whether the ca.crt file is included in the TLS mount path on each Redpanda pod
        # TODO remove caEnabled (rely on requireClientAuth): https://github.com/redpanda-data/helm-charts/issues/74
        caEnabled: true
        # duration: 43800h
      rpc:
        caEnabled: true
      admin:
        caEnabled: true
      proxy:
        caEnabled: true
      schema:
        caEnabled: true
  • remove caEnabled
  • consider switching from map to list/array

Merge value files into a single `values.yaml`

Right now there are five or six different values files (depending on the branch). If a user wants to install a cluster with all of these features in play then the command becomes long and difficult to manage. We can merge all values files, and then have boolean properties for each section that enable reading related properties. Then all properties would be in one file, and the command to install the cluster would be one of the following:

helm install redpanda .               # for installing from cloned repo
helm install redpanda redpanda/redpanda   # for installing from an external helm repo

This would also better facilitate getting redpanda added to krew, arkade, or other such k8s-tool management libraries.

Handle centralized config

With Redpanda 22.1 comes centralized config. This removes all cluster-wide props from redpanda.yaml and requires using either rpk config edit or rpk config import. The edit command is seen as the primary method for setting cluster-wide props, but this doesn't work well with infrastructure-as-code scenarios.

The helm chart currently writes all values to redpanda.yaml and redpanda complains about this during startup. Interestingly enough, it seems to respect the configuration regardless. But we should separate cluster-level and node-level config into separate sections within values.yaml, and then only write the node-level config to redpanda.yaml. Cluster-wide config could be passed in once the cluster has started with rpk config import or included in the start command with conditional arguments (redpanda start --set cloud_storage_enabled=true ...).
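
One way the split described above could look in values.yaml (property names are illustrative; only the node section would be written to redpanda.yaml, while the cluster section would be applied via rpk once the cluster is up):

config:
  node:
    crash_loop_limit: 5
  cluster:
    default_topic_replications: 3
    cloud_storage_enabled: true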

Expose the StatefulSet for use external to Kubernetes

A client needs to access each specific broker, so a LoadBalancer is needed for each broker.

The broker needs to advertise the address of the LoadBalancer, which is a little more tricky. I think it would be worth investigating https://github.com/kubernetes-sigs/external-dns - this should allow a predictable DNS entry for each LoadBalancer associated with each Redpanda broker, and generate the advertised_kafka_api with helm. Some docs here: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md

Something like:

service.loadbalancer.yaml

{{- $lb := .Values.loadBalancer }}
{{- if $lb.enabled }}
{{- range untilStep 0 (.Values.statefulset.replicas|int) 1 }}
apiVersion: v1
kind: Service
metadata:
  name: redpanda-{{ . }}
  annotations:
    external-dns.alpha.kubernetes.io/hostname: redpanda-{{ . }}.{{.Values.loadBalancer.parentZone}}
spec:
  externalTrafficPolicy: Local
  ports:
    - name: kafka
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    statefulset.kubernetes.io/pod-name: redpanda-{{ . }}
  type: LoadBalancer
---
{{- end }}
{{- end }}

statefulset.yaml

rpk --config $CONFIG config set redpanda.advertised_kafka_api.address redpanda-$NODE_ID.{{.Values.loadBalancer.parentZone}}

Values.yaml

loadBalancer:
  enabled: true
  parentZone: my-project.gcp.domain.com.

Currently Redpanda can only listen on one address, which means traffic inside the K8s cluster would have to go via the LoadBalancer.

SASL Authentication failed

I was able to go through the steps in methods 1 and 2 without issues. But when going through the steps outlined in method 3 I run into the following error:

> kubectl -n redpanda run rpk -ti --rm --restart=Never --image vectorized/redpanda:v21.11.15 --command -- /bin/bash -c "rpk topic create test-topic --user admin --password test --sasl-mechanism SCRAM-SHA-256 --brokers redpanda-0.redpanda.redpanda.svc.cluster.local.:9092 --tls-enabled --tls-truststore <(echo '$CERT')"
unable to create topics [test-topic]: SASL authentication failed: security: Invalid credentials: SASL_AUTHENTICATION_FAILED: SASL Authentication failed.

Running the following command is successful:

> helm test redpanda -n redpanda
NAME: redpanda                                   
LAST DEPLOYED: Fri Apr 29 10:17:03 2022                                                            
NAMESPACE: redpanda                                                                                
STATUS: deployed                                                                                   
REVISION: 1             
TEST SUITE:     redpanda-test-kafka-sasl-status                                                    
Last Started:   Fri Apr 29 10:19:57 2022                                                           
Last Completed: Fri Apr 29 10:20:05 2022                                                           
Phase:          Succeeded
NOTES:
Congratulations on installing redpanda!

I'll look more into this, maybe the credentials aren't actually being enabled for SASL as the README says.

===========================

Adding more details on my environment:

System:

Ubuntu 22.04
Docker 20.10.12

Install arkade (my version was v0.8.23):

curl -sLS https://get.arkade.dev | sudo sh
> ark get helm jq minikube [email protected] kubens kubectx
> minikube start --kubernetes-version v1.23.3
> ark install cert-manager
> helm install redpanda . -f values_add_tls.yaml -f values_add_sasl.yaml -n redpanda --create-namespace
> CERT=$(kubectl exec -n redpanda  redpanda-0 -c redpanda -- cat /etc/tls/certs/ca.crt)
> kubectl -n redpanda run rpk -ti --rm --restart=Never --image vectorized/redpanda:v21.11.15 --command -- /bin/bash -c "rpk topic create test-topic --user admin --password test --sasl-mechanism SCRAM-SHA-256 --brokers redpanda-0.redpanda.redpanda.svc.cluster.local.:9092 --tls-enabled --tls-truststore <(echo '$CERT')"
unable to create topics [test-topic]: SASL authentication failed: security: Invalid credentials: SASL_AUTHENTICATION_FAILED: SASL Authentication failed.
pod "rpk" deleted
pod redpanda/rpk terminated (Error)

Arkade installed [email protected], but that probably doesn't impact much since the commands above pin the k8s version.

top-level `externalAccess` in `values.yaml`

externalAccess:
  enabled: true

See https://github.com/bitnami/charts/blob/master/bitnami/kafka/values.yaml#L730

Also each listener currently has an external section that isn't part of the Redpanda API:

listeners:
  admin:
    enabled: true
    port: 9644
    address: 0.0.0.0
    external:
      enabled: false

This can be treated similarly to how the tls listener section is handled, where it overrides the global value.

Another idea regarding each listener's section: it may not need to exist at all. Most listeners are part of a group (for kafka, rest, schemaregistry, etc.), and multiple listeners, some external, can be included in the list.

Make Redpanda and helm optional properties be the same

Redpanda and the docs say the listeners for each of the following are optional:

  • config.pandaproxy
  • config.schema_registry
  • config.redpanda.admin
  • config.redpanda.kafka_api
  • config.redpanda.rpc_server

But they aren't optional in the helm chart (possibly for valid reasons). Maybe it makes sense for the helm chart and Redpanda to have the same optional properties, and for default values to be used if no value is given in values.yaml.

Verify memory/core count is correct

We should verify that enough memory is made available for the cores given. If it is not we should provide an error. We should also check to make sure full cores are used and not partial cores.

JIRA Link: K8S-1

Properly handle node_id and seed_servers

The helm chart will always set redpanda.seed_servers to be [] where redpanda.node_id is 0 (broker 0). I believe the issue is that broker 0 may not always be the leader after restarts or if there is a leader election, but the existing statefulset code still assumes broker 0 will be the leader (and sets seed_servers to []). There could also have been some issue with broker 0 that caused it to lose leadership, and so it could be in a state where it doesn't have a complete copy of all partitions. In this scenario, setting broker 0 to be the leader would result in data loss. See notes below for explanation as to why this is an issue and what we can do (both now and in versions >= 22.3) to resolve.

Investigation is needed into what happens once a new leader is elected and then a helm upgrade is applied. We should also determine the leader prior to restarting the cluster for whatever reason and set seed_servers to [] for the appropriate broker. Cluster restart should not impact seed_servers values, as they will be correctly set on all nodes, including the founding node (after the founding node is started).

Work is being done to allow setting the same seed_servers value across all brokers in a cluster, relevant ticket here: redpanda-data/redpanda#333

Also related to this, the following ticket tracks making node_id automatically assigned (and no longer set within redpanda.yaml): redpanda-data/redpanda#2793

Once the above tickets have associated PRs merged, we wouldn't have to worry about handling either node_id or seed_servers in the helm chart. See notes below for how seed_servers will be handled in the future. For now we ensure the leader (or founding node) initially has it set to [] and then populate with other brokers after startup. After 22.3 we can set seed_servers for each node in the same way from the beginning.

Flatten folder hierarchy

It looks like we could dump the contents of the redpanda folder into the root and modify a few files to update paths. This would simplify the folder hierarchy, which is minor, but it feels like it makes more sense to me. It is also a bit odd that this project keeps almost all files in yet another folder, so you always have to cd helm-charts/redpanda.

Expand `*-tls-enabled` templates to work for more than just first listener in group

There are multiple listener groups, one for each type of listener (kafka, schema, etc.). Each group has an associated template in _helpers.tpl called *_tls_enabled (replace * with group name).

Admin and RPC listeners are not groups (there is only ever one of these listeners) so this doesn't apply.

The TLS templates for these groups need to properly handle checking each listener in the group to see whether TLS is enabled. If any have TLS enabled, then the template should return true. This will ensure the proper tests are run, and also enable future improvements/simplifications to the tests.
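
One possible shape for such a helper, assuming each entry under listeners.kafka.external carries a tls.enabled flag (a sketch only, not the chart's actual template):

{{- define "redpanda.kafka-external-tls-enabled" -}}
{{- $enabled := false -}}
{{- range $name, $listener := .Values.listeners.kafka.external -}}
{{- if and $listener.tls $listener.tls.enabled -}}
{{- $enabled = true -}}
{{- end -}}
{{- end -}}
{{- $enabled -}}
{{- end -}}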

Expand top-level `storage` parameter

# Persistence
storage:
  enabled: true
  # Absolute path on host to store Redpanda's data.
  # If not specified, then `emptyDir` will be used instead.
  # If specified, but `persistentVolume.enabled` is `true`, then has no effect.
  hostPath: ""
  # If `enabled` is `true` then a PersistentVolumeClaim will be created and
  # used to store Redpanda's data, otherwise `hostPath` is used.
  #
  persistentVolume:
    enabled: true
    size: 100Gi
    # If defined, then `storageClassName: <storageClass>`.
    # If set to "-", then `storageClassName: ""`, which disables dynamic
    # provisioning.
    # If undefined or empty (default), then no `storageClassName` spec is set,
    # so the default provisioner will be chosen (gp2 on AWS, standard on
    # GKE, AWS & OpenStack).
    storageClass: ""
    # Additional labels to apply to the created PersistentVolumeClaims.
    labels: {}
    # Additional annotations to apply to the created PersistentVolumeClaims.
    annotations: {}
  #
  # Shadow indexing
  cloud:
    enabled: true                             # enable cloud storage
    region: "local"                           # S3 region
    bucket: "redpanda"                        # S3 bucket
    api:
      address: "minio"                        # S3 API endpoint (will be generated for AWS if left blank)
      port: 9000                              # S3 API port (default 443)
    accessKey: "minio"                        # S3 access key
    secretKey: "minio123"                     # S3 secret key
    remoteReadEnabled: true                   # cluster-wide cloud storage read
    remoteWriteEnabled: true                  # cluster-wide cloud storage write
    maxUploadWaitIntervalSec: 30              # seconds between segment uploads (else segments uploaded when size == segment.bytes)
    tlsEnabled: true

EDIT:

This has become a much more focused issue.

The bitnami example is good, and we already have a top-level storage section which is the main benefit. We can open another ticket in the future to investigate adding new functionality to this section as needed, and when focusing on those tickets we can see if and how the bitnami chart handles those features.

We had an internal discussion around a storage.enabled parameter and decided to remove it, as we don't see a reason someone would want to disable persistence at the moment. We can revisit this in the future if it makes sense.

Lowering the default persistent volume size is still a great idea and will be included.

The last issue that wasn't initially spelled out but is being focused on is making sure the proper ownership of the persistent volume of type HostPath is set. This happens automatically in some environments, but in minikube it doesn't (see this issue). Resolving this ticket will mean setting storage.persistentVolume.enabled to true or false works in any environment.

The last change alluded to above is including shadow indexing features, and that already has another ticket and will be completed when that ticket is focused on (likely not part of the initial release of v2).

Add support for SASL

Currently, the config map needs to be updated manually after deployment to turn on SASL.

`require_client_auth` exists but is never used

The parameter require_client_auth appears in the TLS sections of the various listeners, but is never used to generate any Kubernetes object. This must be a copy/paste issue and could be removed, but I want to look into where it came from, the benefits of making it functional, and the time it would take to do so.

Add to helm repo https://charts.vectorized.io

Now that we have brought back and are actively improving this chart, we should improve on ease-of-use by adding it to the helm repo. At the moment users must clone the repo in order to use it, which is more involved than:

> helm repo add redpanda https://charts.vectorized.io
> helm install redpanda redpanda/redpanda

This will also facilitate getting this chart into other tools such as arkade, artifacthub, and others.

The repo already has the operator-based helm chart:

> helm search repo redpanda
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                 
redpanda/redpanda-operator      v22.1.3         v22.1.3         Redpanda operator helm chart

This gets updated with the following CI workflow, which can be copied and modified slightly for this chart: https://github.com/redpanda-data/redpanda/actions/workflows/helm-chart-release.yml

I assume the chart name would be redpanda/redpanda.

Remove ability to set non-container properties

Some config properties have values that would never work in a container environment. A couple of examples:

  • tune_cpu
  • tune_disk_irq

These properties should not be included in values.yaml, and if detected they should either 1) be ignored or 2) block startup of the cluster (and print a useful error message).

If the default values for these props are not what is needed for a container environment, then these props should have their values hard-coded to the correct value.

JIRA Link: K8S-3

Test each type of listener

Tests are hard-coded to specific listeners, but there could be many listeners with various combinations of parameter values that need to be tested.

JIRA Link: K8S-6

Add `pandaproxy_client` and `schema_registry_client` sections, additional `rpk` parameters

config:
  pandaproxy_client:
    sasl_mechanism:                                           # The SASL mechanism to use when connecting
    consumer_heartbeat_interval_ms: 500                       # Interval (in milliseconds) for consumer heartbeats
    consumer_rebalance_timeout_ms: 2000                       # Timeout (in milliseconds) for consumer rebalance
    consumer_request_max_bytes: 1048576                       # Max bytes to fetch per request
    consumer_request_timeout_ms: 100                          # Interval (in milliseconds) for consumer request timeout
    scram_username:                                           # Username to use for SCRAM authentication mechanisms
    produce_batch_delay_ms: 100                               # Delay (in milliseconds) to wait before sending batch
    produce_batch_size_bytes: 1048576                         # Number of bytes to batch before sending to broker
    consumer_session_timeout_ms: 300000                       # Timeout (in milliseconds) for consumer session
    produce_batch_record_count: 1000                          # Number of records to batch before sending to broker
    retry_base_backoff_ms: 100                                # Delay (in milliseconds) for initial retry backoff
    retries: 5                                                # Number of times to retry a request to a broker
    scram_password:                                           # Password to use for SCRAM authentication mechanisms
#    broker_tls: { enabled: 0 key/cert files: {nullopt} ca file: {nullopt} client_auth_required: 0 }  # TLS configuration for the brokers
    brokers: [{host: "172.17.0.7", port: 9092}]                 # List of address and port of the brokers

  schema_registry_client:
    sasl_mechanism:                                           # The SASL mechanism to use when connecting
    consumer_heartbeat_interval_ms: 500                       # Interval (in milliseconds) for consumer heartbeats
    consumer_rebalance_timeout_ms: 2000                       # Timeout (in milliseconds) for consumer rebalance
    consumer_request_max_bytes: 1048576                       # Max bytes to fetch per request
    consumer_request_timeout_ms: 100                          # Interval (in milliseconds) for consumer request timeout
    scram_username:                                           # Username to use for SCRAM authentication mechanisms
    produce_batch_delay_ms: 0                                 # Delay (in milliseconds) to wait before sending batch
    produce_batch_size_bytes: 0                               # Number of bytes to batch before sending to broker
    consumer_session_timeout_ms: 10000                        # Timeout (in milliseconds) for consumer session
    produce_batch_record_count: 0                             # Number of records to batch before sending to broker
    retry_base_backoff_ms: 100                                # Delay (in milliseconds) for initial retry backoff
    retries: 5                                                # Number of times to retry a request to a broker
    scram_password:                                           # Password to use for SCRAM authentication mechanisms
#    broker_tls: { enabled: 0 key/cert files: {nullopt} ca file: {nullopt} client_auth_required: 0 }  # TLS configuration for the brokers
    brokers: [{host: "172.17.0.7", port: 9092}]                 # List of address and port of the brokers

    # unknown
    enable_admin_api: true                                      # Enable admin API
    segment_bytes:                                          # for ballpark upper limit, divide disk size (in bytes) by number of partitions

  rpk:
    coredump_dir: /var/lib/redpanda/coredump
    enable_memory_locking: false                              # Enables memory locking.
    enable_usage_stats: false                                 # Send usage stats back to Redpanda
    overprovisioned: true                                     # This should be set to true unless CPU affinity is possible
    tune_aio_events: false                                    # Increases the number of allowed asynchronous IO events
    tune_ballast_file: false                                  # 
    tune_clocksource: false                                   # Syncs NTP
    tune_coredump: false                                      # Installs a custom script to process coredumps and save them to the given directory
    tune_disk_nomerges: false                                 # 
    tune_disk_scheduler: false                                # 
    tune_disk_write_cache: false                              # 
    tune_fstrim: false                                        # 
    tune_network: false                                       # 
    tune_swappiness: false                                    # 
    tune_transparent_hugepages: false                         # 

Remove cert-manager requirement

Disabling TLS for some reason doesn't remove the requirement to have cert-manager installed prior to installing Redpanda. By default the helm chart should install Redpanda without first requiring cert-manager.

Use the upstream MetalLB Helm chart

Instead of using the downstream Bitnami chart, we can use the upstream/origin MetalLB chart. Instructions are here.

This also lets us simply point to MetalLB's documentation for installation via Helm, Kustomize or manifest instead of duplicating that information here.

Create users listed in `auth.sasl.users`

# Authentication
auth:
  #
  # SASL configuration
  sasl:
    enabled: false
    # user list
    # TODO create user at startup: https://github.com/redpanda-data/helm-charts/issues/73
    users:
    - name: admin
      # If password isn't given, then the secretName must point to an already existing secret
      # Work in progress! secretName is not yet used (planned for v2 release)
      # passwordSecret: adminPassword

Expand details of each entry in users list, possibly pointing to a secret for the user password.
