
helm-charts's Introduction

Fluent Helm Charts


This functionality is in beta and is subject to change. The code is provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add fluent https://fluent.github.io/helm-charts

You can then run helm search repo fluent to see the charts.
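You can then install a chart from the repo, for example (release names are just examples):

helm install fluent-bit fluent/fluent-bit
helm install fluentd fluent/fluentd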

Contributing

We'd love to have you contribute! Please refer to our contribution guidelines for details.

License

Apache 2.0 License.

helm-charts's People

Contributors

alecrajeev, alexismtr, applike-ss, boojapho, dependabot[bot], dioguerra, dshackith, edsiper, elyzov, felfa01, havefun83, hobti01, hwchiu, iamleot, jkosecki, malderete, mhoyer, mikutas, naseemkullah, pecastro, raffis, sebbrandt87, slariviere, sppwf, stevehipwell, th0masl, thethir13en, tombokombo, towmeykaw, wenchajun



helm-charts's Issues

deploy.spec.updateStrategy vs. deploy.spec.strategy

Inside the Deployment manifest the chart renders an updateStrategy field, but updateStrategy is not a valid field on a Deployment (Deployments use spec.strategy; only DaemonSets and StatefulSets use spec.updateStrategy):

updateStrategy:
{{- toYaml . | nindent 4 }}
{{- end }}

jkr@joe-nb helm-charts-1 % kubectl explain deploy.spec.updateStrategy                                                     
error: field "updateStrategy" does not exist


I could provide a PR to fix this (and update the values key name, too), but I'm not sure how to bump the chart version: is this a breaking change that needs a major bump, or just a patch bump?
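For reference, a minimal sketch of the corrected template fragment (keeping the existing values key name purely for illustration; renaming it is a separate decision):

{{- with .Values.updateStrategy }}
  strategy:
    {{- toYaml . | nindent 4 }}
{{- end }}

DaemonSets and StatefulSets keep using spec.updateStrategy; only the Deployment template needs spec.strategy.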

Differences from the fluent-bit chart in helm/charts

I am looking into migrating from the [helm chart](https://github.com/helm/charts/blob/master/stable/fluent-bit/) to the chart from this repo.
This makes sense since that repo is being deprecated.

Looking at the fluent-bit chart here, there seems to be less "stuff" than in the stable chart. Some of that makes sense (apiVersions, for example: it makes sense not to support older Kubernetes versions here), but some functionality/options appear to be missing.
I'll be happy to put up a PR to bring this functionality over to this chart if that makes sense to you.

[fluentd] Ability to add custom config files is missing

Currently there is no way to define custom config files, either by:

  1. Adding one via a predefined include() block or a custom YAML block in values.yaml
  2. Configuring a set of variables to dynamically build one from a predefined template.

Are there any plans to implement either of the above in the near future?
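For illustration, option 1 could look something like this in values.yaml (the fileConfigs key name is an assumption, used here only to sketch the idea):

fileConfigs:
  05_custom.conf: |-
    <source>
      @type http
      port 9880
      bind 0.0.0.0
    </source>

The chart would then render each entry into the ConfigMap and include it from the main config.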

kmsg input conflicts with psp

I tried to deploy Fluent Bit via the Helm chart (with PSP and the kmsg input enabled) and ran into a problem: the pods aren't able to start.
Logs:

[2021/04/22 13:01:17] [ info] [engine] started (pid=1)
[2021/04/22 13:01:17] [ info] [storage] version=1.1.1, initializing...
[2021/04/22 13:01:17] [ info] [storage] in-memory
[2021/04/22 13:01:17] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/04/22 13:01:17] [error] [plugins/in_kmsg/in_kmsg.c:291 errno=2] No such file or directory
[2021/04/22 13:01:17] [error] Failed initialize input kmsg.1
[2021/04/22 13:01:17] [ info] [input] pausing systemd.0
[2021/04/22 13:01:17] [error] [lib] backend failed

After that, I added

daemonSetVolumes:
...
  - name: kmsg
    hostPath:
      path: /dev/kmsg
      type: CharDevice
...

and

daemonSetVolumeMounts:
...
  - name: kmsg
    mountPath: /dev/kmsg
    readOnly: true
...

After that, I ran into another problem:

[2021/04/22 13:39:03] [ info] [engine] started (pid=1)
[2021/04/22 13:39:03] [ info] [storage] version=1.1.1, initializing...
[2021/04/22 13:39:03] [ info] [storage] in-memory
[2021/04/22 13:39:03] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/04/22 13:39:03] [error] [plugins/in_kmsg/in_kmsg.c:291 errno=1] Operation not permitted
[2021/04/22 13:39:03] [error] [lib] backend failed
[2021/04/22 13:39:03] [error] Failed initialize input kmsg.1
[2021/04/22 13:39:03] [ info] [input] pausing systemd.0

I found a similar issue, but if you try to add the settings below, you run into trouble with the PSP:

securityContext: 
    privileged: true
    runAsUser: 0

Should I open a PR that adds more ability to tune the PSP from values.yaml?
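For reference, a rough sketch of the kind of PSP that would allow this (field values are assumptions; the host paths the chart already mounts also need entries in allowedHostPaths):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: fluent-bit-kmsg
spec:
  privileged: true                 # required for reading /dev/kmsg as root
  allowedHostPaths:
    - pathPrefix: /dev/kmsg
      readOnly: true
    - pathPrefix: /var/log         # paths the chart already mounts
    - pathPrefix: /var/lib/docker/containers
    - pathPrefix: /etc/machine-id
  volumes:
    - hostPath
    - configMap
    - secret
    - emptyDir
  runAsUser:
    rule: RunAsAny                 # allows runAsUser: 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny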

support for [PLUGINS]

Hey, this might not even be an issue, as I'm new to Helm, this chart, and Fluent Bit in general.
I want to add a [PLUGINS] block (is that the right terminology?), which currently lives in plugins.conf.
Is that possible here? It should be similar to config.customParsers, config.inputs, etc.
Is there a workaround if not?

Thanks!
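For reference, a plugins.conf block normally looks like this (the .so path is a made-up example); the main config points at it via the Plugins_File key in the [SERVICE] section:

[PLUGINS]
    Path /fluent-bit/bin/out_custom.so

Whether the chart should expose this as a dedicated value is exactly the question here.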

tls_verify: "on" tls_ca: ""

How do I get a CA.pem file into the fluent-bit pod?
There is a setting:
tls: "on"
tls_verify: "on"
# TLS certificate for the Elastic (in PEM format). Use if tls = on and tls_verify = on.
tls_ca: ""
# TLS debugging levels = 1-4
Should I put the path to CA.pem in the tls_ca: "" line?
And how does the CA.pem file get into the pod in the first place?
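One approach that should work with this chart (a sketch, assuming the CA is stored in a Kubernetes Secret named es-tls-ca) is to mount it via extraVolumes/extraVolumeMounts and then point the output's TLS option at the mounted path:

extraVolumes:
  - name: es-tls-ca
    secret:
      secretName: es-tls-ca
extraVolumeMounts:
  - name: es-tls-ca
    mountPath: /fluent-bit/tls
    readOnly: true

With that in place, tls_ca (or tls.ca_file in a raw [OUTPUT] section) would be set to /fluent-bit/tls/CA.pem, where CA.pem is the key inside the Secret.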

fluent-bit pushing to elastic

I'm deploying the fluent-bit Helm chart into a cluster together with the Bitnami Elasticsearch chart. This is my values.yaml, where I override the Elasticsearch host but otherwise leave everything at the defaults:

fluent-bit:
  config:
    outputs: |
      [OUTPUT]
          Name es
          Match kube.*
          Host {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Retry_Limit False
      [OUTPUT]
          Name es
          Match host.*
          Host {{ .Release.Name }}-elasticsearch-master
          Logstash_Format On
          Logstash_Prefix node
          Retry_Limit False

Elastic seems to be up:

>kubectl describe service cosmos-elasticsearch-master
Name:              cosmos-elasticsearch-master
Namespace:         default
Labels:            app.kubernetes.io/component=master
                   app.kubernetes.io/instance=cosmos
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=elasticsearch
                   helm.sh/chart=elasticsearch-14.2.1
Annotations:       meta.helm.sh/release-name: cosmos
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/component=master,app.kubernetes.io/instance=cosmos,app.kubernetes.io/name=elasticsearch
Type:              ClusterIP
IP:                10.96.187.224
Port:              http  9200/TCP
TargetPort:        http/TCP
Endpoints:         10.244.1.44:9200,10.244.3.31:9200
Port:              tcp-transport  9300/TCP
TargetPort:        transport/TCP
Endpoints:         10.244.1.44:9300,10.244.3.31:9300
Session Affinity:  None
Events:            <none>

And fluent-bit is deployed:

>kubectl describe daemonset.apps/cosmos-fluent-bit
Name:           cosmos-fluent-bit
Selector:       app.kubernetes.io/instance=cosmos,app.kubernetes.io/name=fluent-bit
Node-Selector:  <none>
Labels:         app.kubernetes.io/instance=cosmos
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=fluent-bit
                app.kubernetes.io/version=1.7.2
                helm.sh/chart=fluent-bit-0.15.1
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: cosmos
                meta.helm.sh/release-namespace: default
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/instance=cosmos
                    app.kubernetes.io/name=fluent-bit
  Annotations:      checksum/config: dce583b7cabafc0873dd55b1d27f2f134fd0682c8cabd2a4c5c76750bc36143f
                    checksum/luascripts: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
  Service Account:  cosmos-fluent-bit
  Containers:
   fluent-bit:
    Image:        fluent/fluent-bit:1.7.2
    Port:         2020/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/machine-id from etcmachineid (ro)
      /fluent-bit/etc/custom_parsers.conf from config (rw,path="custom_parsers.conf")
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      fluent-bit/etc/fluent-bit.conf from config (rw,path="fluent-bit.conf")
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cosmos-fluent-bit
    Optional:  false
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
   etcmachineid:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/machine-id
    HostPathType:  File
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  24m   daemonset-controller  Created pod: cosmos-fluent-bit-96n67
  Normal  SuccessfulCreate  24m   daemonset-controller  Created pod: cosmos-fluent-bit-x7k5l
  Normal  SuccessfulCreate  24m   daemonset-controller  Created pod: cosmos-fluent-bit-8wwmm

However, it seems that none of the fluent-bit pods can reach the cosmos-elasticsearch-master service. I get a constant stream of these:

[2021/03/29 19:19:11] [error] [upstream] connection #XX to cosmos-elasticsearch-master:9200 timed out after 10 seconds

I see that issue #13 perhaps had similar problems with fluentd, but I'm not sure what the solution involving environment variables was. Any ideas?
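As a quick sanity check (not specific to this chart), it may help to verify that the Elasticsearch service is reachable from inside the cluster at all, for example with a throwaway busybox pod:

kubectl run es-check --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://cosmos-elasticsearch-master:9200

If that also times out, the problem is more likely a network policy or Elasticsearch itself rather than the fluent-bit configuration.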

Output plugin 'loki' cannot be loaded - fluent/fluent-bit:1.6.1

In our Kubernetes cluster we use Fluent Bit, Loki, and Grafana.
Since the Fluent Bit 1.6.1 image contains a Loki output plugin, we wanted to use image 1.6.1 with the Helm chart.

We get the following error message from the fluentbit pod:

2020-10-20T07:55:43.405588570Z Fluent Bit v1.6.0
2020-10-20T07:55:43.405668159Z * Copyright (C) 2019-2020 The Fluent Bit Authors
2020-10-20T07:55:43.405677844Z * Copyright (C) 2015-2018 Treasure Data
2020-10-20T07:55:43.405683483Z * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
2020-10-20T07:55:43.405688511Z * https://fluentbit.io
2020-10-20T07:55:43.405693483Z
2020-10-20T07:55:43.406466289Z Output plugin 'loki' cannot be loaded
2020-10-20T07:55:43.406488412Z Error: You must specify an output target. Aborting
2020-10-20T07:55:43.406494765Z

Our values.yml looks like this:

kind: DaemonSet
replicaCount: 1
image:
  repository: "fluent/fluent-bit"
  pullPolicy: Always
  tag: "1.6.1"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  create: true
  annotations: {}
  name:
rbac:
  create: true
podSecurityPolicy:
  create: false
podSecurityContext: {}
securityContext: {}
service:
  type: ClusterIP
  port: 2020
  labels: {}
  annotations: {}
serviceMonitor:
  enabled: false
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
podLabels: {}
priorityClassName: ""
env: []
envFrom: []
extraPorts: []
extraVolumes: []
extraVolumeMounts: []
config:
  service: |
    [SERVICE]
        Flush 1
        Daemon Off
        Log_Level info
        Parsers_File parsers.conf
        Parsers_File custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 10MB
        Refresh_Interval  10
        Ignore_Older 2d
        Skip_Long_Lines On
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Merge_Log_Key app
        Keep_Log Off
        K8S-Logging.Parser Off
        K8S-Logging.Exclude Off
  outputs: |
    [OUTPUT]
        name               loki
        match              *
        host               loki.monitoring.svc.cluster.local
        port               3100
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

I don't quite understand why the fluent-bit log says version 1.6.0 even though we specified version 1.6.1; the pod description also shows version 1.6.1.
I suspect that this is why it does not recognize the loki output plugin.
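A quick way to confirm which image is actually running (the image ID doesn't lie even when tags are reused):

kubectl get pods -l app.kubernetes.io/name=fluent-bit \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\t"}{.status.containerStatuses[0].imageID}{"\n"}{end}'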

conditional path for varlibdockercontainers if using containerd/cri-o

I couldn't help but notice that /var/lib/docker/containers is hardcoded, but when using containerd/CRI-O I believe this path doesn't exist. On my nodes, for example, there are /var/lib/containerd and /var/lib/containers.

I'm not sure how that path is used by fluent-bit (or fluentd, I suppose), but shouldn't this be updated, or turned into a config option that changes when the runtime is containerd/CRI-O?
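If the chart version in use exposes daemonSetVolumes/daemonSetVolumeMounts (as shown in the kmsg issue above), something like the following might work as an override on containerd nodes; the exact paths are assumptions and vary by distribution:

daemonSetVolumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibcontainerd
    hostPath:
      path: /var/lib/containerd
daemonSetVolumeMounts:
  - name: varlog
    mountPath: /var/log
    readOnly: true
  - name: varlibcontainerd
    mountPath: /var/lib/containerd
    readOnly: true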

Add Servicemonitor support

Hi Team,

Would you consider adding support for ServiceMonitors, as used with the Prometheus Operator?

Thanks
Sergiu
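For what it's worth, the fluent-bit chart values shown earlier on this page already expose a toggle for this; enabling it would look roughly like:

serviceMonitor:
  enabled: true

Whether the fluentd chart has (or should get) the same option is the open question.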

helm test doesn't create the test pod for "Fluent-bit" chart when Kind is "DaemonSet"

Logs :

helm test fluentbit -n logging --logs

Pod fluentbit-fluent-bit-test-connection pending
Pod fluentbit-fluent-bit-test-connection pending
NAME: fluentbit
LAST DEPLOYED: Tue Feb 16 14:48:44 2021
NAMESPACE: logging
STATUS: deployed
REVISION: 1
TEST SUITE: fluentbit-fluent-bit-test-connection
Last Started: Tue Feb 16 15:03:22 2021
Last Completed: Tue Feb 16 15:03:28 2021
Phase: Succeeded
NOTES:
Get Fluent Bit build information by running these commands:

export POD_NAME=$(kubectl get pods --namespace logging -l "app.kubernetes.io/name=fluent-bit,app.kubernetes.io/instance=fluentbit" -o jsonpath="{.items[0].metadata.name}")
echo "curl http://127.0.0.1:2020 for Fluent Bit build information"
kubectl --namespace logging port-forward $POD_NAME 2020:2020

Error: unable to get pod logs for fluentbit-fluent-bit-test-connection: pods "fluentbit-fluent-bit-test-connection" not found

Note: if we remove the labels from "test-connection.yaml", this issue is resolved.

labels:
  {{- include "fluent-bit.labels" . | nindent 4 }}

It looks like the fluent-bit DaemonSet has selector labels which match those of the test-connection pod:

selector:
  matchLabels:
    app.kubernetes.io/instance: fluentbit
    app.kubernetes.io/name: fluent-bit
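A possible fix (a rough sketch, not tested here) is to give the test pod labels that the DaemonSet selector does not match, instead of the full shared label set:

metadata:
  name: {{ include "fluent-bit.fullname" . }}-test-connection
  labels:
    app.kubernetes.io/component: test
  annotations:
    "helm.sh/hook": test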

Fluentd unable to install when deploying prometheus rule

After a lot of debugging, the only error output I get is:

Error: template: meh/charts/fluentd/templates/_helpers.tpl:15:14: executing "fluentd.fullname" at <.Values.fullnameOverride>: can't evaluate field Values in type []interface {}
helm.go:81: [debug] template: meh/charts/fluentd/templates/_helpers.tpl:15:14: executing "fluentd.fullname" at <.Values.fullnameOverride>: can't evaluate field Values in type []interface {}

I managed to figure out that the issue comes from deploying charts/fluentd/templates/prometheusrules.yaml: the with statement causes line 16, {{ template "fluentd.fullname" . }}, to throw this exception.
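Inside a with block the dot is rebound to the value of the with expression, so the named template no longer receives the chart context. A common fix (a sketch; the values path here is hypothetical) is to pass the root context explicitly:

{{- with .Values.metrics.prometheusRule.rules }}
  name: {{ template "fluentd.fullname" $ }}   # $ is the root context; . has been rebound by "with"
{{- end }}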

[fluentd] Add separate aggregator chart

The two use cases for Fluentd are very different in implementation, so rather than trying to shoehorn both into a single chart it would make sense to have a separate fluentd-aggregator chart. I already have a fluentd-aggregator chart in my repo and would be happy to create a PR to add it to this repo and deprecate mine.

It would also be useful to have an aggregator Docker image to handle the common aggregation endpoints; an aggregator isn't as concerned with size and if the plugins are layered on top of the base image it would keep the layers easily portable. I've created my own image for my Helm chart, although the chart will work fine with the base Fluentd image (by default it outputs to null and has stdout debugging available), but an official version would be better.

I'd like to call out @mrd2 for his work on PR #55 for the Fluentd collector chart; this proposal is intended to augment that work, not replace it.

[fluent-bit] Different way of doing config

Hi! I was messing around a bit with the config of fluent-bit. One problem I see currently is the inability to mix sections such as [INPUT][FILTER][INPUT].
I made a quick test using a list in values.yaml.

pipeline:
  - pipelineType: input
    Name: tail
    Path: /var/log/containers/*.log
    Parser: docker
    Tag: kube.*
    Mem_Buf_Limit: 5MB
    Skip_Long_Lines: "On"    
  - pipelineType: filter
    Name: kubernetes
    Match: kube.*
    Merge_Log: "On"
    Keep_Log: "Off"
    K8S-Logging.Parser: "On"
    K8S-Logging.Exclude: "On"
  - pipelineType: input
    Name: systemd
    Tag: host.*
    Systemd_Filter: _SYSTEMD_UNIT=kubelet.service
    Read_From_Tail: "On" 

and then rendering it with a range

{{- range $key, $pipeline := .Values.pipeline }}
    [{{ $pipeline.pipelineType | upper }}]
    {{- range $key, $val := $pipeline }} 
        {{ if eq $key "pipelineType" }}
        {{- else -}}
        {{ $key }} {{ $val }} 
        {{- end -}}
    {{- end }}
{{- end }}
{{ else }}

Is this something we want? It could be offered as a second option while still keeping the current solution.
If so, I can make a PR; otherwise it was a fun exercise :D

feature: deploy fluent-bit without kubernetes monitoring

fluent-bit is an excellent, fast, and lightweight log parser and forwarder. It is a great choice for forwarding logs to upstream enterprise reporting solutions without capturing the Kubernetes logs.

A simple method to disable the Kubernetes monitoring would cover this.

Describe alternatives you've considered
rolling my own chart

Additional context
The immediate use case is for the network team to create an instance of fluent-bit that forwards syslogs to ELK and Splunk. They do not have access to the cluster resources, so the DaemonSet fails to spin up the pod.
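For illustration, the kind of values that might get close to this with the current chart (a sketch; the syslog input settings are just an example):

rbac:
  create: false            # no cluster access needed once the kubernetes filter is dropped
serviceAccount:
  create: false
config:
  filters: ""              # omit the kubernetes filter entirely
  inputs: |
    [INPUT]
        Name   syslog
        Mode   tcp
        Listen 0.0.0.0
        Port   5140

A first-class toggle in the chart would still be nicer than hand-rolling this.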

[fluent-bit] Support --dry-run

Now that the --dry-run option is coming in fluent-bit 1.7 (fluent/fluent-bit#2179), it might be nice to support this in the helm chart.

For example, if a new config doesn't pass the dry run, the health check shouldn't pass (and/or the pod shouldn't start, or should be rolled back).

Maybe this could be implemented with an init container?
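A rough sketch of what such an init container could look like in the DaemonSet template (paths and volume name taken from how the chart mounts its config today):

initContainers:
  - name: config-check
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    command: ["/fluent-bit/bin/fluent-bit"]
    args: ["--dry-run", "--config=/fluent-bit/etc/fluent-bit.conf"]
    volumeMounts:
      - name: config
        mountPath: /fluent-bit/etc

If the dry run fails the pod never starts, which gives the rollout a natural gate.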

[fluent-bit] Add pod-labels

In the current values file, the only place to add custom labels is on the Service. Labels are added to the Deployment and the Pod, but these are generated via templates and are not exposed in the values file. What I would like to see is an option similar to podAnnotations, but for labels (e.g. podLabels).
We use this to track team ownership for cost reporting.
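A sketch of what this could look like (the podLabels key mirrors podAnnotations and is the proposed addition):

# values.yaml
podLabels:
  team: platform           # example label used for cost reporting

# pod template metadata in the deployment/daemonset templates
      labels:
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}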

[fluent-bit] Add annotations to psp

Add annotations to fluent-bit psp

Is your feature request related to a problem? Please describe.
Unable to add seccomp annotations.

Describe the solution you'd like
ability to add annotations from values.yaml file

Describe alternatives you've considered
duplicate the code and add it there

Additional context
none
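A sketch of the proposed value and the kind of annotations it would carry (the podSecurityPolicy.annotations key is the proposed addition; the seccomp annotation names are the standard PSP ones):

podSecurityPolicy:
  create: true
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: runtime/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default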

adding lua filter

What's the best way to add a Lua script for filtering?
I have a .lua file containing the script but can't find where to put it.
Thanks!
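Recent versions of the fluent-bit chart appear to ship a luaScripts value rendered into its own ConfigMap (note the checksum/luascripts annotation in the DaemonSet description elsewhere on this page) and mounted under /fluent-bit/scripts. A sketch, assuming that value exists in the chart version in use:

luaScripts:
  functions.lua: |
    function append_tag(tag, timestamp, record)
      record["my_env"] = "staging"
      return 1, timestamp, record
    end

config:
  filters: |
    [FILTER]
        Name    lua
        Match   kube.*
        script  /fluent-bit/scripts/functions.lua
        call    append_tag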

Support service topologyKeys for fluent-bit chart

Hi team. Thanks for the chart!
Would you consider supporting topologyKeys for the fluent-bit service: https://github.com/fluent/helm-charts/blob/master/charts/fluent-bit/templates/service.yaml
We're running fluent-bit as a DaemonSet with an extra port and a tcp input to ingest structured logs from apps. topologyKeys would let clients prefer the local instance of fluent-bit and reduce inter-node traffic: https://kubernetes.io/docs/concepts/services-networking/service-topology/#prefer-node-local-zonal-then-regional-endpoints
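For reference, the change would amount to something like this in the rendered Service (topologyKeys requires the ServiceTopology feature gate and was later removed from newer Kubernetes releases in favour of topology-aware hints):

spec:
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"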

[fluentd] Fluentd not reading sources config from chart

I am deploying the fluentd chart with custom values. We mainly customise fileConfigs.01_sources.conf and fileConfigs.04_outputs.conf from the standard chart values.

In the fluentd init output I see that 01_sources.conf has not been applied to the config, but 04_outputs.conf has.

Has anyone seen this behaviour before?
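A quick way to check whether the override is being rendered at all (release and file names here are placeholders):

helm template my-fluentd fluent/fluentd -f my-values.yaml | grep -n -A 10 "01_sources"

If the rendered ConfigMap already lacks the custom sources, the problem is in how the values are merged rather than in fluentd itself.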

index name [fluent-bit] does not match pattern '^.*-\\d+$'"

I have fluent-bit deployed in a Kubernetes cluster, with the output set to push data to Elasticsearch and Kibana used for visualization.

I have created an Index Lifecycle Management policy and tied it to my "fluent-bit" index. However, I am getting the following error:

index name [fluent-bit] does not match pattern '^.*-\d+$'"

I have read many articles about this error for other Elasticsearch indexes, and they suggest naming the index so that it has a number at the end, for example "fluent-bit-00001".

However, I cannot figure out how to rename the fluent-bit index that is created in elastic search.

How can I do this?
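The usual ILM rollover setup is done on the Elasticsearch side rather than in the chart: bootstrap a numbered index behind a write alias and let fluent-bit keep writing to the alias (which it does by default, since the default es output index is fluent-bit). A sketch, with the host name assumed, and noting that the existing fluent-bit index would first need to be removed or reindexed since an alias cannot share a name with an index:

curl -XPUT "http://elasticsearch-master:9200/fluent-bit-000001" \
  -H 'Content-Type: application/json' -d '{
    "aliases": { "fluent-bit": { "is_write_index": true } }
  }'

After that, ILM can roll the alias over to fluent-bit-000002 and so on without the index-name error.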

[fluent-bit] Feature/values parity with stable/fluent-bit

With the scheduled stable chart repo obsolescence on the horizon, it is reasonable to predict a significant uptick in the number of folks consuming the charts in this repo. With that in mind, I'm wondering whether PRs to improve the feature parity and supported values of the fluent-bit chart would be accepted by the maintainers of this repo. I think aiming for 1:1 support isn't worth the effort, but I can see the value (pun intended) of providing a better experience than expecting end users to write large multiline strings for each of the config.foo elements.

Thoughts?

Thank you.

[fluent-bit] Migration guide

I tried to migrate from the old stable chart [1] to this new one and found it quite complex. A migration guide would be a good thing, with a view to deprecating the stable chart. This subject has already been discussed [2].

@naseemkullah

• Copy contents of running config of stable/fluent-bit and paste blocks into the appropriate nested values of config for fluent/fluent-bit
• Mount any extra volumes such as tls certs by means of extraVolumes and extraVolumeMounts

[1] https://github.com/helm/charts/tree/master/stable/fluent-bit
[2] #14 (comment)

Is there a way to remove the prefix (kubernetes_<field>) from the the fields that kubernetes filter extracts?

I'm deploying fluent-bit as a DaemonSet with the following ConfigMap:

Name:         fluent-bit
Namespace:    monitoring
Labels:       app.kubernetes.io/instance=fluent-bit
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=fluent-bit
              app.kubernetes.io/version=1.5.7
Annotations:  <none>

Data
====
custom_parsers.conf:
----
[PARSER]
    Name docker_no_time
    Format json
    Time_Keep Off
    Time_Key time
    Time_Format %Y-%m-%dT%H:%M:%S.%L

fluent-bit.conf:
----
[SERVICE]
    Flush 1
    Daemon Off
    Log_Level info
    Parsers_File parsers.conf
    Parsers_File custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020

[INPUT]
    Name tail
    Path /var/log/containers/*.log
    Parser docker
    Tag kube.*
    Mem_Buf_Limit 5MB
    Skip_Long_Lines On
    DB /var/log/flb_kube.db

[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    Merge_Log_Trim On
    Keep_Log On
    K8S-Logging.Parser On
    K8S-Logging.Exclude On

[FILTER]
    Name record_modifier
    Match *
    Record fluent-bit-host ${HOSTNAME}

[FILTER]
    Name record_modifier
    Match *
    Record facility fluent-bit
    
[FILTER]
    Name modify
    Match *
    rename log message

[OUTPUT]
    Name gelf
    _facility "fluent-bit"
    Match kube.*
    Host graylog.host.com
    Port 12206
    Mode tcp 
    Gelf_Short_Message_Key message
    Gelf_Level_Key 6
    Gelf_Host_Key fluent-bit-host

All the GELF messages received by Graylog that come from the kubernetes filter are prefixed with "kubernetes_", such as kubernetes_container_name, kubernetes_container_image, kubernetes_labels_release, and so on. I would like to find a way to extract the Kubernetes info without the kubernetes_ prefix being appended. Is there a way to configure the kubernetes filter to do this?
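One approach that might work (untested here) is to lift the nested kubernetes map to the top level with the nest filter before the GELF output flattens it, so the keys arrive without the kubernetes_ prefix:

[FILTER]
    Name         nest
    Match        kube.*
    Operation    lift
    Nested_under kubernetes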

How to use fluent-bit to push log to big query

Hello,
As I read in the official Fluent Bit documentation, we can use fluent-bit to push logs to BigQuery.
Below are the steps I followed to push logs to BigQuery:

  1. Create a table with auto-detection on BigQuery
  2. Use this chart to create a fluent-bit to tail and push log

This is the debug log I got in our case:

[2021/03/24 04:25:46] [debug] [bigquery:bigquery.0] created event channels: read=24 write=25
[2021/03/24 04:25:46] [ info] [output:bigquery:bigquery.0] project='<project-id>' dataset='staging' table='backend_stag_log_test'
[2021/03/24 04:25:46] [debug] [output:bigquery:bigquery.0] JWT signature:
xxxxxxx
[2021/03/24 04:25:46] [debug] [http_client] not using http_proxy for header
[2021/03/24 04:25:46] [debug] [http_client] header=POST /oauth2/v4/token HTTP/1.1
Host: www.googleapis.com
Content-Length: 735
Content-Type: application/x-www-form-urlencoded


[2021/03/24 04:25:46] [ info] [oauth2] HTTP Status=200
[2021/03/24 04:25:46] [debug] [oauth2] payload:
{"access_token":"xxxxxxxx","expires_in":3599,"token_type":"Bearer"}
[2021/03/24 04:25:46] [ info] [oauth2] access token from 'www.googleapis.com:443' retrieved
[2021/03/24 04:25:46] [debug] [router] match rule tail.0:bigquery.0
[2021/03/24 04:25:46] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2021/03/24 04:25:46] [ info] [sp] stream processor started


[2021/03/24 04:31:11] [debug] [input:tail:tail.0] 0 new files found on path '/var/log/containers/backend-stag-*.log'
[2021/03/24 04:31:12] [debug] [input:tail:tail.0] inode=199365869 events: IN_MODIFY 
[2021/03/24 04:31:12] [debug] [task] created task=0x7f5df4237c80 id=0 OK
[2021/03/24 04:31:12] [debug] [http_client] not using http_proxy for header
[2021/03/24 04:31:12] [debug] [http_client] header=POST /bigquery/v2/projects/<project-id>/datasets/staging/tables/backend_stag_log_test/insertAll HTTP/1.1
Host: www.googleapis.com
Content-Length: 2107
User-Agent: Fluent-Bit
Content-Type: application/json
Authorization: Bearer ya29.c.xxxxxx-dDqtzdWqsI-9168zgOWPHzjjCKcOjAeMpmy9iIxYdCD4VtRcEALuqVJysJ8Xx-xxxx-no4zmiPUKnU
[2021/03/24 04:31:12] [debug] [output:bigquery:bigquery.0] HTTP Status=200
[2021/03/24 04:31:12] [debug] [socket] could not validate socket status for #44 (don't worry)
[2021/03/24 04:31:12] [debug] [out coro] cb_destroy coro_id=13
[2021/03/24 04:31:12] [debug] [task] destroy task=0x7f5df4237c80 (task_id=0)

I can see that fluent-bit pushed the logs to BigQuery successfully (HTTP Status=200), but when I check the data in the BigQuery table, there is nothing there.

kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

helm version:

version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

fluent-bit version:

1.7-debug

So, how can I continue to debug this situation? And are there any guides for implementing this with fluent-bit? The official documentation is not enough right now.
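Two things worth checking: the insertAll API returns HTTP 200 even when individual rows are rejected (row-level failures show up in the insertErrors field of the response body), and freshly streamed rows sit in the streaming buffer for a while before they appear in the table preview, although they should be visible to queries. A quick query to confirm whether anything has landed:

bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM `<project-id>.staging.backend_stag_log_test`'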

[fluentd] failure to push to elastic search

I've been trying to deploy fluentd onto our Kubernetes cluster and connect it to a fresh install of Elasticsearch (using the Helm chart Elastic provides); however, I keep getting the following error:

2020-05-04 20:35:32 +0000 [warn]: #0 [out_es] failed to flush the buffer. retry_time=7 next_retry_seconds=2020-05-04 20:36:03 +0000 chunk="5a4d8785f38d1d77842098a8627353ef" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached"
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-4.0.7/lib/fluent/plugin/out_elasticsearch.rb:978:in `rescue in send_bulk'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-4.0.7/lib/fluent/plugin/out_elasticsearch.rb:940:in `send_bulk'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-4.0.7/lib/fluent/plugin/out_elasticsearch.rb:774:in `block in write'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-4.0.7/lib/fluent/plugin/out_elasticsearch.rb:773:in `each'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluent-plugin-elasticsearch-4.0.7/lib/fluent/plugin/out_elasticsearch.rb:773:in `write'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.10.2/lib/fluent/plugin/output.rb:1133:in `try_flush'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.10.2/lib/fluent/plugin/output.rb:1439:in `flush_thread_run'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.10.2/lib/fluent/plugin/output.rb:461:in `block (2 levels) in start'
  2020-05-04 20:35:32 +0000 [warn]: #0 /fluentd/vendor/bundle/ruby/2.6.0/gems/fluentd-1.10.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2020-05-04 20:35:36 +0000 [warn]: #0 [out_es] failed to flush the buffer. retry_time=8 next_retry_seconds=2020-05-04 20:36:10 +0000 chunk="5a4d8783dcaa0c27cd3c08fe81e0663c" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch-master\", :port=>9200, :scheme=>\"http\", :path=>\"\"}): read timeout reached"
  2020-05-04 20:35:36 +0000 [warn]: #0 suppressed same stacktrace

I'm deploying using Helm 3.2 onto a Kubernetes 1.15.5 cluster on AWS.

Fluent Bit crashing on startup with a config related error

Hello,

I am installing fluent-bit in a Kubernetes cluster using the Helm chart. The generated ConfigMap is this:

apiVersion: v1
data:
  custom_parsers.conf: |
    [PARSER]
        # https://docs.fluentbit.io/manual/pipeline/parsers/regular-expression

        Name        javak
        Format      regex
        Regex       /(?<timestamp>[\d]{4}-[\d]{2}-[\d]{2}\s[\d]{2}:[\d]{2}:[\d]{2},[\d]{3})\s\[(?<thread>[^ ]*)\]\s(?<level>[^ ]*)\s(?<class>[^ ]*)\s-\s(?<msg>.*)/
        #Time_Key    timestamp
        # https://linux.die.net/man/3/strptime
        #Time_Format %Y-%m-%d %H:%M:%S
  fluent-bit.conf: |
    [SERVICE]
        # https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/configuration-file

        Log_Level debug

        # Helm chart takes configs.customParsers param and puts the parser data under this file name
        # https://github.com/fluent/helm-charts/blob/main/charts/fluent-bit/templates/configmap.yaml
        Parsers_File custom_parsers.conf

    @INCLUDE custom_parsers.conf
    [INPUT]
        # https://docs.fluentbit.io/manual/pipeline/inputs/tail
        # https://fluentbit.io/blog/2020/12/15/multiline-logging-with-fluent-bit/

        Name              tail
        Tag               javak
        Path              /var/log/javak*.log
        Multiline         On
        Parser_Firstline  javak
        DB                /var/lib/fluentbit/javak.db
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [OUTPUT]
        # https://docs.fluentbit.io/manual/pipeline/outputs/elasticsearch

        Name                 es
        Match                javak
        Host                 hello-world.es.amazonaws.com
        Port                 9200
        Logstash_Format      On
        Logstash_Prefix      javak
        Logstash_DateFormat  %Y-%m
        Time_Key             timestamp
        Time_Key_Nanos       On
        AWS_Auth             On
        AWS_Region           ap-south-1
        tls                  On
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: fluentbit-release
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-04-13T14:47:20Z"
  labels:
    app.kubernetes.io/instance: fluentbit-release
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: fluent-bit
    app.kubernetes.io/version: 1.7.3
    helm.sh/chart: fluent-bit-0.15.4
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:custom_parsers.conf: {}
        f:fluent-bit.conf: {}
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/version: {}
          f:helm.sh/chart: {}
    manager: Go-http-client
    operation: Update
    time: "2021-04-13T14:47:20Z"
  name: fluentbit-release-fluent-bit
  namespace: default
  resourceVersion: "407596"
  selfLink: /api/v1/namespaces/default/configmaps/fluentbit-release-fluent-bit
  uid: f5a5e667-a832-4772-a636-dc91382897eb

All the pods fail with this error:

Fluent Bit v1.7.3
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

Section [PARSER] is not valid in the main configuration file. It belongs to 
the Parsers_File configuration files.

I'm not able to figure out what's wrong in the config here. Can you please help?
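Based on the error text, the culprit appears to be the @INCLUDE custom_parsers.conf line: @INCLUDE pulls the [PARSER] section into the main configuration, while parser files may only be referenced via Parsers_File. A sketch of the corrected service section (everything else unchanged):

[SERVICE]
    Log_Level    debug
    Parsers_File custom_parsers.conf

i.e. drop the @INCLUDE line and keep only the Parsers_File reference; the chart already mounts custom_parsers.conf next to fluent-bit.conf.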

[fluentd] Small fixes

  • Fix semversion on Charts.yaml
  • Use < | quote > on grafana ConfigMap template

See: #68

TODO: the labels that are added to the Prometheus metrics with the fluentd $tag (the "Input entry rate per tag" chart); we can probably transform them into something more legible.

[fluentd] Add StatefulSet

In general it would allow creating log aggregator instances, as described at https://docs.fluentd.org/deployment/high-availability, but in more of an active-active setup.

Use case:
We use fluentd as a DaemonSet, but with a custom config which forwards all processed events to another fluentd instance.
That fluentd instance then sends the data to Elasticsearch. It acts more like a buffer for the DaemonSets, because it stores data on disk in case of Elasticsearch unavailability or slowness.
Other reasons:

  • not everyone uses an Elasticsearch backend
  • you don't have to expose host data to this setup, because the DaemonSet handles that; this way the StatefulSet can be much simpler to set up and is more k8s-agnostic
  • data is retained on disk in case of pod deletion and recreation (see below)
  • spawning a lot of short-lived nodes with fluentd pods on them (like 1000 in 1h) was actually hammering Elasticsearch and making it unavailable

Why not use the DaemonSet to send directly to Elasticsearch?
DaemonSets tend to have small buffers on disk (usually a few megabytes, but still), which pauses input processing when the buffers are full. So if you have a slow backend service, your whole logging pipeline is throttled, and in some cases logs from short-lived containers are lost.
Moreover, when used with preemptible instances in GKE or spot instances in AWS, it may simply not have enough time to retry and flush the buffer before the instance is force-killed. That's why we reconfigure it to send data to another fluentd instance, which also works as a buffer (it runs on long-lived nodes).

Why not a Deployment for fluentd?
As with the DaemonSet above, the buffer is kept on disk, and it works okay until the pod is terminated. If the pod is not able to flush data from disk to the backend service, the data on the PVC will be lost.
So the DaemonSet is OK to a certain extent, but it is still lacking, especially in edge situations.

Fix:
Use a StatefulSet with a dedicated PVC for the disk buffer. This way the volume names are predictable, and you can much more easily scale the system up and down while keeping the data - even to the extent of totally wiping the pods and recreating them - the disks will be re-attached with the old data.
It can also be extended with autoscaling based on custom metrics - in this case, for example, disk usage of the buffer pool. The termination grace period would need to be adjusted so that the pod has time to flush data to the backend service; this depends on the use case, but usually 1h is enough.

Something similar is currently done with Elasticsearch clusters (a StatefulSet with the termination period set to 1h so that the pod can export its data shards).
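For illustration, the kind of volumeClaimTemplates block a StatefulSet variant might carry for the file buffer (names and sizes are assumptions):

volumeClaimTemplates:
  - metadata:
      name: buffer
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi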

[fluent-bit] Support the init containers via the values.yaml

Hi Team,

Issue

I would like to know whether it is possible to support init containers via values.yaml.

Scenario.

In order to use fluent-bit with OpenDistro for Elasticsearch, I have to mount the certificate into fluent-bit.
However, the certificate is generated by the OpenDistro Elasticsearch container and stored inside that container.

I used a script like the following to create a k8s Secret and then mount that Secret into the fluent-bit container:

kubectl -n logging exec opendistro-es-client-858676946f-trcpt cat /usr/share/elasticsearch/config/root-ca.pem > es-root-ca.pem
kubectl -n logging delete secret es-root-ca
kubectl -n logging create secret generic es-root-ca --from-file=es-root-ca.pem

The idea is to make the installation of EFK automated. If fluent-bit supported init containers in its settings, I could inject an init container to run the above steps before the fluent-bit container boots up.

Thanks.
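A sketch of what the proposed value and template hook might look like (the initContainers key and the image/command here are assumptions, shown only to illustrate the shape):

# values.yaml
initContainers:
  - name: fetch-es-root-ca
    image: bitnami/kubectl:latest
    command: ["sh", "-c", "cp /source/root-ca.pem /certs/"]   # placeholder for the real extract logic
    volumeMounts:
      - name: es-certs
        mountPath: /certs

# daemonset.yaml, inside spec.template.spec
      {{- with .Values.initContainers }}
      initContainers:
        {{- toYaml . | nindent 8 }}
      {{- end }}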

Issue with fluent-bit/templates/tests

coalesce.go:200: warning: cannot overwrite table with non table for image (map[pullPolicy:Always repository:busybox tag:latest])
Error: template: fluent-bit/templates/tests/test-connection.yaml:12:24: executing "fluent-bit/templates/tests/test-connection.yaml" at <.Values.testFramework.image.repository>: can't evaluate field repository in type interface {}

Running
helm install fluent-bit fluent/fluent-bit -f values.yaml -n logging

I can comment out the

# testFramework:
#   # When enabled, performs helm tests
#   image: "dduportal/bats"
#   tag: "0.4.0"

and then the Helm chart installs correctly.
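The error suggests newer chart versions expect testFramework.image to be a map (the default shown in the warning is pullPolicy/repository/tag), so the old string form no longer coalesces. Something like this should line up with what the template expects:

testFramework:
  image:
    repository: dduportal/bats
    tag: "0.4.0"
    pullPolicy: Always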

[fluent-bit] Provide Grafana dashboard for sidecar

It would be really useful if the fluent-bit Helm chart supported installing a Grafana dashboard as a ConfigMap to be picked up by the dashboard sidecar. There is already a dashboard available in the documentation which could be used. I'd be happy to create a PR to do this if the maintainers are interested.
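For reference, the sidecar picks up any ConfigMap carrying its watch label (grafana_dashboard: "1" by default in the Grafana/kube-prometheus-stack charts), so the chart would essentially need to render something like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-dashboard
  labels:
    grafana_dashboard: "1"
data:
  # the dashboard JSON from the Fluent Bit docs would go here
  fluent-bit.json: |-
    {}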

Deploying Fluentd using helm chart gives ImagePullError

I used the default config and the following helm command:

helm install fluentd fluent/fluentd

Events:
  Type    Reason   Age                   From     Message
  ----    ------   ----                  ----     -------
  Normal  BackOff  13m (x617 over 153m)  kubelet  Back-off pulling image "fluent/fluentd-kubernetes-daemonset:v1.11.5-debian-elasticsearch"
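A Back-off pulling image event usually means the tag cannot be pulled at all (it doesn't exist or needs credentials). A quick way to check from a workstation is to try pulling the exact tag from the event:

docker pull fluent/fluentd-kubernetes-daemonset:v1.11.5-debian-elasticsearch

If the pull fails, overriding image.tag in values.yaml with a tag that does exist on Docker Hub should fix the deployment.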

image.version is fixed which prevents the usage of own loki images with installed plugins

Hey,

awesome work so far! All the other charts are quite poorly configurable; this one looks really nice to use. I only have one problem: as we use a customized Fluent Bit, the image tag being tightly coupled to .Chart.AppVersion blocks us from using our own image.

What about:

{{- if .Values.image.version }}
  image: "{{ .Values.image.repository }}:{{ .Values.image.version }}"
{{- else }}
  image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
{{- end }}

This way you could still bump versions based on the chart version while also allowing custom images.
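For comparison, many charts solve the same problem with an image.tag value that falls back to the app version, which avoids introducing a second key:

image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"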
