opsgenie / kubernetes-event-exporter

Export Kubernetes events to multiple destinations with routing and filtering

License: Apache License 2.0

Go 99.71% Dockerfile 0.29%
Topics: alerting, events, exporting, kubernetes, observability

kubernetes-event-exporter's People

Contributors

bigwheel, blamarvt, bobsongplus, brianterry, d-kononov, flaviolemos78, fnuva, hikhvar, itsmefishy, ivilimon, jaydp17, jun06t, lesh366, makocchi-git, mblaschke, mdemierre, mheck136, mrf, mrueg, mustafaakin, mustafaakin-atl, neonsludge, njegosrailic, omauger, sshishov, tribonacci, vsbus, xmcqueen, yanowitz, youyongsong


kubernetes-event-exporter's Issues

Kafka sink

As Kafka is commonly deployed in the enterprise, a Kafka sink would be really valuable.

Basics needed:

  • Send JSON events to Kafka
  • Sensible default producer configuration
  • Support layout transformation (like Kinesis, SQS, ...)

Good to have:

  • TLS authentication (for secured Kafka clusters)
  • Async/sync producer
  • Delivery guarantee choice
  • Full producer configuration
  • Avro + Confluent Schema Registry support

I've already started on this in a fork: https://github.com/opsgenie/kubernetes-event-exporter/compare/master...mdemierre:feature/receiver-kafka?expand=1

Please comment if you have other requirements or opinions about the implementation.

Incorrect condition for error logging for webhook sink

When using the webhook sink to export Kubernetes events, if the webhook returns 200, it is logged as the error "not 200/201". The log severity is debug,error. This is due to an incorrect condition at

pkg/sinks/webhook.go line 56

The condition should treat all HTTP status codes between 200 and 204 as success:

if resp.StatusCode > 204 { 
...
}
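A minimal sketch of that check as a success helper, assuming "net/http" is imported and also guarding the lower bound (isSuccess is an illustrative name, not the sink's actual code):

// isSuccess reports whether the webhook response should be treated as success,
// covering the 200-204 range proposed above.
func isSuccess(resp *http.Response) bool {
	return resp.StatusCode >= http.StatusOK && resp.StatusCode <= http.StatusNoContent
}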

AWS ElasticSearch not getting any data

Hi.

I'm having trouble seeing the data from event-exporter in AWS ElasticSearch.

config.yaml

route:
  routes:
    - match:
        - receiver: "es-dump"
receivers:
  - name: "es-dump"
    elasticsearch:
      hosts:
        - "https://vpc-k8s.us-east-1.es.amazonaws.com:443"
      index: "kube-events"

I can see the logs in the event-exporter:

2020-03-30T07:30:35Z DBG app/pkg/exporter/channel_registry.go:56 > sending event to sink event="Started container event-exporter" sink=es-dump

But the kube-events index has no data (the 1 document shown is from a curl test I did to check the connection):

green open kube-events               9vZMUP-5S3KY8BNvpBLVVg 5 1        1  0   9.4kb   4.7kb

Help, please?

Option to Export to Sentry

Description

There should be an option to export warning- and error-level events to Sentry.

I can also try to open a pull request for this if it's decided that it would be a good addition.

support arm64 image

The Docker Hub repository of kubernetes-event-exporter only has images for the amd64 architecture; it would be great to also publish arm64 images.

Quickstart yaml uses incorrect key

It looks like the yaml to create the default configmap for elasticsearch uses "receivers.elasticsearch.addresses" as the key here,

https://github.com/opsgenie/kubernetes-event-exporter/blob/master/deploy/01-config.yaml#L14

but, per the README, it should be "receivers.elasticsearch.hosts".

Using "hosts" configures it to point to the right target, whereas "addresses" seems to default to http://localhost:9200/

If it is supposed to be hosts instead of addresses, I'm happy to open the MR to correct it.

Log to stdout

We would like to run this within a container, with a service account to reach the Kubernetes API, and it would be awesome to log events directly to stdout.

I see that console output is still a TODO in the readme.md; are there any plans to get it in?

file sink is not available with distroless image.

kubernetes-event-exporter uses distroless as its base image, but distroless does not have a functioning filesystem. When I set the receiver to a file, it reports that the file path does not exist. But when I change the base image to alpine, the file sink works fine.

Filtering not done properly on Type

I am trying to get only events of the Warning type, but somehow the config below also prints out Normal events.

I have the following config:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: monitoring
data:
  config.yaml: |
    #logLevel: debug
    logLevel: error
    route:
      routes:
        - match:
          - type: "Warning"
          - receiver: "dump"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"

I am also seeing events that have type=Normal. Am I doing something wrong, or does the filtering not work as expected?

Using version opsgenie/kubernetes-event-exporter:0.8
Kube version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7", GitCommit:"5737fe2e0b8e92698351a853b0d07f9c39b96736", GitTreeState:"clean", BuildDate:"2020-06-24T19:54:11Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

visible events

Hi,

Is there something in particular to do to get all events?
I've only got 'Successfully assigned xxx to yyy' events with the standard Elasticsearch output.

Support helm deployment

I can contribute that, but I'm not sure how it can be distributed since you don't have any repo for Helm charts.
Any thoughts?

Wrong yaml example for File sink in readme

Hello,
I was experimenting with this exporter, and in the README you have this YAML example:

receivers:
  - name: "file"
    file:
      path: "/tmp/dump"
      layout: # Optional

but in the source code there is this:

Path   string                 `yaml:"file"`

which is causing pkg/exporter/engine.go:19 > Cannot initialize sink error="open : no such file or directory" name=file

Either the example or the source is misleading, as it should be:

    file:
      file: "/tmp/dump"
      layout: # Optional

Support basic metadata on custom resources

Hi, thanks for this project! It makes exporting events out of kubernetes very simple!

I'd like to post this feature request. With some guidance I can try to open a PR myself to implement it if you think it's a valid case.

At the moment the event exporter is not able to read metadata for custom resources and logs the error, for example:

2020-01-21T18:25:47Z ERR app/pkg/kube/watcher.go:68 > Cannot list labels of the object error="no matches for kind \"Application\" in version \"v1alpha1\""

Would it be possible to get basic metadata, assuming the resources are valid Kubernetes objects? What I'm mostly interested in is the name of the resource.

Allow logs to be formatted in json

We're pretty-printing the logs for better usability, which makes sense when the user is directly looking at the logs from the container.

But most people running the event-exporter in their kubernetes clusters will be pushing these logs to some centralized logging system, which may or may not deal properly with colored logs & non-json logs.

It'd be cool if we allowed an option to output the logs in JSON.

Let me know if this sounds like something we'd like to merge into this project; if so, I'll go ahead and create a PR.

Writing data to Elasticsearch through fluentd-aggregator

Is there any way to write data to Elasticsearch through a fluentd aggregator, so that we will not miss any events when the Elasticsearch host is unreachable?

A complete scenario is:

  • Let's say Elasticsearch is deployed in a different network and exposes a service for writing data to it.
  • kubernetes-event-exporter uses the exposed Elasticsearch IP and writes the data to it. But if the exposed IP goes down for some time, we lose the events from that period; if we instead route them through the fluentd aggregator, it will buffer the events while Elasticsearch is unreachable.

Support kustomize

Kustomize is a popular tool for deploying to kubernetes. It could be supported by adding a kustomization.yaml file into the deploy/ dir:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- 00-roles.yaml
- 01-config.yaml
- 02-deployment.yaml

To make the namespace customizable, it would be nice to split the Namespace object definition out to another file (00-namespace.yaml - not included in the .resources array above).

A user's kustomization.yaml might look like:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/opsgenie/kubernetes-event-exporter//deploy?ref=v0.8

Treat all 2xx response code as valid when using the webhook receiver

Description

I'm sending events to Loki with a webhook configuration.
Events are pushed and accepted with a 204 response code from Loki.

However, because the status code is not 200, logs are showing this error:

2020-06-19T13:27:31Z DBG app/pkg/exporter/channel_registry.go:59 > Cannot send event error="not 200: " event="Started container xxxxxxxxxxxxxxxxx" sink=loki

kubernetes-event-exporter only considers http.StatusOK a correct response code, per this bit of code.

Proposed solution

Treat all 2xx response codes as valid.
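A minimal sketch of the proposed check (is2xx is an illustrative helper name, not the sink's actual code):

// is2xx reports whether an HTTP status code is in the 2xx success range.
func is2xx(statusCode int) bool {
	return statusCode >= 200 && statusCode < 300
}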

Log level configuration

It would be great to be able to switch the logging level using a ConfigMap and/or environment variables.

Prometheus Exporting

We can probably use counters to count events that match certain criteria so that people can see if something is increasing or not.
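A rough sketch of what that could look like with github.com/prometheus/client_golang, assuming the prometheus and promauto packages are imported; the metric and label names are illustrative, and the exporter would also need to expose a /metrics endpoint (e.g. via promhttp):

// eventsTotal counts observed Kubernetes events by type and reason.
var eventsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "kube_event_exporter_events_total",
		Help: "Number of Kubernetes events observed, by type and reason.",
	},
	[]string{"type", "reason"},
)

// In the watcher, after receiving an event ev:
//     eventsTotal.WithLabelValues(ev.Type, ev.Reason).Inc()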

LICENSE missing

Hello,

Nice presentation and project.

I'd like to contribute the Kafka Sink, as it could be useful in our infrastructure.

However, there is no LICENSE file in this repo. What is the license of this project and what are the requirements to contribute?

Thanks

Not all kinds of events are logged

Pretty much what the title says. For instance, I noticed that "UpdatedLoadBalancer", "FailedGetScale", "NodeNotReady", "NodeReady", "SuccessfulRescale", etc. are not logged.

Might be due to RBAC and the default role suggested in the config. I will investigate in this direction, but would also appreciate a hint.

Live reload of configuration

The configuration is done via a file; in Kubernetes, a ConfigMap is mounted. However, when you change a ConfigMap, the change is not reflected in the Pod without a restart. This tool could detect that it is reading from a mounted ConfigMap and watch for changes directly. However, the receivers would need to be modified without breaking the flow, which can be challenging.
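A rough sketch of the file-watching part using github.com/fsnotify/fsnotify; the path and reload hook are illustrative, and safely rebuilding the receivers is the hard part that is not shown:

// watchConfig calls reload whenever the mounted config file changes.
func watchConfig(path string, reload func()) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()

	if err := watcher.Add(path); err != nil {
		return err
	}
	for event := range watcher.Events {
		// ConfigMap volumes are updated via an atomic symlink swap,
		// so watch for Create as well as Write events.
		if event.Op&(fsnotify.Write|fsnotify.Create) != 0 {
			reload()
		}
	}
	return nil
}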

Add Cluster-ID field

Hi,

Thanks for this, I have been looking for something similar for a while now. ;)

We are pushing events from 3+ clusters into the same Elasticsearch index. Now it would be very useful if we could differentiate those events by cluster.

I'm not sure if there is something like a cluster ID that can be read from the API. Otherwise it would be okay to be able to add custom tags.

regards,
strowi

Make it work without cluster scope

Hi,
I'm working on a project using a large shared Kubernetes cluster in the company, and we don't have permission to define ClusterRoles. Trying to start event-exporter (just to monitor my namespace) obviously fails. Is it possible to make it work without a ClusterRole?

Events not indexed by elasticsearch 6.0: can kubernetes-event-exporter be made compatible with older versions?

I was trying to send events to an Elasticsearch receiver, but no events were indexed. A "fix" that helps consists of setting req.DocumentType = "documentType" (or some other string) after the lines

if e.cfg.UseEventID {
	req.DocumentID = string(ev.UID)
}

In order to diagnose the problem, I added a few lines after

resp, err := req.Do(ctx, e.client)
if err != nil {
	return err
}
to print the response from Elasticsearch:

        buf := new(bytes.Buffer)
        buf.ReadFrom(resp.Body)
        newStr := buf.String()
        fmt.Printf("resp.Body %s\n", newStr)

They returned:

resp.Body {"error":{"root_cause":[{"type":"invalid_type_name_exception","reason":"Document mapping type name can't start with '_', found: [_doc]"}],"type":"invalid_type_name_exception","reason":"Document mapping type name can't start with '_', found: [_doc]"},"status":400}

This seems like a known problem in some older elasticsearch versions.
The problem is "fixed" by setting req.DocumentType = "documentType"

git diff pkg/sinks/elasticsearch.go
diff --git a/pkg/sinks/elasticsearch.go b/pkg/sinks/elasticsearch.go
index 3306faa..e873629 100644
--- a/pkg/sinks/elasticsearch.go
+++ b/pkg/sinks/elasticsearch.go
@@ -130,6 +130,8 @@ func (e *Elasticsearch) Send(ctx context.Context, ev *kube.EnhancedEvent) error
                req.DocumentID = string(ev.UID)
        }
 
+        req.DocumentType = "documentType"
+
        resp, err := req.Do(ctx, e.client)
        if err != nil {
                return err

The 01-config.yaml is as follows, and I'm using https://github.com/opsgenie/kubernetes-event-exporter/releases/tag/v0.8

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: monitoring
data:
  config.yaml: |
    logLevel: debug
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
            - receiver: "elasticsearch"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"
      - name: "elasticsearch"
        elasticsearch:
          hosts:
            - "https://a-cluster-in-aws...es.amazonaws.com"
          index: "kube-events"
          indexFormat: "kube-events-{2006.01.02}"

Sink `Close()` method is never called

While implementing a new sink, I noticed that my Close() method was never called.

I expected it to be called when sending SIGTERM to the process, and that it would be waited for before terminating.
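A rough sketch of the expected shutdown path, assuming a Sink interface with a Close() error method and imports of log, os, os/signal, sync, and syscall (names are illustrative, not the project's actual API):

// closeOnShutdown blocks until SIGTERM or SIGINT, then closes every sink and
// waits for all of them to finish before returning.
func closeOnShutdown(sinks []Sink) {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	<-sigs

	var wg sync.WaitGroup
	for _, s := range sinks {
		wg.Add(1)
		go func(s Sink) {
			defer wg.Done()
			if err := s.Close(); err != nil {
				log.Printf("error closing sink: %v", err)
			}
		}(s)
	}
	wg.Wait()
}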

Document comparables

It would be good to know what comparable options exist, and why you'd choose this one.

match string regex issue

I'm trying to collect events with reason FailedCreatePodContainer, but events with reason FailedCreate are also collected to my Elasticsearch. Here is my config.yaml:

apiVersion: v1
data:
  config.yaml: |
    route:
      match:
        - reason: "FailedCreatePodContainer"
        - receiver: "elastic"
      drop:
        - type: "Normal"
    receivers:
      - name: "elastic"
        elasticsearch:
          hosts:
          - http://elastic:9200
          index: kube-events
kind: ConfigMap

I saw the code below in router.go; do we need to modify it, or is there another way to handle this regex?

func matchString(pattern, s string) bool {
	matched, _ := regexp.MatchString(pattern, s)
	return matched
}
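One possible approach, just a sketch and not necessarily the fix the maintainers would choose, is to anchor the pattern so it has to match the whole value instead of any substring:

func matchString(pattern, s string) bool {
	matched, _ := regexp.MatchString("^"+pattern+"$", s)
	return matched
}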

Publish latest docker image

I was chasing down an issue with the webhook layout not supporting inner-array fields properly, and it turns out you've already pushed a fix for it :D However, it's not available in the latest Docker Hub image.

Could you publish a new binary version to GitHub and a new docker image to docker hub?

Thanks!


Listening for custom resource events

It's not obvious from the code why the watcher isn't getting events about custom resources, but it doesn't look like it's a configuration issue. Could you please clarify if that's the expected behaviour? Here's an example event:

Name:             thanos.16118765932ddc8c
Namespace:        monitoring
Labels:           <none>
Annotations:      <none>
API Version:      v1
Count:            177
Event Time:       <nil>
First Timestamp:  2020-05-23T02:30:13Z
Involved Object:
  API Version:       helm.fluxcd.io/v1
  Kind:              HelmRelease
  Name:              thanos
  Namespace:         monitoring
  Resource Version:  111701013
  UID:               75ed7978-fd31-4601-810a-2ef0b3ea3e39
Kind:                Event
Last Timestamp:      2020-05-23T11:18:10Z
Message:             managed release 'thanos' in namespace 'monitoring' sychronized
Metadata:
  Creation Timestamp:  2020-05-23T02:30:13Z
  Resource Version:    111953392
  Self Link:           /api/v1/namespaces/monitoring/events/thanos.16118765932ddc8c
  UID:                 9af8aa5f-f678-48cb-995d-81ad3334bacd
Reason:                ReleaseSynced
Reporting Component:
Reporting Instance:
Source:
  Component:  helm-operator
Type:         Normal
Events:       <none>

Other standard events are successfully captured.

Support for SASL in kafka receiver

Overview

Right now the package supports TLS, but it would be great to also have support for SASL. Sarama has already added support for SASL, and going forward most people setting up Kafka will use SASL for their Kafka clusters.

Elasticsearch Index Append Date to Name

The common default for indices sent to Elasticsearch is to append the date (doc). It would be nice to have the option to add the date to the end of the index that kubernetes-event-exporter writes to.

I tried to add the date to the end of an index in the ConfigMap using the format found in that doc, and the messages never made it to Elastic.
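For reference, a date-suffixed index name can be built with Go's reference-time layout, the same 2006.01.02 pattern used by the indexFormat option shown in the Elasticsearch 6.0 issue above (indexWithDate is an illustrative helper, assuming "time" is imported):

// indexWithDate appends the current UTC date to a base index name,
// e.g. "kube-events" becomes "kube-events-2020.07.31".
func indexWithDate(base string) string {
	return base + "-" + time.Now().UTC().Format("2006.01.02")
}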

Not emitting events

We seem to be consistently running into an issue with the kubernetes-event-exporter, where the application stops emitting any events after some period of time. We're not running into any resource constraints as far as we can tell, as the CPU is not being throttled and we're using <10% of the configured memory limit.

We're using EKS specifically, with a mix of clusters on 1.14 and 1.15 (we're in the middle of upgrading all of our clusters). The issue seems to be identical on both.

Here's the config we're using on all clusters:

apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"

Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"kubernetes-event-exporter","app.kubernetes.io/version":"0.8","argocd.argoproj.io/instance":"kubernetes-event-exporter","helm.sh/chart":"kubernetes-event-exporter-0.1.0"},"name":"kubernetes-event-exporter","namespace":"cre"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/name":"kubernetes-event-exporter"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/name":"kubernetes-event-exporter"}},"spec":{"containers":[{"args":["-conf=/data/config.yaml"],"image":"opsgenie/kubernetes-event-exporter:0.8","imagePullPolicy":"IfNotPresent","name":"kubernetes-event-exporter","resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"512Mi"}},"securityContext":{"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true,"runAsUser":1000},"volumeMounts":[{"mountPath":"/data","name":"cfg"}]}],"imagePullSecrets":[{"name":"mpi-artifactory-credentials"}],"nodeSelector":{"unicron.mpi-internal.com/role":"mgmt"},"priorityClassName":"unicron-important","securityContext":{},"serviceAccountName":"kubernetes-event-exporter","tolerations":[{"effect":"NoSchedule","key":"unicron.mpi-internal.com/dedicated","operator":"Equal","value":"cluster-management"}],"volumes":[{"configMap":{"name":"event-exporter-cfg"},"name":"cfg"}]}}}}
  creationTimestamp: "2020-07-23T14:20:25Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: kubernetes-event-exporter
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kubernetes-event-exporter
    app.kubernetes.io/version: "0.8"
    argocd.argoproj.io/instance: kubernetes-event-exporter
    helm.sh/chart: kubernetes-event-exporter-0.1.0
  name: kubernetes-event-exporter
  namespace: cre
  resourceVersion: "84401510"
  selfLink: /apis/extensions/v1beta1/namespaces/cre/deployments/kubernetes-event-exporter
  uid: a676cc16-ccef-11ea-a0f5-0aa8030a7542
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: kubernetes-event-exporter
      app.kubernetes.io/name: kubernetes-event-exporter
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kubernetes-event-exporter
        app.kubernetes.io/name: kubernetes-event-exporter
    spec:
      containers:
      - args:
        - -conf=/data/config.yaml
        image: opsgenie/kubernetes-event-exporter:0.8
        imagePullPolicy: IfNotPresent
        name: kubernetes-event-exporter
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 512Mi
        securityContext:
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: cfg
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: mpi-artifactory-credentials
      nodeSelector:
        unicron.mpi-internal.com/role: mgmt
      priorityClassName: unicron-important
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kubernetes-event-exporter
      serviceAccountName: kubernetes-event-exporter
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: unicron.mpi-internal.com/dedicated
        operator: Equal
        value: cluster-management
      volumes:
      - configMap:
          defaultMode: 420
          name: event-exporter-cfg
        name: cfg
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-07-23T14:20:25Z"
    lastUpdateTime: "2020-07-24T13:37:09Z"
    message: ReplicaSet "kubernetes-event-exporter-b7fd4c84d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-07-31T14:21:59Z"
    lastUpdateTime: "2020-07-31T14:21:59Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

We're using 0.8 of the event exporter.

Is there any further debugging or anything we can do to help track down the root cause of this? Is this a misconfiguration on our side?

Thanks!

Leader election for deduplication

Right now, as a quick workaround, the tool discards events older than 5 seconds. It's not ideal and can cause missing events or duplicate events upon restarts.

Elasticsearch output

Hi,

I tried the Elasticsearch sink and got this:

$ kubectl logs deploy/event-exporter -n monitoring
2019-12-13T13:05:08Z INF app/pkg/exporter/engine.go:25 > Registering sink name=dump type=*sinks.Elasticsearch
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" namespace=monitoring reason=Pulling
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" namespace=monitoring reason=Pulled
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Created container event-exporter" namespace=monitoring reason=Created
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Created container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Created container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Started container event-exporter" namespace=monitoring reason=Started
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Started container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Started container event-exporter" sink=dump

However:

apiVersion: v1
data:
  config.yaml: |
    route:
      match:
        - receiver: "dump"
    receivers:
      - name: "dump"
        elasticsearch:
          addresses:
          - https://elk-d-node01.domain.tld:9200
          index: kube-events
          username: elastic
          password: changeme
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"route:\n  match:\n    - receiver: \"dump\"\nreceivers:\n  - name: \"dump\"\n    elasticsearch:\n      addresses:\n      - https://elk-d-node01.domain.tld:9200\n      index: kube-events\n      username: elastic\n      password: changeme\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"event-exporter-cfg","namespace":"monitoring"}}
  creationTimestamp: "2019-12-13T13:04:54Z"
  name: event-exporter-cfg
  namespace: monitoring
  resourceVersion: "297914"
  selfLink: /api/v1/namespaces/monitoring/configmaps/event-exporter-cfg
  uid: 93a5a61a-8150-44e0-916c-9b34a0807421

How do I remap the whole Kubernetes event to a key?

I am trying to do this:

receivers:
      - name: "my-recv"
        webhook:
          endpoint: "http://example.com"
          headers:
            User-Agent: kube-event-exporter 1.0
          layout:
            data: "{{ . }}"

So I want the matched Kubernetes event to be sent under a data key in JSON, like:

{
  "data": {
    "metadata": {},
    "reason": "SuccessfulCreate",
    "message": "Created pod: x-6htmw",
    "source": {
      "component": "replicaset-controller"
    },
    "count": 1,
    "type": "Normal",
    "eventTime": "None",
    "reportingComponent": "",
    "reportingInstance": "",
    "involvedObject": {
      "kind": "ReplicaSet",
      "namespace": "x",
      "name": "x-779848ccb7",
      "uid": "x",
      "apiVersion": "apps/v1",
      "resourceVersion": "x",
      "labels": {
        "app": "x"
      },
      "annotations": {
        "serving.knative.dev/creator": "system:serviceaccount:flux:flux"
      }
    }
  }
}

It is failing to send. Can you help me out?

feature: disabling log labels

Feature request:

To be able to disable scraping and outputting object labels/annotations,

because there's duplicated data, and the labels are almost always the same from event to event.

Error when building from docker file

Hi, I'm interested in testing out this project, but I'm having trouble building the image myself. I'm running Linux Mint 19.2 with Docker version 19.03.6, build 369ce74a3c.

When I run

docker build .

instead of building successfully, it results in
...

Step 4/7 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO11MODULE=on go build -mod=vendor -v -a -o /main .
---> Running in 5cee51768c01
go: inconsistent vendoring in /app:
cloud.google.com/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
github.com/Masterminds/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt

...

run 'go mod vendor' to sync, or use -mod=mod or -mod=readonly to ignore the vendor directory
The command '/bin/sh -c CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO11MODULE=on go build -mod=vendor -v -a -o /main .' returned a non-zero code: 1

I also noticed the same error trace when looking at some of the github build tests
https://github.com/opsgenie/kubernetes-event-exporter/runs/637785627

I apologize, I'm still very much a beginner in golang so some of this is a bit beyond me. Perhaps something is different in my setup or the image for golang:1.14 was recently updated? Any help would be appreciated.

Also I'm only interested in building and running from a container and don't want to run anything installed locally.

Support for pod exit code in event

It would be good to be able to include the exit code in the event associated with a pod or a job's task being completed. Is this possible?
