opsgenie / kubernetes-event-exporter
Export Kubernetes events to multiple destinations with routing and filtering
License: Apache License 2.0
As Kafka is commonly deployed in the enterprise, a Kafka sink would be really valuable.
Basics needed:
Good to have:
I started already in a fork: https://github.com/opsgenie/kubernetes-event-exporter/compare/master...mdemierre:feature/receiver-kafka?expand=1
Please add if you have other requirements or opinions about the implementation.
When using the webhook sink to export Kubernetes events, a successful response with a status code other than 200 or 201 is logged as the error "not 200/201" (log severity is debug,error). This is due to an incorrect condition at
pkg/sinks/webhook.go line 56
The condition should treat all HTTP statuses in [200, 204] as success:
if resp.StatusCode < 200 || resp.StatusCode > 204 {
...
}
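The proposed fix above can be sketched as a small helper (a sketch only; the actual webhook sink compares resp.StatusCode inline, and the function name here is illustrative):

```go
package main

import "fmt"

// isSuccess mirrors the suggested fix: treat every status in [200, 204]
// as success. (Widening to all of 2xx would be `code < 300` instead.)
func isSuccess(code int) bool {
	return code >= 200 && code <= 204
}

func main() {
	for _, code := range []int{200, 204, 301, 500} {
		fmt.Println(code, isSuccess(code))
	}
}
```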
Hi.
I'm having trouble seeing the data from event-exporter in AWS ElasticSearch.
config.yaml
route:
  routes:
    - match:
        - receiver: "es-dump"
receivers:
  - name: "es-dump"
    elasticsearch:
      hosts:
        - "https://vpc-k8s.us-east-1.es.amazonaws.com:443"
      index: "kube-events"
I can see the logs in the event-exporter:
2020-03-30T07:30:35Z DBG app/pkg/exporter/channel_registry.go:56 > sending event to sink event="Started container event-exporter" sink=es-dump
But the kube-events index has no data (the one document below was from a curl test I did to check the connection):
green open kube-events 9vZMUP-5S3KY8BNvpBLVVg 5 1 1 0 9.4kb 4.7kb
Help, please?
There should be an option to export warning and error level events to sentry.
Also, I can try to open a pull request for this if it is decided that it would be a good addition.
The Docker Hub repository of kubernetes-event-exporter only has images for the amd64 architecture; it would be great to also publish arm64 images.
It looks like the yaml to create the default configmap for elasticsearch uses "receivers.elasticsearch.addresses" as the key here,
https://github.com/opsgenie/kubernetes-event-exporter/blob/master/deploy/01-config.yaml#L14
but, per the README it should be "receivers.elasticsearch.hosts"
Using "hosts" configures it to point to the right target, whereas "addresses" seems to default to http://localhost:9200/
If it is supposed to be hosts instead of addresses, I'm happy to open the MR to correct it.
We would like to run this within a container and a service account to reach kubernetes API, it would be awesome to log events directly to stdout.
I see that console output is still on the TODO in the readme.md, are there any plans to get it in?
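In the meantime, the file sink pointed at /dev/stdout already gives console output; this is the same pattern used in other configs in this thread:

```yaml
receivers:
  - name: "dump"
    file:
      path: "/dev/stdout"
```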
kubernetes-event-exporter uses distroless as its base image, but distroless does not have a functioning filesystem. When I set the receiver to a file, it reports that the file path does not exist. But when I change the base image to alpine, the file sink works well.
I am trying to get only events of type Warning, but somehow the config below also prints out Normal events.
I have the following config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: monitoring
data:
  config.yaml: |
    #logLevel: debug
    logLevel: error
    route:
      routes:
        - match:
            - type: "Warning"
            - receiver: "dump"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"
I am also seeing events that have type=Normal. Am I doing something wrong, or does the filtering not work as expected?
Using version opsgenie/kubernetes-event-exporter:0.8
Kube version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.7", GitCommit:"5737fe2e0b8e92698351a853b0d07f9c39b96736", GitTreeState:"clean", BuildDate:"2020-06-24T19:54:11Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
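One thing that may be worth checking (an observation, not a confirmed diagnosis): each `-` under `match` starts an independent rule, so a bare `- receiver: "dump"` rule has no criteria and matches every event. A single rule combining both fields would look like:

```yaml
route:
  routes:
    - match:
        - type: "Warning"
          receiver: "dump"
```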
Hi,
Is there something particular to do to get all events?
With the standard Elasticsearch output I've only got 'Successfully assigned xxx to yyy' events.
I can contribute that, but I'm not sure how it could be distributed since you don't have a repo for Helm charts.
Any thoughts?
Not sure how to handle this, but for merge-to-master tests it would be nice to run a mini Kubernetes, like k3s, and an Elasticsearch (or any other open source sink), perform certain ops (add a pod, scale replicas, add a node), then expect a number of published events and compare their contents.
I am wondering if there is any plan for a DingTalk receiver? I'd like to contribute a DingTalk sink.
Hello,
I was experimenting with this exporter and in yaml you have this example:
receivers:
  - name: "file"
    file:
      path: "/tmp/dump"
      layout: # Optional
but in source code there is this:
Path string `yaml:"file"`
which is causing pkg/exporter/engine.go:19 > Cannot initialize sink error="open : no such file or directory" name=file
Either the example or the source is misleading, as it should be:
file:
  file: "/tmp/dump"
  layout: # Optional
Hi, thanks for this project! It makes exporting events out of kubernetes very simple!
I'd like to post this feature request. With some guidance I can try to open a PR myself to implement it if you think it's a valid case.
At the moment the event exporter is not able to read metadata for custom resources and logs the error, for example:
2020-01-21T18:25:47Z ERR app/pkg/kube/watcher.go:68 > Cannot list labels of the object error="no matches for kind \"Application\" in version \"v1alpha1\""
Would it be possible to get basic metadata, assuming that the resources are valid Kubernetes objects? What I'm mostly interested in is the name of the resource.
Reliability and stability enhancement in k8s multi cluster scenario
We're pretty-printing the logs for better usability, which makes sense when the user is directly looking at the logs from the container.
But most people running the event-exporter in their kubernetes clusters will be pushing these logs to some centralized logging system, which may or may not deal properly with colored logs & non-json logs.
It'd be cool to allow an option to output the logs in JSON.
Let me know if this sounds like something we'd like to merge into this project; if so, I'll go ahead and create a PR.
Ideally, counts and processing times should be counted per sink type and name.
Is there any way to write data to Elasticsearch through a fluentd aggregator, so that we will not miss any events when the Elasticsearch host is unreachable?
A complete scenario is:
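One possible approach (a sketch; the fluentd service name, port, and tag path are assumptions) is to point the webhook sink at a fluentd `in_http` source and let fluentd handle buffering and retries toward Elasticsearch:

```yaml
receivers:
  - name: "fluentd"
    webhook:
      endpoint: "http://fluentd-aggregator:9880/kube.events"
      headers:
        Content-Type: application/json
```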
Kustomize is a popular tool for deploying to kubernetes. It could be supported by adding a kustomization.yaml file to the deploy/ dir:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - 00-roles.yaml
  - 01-config.yaml
  - 02-deployment.yaml
To make the namespace customizable, it would be nice to split the Namespace object definition out into another file (00-namespace.yaml, not included in the resources array above).
A user's kustomization.yaml might look like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/opsgenie/kubernetes-event-exporter//deploy?ref=v0.8
It would be awesome if there were endpoints defined for health/liveness probes (e.g. test whether it can connect to the k8s API): https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
I'm sending events to Loki with a webhook configuration.
Events are pushed and accepted with a 204 response code from Loki.
However, because the status code is not 200, the logs show this error:
2020-06-19T13:27:31Z DBG app/pkg/exporter/channel_registry.go:59 > Cannot send event error="not 200: " event="Started container xxxxxxxxxxxxxxxxx" sink=loki
kubernetes-event-exporter only considers http.StatusOK a correct response code, per this bit of code.
All 2xx response codes should be treated as valid.
It would be great to be able to switch logging level using configmap and/or env variables.
We can probably use counters to count events that match certain criteria so that people can see if something is increasing or not.
Hello,
Nice presentation and project.
I'd like to contribute the Kafka Sink, as it could be useful in our infrastructure.
However, there is no LICENSE file in this repo. What is the license of this project and what are the requirements to contribute?
Thanks
Pretty much what the title says. For instance, I noticed "UpdatedLoadBalancer", "FailedGetScale", "NodeNotReady", "NodeReady", "SuccessfulRescale", etc.
Might be due to RBAC and the default role suggested in the config. Will investigate in this direction, but would also appreciate a hint.
The configuration is done via a file; in K8s, a ConfigMap is mounted. However, when you change a ConfigMap, the change is not reflected in the Pod without a restart. So this tool could know it is reading from a ConfigMap and watch for changes directly. However, the receivers would need to be reconfigured without breaking the flow, which can be challenging.
Hi,
Thanks for this, I have been looking for something similar for a while now. ;)
We are pushing events from 3+ clusters into the same Elasticsearch index. It would be very useful if we could differentiate those events by cluster.
Not sure if there is something like a cluster ID that can be read from the API. Otherwise it would be okay to be able to add custom tags.
regards,
strowi
https://github.com/opsgenie/kubernetes-event-exporter/blob/master/deploy/00-roles.yaml#L19 mentions a role called "view", but it's not defined. Can you help me out by providing the right RBAC settings here?
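For reference, a minimal self-contained ClusterRole that should cover reading events (a sketch; the role name is illustrative, and the exporter may also read involved objects for labels, which would need broader read access):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-exporter
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
```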
Hi,
I'm working on a project using a large shared Kubernetes cluster at my company, and we don't have permission to define ClusterRoles. Trying to start event-exporter (just to monitor my namespace) obviously fails. Is it possible to make it work without a ClusterRole?
I was trying to send events to an elasticsearch receiver, but no events were indexed. A "fix" that helps consists of setting req.DocumentType = "documentType" (or another string) after the lines
kubernetes-event-exporter/pkg/sinks/elasticsearch.go
Lines 129 to 131 in 336fa39
To diagnose the problem, I added a few lines after
kubernetes-event-exporter/pkg/sinks/elasticsearch.go
Lines 133 to 136 in 336fa39
buf := new(bytes.Buffer)
buf.ReadFrom(resp.Body)
newStr := buf.String()
fmt.Printf("resp.Body %s\n", newStr)
returned:
resp.Body {"error":{"root_cause":[{"type":"invalid_type_name_exception","reason":"Document mapping type name can't start with '_', found: [_doc]"}],"type":"invalid_type_name_exception","reason":"Document mapping type name can't start with '_', found: [_doc]"},"status":400}
This seems like a known problem in some older elasticsearch versions.
The problem is "fixed" by setting req.DocumentType = "documentType"
git diff pkg/sinks/elasticsearch.go
diff --git a/pkg/sinks/elasticsearch.go b/pkg/sinks/elasticsearch.go
index 3306faa..e873629 100644
--- a/pkg/sinks/elasticsearch.go
+++ b/pkg/sinks/elasticsearch.go
@@ -130,6 +130,8 @@ func (e *Elasticsearch) Send(ctx context.Context, ev *kube.EnhancedEvent) error
req.DocumentID = string(ev.UID)
}
+ req.DocumentType = "documentType"
+
resp, err := req.Do(ctx, e.client)
if err != nil {
return err
The 01-config.yaml is as follows, and I'm using https://github.com/opsgenie/kubernetes-event-exporter/releases/tag/v0.8
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: monitoring
data:
  config.yaml: |
    logLevel: debug
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
            - receiver: "elasticsearch"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"
      - name: "elasticsearch"
        elasticsearch:
          hosts:
            - "https://a-cluster-in-aws...es.amazonaws.com"
          index: "kube-events"
          indexFormat: "kube-events-{2006.01.02}"
While implementing a new sink, I noticed that my Close() method was never called.
I expected it to be called when sending SIGTERM to the process, and that it would be waited for before terminating.
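The expected behaviour can be sketched like this (a sketch only; the sink type and function names are illustrative, with a fake sink standing in for real ones):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// fakeSink stands in for a real sink; only Close matters for this sketch.
type fakeSink struct {
	name   string
	closed bool
}

func (f *fakeSink) Close() {
	f.closed = true
	fmt.Println("closed:", f.name)
}

// closeAll is the shutdown step the issue asks for: run after the process
// has been told to stop, and waited for before exiting.
func closeAll(sinks []*fakeSink) {
	for _, s := range sinks {
		s.Close()
	}
}

func main() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGTERM, syscall.SIGINT)

	// Simulate the SIGTERM Kubernetes sends on pod deletion.
	go func() { _ = syscall.Kill(os.Getpid(), syscall.SIGTERM) }()

	<-ch // block until asked to stop
	closeAll([]*fakeSink{{name: "elasticsearch"}, {name: "file"}})
	fmt.Println("shutdown complete")
}
```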
It would be good to know what other options there are - and why you'd choose this.
I'm trying to collect only events with reason FailedCreatePodContainer, but events with reason FailedCreate are also collected into my Elasticsearch. Here is my config.yaml:
apiVersion: v1
data:
  config.yaml: |
    route:
      match:
        - reason: "FailedCreatePodContainer"
        - receiver: "elastic"
      drop:
        - type: "Normal"
    receivers:
      - name: "elastic"
        elasticsearch:
          hosts:
            - http://elastic:9200
          index: kube-events
kind: ConfigMap
I saw the code below in router.go; do we need to modify it, or is there another way to handle this regex?
func matchString(pattern, s string) bool {
matched, _ := regexp.MatchString(pattern, s)
return matched
}
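For comparison, an anchored variant (a sketch, not the project's current code) makes config values match the whole string instead of any substring, and surfaces the regex error instead of discarding it:

```go
package main

import (
	"fmt"
	"regexp"
)

// matchWhole anchors the user-supplied pattern so "FailedCreate" does not
// also match inside "FailedCreatePodContainer", and returns the compile
// error instead of silently treating a bad pattern as "no match".
func matchWhole(pattern, s string) (bool, error) {
	return regexp.MatchString("^(?:"+pattern+")$", s)
}

func main() {
	substr, _ := regexp.MatchString("FailedCreate", "FailedCreatePodContainer")
	whole, _ := matchWhole("FailedCreate", "FailedCreatePodContainer")
	fmt.Println(substr, whole) // unanchored matches the longer reason; anchored does not
}
```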
Slack is deprecating Slack tokens.
Just tried to deploy it using https://github.com/opsgenie/kubernetes-event-exporter/blob/master/deploy/02-deployment.yaml
But..
$ docker pull docker.pkg.github.com/opsgenie/kubernetes-event-exporter/exporter:0.2
Error response from daemon: Get https://docker.pkg.github.com/v2/opsgenie/kubernetes-event-exporter/exporter/manifests/0.2: no basic auth credentials
It's not obvious from the code why the watcher isn't getting events about custom resources, but it doesn't look like it's a configuration issue. Could you please clarify if that's the expected behaviour? Here's an example event:
Name: thanos.16118765932ddc8c
Namespace: monitoring
Labels: <none>
Annotations: <none>
API Version: v1
Count: 177
Event Time: <nil>
First Timestamp: 2020-05-23T02:30:13Z
Involved Object:
API Version: helm.fluxcd.io/v1
Kind: HelmRelease
Name: thanos
Namespace: monitoring
Resource Version: 111701013
UID: 75ed7978-fd31-4601-810a-2ef0b3ea3e39
Kind: Event
Last Timestamp: 2020-05-23T11:18:10Z
Message: managed release 'thanos' in namespace 'monitoring' sychronized
Metadata:
Creation Timestamp: 2020-05-23T02:30:13Z
Resource Version: 111953392
Self Link: /api/v1/namespaces/monitoring/events/thanos.16118765932ddc8c
UID: 9af8aa5f-f678-48cb-995d-81ad3334bacd
Reason: ReleaseSynced
Reporting Component:
Reporting Instance:
Source:
Component: helm-operator
Type: Normal
Events: <none>
Other standard events are successfully captured.
Right now the package supports TLS, but it would be great to also support SASL. Sarama has already added SASL support, and going forward most people setting up Kafka will use SASL for their clusters.
The default for indices sent to Elasticsearch is to append the date (doc). It would be nice to have the option to add the date to the end of the index that kubernetes-event-exporter writes to.
I tried to add the date to the end of an index in the ConfigMap with the format found in that doc, and the messages never made it to Elastic.
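For reference, the indexFormat option takes a Go reference-date layout inside braces, consistent with the config quoted elsewhere in this thread (the hostname below is a placeholder):

```yaml
elasticsearch:
  hosts:
    - "http://elasticsearch:9200"
  index: "kube-events"
  # Go reference-date layout; yields e.g. kube-events-2020.07.31
  indexFormat: "kube-events-{2006.01.02}"
```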
We seem to be consistently running into an issue with the kubernetes-event-exporter, where the application stops emitting any events after some period of time. We're not running into any resource constraints as far as we can tell, as the CPU is not being throttled and we're using <10% of the configured memory limit.
We're using EKS specifically, with a mix of clusters on 1.14 and 1.15 (we're in the middle of upgrading all of our clusters). The issue seems to be identical on both.
Here's the config we're using on all clusters:
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "dump"
    receivers:
      - name: "dump"
        file:
          path: "/dev/stdout"
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"kubernetes-event-exporter","app.kubernetes.io/version":"0.8","argocd.argoproj.io/instance":"kubernetes-event-exporter","helm.sh/chart":"kubernetes-event-exporter-0.1.0"},"name":"kubernetes-event-exporter","namespace":"cre"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/name":"kubernetes-event-exporter"}},"template":{"metadata":{"labels":{"app.kubernetes.io/instance":"kubernetes-event-exporter","app.kubernetes.io/name":"kubernetes-event-exporter"}},"spec":{"containers":[{"args":["-conf=/data/config.yaml"],"image":"opsgenie/kubernetes-event-exporter:0.8","imagePullPolicy":"IfNotPresent","name":"kubernetes-event-exporter","resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"100m","memory":"512Mi"}},"securityContext":{"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true,"runAsUser":1000},"volumeMounts":[{"mountPath":"/data","name":"cfg"}]}],"imagePullSecrets":[{"name":"mpi-artifactory-credentials"}],"nodeSelector":{"unicron.mpi-internal.com/role":"mgmt"},"priorityClassName":"unicron-important","securityContext":{},"serviceAccountName":"kubernetes-event-exporter","tolerations":[{"effect":"NoSchedule","key":"unicron.mpi-internal.com/dedicated","operator":"Equal","value":"cluster-management"}],"volumes":[{"configMap":{"name":"event-exporter-cfg"},"name":"cfg"}]}}}}
  creationTimestamp: "2020-07-23T14:20:25Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: kubernetes-event-exporter
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: kubernetes-event-exporter
    app.kubernetes.io/version: "0.8"
    argocd.argoproj.io/instance: kubernetes-event-exporter
    helm.sh/chart: kubernetes-event-exporter-0.1.0
  name: kubernetes-event-exporter
  namespace: cre
  resourceVersion: "84401510"
  selfLink: /apis/extensions/v1beta1/namespaces/cre/deployments/kubernetes-event-exporter
  uid: a676cc16-ccef-11ea-a0f5-0aa8030a7542
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: kubernetes-event-exporter
      app.kubernetes.io/name: kubernetes-event-exporter
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: kubernetes-event-exporter
        app.kubernetes.io/name: kubernetes-event-exporter
    spec:
      containers:
      - args:
        - -conf=/data/config.yaml
        image: opsgenie/kubernetes-event-exporter:0.8
        imagePullPolicy: IfNotPresent
        name: kubernetes-event-exporter
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 512Mi
        securityContext:
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: cfg
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: mpi-artifactory-credentials
      nodeSelector:
        unicron.mpi-internal.com/role: mgmt
      priorityClassName: unicron-important
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kubernetes-event-exporter
      serviceAccountName: kubernetes-event-exporter
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: unicron.mpi-internal.com/dedicated
        operator: Equal
        value: cluster-management
      volumes:
      - configMap:
          defaultMode: 420
          name: event-exporter-cfg
        name: cfg
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-07-23T14:20:25Z"
    lastUpdateTime: "2020-07-24T13:37:09Z"
    message: ReplicaSet "kubernetes-event-exporter-b7fd4c84d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-07-31T14:21:59Z"
    lastUpdateTime: "2020-07-31T14:21:59Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
We're using 0.8 of the event exporter.
Is there any further debugging or anything we can do to help track down root cause of this? Is this a misconfiguration on our side?
Thanks!
Right now, as a quick workaround, the tool discards events older than 5 seconds. It's not ideal and can cause missing events or duplicate events upon restarts.
Hi,
I tried elasticsearch sink and got this:
$ kubectl logs deploy/event-exporter -n monitoring
2019-12-13T13:05:08Z INF app/pkg/exporter/engine.go:25 > Registering sink name=dump type=*sinks.Elasticsearch
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" namespace=monitoring reason=Pulling
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" namespace=monitoring reason=Pulled
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Created container event-exporter" namespace=monitoring reason=Created
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Successfully pulled image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Pulling image \"opsgenie/kubernetes-event-exporter:0.2\"" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Created container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Created container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/kube/watcher.go:60 > Received event msg="Started container event-exporter" namespace=monitoring reason=Started
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:46 > sending event to sink event="Started container event-exporter" sink=dump
2019-12-13T13:05:08Z DBG app/pkg/exporter/channel_registry.go:49 > Cannot send event error="dial tcp 127.0.0.1:9200: connect: connection refused" event="Started container event-exporter" sink=dump
However:
apiVersion: v1
data:
  config.yaml: |
    route:
      match:
        - receiver: "dump"
    receivers:
      - name: "dump"
        elasticsearch:
          addresses:
            - https://elk-d-node01.domain.tld:9200
          index: kube-events
          username: elastic
          password: changeme
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"route:\n  match:\n    - receiver: \"dump\"\nreceivers:\n  - name: \"dump\"\n    elasticsearch:\n      addresses:\n        - https://elk-d-node01.domain.tld:9200\n      index: kube-events\n      username: elastic\n      password: changeme\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"event-exporter-cfg","namespace":"monitoring"}}
  creationTimestamp: "2019-12-13T13:04:54Z"
  name: event-exporter-cfg
  namespace: monitoring
  resourceVersion: "297914"
  selfLink: /api/v1/namespaces/monitoring/configmaps/event-exporter-cfg
  uid: 93a5a61a-8150-44e0-916c-9b34a0807421
I am trying to do this:
receivers:
  - name: "my-recv"
    webhook:
      endpoint: "http://example.com"
      headers:
        User-Agent: kube-event-exporter 1.0
      layout:
        data: "{{ . }}"
So I want the matched Kubernetes event to be sent under a data key in JSON, like:
{
  "data": {
    "metadata": {},
    "reason": "SuccessfulCreate",
    "message": "Created pod: x-6htmw",
    "source": {
      "component": "replicaset-controller"
    },
    "count": 1,
    "type": "Normal",
    "eventTime": "None",
    "reportingComponent": "",
    "reportingInstance": "",
    "involvedObject": {
      "kind": "ReplicaSet",
      "namespace": "x",
      "name": "x-779848ccb7",
      "uid": "x",
      "apiVersion": "apps/v1",
      "resourceVersion": "x",
      "labels": {
        "app": "x"
      },
      "annotations": {
        "serving.knative.dev/creator": "system:serviceaccount:flux:flux"
      }
    }
  }
}
It is failing to send. Can you help me out?
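One alternative that may work (a sketch; it assumes the layout engine accepts nested maps of Go templates, as the README's layout examples suggest, and the field names come from the underlying corev1.Event) is to template individual fields under the data key rather than serializing the whole event with `{{ . }}`:

```yaml
layout:
  data:
    reason: "{{ .Reason }}"
    message: "{{ .Message }}"
    type: "{{ .Type }}"
```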
Can't send a Teams webhook; a mandatory field is missing:
Cannot send event error="not 200: Summary or Text is required."
Am I missing something?
Thanks
Feature request:
To be able to disable scraping and outputting object labels/annotations, because there's duplicated data and the labels are almost always the same from event to event.
The README.md states that the Elasticsearch sink supports the optional layout config parameter.
However, reading through https://github.com/opsgenie/kubernetes-event-exporter/blob/master/pkg/sinks/elasticsearch.go seems to indicate that layout is not a config item there, unlike in the kinesis and sqs sinks.
Is it possible to add the layout functionality to Elasticsearch as well, to match the documentation?
Hi, I'm interested in testing out this project but I'm having trouble building the image for my self. I'm running Linux Mint 19.2 with Docker version 19.03.6, build 369ce74a3c.
When I run docker build ., instead of building successfully, it fails with:
...
Step 4/7 : RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO11MODULE=on go build -mod=vendor -v -a -o /main .
---> Running in 5cee51768c01
go: inconsistent vendoring in /app:
cloud.google.com/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
github.com/Masterminds/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
...
run 'go mod vendor' to sync, or use -mod=mod or -mod=readonly to ignore the vendor directory
The command '/bin/sh -c CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO11MODULE=on go build -mod=vendor -v -a -o /main .' returned a non-zero code: 1
I also noticed the same error trace when looking at some of the github build tests
https://github.com/opsgenie/kubernetes-event-exporter/runs/637785627
I apologize; I'm still very much a beginner in Go, so some of this is a bit beyond me. Perhaps something is different in my setup, or the golang:1.14 image was recently updated? Any help would be appreciated.
Also I'm only interested in building and running from a container and don't want to run anything installed locally.
Instead of implementing each service integration individually, use shoutrrr to attach to all the systems it supports.
It would be good to be able to include the exit code in the event associated with a pod or a job's task being completed. Is this possible?