graylog2 / graylog-plugin-beats


[DEPRECATED] Elastic Beats Input plugin for Graylog

Home Page: https://www.graylog.org/

License: GNU General Public License v3.0

Languages: Java 96.54%, Ruby 3.18%, Shell 0.28%
Topics: graylog, elastic-beats, filebeat, graylog-plugin, winlogbeat, metricbeat, beats, input

graylog-plugin-beats's Introduction

DEPRECATION NOTICE

This project has been merged into graylog2-server, see #28

Please use the issue tracker in the graylog2-server repository for any feature requests or bug reports.


Elastic Beats Input Plugin for Graylog


Required Graylog version: 2.2.0 and later

This plugin provides an input for the Elastic Beats (formerly Lumberjack) protocol in Graylog, which can be used to receive data from log shippers such as logstash-forwarder and the Beats family, e.g. Filebeat, Metricbeat, Packetbeat, or Winlogbeat.

Installation

Download the plugin and place the JAR file in your Graylog plugin directory. By default the plugin directory is the plugins/ directory relative to your Graylog installation directory and can be configured in your graylog.conf file.
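
For example, a minimal sketch of the relevant graylog.conf setting (the path below is illustrative and depends on how Graylog was installed):

# graylog.conf -- example plugin directory location
plugin_dir = /usr/share/graylog-server/plugin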

Restart Graylog and you are done.

Build

This project uses Maven and requires Java 8 or higher.

You can build the plugin (JAR) with mvn package.

DEB and RPM packages can be built with mvn jdeb:jdeb and mvn rpm:rpm respectively.

Plugin Release

In order to release a new version of the plugin, run the following commands:

$ mvn release:prepare
$ mvn release:perform

This sets the version numbers, creates a tag and pushes to GitHub.

Travis CI will build the release artifacts and upload them to GitHub automatically.

License

Copyright (c) 2016 Graylog, Inc.

This library is licensed under the GNU General Public License, Version 3.0.

See https://www.gnu.org/licenses/gpl-3.0.html or the LICENSE.txt file in this repository for the full license text.

graylog-plugin-beats's People

Contributors

bernd, dennisoelkers, edmundoa, garybot2, hc4, joschi, jsoref, kroepke, rompic


graylog-plugin-beats's Issues

metricbeat 6.1.3

Hi there,
Does metricbeat 6.1.3 work with the latest plugin version?

I get a failed to connect error in my metricbeat logs.

Thanks

Await inside BeatsFrameDecoder

I found this line in BeatsFrameDecoder.sendACK:
channel.write(buffer).awaitUninterruptibly();
Why should we block the I/O thread to wait for the write operation?
Is it possible to receive duplicate messages if we don't wait?
Or is Netty smart enough not to block in such a case?
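
For reference, a non-blocking alternative would be to attach a listener to the write future instead of awaiting it. This is only a sketch against the Netty 3 API used by the plugin at the time (the method shape is illustrative, not the plugin's actual code), and whether skipping the blocking wait is safe with respect to duplicate delivery is exactly the open question above:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

// Sketch: write the ACK asynchronously instead of blocking the I/O thread
// with awaitUninterruptibly().
private void sendACK(Channel channel, ChannelBuffer ackBuffer) {
    ChannelFuture future = channel.write(ackBuffer);
    future.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture f) throws Exception {
            if (!f.isSuccess()) {
                // If the ACK is lost, the shipper eventually resends the
                // window, which can result in duplicate messages.
            }
        }
    });
}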

Filebeat cannot get messages from Windows Apache logs

Hello,

I have configured Filebeat on Windows to monitor PHP Apache logs by following this document: http://docs.graylog.org/en/3.1/pages/sidecar.html#first-start

Here is the filebeat configuration:

fields_under_root: true
fields.collector_node_id: ${sidecar.nodeName}
fields.gl2_source_collector: ${sidecar.nodeId}

output.logstash:
  hosts: ["My_GrayLogServer_IP:5044"]

path:
  data: C:\Program Files\Graylog\sidecar\cache\filebeat\data
  logs: C:\Program Files\Graylog\sidecar\logs

tags:
  - windows

filebeat.config.inputs:
  type: log
  enabled: true
  paths:
    - C:\Wamp\apache2\logs*

The collector is now in RUNNING state, but I still cannot get messages. Below is the filebeat log:

2020-01-03T15:25:37.002+0700 INFO instance/beat.go:544 Home path: [C:\Program Files\Graylog\sidecar] Config path: [C:\Program Files\Graylog\sidecar] Data path: [C:\Program Files\Graylog\sidecar\cache\filebeat\data] Logs path: [C:\Program Files\Graylog\sidecar\logs]
2020-01-03T15:25:37.004+0700 INFO instance/beat.go:551 Beat UUID: 50824907-0a65-4a49-8176-a102b5159eb1
2020-01-03T15:25:37.004+0700 INFO [beat] instance/beat.go:768 Beat info {"system_info": {"beat": {"path": {"config": "C:\Program Files\Graylog\sidecar", "data": "C:\Program Files\Graylog\sidecar\cache\filebeat\data", "home": "C:\Program Files\Graylog\sidecar", "logs": "C:\Program Files\Graylog\sidecar\logs"}, "type": "filebeat", "uuid": "50824907-0a65-4a49-8176-a102b5159eb1"}}}
2020-01-03T15:25:37.005+0700 INFO [beat] instance/beat.go:777 Build info {"system_info": {"build": {"commit": "e193f6d68b25b7ddbe3a3ed8d60bc07fea1ef800", "libbeat": "6.4.2", "time": "2018-09-26T12:41:59.000Z", "version": "6.4.2"}}}
2020-01-03T15:25:37.005+0700 INFO [beat] instance/beat.go:780 Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":3,"version":"go1.10.3"}}}
2020-01-03T15:25:37.009+0700 INFO [beat] instance/beat.go:784 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-12-31T15:38:56.47+07:00","hostname":"WEBAPP03","ips":["fe80::61ca:a665:166c:f850/64","MyClientIP/24","MyClientIP2/24","::1/128","127.0.0.1/8","fe80::5efe:a2e:1fc9/128","fe80::5efe:a2e:1fca/128","2001:0:2851:782c:307d:ac4:2103:576/64","fe80::307d:ac4:2103:576/64"],"kernel_version":"10.0.14393.2248 (rs1_release.180427-1804)","mac_addresses":["00:15:5d:64:17:01","00:00:00:00:00:00:00:e0","00:00:00:00:00:00:00:e0"],"os":{"family":"windows","platform":"windows","name":"Windows Server 2016 Standard","version":"10.0","major":10,"minor":0,"patch":0,"build":"14393.2248"},"timezone":"+07","timezone_offset_sec":25200,"id":"a0a101f7-9dd2-4ca6-98d1-ff4eee40ae53"}}}
2020-01-03T15:25:37.009+0700 INFO instance/beat.go:273 Setup Beat: filebeat; Version: 6.4.2
2020-01-03T15:25:37.010+0700 INFO pipeline/module.go:98 Beat name: WEBAPP03
2020-01-03T15:25:37.010+0700 INFO instance/beat.go:367 filebeat start running.
2020-01-03T15:25:37.010+0700 INFO [monitoring] log/log.go:114 Starting metrics logging every 30s
2020-01-03T15:25:37.010+0700 INFO registrar/registrar.go:134 Loading registrar data from C:\Program Files\Graylog\sidecar\cache\filebeat\data\registry
2020-01-03T15:25:37.010+0700 INFO registrar/registrar.go:141 States Loaded from registrar: 0
2020-01-03T15:25:37.010+0700 WARN beater/filebeat.go:371 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-01-03T15:25:37.010+0700 INFO crawler/crawler.go:72 Loading Inputs: 0
2020-01-03T15:25:37.010+0700 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
2020-01-03T15:25:37.010+0700 INFO cfgfile/reload.go:141 Config reloader started
2020-01-03T15:25:37.011+0700 INFO cfgfile/reload.go:196 Loading of config files completed.
2020-01-03T15:26:07.094+0700 INFO [monitoring] log/log.go:141 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109,"time":{"ms":109}},"total":{"ticks":109,"time":{"ms":109},"value":109},"user":{"ticks":0}},"info":{"ephemeral_id":"7b26b161-b4d8-43e2-8e4d-5da2e2f00355","uptime":{"ms":30037}},"memstats":{"gc_next":4194304,"memory_alloc":2044032,"memory_total":3635056,"rss":18432000}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":3}}}}}
2020-01-03T15:26:37.011+0700 INFO [monitoring] log/log.go:141 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109},"total":{"ticks":109,"value":109},"user":{"ticks":0}},"info":{"ephemeral_id":"7b26b161-b4d8-43e2-8e4d-5da2e2f00355","uptime":{"ms":60037}},"memstats":{"gc_next":4194304,"memory_alloc":2092400,"memory_total":3683424,"rss":1871872}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}
2020-01-03T15:27:07.011+0700 INFO [monitoring] log/log.go:141 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109},"total":{"ticks":109,"value":109},"user":{"ticks":0}},"info":{"ephemeral_id":"7b26b161-b4d8-43e2-8e4d-5da2e2f00355","uptime":{"ms":90038}},"memstats":{"gc_next":4194304,"memory_alloc":2124960,"memory_total":3715984,"rss":-73728}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}
2020-01-03T15:27:37.013+0700 INFO [monitoring] log/log.go:141 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109},"total":{"ticks":140,"time":{"ms":31},"value":140},"user":{"ticks":31,"time":{"ms":31}}},"info":{"ephemeral_id":"7b26b161-b4d8-43e2-8e4d-5da2e2f00355","uptime":{"ms":120037}},"memstats":{"gc_next":4194304,"memory_alloc":2160256,"memory_total":3751280,"rss":266240}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}

I have spent 3 days trying to fix this issue with no solution. Could you please tell me what I have done wrong?

Thanks a lot,
Van Dat

Support back pressure protocol

According to elastic, "Filebeat uses a backpressure-sensitive protocol when sending data to Logstash or Elasticsearch to account for higher volumes of data. If Logstash is busy crunching data, it lets Filebeat know to slow down its read."
More info here:
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html#backpressure-persistent-queue

I searched through the graylog-plugin-beats code and didn't find any reference to it, and my testing indicates it's not supported.

This would be useful to prevent Graylog from maxing out its journal and crashing if Filebeat starts shipping more logs than Graylog can handle.

Move into graylog2-server

Check if anything is using the fully qualified class names before changing the packages. We might need migrations to fix this.

Examples:

  • Cluster config
  • Other MongoDB database objects
  • Config file settings.

Also move all open issues to the new repo.

Add support for parsing journalbeat

Add support for journalbeat input (https://github.com/mheese/journalbeat)

Example message:

{
  "@realtime_timestamp": 1501612251261196,
  "@timestamp": "2017-08-01T18:30:51.261Z",
  "MESSAGE": "I0801 18:30:51.260699 25032 http.cpp:420] HTTP GET for /master/state from 10.129.199.60:36238 with User-Agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'",
  "PRIORITY": "6",
  "SYSLOG_FACILITY": "3",
  "SYSLOG_IDENTIFIER": "mesos-master",
  "_BOOT_ID": "ed665473a2054d85b2aab95a37b686f0",
  "_CAP_EFFECTIVE": "1fffffffff",
  "_CMDLINE": "/opt/mesosphere/packages/mesos--aaedd03eee0d57f5c0d49c74ff1e5721862cad98/bin/mesos-master",
  "_COMM": "mesos-master",
  "_EXE": "/opt/mesosphere/packages/mesos--aaedd03eee0d57f5c0d49c74ff1e5721862cad98/bin/mesos-master",
  "_GID": "0",
  "_HOSTNAME": "ip-10-129-198-152",
  "_MACHINE_ID": "a8a482f5bd664ae28475981927ae888d",
  "_PID": "25020",
  "_SELINUX_CONTEXT": "system_u:system_r:init_t:s0",
  "_SYSTEMD_CGROUP": "/system.slice/dcos-mesos-master.service",
  "_SYSTEMD_SLICE": "system.slice",
  "_SYSTEMD_UNIT": "dcos-mesos-master.service",
  "_TRANSPORT": "stdout",
  "_UID": "0",
  "beat": {
    "hostname": "ip-10-129-198-152",
    "name": "journalbeat",
    "version": "5.5.0"
  },
  "meta": {
    "cloud": {
      "availability_zone": "us-east-1c",
      "instance_id": "i-05cefc8e13714f64b",
      "machine_type": "m4.2xlarge",
      "provider": "ec2",
      "region": "us-east-1"
    }
  },
  "type": "journal"
}

Decoding nested JSON objects

When using Packetbeat to send messages directly to Graylog2, nested JSON objects are not decoded and show up as '[object Object],[object Object]' in the search UI. Example input JSON message:

{
  "_index" : "graylog_5",
  "_type" : "message",
  "_id" : "572b9193-16df-11e6-8a3b-000c2942c251",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "packetbeat_bytes_in" : 32,
    "packetbeat_method" : "QUERY",
    "packetbeat_type" : "dns",
    "packetbeat_responsetime" : 140,
    "packetbeat_query" : "class IN, type A, conn.skype.com",
    "gl2_remote_ip" : "172.16.220.1",
    "packetbeat_dns_question_name" : "conn.skype.com",
    "gl2_remote_port" : 65532,
    "packetbeat_dns_additionals_count" : 0,
    "packetbeat_dns_answers_count" : 2,
    "source" : "abs-MacBook-Pro.local",
    "type" : "dns",
    "gl2_source_input" : "572a39d0cdf3830902a406df",
    "packetbeat_dns_response_code" : "NOERROR",
    "packetbeat_direction" : "out",
    "packetbeat_client_ip" : "192.168.0.3",
    "packetbeat_dns_flags_recursion_allowed" : true,
    "packetbeat_dns_flags_truncated_response" : false,
    "packetbeat_dns_question_class" : "IN",
    "gl2_source_node" : "b6d4add1-2cfc-4fd1-b18d-0ad0478e00a8",
    "packetbeat_dns_flags_authoritative" : false,
    "packetbeat_status" : "OK",
    "packetbeat_client_port" : 60426,
    "timestamp" : "2016-05-10 18:45:16.558",
    "packetbeat_ip" : "192.168.0.1",
    "packetbeat_dns_op_code" : "QUERY",
    "packetbeat_bytes_out" : 83,
    "packetbeat_dns_flags_recursion_desired" : true,
    "packetbeat_transport" : "udp",
    "packetbeat_dns_authorities_count" : 0,
    "packetbeat_resource" : "conn.skype.com",
    "streams" : [ "572ae5c9cdf3830902a4bb7f" ],
    "packetbeat_dns_answers" : [ {
      "class" : "IN",
      "data" : "conn.skype.akadns.net",
      "name" : "conn.skype.com",
      "ttl" : 464,
      "type" : "CNAME"
    }, {
      "class" : "IN",
      "data" : "91.190.216.81",
      "name" : "conn.skype.akadns.net",
      "ttl" : 300,
      "type" : "A"
    } ],
    "message" : "-",
    "packetbeat_dns_question_type" : "A",
    "packetbeat_count" : 1,
    "name" : "MacBook-Pro.local",
    "packetbeat_dns_id" : 62527,
    "facility" : "packetbeat",
    "packetbeat_port" : 53
  }
}

The packetbeat_dns_answers structure is not decoded in this example.
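
One possible approach (a sketch only, not what the plugin currently does) is to recursively flatten nested objects and arrays into prefixed scalar fields, e.g. packetbeat_dns_answers_0_data, before adding them to the message. Assuming the Beats event has already been parsed into a java.util.Map, a flattening helper could look roughly like this (the method name and separator are illustrative):

import java.util.List;
import java.util.Map;

// Sketch: flatten nested maps/lists into "prefix_key" scalar entries so that
// values like packetbeat_dns_answers become searchable fields instead of
// being rendered as "[object Object]".
private static void flatten(String prefix, Object value, Map<String, Object> out) {
    if (value instanceof Map) {
        for (Map.Entry<?, ?> entry : ((Map<?, ?>) value).entrySet()) {
            flatten(prefix + "_" + entry.getKey(), entry.getValue(), out);
        }
    } else if (value instanceof List) {
        final List<?> list = (List<?>) value;
        for (int i = 0; i < list.size(); i++) {
            flatten(prefix + "_" + i, list.get(i), out);
        }
    } else {
        out.put(prefix, value);
    }
}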

Internal fields are prefixed with beats name

Special fields like gl2_source_collector should not be prefixed with the Beats name, in order to keep the functionality of those fields. This can be done for all gl2_ fields.

Winlogbeat snippet for reproduction:

fields:
  gl2_source_collector: foo
fields_under_root: true

This leads to a field called winlogbeat_gl2_source_collector.
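
A minimal sketch of the requested behaviour (illustrative names only, not the plugin's actual code): when copying shipper-provided fields onto the Graylog message, leave anything starting with gl2_ unprefixed so that features relying on those fields keep working.

// Sketch: prefix ordinary custom fields with the beat name, but keep internal
// gl2_* fields (e.g. gl2_source_collector) untouched.
private void addCustomField(org.graylog2.plugin.Message message, String beatName, String key, Object value) {
    if (key.startsWith("gl2_")) {
        message.addField(key, value);
    } else {
        message.addField(beatName + "_" + key, value);
    }
}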

Generic beats support broke existing setups

Since we merged #29, my existing Graylog setup with packetbeat breaks with index failures.

2018-02-15 19:37:46,228 WARN : org.graylog2.indexer.messages.Messages - Failed to index message: index=<testgraylog_2> id=<5067b300-127f-11e8-a7e7-02427ac964ae> error=<{"type":"mapper_parsing_exception","reason":"failed to parse [status]","caused_by":{"type":"number_format_exception","reason":"For input string: \"OK\""}}>

Before this PR, the Packetbeat status field was indexed as packetbeat_status, and now Graylog tries to index it as status. This fails in my setup because status is mapped as a long and not a string.

incorrectly typed topbeat percent fields

When using the plugin, building queries/graphs on a few _p fields wasn't working as expected.

Upon looking at my index, these fields had the type "long":

"topbeat_swap_used_p" : {
  "type" : "long"
},
"topbeat_fs_used_p" : {
  "type" : "long"
},

These fields contain percentage data in decimal form, e.g. 0.86 (or 86%), and the other _p percent fields are correctly typed as double. Is Topbeat itself emitting the messages with the wrong type, or is this plugin ingesting them incorrectly? In the meantime I suppose I can use an extractor to convert them.

Beats plugin does not accept messages over TLS, shows "unknown_ca" in logs

Hello,

I am trying to send logs from filebeat/winlogbeat over TLS to Graylog running the Beats plugin. If I try to send logs to the server, the following entry appears in the graylog-server log file:

2016-07-13T16:12:26.464Z DEBUG [SslHandler] [id: 0xb8fa4642, /141.201.13.205:40470 => /141.201.123.193:5044] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
2016-07-13T16:12:26.464Z ERROR [NettyTransport] Error in Input [Beats/5768061314d1a847890cabef] (channel [id: 0xb8fa4642, /141.201.13.205:40470 => /141.201.123.193:5044])
javax.net.ssl.SSLException: Received fatal alert: unknown_ca
        at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1634) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.recvAlert(SSLEngineImpl.java:1800) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:1083) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907) ~[?:1.8.0_91]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) ~[?:1.8.0_91]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) ~[?:1.8.0_91]
        at org.jboss.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1218) ~[graylog.jar:?]
        at org.jboss.netty.handler.ssl.SslHandler.decode(SslHandler.java:852) ~[graylog.jar:?]
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425) ~[graylog.jar:?]
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303) ~[graylog.jar:?]
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) ~[graylog.jar:?]
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [graylog.jar:?]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [graylog.jar:?]
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [graylog.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [graylog.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [graylog.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [graylog.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [graylog.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [graylog.jar:?]
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [graylog.jar:?]
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [graylog.jar:?]
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) [graylog.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
2016-07-13T16:12:26.465Z DEBUG [NettyTransport] Could not handle message, closing connection: [id: 0xb8fa4642, /141.201.13.205:40470 => /141.201.123.193:5044] EXCEPTION: javax.net.ssl.SSLException: Received fatal alert: unknown_ca

If I try to access the input in a browser, for example, I see this message:

server:5044 uses an invalid security certificate.

The certificate is not trusted because it is self-signed.
The certificate is only valid for 0.0.0.0:null

Error code: SEC_ERROR_UNKNOWN_ISSUER
Peer's Certificate issuer is not recognized. 

HTTP Strict Transport Security: false 
HTTP Public Key Pinning: false 

Certificate chain: 
-----BEGIN CERTIFICATE----- 

-----END CERTIFICATE----- 

The certificate is from a real CA (Digicert) and works well in another setup (webserver). I also tried to add the root CA to the Java keystore but it didn't help.

Missing protocol version in error about unknown protocol version

Just upgraded to 1.1.2 and got this error:

java.lang.Exception: Unknown beats protocol version: {}
        at org.graylog.plugins.beats.BeatsFrameDecoder.checkVersion(BeatsFrameDecoder.java:155) ~[?:?]

According to the sources, a format argument is missing.
Another strange thing is that I've only seen this error once... maybe it was a network error.
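
For reference, the likely fix is simply to format the version into the exception message, since exception messages (unlike SLF4J log statements) do not expand "{}" placeholders. A rough sketch (the constant and method shape are illustrative, not the plugin's actual code):

// Sketch: include the offending version byte in the exception message.
private void checkVersion(byte version) throws Exception {
    if (version != SUPPORTED_PROTOCOL_VERSION) {  // SUPPORTED_PROTOCOL_VERSION is a placeholder
        throw new Exception(String.format("Unknown beats protocol version: %d", version));
    }
}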

Beats Input plugin receiveBufferSize configuration being set from the UI is not taking effect

The Beats Input plugin receiveBufferSize configuration being set from the UI is not taking effect. I have set receiveBufferSize to 1048576 from the UI, and I am seeing the following message in the graylog-server logs:

2016-10-06 23:19:32,887 WARN : org.graylog2.plugin.inputs.transports.NettyTransport - receiveBufferSize (SO_RCVBUF) for input BeatsInput{title=Beats, type=org.graylog.plugins.beats.BeatsInput, nodeId=null} should be 1048576 but is 124928.

Graylog version - 2.1.1
Beats Plugin version - 1.1.2

I am attaching a screenshot of the Beats plugin configuration being set from the UI.

Possible issue with graylog Beats plugin when the compression level is set to 0

From @mmayur2016 on September 14, 2016 18:38

Expected Behavior

We are using Filebeat version 1.2.3 to send data to Graylog. I was trying to test how Filebeat would perform with different compression_level settings for the logstash output in the filebeat.yml configuration.

The expected behaviour is that the logs are pushed with no errors. I am not sure if there is any setting on the Beats input plugin side that needs to be configured for the logs to be received without exceptions.

Current Behavior

The current behavior is that we are seeing the following exception from the Beats plugin in the Graylog server:

java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 295
    at org.jboss.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236) ~[graylog.jar:?]
    at org.graylog.plugins.beats.BeatsFrameDecoder.processBuffer(BeatsFrameDecoder.java:83) ~[?:?]
    at org.graylog.plugins.beats.BeatsFrameDecoder.decode(BeatsFrameDecoder.java:67) ~[?:?]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425) ~[graylog.jar:?]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:482) ~[graylog.jar:?]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:365) ~[graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) ~[graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.channelDisconnected(SimpleChannelHandler.java:199) [graylog.jar:?]
    at org.graylog2.plugin.inputs.util.ConnectionCounter.channelDisconnected(ConnectionCounter.java:49) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:120) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.channelDisconnected(SimpleChannelHandler.java:199) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:120) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelDisconnected(SimpleChannelUpstreamHandler.java:208) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:102) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [graylog.jar:?]
    at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396) [graylog.jar:?]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:360) [graylog.jar:?]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:81) [graylog.jar:?]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36) [graylog.jar:?]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.closeRequested(SimpleChannelHandler.java:334) [graylog.jar:?]
    at org.jboss.netty.channel.SimpleChannelHandler.handleDownstream(SimpleChannelHandler.java:260) [graylog.jar:?]

I am not sure what happens to these messages. If this is a configuration issue, I will close the issue.
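
For context, a defensive Netty 3 frame decoder would check how many bytes are actually readable before consuming them, and return null so Netty waits for more data instead of reading past the buffer. A rough sketch (inside a class extending org.jboss.netty.handler.codec.frame.FrameDecoder; the header size constant and the processFrame helper are illustrative, not the plugin's actual code):

// Sketch: only start decoding once the full frame header is available;
// returning null tells the FrameDecoder to wait for more bytes and avoids
// "Readable byte limit exceeded".
@Override
protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception {
    final int headerSize = 2;  // version byte + frame type byte (illustrative)
    if (buffer.readableBytes() < headerSize) {
        return null;
    }
    buffer.markReaderIndex();
    final byte version = buffer.readByte();
    final byte frameType = buffer.readByte();
    // If the frame payload is not fully available yet, rewind and wait:
    //     buffer.resetReaderIndex(); return null;
    return processFrame(version, frameType, buffer);  // hypothetical helper
}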

Your Environment

Beats Input configuration:

bind_address: 0.0.0.0
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: <empty>
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: false
tls_key_file: <empty>
tls_key_password: ********
  • Graylog Version: 2.0.2
  • Elasticsearch Version:
  • MongoDB Version:
  • Operating System:
  • Browser version:

Copied from original issue: Graylog2/graylog2-server#2831

Beats input generates thousands of identical messages

The problem occurred twice.
Graylog version: 2.1.1
Beats input: 1.1.2

Incoming messages rose to 100,000+ /s
Graylog CPU usage hit 100%
From Ganglia I can see the exact moment when the problem occurred. Incoming traffic also increased.

I suspect the root of the problem is that the Beats input produces messages without confirming that the ACK was received by the remote beat.
The problem is possibly caused by #14.

Support "docker" and "kubernetes" metadata field with new libbeat version

Currently, libbeat supports adding Docker metadata and Kubernetes metadata, as you can see in this document: https://github.com/elastic/beats/blob/master/libbeat/docs/processors-using.asciidoc#adding-docker-metadata

Right now, the Graylog Beats plugin discards all root fields other than the well-known ones, which removes all of the above metadata.

I didn't see any configuration in the Beats plugin to allow accepting additional fields.

If there is no configuration available, we should support it.
I propose 2 ways to do this (I can help contribute):

  1. Without dynamic field configuration, the plugin adds all data from "docker" or "kubernetes" directly to the message (I think this is the easiest way; see the sketch after this list).

  2. Support dynamic field configuration in the UI, and maybe add a mechanism to flatten nested values to the root (not sure if we should do this).
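
A rough sketch of option 1 (illustrative only, not existing plugin code): instead of dropping unknown root fields, copy the nested "docker" and "kubernetes" metadata onto the message under a prefix.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: copy selected metadata objects (one level deep) onto the message,
// e.g. docker_container_name or kubernetes_pod_name; deeper nesting could be
// flattened the same way.
private static final Set<String> METADATA_FIELDS = new HashSet<>(Arrays.asList("docker", "kubernetes"));

private void addMetadata(org.graylog2.plugin.Message message, Map<String, Object> event) {
    for (String field : METADATA_FIELDS) {
        final Object value = event.get(field);
        if (value instanceof Map) {
            for (Map.Entry<?, ?> entry : ((Map<?, ?>) value).entrySet()) {
                message.addField(field + "_" + entry.getKey(), entry.getValue());
            }
        }
    }
}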

Winlogbeat not sending log to graylog

Hi !

I'm trying to use Winlogbeat with Graylog 2, but after a couple of hours I gave up. I was not able to link my Windows 7 client running Winlogbeat (latest release, 1.2.2) to my Graylog server (version 2).

Here my configuration file for winlogbeat.yml:

winlogbeat:
  registry_file: C:/ProgramData/winlogbeat/.winlogbeat.yml
  event_logs:
    - name: Application
      ignore_older: 72h 
    - name: Security
    - name: System
output:
  logstash:
    hosts: ["192.168.232.123"]
    compression_level: 4
    loadbalance: false
    port: 5044
shipper:
logging:
  to_files: true
  files:
    path: C:/ProgramData/winlogbeat/Logs
    rotateeverybytes: 10485760 
  level: debug

Log file :

2016/05/03 15:03:21.098632 single.go:152: INFO send fail
2016/05/03 15:03:21.099609 single.go:159: INFO backoff retry: 1s
2016/05/03 15:03:22.100585 client.go:100: DBG  connect
2016/05/03 15:03:22.105468 client.go:146: DBG  Try to publish 14 events to logstash with window size 5
2016/05/03 15:03:41.015625 client.go:105: DBG  close connection
2016/05/03 15:03:41.021484 client.go:124: DBG  0 events out of 14 events sent to logstash. Continue sending ...
2016/05/03 15:03:41.024414 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.232.230:57675->192.168.232.123:5044: wsarecv
: Une tentative de connexion a échoué car le parti connecté n'a pas répondu convenablement au-delà d'une certaine durée ou une connexion établi
e a échoué car l'hôte de connexion n'a pas répondu.
2016/05/03 15:03:41.026367 single.go:152: INFO send fail
2016/05/03 15:03:41.027343 single.go:159: INFO backoff retry: 2s
2016/05/03 15:03:43.028320 client.go:100: DBG  connect
2016/05/03 15:03:43.033203 client.go:146: DBG  Try to publish 14 events to logstash with window size 2
2016/05/03 15:04:01.944335 client.go:105: DBG  close connection
2016/05/03 15:04:01.950195 client.go:124: DBG  0 events out of 14 events sent to logstash. Continue sending ...
2016/05/03 15:04:01.952148 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.232.230:57678->192.168.232.123:5044: wsarecv
: Une tentative de connexion a échoué car le parti connecté n'a pas répondu convenablement au-delà d'une certaine durée ou une connexion établi
e a échoué car l'hôte de connexion n'a pas répondu.
2016/05/03 15:04:01.955078 single.go:152: INFO send fail
2016/05/03 15:04:01.956054 single.go:159: INFO backoff retry: 4s
2016/05/03 15:04:05.957031 client.go:100: DBG  connect
2016/05/03 15:04:05.965820 client.go:146: DBG  Try to publish 14 events to logstash with window size 1
2016/05/03 15:04:24.865234 client.go:105: DBG  close connection
2016/05/03 15:04:24.872070 client.go:124: DBG  0 events out of 14 events sent to logstash. Continue sending ...
2016/05/03 15:04:24.874023 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.232.230:57679->192.168.232.123:5044: wsarecv

And finally, here is my Beats input configuration from Graylog:

recv_buffer_size: 1048576
port: 5044
tls_key_file:
tls_enable:
tls_key_password:
tcp_keepalive:
tls_client_auth_cert_file:
tls_client_auth: disabled
override_source:
bind_address: 0.0.0.0
tls_cert_file: 

There is nothing in the Graylog server's log.
I am using the latest release of the Winlogbeat input from the Graylog Marketplace.
