
logspout-logstash's Introduction


logspout-logstash

A minimalistic adapter for github.com/gliderlabs/logspout to write to Logstash

Follow the instructions in https://github.com/gliderlabs/logspout/tree/master/custom on how to build your own Logspout container with custom modules. Basically, just copy the contents of the custom folder and include:

package main

import (
  _ "github.com/looplab/logspout-logstash"
  _ "github.com/gliderlabs/logspout/transports/udp"
  _ "github.com/gliderlabs/logspout/transports/tcp"
)

in modules.go.

Use it by setting the docker environment variable ROUTE_URIS=logstash://host:port to point at the Logstash server. The default protocol is UDP, but it can be changed to TCP by appending +tcp to the logstash protocol when starting your container.

docker run --name="logspout" \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    -e ROUTE_URIS=logstash+tcp://logstash.home.local:5000 \
    localhost/logspout-logstash:v3.1

In your Logstash config, set the input codec to json, e.g.:

input {
  udp {
    port  => 5000
    codec => json
  }
  tcp {
    port  => 5000
    codec => json
  }
}

Available configuration options

To add tags to the Logstash event's @tags field, use the LOGSTASH_TAGS container environment variable. Multiple tags can be passed as comma-separated values.

  # Add any number of arbitrary tags to your event
  -e LOGSTASH_TAGS="docker,production"

The output in Logstash should look like:

    "tags": [
      "docker",
      "production"
    ],

You can also add arbitrary logstash fields to the event using the LOGSTASH_FIELDS container environment variable:

  # Add any number of arbitrary fields to your event
  -e LOGSTASH_FIELDS="myfield=something,anotherfield=something_else"

The output in Logstash should look like:

    "myfield": "something",
    "anotherfield": "something_else",

Both configuration options can be set for every individual container, or for the logspout-logstash container itself where they then become a default for all containers if not overridden there.
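The per-container override with a logspout-level default amounts to an environment lookup with a fallback. A hypothetical sketch, not the adapter's real API (lookupWithDefault and the map argument stand in for reading the target container's config):

```go
package main

import "fmt"

// lookupWithDefault returns the container's own value if set,
// otherwise the value configured on the logspout container itself.
func lookupWithDefault(containerEnv map[string]string, key, logspoutDefault string) string {
	if v, ok := containerEnv[key]; ok && v != "" {
		return v
	}
	return logspoutDefault
}

func main() {
	app := map[string]string{"LOGSTASH_TAGS": "frontend"}
	other := map[string]string{}

	// The app container overrides the default; the other one inherits it.
	fmt.Println(lookupWithDefault(app, "LOGSTASH_TAGS", "docker,production"))
	fmt.Println(lookupWithDefault(other, "LOGSTASH_TAGS", "docker,production"))
}
```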

By setting the environment variable DOCKER_LABELS to a non-empty value, logspout-logstash will add all docker container labels as fields:

    "docker": {
        "hostname": "866e2ca94f5f",
        "id": "866e2ca94f5fe11d57add5a78232c53dfb6187f04f6e150ec15f0ae1e1737731",
        "image": "centos:7",
        "labels": {
            "a_label": "yes",
            "build-date": "20161214",
            "license": "GPLv2",
            "name": "CentOS Base Image",
            "pleasework": "okay",
            "some_label_with_dots": "more.dots",
            "vendor": "CentOS"
        },
        "name": "/ecstatic_murdock"

To be compatible with Elasticsearch, dots in labels will be replaced with underscores.
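The replacement is a plain substitution on each label key; a sketch of the idea (not the adapter's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeLabelKey makes a docker label key safe for Elasticsearch
// by replacing dots with underscores.
func sanitizeLabelKey(key string) string {
	return strings.ReplaceAll(key, ".", "_")
}

func main() {
	fmt.Println(sanitizeLabelKey("some.label.with.dots"))
}
```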

By setting INCLUDE_CONTAINERS to a comma-separated list of container names, you can restrict logging to only those containers. You can also set INCLUDE_CONTAINERS_REGEX to a regular expression describing the containers to include.
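Conceptually, the two options amount to an exact-name match against a comma-separated list or a regex match against the container name. A sketch under those assumptions (the function name and empty-means-no-filter behaviour are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// included reports whether a container name passes either an exact
// comma-separated allow-list or an optional regex; if neither filter
// is configured, every container is included.
func included(name, includeList, includeRegex string) bool {
	if includeList == "" && includeRegex == "" {
		return true // no filtering configured
	}
	for _, n := range strings.Split(includeList, ",") {
		if n != "" && n == name {
			return true
		}
	}
	if includeRegex != "" {
		if ok, _ := regexp.MatchString(includeRegex, name); ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(included("web-1", "web-1,db-1", ""))
	fmt.Println(included("worker-3", "", "^worker-"))
	fmt.Println(included("cache", "web-1,db-1", ""))
}
```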

Using Logspout-Logstash in a swarm

In a swarm, logspout is best deployed as a global service. To support this mode of deployment, the logstash adapter looks for the file /etc/host_hostname; if the file exists and is not empty, it sets the hostname field to the file's content. You can then use a volume mount to map a file on the docker host to /etc/host_hostname in the container. The sample compose file below illustrates how this can be done:

version: "3"
services:
  logspout:
    image: localhost/logspout-logstash:latest
    volumes:
      # Logspout reads this in and attaches it to the log
      - /etc/hostname:/etc/host_hostname:ro
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # IP and port for logstash host
      - ROUTE_URIS=logstash://host:port
      # Include all docker labels
      - DOCKER_LABELS=true
      # Add environment field to all logs sent to logstash
      - LOGSTASH_FIELDS=environment=${NODE_ENV}
    deploy:
      mode: global
      resources:
        limits:
          cpus: '0.20'
          memory: 256M
        reservations:
          cpus: '0.10'
          memory: 128M

Retrying

Two environment variables control the behaviour of Logspout when the Logstash target isn't available: RETRY_STARTUP causes Logspout to retry forever if Logstash isn't available at startup, and RETRY_SEND retries sending log lines when Logstash becomes unavailable while Logspout is running. Note that RETRY_SEND works only if UDP is used for the log transport and the destination doesn't change; in any other case RETRY_SEND should be disabled, so the container restarts and reconnects, letting RETRY_STARTUP deal with the situation. With both retry options, log lines are lost while Logstash isn't available. Set the environment variables to any nonempty value to enable retrying; the default is disabled.

This table shows all available configurations:

Environment Variable   Input Type   Default Value
LOGSTASH_TAGS          array        None
LOGSTASH_FIELDS        map          None
INCLUDE_CONTAINERS     array        None
DOCKER_LABELS          any          ""
RETRY_STARTUP          any          ""
RETRY_SEND             any          ""
DECODE_JSON_LOGS       bool         true

logspout-logstash's People

Contributors

amouat, bn0ir, clake1-godaddy, frederiknjs, hekaldama, iljaweis, joakim666, masterada, maxekman, mrdiggles2, mut3, rchicoli, ricardojoaoreis, samber, vertoforce, zconnelly


logspout-logstash's Issues

High CPU Usage on dockerd

We are experiencing issues with High CPU usage on dockerd whenever we enable the logspout-logstash container.
More details on this issue report

TLS & Client Certs

Is it possible to use it with TLS and a client cert?
Any hints on how to set this up, with self-signed certs?

Add option to only pull logs from certain containers

Hey thanks so much for such an awesome tool. Would it be possible to add an environment variable like INCLUDE_CONTAINERS or REQUIRED_LABEL to only pull logs from certain containers?

I've realized that when running any other docker containers on the same server as logspout, their logs are grabbed by logspout too, which isn't what I want.

I also realize it's possible to add a separate logstash filter to ignore all other images, but it'd be a more scalable solution for me to have the option on the log sending side.

logspout-logstash with 6.0.0 fails with a mapper parsing exception

I have just upgraded ELK to version 6.0.0 and then parsing docker logs with logspout-logstash stopped working.
The Logstash error is this

[2017-11-30T02:11:39,765][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.11.30", :_type=>"docker", :_routing=>nil}, #<LogStash::Event:0x6cf08450>], :response=>{"index"=>{"_index"=>"logstash-2017.11.30", "_type"=>"docker", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [_default_]: [include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."}}}}}

Is there a workaround?

Preserve JSON number formatting

When logging JSON numbers from docker, e.g. {"value":1000000}, the value is converted to scientific notation once it has gone through marshalling/unmarshalling, i.e. {"value":1e+06}.

While this is not technically incorrect, it prevents us from looking for specific values in the logs (ids) without manually converting them to scientific notation.

I tried to come up with a fix myself (https://stackoverflow.com/a/22346593/8125689 looks promising), but while the issue definitely happens with the Bekt/logspout-logstash image, I wasn't able to reproduce it in a test case 😕

PS: Loads of thanks for this project; we've been using it on production servers for a while as part of our monitoring system and it does a pretty good job 👍

More issues with field names with dots in them

Certain ill-mannered programs (I'm looking at you, docker-registry) emit JSON-formatted log entries, but use dot-delimited nesting in the key names, which breaks with ES 2.x. Since the JSON emitted by these programs isn't necessarily intended to be consumed by logstash, it's not necessarily their fault that they're producing logstash-incompatible JSON, but is instead logspout-logstash's responsibility to transmute the error-producing JSON into something that logstash will understand.

I can think of two options:

  1. Replace all dots in keys with underscores (quick, dirty, and within my extremely limited Go skills to implement); or
  2. Translate dot-encoded keys into a nested structure (far superior, but there's no way known I can implement that all by myself).

Would a PR to implement option 1 be merged, or should I wait for someone with better Go chops to implement 2?

add node to output (docker swarm mode 1.12)

Could you think of adding another field for the node (or hostname of the docker engine) the container is running on? This would be very helpful for docker swarm clusters.

Logstash doesn't receive messages from Logspout

I've got everything up and running; logspout shows me the trace of my nginx access_log when running curl http://127.0.0.1:8000/logs.

Logstash is listening on port 5044. I've run it in debug mode and tried netcatting it with udp, and it shows the calls in the log file:

echo "test" | nc -4u -w1 172.18.0.2 5044

However, it seems like logspout is not sending anything to logstash, and I can't figure out why that could be happening.

This is how the environment variables look on my logspout container:

ENV ROUTE_URIS=logstash://172.18.0.2:5044

Versioning

How can we know the latest supported version of logspout? I'd also like to know whether you have planned any releases in the future, or will you keep only master?

All logs appearing as stderr in stream field

When I run my containers with the TTY set to none, all of my logs show up in Logstash as coming from stderr. If I attach a TTY to the container, the logs don't show up at all.


Is there some configuration piece I'm missing?

quotes in fields using LOGSTASH_FIELDS environment variable

I wanted to add some custom fields to my logspout-generated events.

The Readme says:

You can also add arbitrary logstash fields to the event using the LOGSTASH_FIELDS container environment variable:

  # Add any number of arbitrary fields to your event
  -e LOGSTASH_FIELDS="myfield=something,anotherfield=something_else"

So my compose file:

...
logspout:
  environment:
    - ROUTE_URIS=logstash+tcp://logstash:5045
    - LOGSTASH_FIELDS="collector=logspout"
...

Problem is:
logspout-logstash doesn't remove the quotes for my field collector

Looking at the available fields for my index in elasticsearch:

[root@2c2dfff9f247 elasticsearch]# curl 'elasticsearch:9200/unbekannt-2018.01.31/_mapping/*?pretty'
{
  "unbekannt-2018.01.31" : {
    "mappings" : {
      "doc" : {
        "properties" : {
          "\"collector" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "@timestamp" : {
            "type" : "date"
          },
...

As a result, in my logstash.conf I can't use the field, e.g.:
if [\"collector] {} or
if ["collector] {} or
if [collector] {}

Solution for the problem
In the code of this adapter quotes aren't removed.
Using the following environment variable
- LOGSTASH_FIELDS=collector=logspout
solves the problem

Please change the Readme or fix this in the code.

don't see logstash.go

I followed the instructions and built the Docker image. I ran the container and logged in to it.

I see modules.go has my content, but I didn't find logstash.go; under the adapters dir I don't see logstash either.

Am I missing anything?

Logging `null` causes container to crash

We have been seeing a bug: when null shows up as a line in our docker logs, logspout will crash and restart.

I'm pretty sure the error is caused here -- https://github.com/looplab/logspout-logstash/blob/master/logstash.go#L134-L143

How to Recreate

I was able to recreate this with the following script

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var data map[string]interface{}
    var err error
    m_data := []byte(`null`)

    if err = json.Unmarshal(m_data, &data); err != nil {
        data = make(map[string]interface{})
        data["message"] = string(m_data)
        fmt.Println("This doesn't get executed because no error is thrown")
    }

    data["docker"] = `foo` // panics: assignment to entry in nil map
}

Which matches the stack trace that we're seeing

panic: assignment to entry in nil map

goroutine 21 [running]:
panic(0x5638366bc2e0, 0xc42052e8c0)
/usr/lib/go/src/runtime/panic.go:500 +0x1a5
github.com/looplab/logspout-logstash.(*LogstashAdapter).Stream(0xc420106390, 0xc420100300)
/go/src/github.com/looplab/logspout-logstash/logstash.go:143 +0x47d
github.com/gliderlabs/logspout/router.(*RouteManager).route(0xc4200554c0, 0xc4200a6580)
/go/src/github.com/gliderlabs/logspout/router/routes.go:147 +0xb5
github.com/gliderlabs/logspout/router.(*RouteManager).Run.func1(0xc4200554c0, 0xc4200a6580)
/go/src/github.com/gliderlabs/logspout/router/routes.go:170 +0x37
created by github.com/gliderlabs/logspout/router.(*RouteManager).Run
/go/src/github.com/gliderlabs/logspout/router/routes.go:172 +0xf2

Brief Explanation

Basically, it seems json.Unmarshal decodes null to nil (because a bare null is valid JSON).

Then, instead of data being a map, it is nil, and when you next try to assign a value to a key you get the panic.

Fix

I believe the fix is to add || data == nil to the end of the if statement. (as you can confirm with the small script I included)

I think you can also add a test for this simply by copying one of the TestStream*s and changing the string it passes to null (here: https://github.com/looplab/logspout-logstash/blob/master/logstash_test.go#L79)

I'd be happy to submit the fix + test myself but I'm a bit of a noob with golang and having some trouble getting everything setup to hack on this repo ^_^

!! bad adapter

My modules.go contains:

package main

import (
  _ "github.com/looplab/logspout-logstash"
  _ "github.com/gliderlabs/logspout/transports/udp"
)

run with
docker run --volume=/var/run/docker.sock:/var/run/docker.sock -it vysakh/logspoutudp -e ROUTE_URIS='logstash://localhost:5000'

docker log report

# logspout v3.2-dev-custom by gliderlabs
# adapters: udp logstash raw
# options : persist:/mnt/routes
!! bad adapter:

Tags not Working

Hey there,

I don't know if I made a mistake or not, but I passed the env variable LOGSTASH_TAGS and the tags aren't assigned:

my docker_compose:

logspout-logstash:
  ports:
  - 9999:80/tcp
  environment:
    ROUTE_URIS: logstash+tcp://logstash01.int.com:6666
    LOGSTASH_TAGS: test,test2
  labels:
    io.rancher.scheduler.global: 'true'
    io.rancher.container.pull_image: always
  tty: true
  image: dockerregistry.int.com/foobar/logspout-logstash:latest
  volumes:
  - /var/run/docker.sock:/var/run/docker.sock
  stdin_open: true

and my logstash output:

{"message":"x.x.x.x - - [26/Sep/2016:18:45:38 +0000] \"GET / HTTP/1.0\" 200 6286 \"-\" \"-\" \"-\"","stream":"stdout","docker":{"name":"/r-buttons-microservice","id":"ddd2a2dc7c00714461826f43c19ad918261ae653efcd2c679d93a6758db95318","image":"dockerregistry.int.com/foobar/docker_image-buttons-microservice:latest","hostname":"ddd2a2dc7c00"},"tags":[],"@version":"1","@timestamp":"2016-09-26T18:45:38.368Z","host":"x.x.x.x","port":59889,"type":"docker_events"}

Can somebody look into this?

Thanks

Severity tag missing

I'm not seeing the 'severity' (e.g. DEBUG, INFO, ERROR, etc) tag being included in logspout-logstash messages. Is there a configuration option I'm missing or is this not supported yet?

LOGSTASH_TAGS doesn't work

Hi,

LOGSTASH_TAGS doesn't work: getopt only returns the logspout container's env.

To get other containers' env, I think we need to use container.Config.Env. I'm new to logspout, though; could we do that?

Elasticsearch 2.0 doesn't support dot in the field name

Elasticsearch 2.0 doesn't support dot in the field name:

https://www.elastic.co/guide/en/elasticsearch/reference/current/_mapping_changes.html#_field_names_may_not_contain_dots

In logstash.go it has:

type LogstashMessage struct {
    Message  string `json:"message"`
    Name     string `json:"docker.name"`
    ID       string `json:"docker.id"`
    Image    string `json:"docker.image"`
    Hostname string `json:"docker.hostname"`
}

This will cause an issue either upgrading to or using Elasticsearch 2.0. I suggest replacing the dots with underscores or using nested fields.

need a little bit more help "!! unable to find adapter: logstash"

I am getting a bad adapter error.

My Dockerfile contains:

FROM gliderlabs/logspout:master

My modules.go contains:

package main

import (
    _ "github.com/looplab/logspout-logstash"
)

My command line is:

docker run --name="logspout" --volume=/var/run/docker.sock:/tmp/docker.sock -e ROUTE_URIS='logstash://localhost:5143'  mystuff/logspout-logstash

The message is:

# logspout v3-master-custom by gliderlabs
# adapters: logstash
# options : persist:/mnt/routes
!! unable to find adapter: logstash

Any suggestions? Thanks.

cannot make it work

run with

docker run -d \
--restart=always \
--volume=/var/run/docker.sock:/tmp/docker.sock \
-e "ROUTE_URIS=logstash://host:5505" \
my-docker-registry/logspout-logstash:latest

It starts, but nothing goes to logstash.
docker logs reports:

# logspout v3-master-custom by gliderlabs
# adapters: logstash raw tcp syslog
# options : persist:/mnt/routes
# jobs    : pump http[routes]:80
# routes  :
#   ADAPTER     ADDRESS         CONTAINERS      SOURCES OPTIONS
#   logstash    host:5505                         map[]

How can I diagnose the cause?

logstash+tcp adding extra comma to tags

I'm seeing an issue, I'm not quite sure what's up. I switched one of my environments to use the logstash+tcp functionality that recently got added and noticed that an extra comma is being added to my custom tags.

The other logspout container I'm using to forward logs isn't adding this comma which makes me believe it is the tcp functionality that is doing it. If the log doesn't hit a filter defined in my logstash config it doesn't append the extra comma.

Here's an example of what the tags column looks like:

[screenshot of the tags column]

I've tried restarting the logspout-logstash containers, reverting back to udp, and restarting logstash completely, but the problem still persists.

Apologies if this is not a logspout-logstash issue, I just wasn't sure where else to start since the only change I made was to use the tcp adapter.

Docker labels causes 400 error in logstash

Using RAW_FORMAT to include docker labels gives an error in logstash and no document is written. I really want docker labels. Any idea how to fix this?

version: '3.3'
services:
  logspout:
    build: ./
    volumes:
      # Logspout reads this in and attaches it to the log
      - /etc/hostname:/etc/host_hostname:ro
      - '/var/run/docker.sock:/tmp/docker.sock'
    environment:
      # IP and port for logstash host
      ROUTE_URIS: "logstash://some.dns:6000"
      # Include all docker labels
      DOCKER_LABELS: "true"
      # Add environment field to all logs sent to logstash
      LOGSTASH_FIELDS: "environment=${NODE_ENV}"
      RETRY_STARTUP: "true"
      RAW_FORMAT: '{ "container" : "{{ .Container.Name }}", "labels": {{ toJSON .Container.Config.Labels }}, "source" : "{{ .Source }}", "message" : {{ toJSON .Data }} }'
    command: 'raw+udp://some.dns:6000'
    deploy:
      mode: global
      resources:
        limits:
          cpus: '0.20'
          memory: 256M
        reservations:
          cpus: '0.10'
          memory: 128M
    restart: on-failure
[2021-06-28T21:18:49,754][WARN ][logstash.outputs.elasticsearch][main][ec5863da584df97e7e09b88e20aac31b53950dfc42b9a358d52a766e775ca61f] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :routing=>nil}, {"@version"=>"1", "type"=>"dockerlog", "container"=>"/logspout_logspout_1", "labels"=>{"com.docker.compose.container-number"=>"1", "com.docker.compose.project.config_files"=>"docker-compose.yaml", "com.docker.compose.version"=>"1.27.4", "com.docker.compose.oneoff"=>"False", "com.docker.compose.service"=>"logspout", "com.docker.compose.project.working_dir"=>"/home/pchost/docker/logspout", "com.docker.compose.config-hash"=>"f4c841a27a87d76eee562fc3329790dd68a1001be5e7c2376ca86313c3d2d21f", "com.docker.compose.project"=>"logspout"}, "@timestamp"=>2021-06-28T21:18:49.591Z, "host"=>"192.168.1.232", "source"=>"stdout", "message"=>"#   ADAPTER\tADDRESS\t\t\tCONTAINERS\tSOURCES\tOPTIONS"}], :response=>{"index"=>{"_index"=>"logstash", "_type"=>"_doc", "_id"=>"s-99VHoB4-ZIuBnPNYth", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [labels.com.docker.compose.project] cannot be changed from type [text] to [ObjectMapper]"}}}}

No logging if ELK stack is not fully up

I was using a dockerhub image that is a self-contained ELK stack. I was trying to run the ELK stack and Logspout-Logstash from a single docker-compose.yml.

Seems that if the ELK stack is not fully up, Logspout-Logstash never starts logging. Restarting the logspout container does correct it. So, just noting that here; it would be nice if it would do retries or something, unless perhaps it's an issue with my setup.

Here is my docker-compose file

elk:
  image: sebp/elk
  ports:
   - "5601:5601"
   - "9200:9200"
   - "5044:5044"
   - "5000:5000"
   - "5000:5000/udp"
  environment:
   - ES_HEAP_SIZE=12g
   - LS_HEAP_SIZE=12g
  volumes:
    - $HOME/dev/elk/logspout.conf:/etc/logstash/conf.d/logspout.conf
logspout:
  image: local/logspout
  container_name: logspout
  environment:
   - LOGSPOUT=ignore
   - ROUTE_URIS=logstash+tcp://<HOSTIP>:5000
  volumes:
   - /var/run/docker.sock:/tmp/docker.sock

Not possible to build the custom version

I've followed the instructions listed here and on the original logspout repository, but I appear to be having problems finding the logspout-logstash dependency (both locally and on CI)

The error I'm getting is:

modules.go:4:2: cannot find package "github.com/looplab/logspout-logstash" in any of:
	/go/src/github.com/gliderlabs/logspout/vendor/github.com/looplab/logspout-logstash (vendor tree)
	/usr/lib/go/src/github.com/looplab/logspout-logstash (from $GOROOT)
	/go/src/github.com/looplab/logspout-logstash (from $GOPATH)

I'm sadly not a go expert (or even beginner) so I'm a bit stumped.

$ go version
go version go1.10.3 linux/amd64
$ echo $GOPATH
/home/ant/go
$ cat modules.go 
package main

import (
	_ "github.com/looplab/logspout-logstash"
	_ "github.com/gliderlabs/logspout/healthcheck"
	_ "github.com/gliderlabs/logspout/adapters/raw"
	_ "github.com/gliderlabs/logspout/adapters/syslog"
	_ "github.com/gliderlabs/logspout/adapters/multiline"
	_ "github.com/gliderlabs/logspout/httpstream"
	_ "github.com/gliderlabs/logspout/routesapi"
	_ "github.com/gliderlabs/logspout/transports/tcp"
	_ "github.com/gliderlabs/logspout/transports/udp"
	_ "github.com/gliderlabs/logspout/transports/tls"
)

Logstash restart breaks logs flow

Hi,

I have configured logstash and logspout to send logs from machine A to machine B. Everything works fine, but restarting logstash breaks the logs flow. All logs are still visible using http output and logspout stdout says logstash: write udp: connection refused. That's expected: logstash is booting up. Restarting the logspout container makes it send the logs again.

How can I make logspout reconnect without restart?

Is there a way to collect multiline logs as a single event to logstash?

Is it possible to collect multiline logs, such as a java stack trace, as a single event and send it to logstash?

Logstash multiline module doc says that

If you are using a Logstash input plugin that supports multiple hosts, such as the Beats input plugin input plugin, you should not use the multiline codec to handle multiline events. Doing so may result in the mixing of streams and corrupted event data. In this situation, you need to handle multiline events before sending the event data to Logstash.

So I think it would be a wonderful thing if logspout could do this.

logstash: could not write:write tcp

I'm occasionally seeing the following error log in my logspout container, which causes it to exit.

logstash: could not write:write tcp 172.17.0.2:49194->192.168.1.178:5001: write: connection reset by peer

Port 5001 is the TCP port that my Logstash container is listening on. Any idea what is going on or if anything can be done? For now I am just using the docker run restart flag to restart the container if it dies.

Error building: ERROR: the correct import path is gopkg.in/check.v1

...
Executing busybox-1.24.2-r0.trigger
Executing ca-certificates-20160104-r2.trigger
OK: 401 MiB in 48 packages
# github.com/go-check/check
../../go-check/check/error.go:4: "ERROR: the correct import path is gopkg.in/check.v1 ... " evaluated but not used
The command '/bin/sh -c cd /src && ./build.sh "$(cat VERSION)-custom"' returned a non-zero code: 2

I get this error trying to build the image with these modules:

package main

import (
  _ "github.com/looplab/logspout-logstash"
  _ "github.com/gliderlabs/logspout/adapters/raw"
  _ "github.com/gliderlabs/logspout/adapters/syslog"
  _ "github.com/gliderlabs/logspout/httpstream"
  _ "github.com/gliderlabs/logspout/routesapi"
  _ "github.com/gliderlabs/logspout/transports/tcp"
  _ "github.com/gliderlabs/logspout/transports/udp"
  _ "github.com/gliderlabs/logspout/transports/tls"
)

write: connection refused

Hi! I am running into an issue with this error:

# logspout v3.2-dev-custom by gliderlabs
# adapters: raw tcp logstash udp syslog
# options : persist:/mnt/routes
# jobs    : http[]:80 pump routes
# routes  :
#   ADAPTER     ADDRESS         CONTAINERS      SOURCES OPTIONS
#   logstash    0.0.0.0:5000                            map[]
2016/11/17 07:15:57 logstash: could not write:write udp 127.0.0.1:34146->127.0.0.1:5000: write: connection refused 

after I start the adapter as
sudo docker run --name="logspout" --volume=/var/run/docker.sock:/var/run/docker.sock -e ROUTE_URIS=logstash://0.0.0.0:5000 c045f1a3472b

The logstash 5.0 docker container is running with this log:

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
07:04:11.987 [[main]-pipeline-manager] INFO  logstash.inputs.tcp - Automatically switching from json to json_lines codec {:plugin=>"tcp"}
07:04:11.987 [[main]<udp] INFO  logstash.inputs.udp - Starting UDP listener {:address=>"0.0.0.0:5000"}
07:04:12.003 [[main]-pipeline-manager] INFO  logstash.inputs.tcp - Starting tcp input listener {:address=>"0.0.0.0:5000"}
07:04:12.409 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["https://~hidden~:[email protected]:9243"]}}
07:04:12.410 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
07:04:13.612 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
07:04:13.705 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["fc4fba7c82d6102f5c1a224f0e9f2e9a.us-east-1.aws.found.io:9243"]}
07:04:13.710 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
07:04:13.717 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
07:04:13.802 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

I am trying to send my logs to the elastic cloud using logspout and logstash. Thanks for the help.
