logstash-logback-encoder's People

Contributors

arcnor, bkirwi, brenuart, danielredoak, dependabot[bot], fred84, guw, haus, jorgheymans, jug, kyleprager, looztra, lusis, metacubed, mfriedenhagen, msymons, neilprosser, pascalschumacher, phasebash, philsttr, psmiraglia, rdesgroppes, robsonbittencourt, rooday-doordash, schup, sullis, tetv, vbehar, withccm, worldtiki


logstash-logback-encoder's Issues

Usage on application server

Hi,

Have you tried this awesome encoder on an application server? I tried it today on JBoss EAP 6.1 with no success: logback could not be configured due to a JBoss exception (ClassNotFoundException on the configured appender).

I tried adding the jar file with the logstash encoder:

  • to the war file
  • to the jboss modules

Is it possible that there is a conflict with JBoss's internal logging system?

Add an encoder implementation to support logback-access

Logback-access is a logback library that is used to produce access logs from Java servers such as Tomcat and Jetty. It appears from its source that it introduces a new interface called IAccessEvent.

It would be good to create a special logstash-logback-access-encoder that could mimic the json_event format shown in the Logstash book (listing 5.5 from Chapter 5, Filtering Events with Logstash).

This is necessary because IAccessEvent shares no object hierarchy with ILoggingEvent...

4.0 does not reconnect to logstash

After we upgraded from 3.x to 4.0 we noticed that logstash started logging significantly less than it used to. After some preliminary investigation it looks like the logstash appender now has trouble reconnecting to logstash.
I cannot reproduce it with a local test (when I kill the server socket, the appender reconnects in 30 seconds), but in an AWS environment where ELBs are used to load-balance connections to multiple logstash servers it happens all the time.

An important observation is that there are sockets stuck in the CLOSE_WAIT state on the client machine - that is, sockets that the OS believes have already been closed by the server, but on which the client application (the logstash appender) has not yet called close().

$ sudo netstat -npt | grep 4560
tcp6       0      0 10.0.66.247:38380       10.0.140.204:4560       ESTABLISHED 1056/java       
tcp6       1      0 10.0.66.247:33733       10.0.154.223:4560       CLOSE_WAIT  1086/java       
tcp6       0      0 10.0.66.247:33735       10.0.154.223:4560       ESTABLISHED 1019/java       
tcp6       1      0 10.0.66.247:55303       10.0.160.250:4560       CLOSE_WAIT  1027/java       
tcp6       1      0 10.0.66.247:38382       10.0.140.204:4560       CLOSE_WAIT  1039/java       
tcp6       1      0 10.0.66.247:55306       10.0.160.250:4560       CLOSE_WAIT  1124/java       
tcp6       1      0 10.0.66.247:38386       10.0.140.204:4560       CLOSE_WAIT  1099/java 

We have 7 separate Java processes on that box and, as you can see, 2 of them managed to connect (ESTABLISHED) while the other 5 failed (CLOSE_WAIT).
The problem is that these will never try to reconnect because (my assumption) the appender code did not detect the problem.

capturing contextName

I was wondering if there is any way to capture the LoggerContext name. We have multiple instances of the same app running against the same log file, so it would be nice to see which app actually wrote each log entry.

I could put the name of the LoggerContext as a property in the LoggerContext itself, as below, but it seems like a hack.

log.getLoggerContext().putProperty("contextName", log.getLoggerContext().getName());

Thanks
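
A hedged sketch of that workaround using only standard logback APIs (the class and instance names below are illustrative; whether the property then shows up in the JSON output depends on the encoder version and configuration):

import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public class ContextNameExample {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Name the context and expose the name as a context property,
        // mirroring the putProperty() workaround quoted above.
        context.setName("my-app-instance-1");
        context.putProperty("contextName", context.getName());

        LoggerFactory.getLogger(ContextNameExample.class).info("started");
    }
}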

v4.2 does not reconnect to logstash in some cases

I have a problem with the LogstashTcpSocketAppender - it does not detect that the logstash service has been shut down. (This could be a duplicate of #76, but since I'm not sure, I'm creating a new issue.)

If I start my service using LogstashTcpSocketAppender without a logstash service running and then start the logstash service the logs (correctly) say:

18:14:00,786 |-WARN in net.logstash.logback.appender.LogstashTcpSocketAppender[LOGSTASH] - Log destination 10.0.0.48:4560: connection failed. Waiting 29999ms before attempting reconnection. java.net.ConnectException: Connection refused
...
18:14:30,786 |-WARN in net.logstash.logback.appender.LogstashTcpSocketAppender[LOGSTASH] - Log destination 10.0.0.48:4560: connection failed. Waiting 29999ms before attempting reconnection. java.net.ConnectException: Connection refused
...
18:19:00,788 |-INFO in net.logstash.logback.appender.LogstashTcpSocketAppender[LOGSTASH] - Log destination 10.0.0.48:4560: connection established.

However, when I then shut down the logstash service, there is nothing in the logs. I guess that is because LogstashTcpSocketAppender thinks the logstash service is still there.

One thing to add here is that I close the logstash service very abruptly - it runs in a docker container which is killed - so the socket might still look connected from the LogstashTcpSocketAppender's side.

Another thing to add is that I run on AWS with my logstash service on EC2, my services deployed via Elastic Beanstalk, communicating directly with logstash over private IPs without ELB.

How can I help to debug this?

Add support for slf4j Markers

Logback has the concept of Markers (http://slf4j.org/api/org/slf4j/Marker.html)

I would like to add support to this encoder to put the Markers into the logstash event. I was wondering what the best way to do this would be.

Should all markers just be converted to logstash tags? Should they be added as a field which is a list of markers? Should we do both?

I would be happy to implement this and issue a pull request, but I wanted to check with you what you thought would be the best way to implement it.
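
For reference, the marker-based API that the library provides in later releases looks roughly like this (a sketch; it assumes the net.logstash.logback.marker.Markers factory methods append and appendEntries, and the class and field names are illustrative):

import static net.logstash.logback.marker.Markers.append;
import static net.logstash.logback.marker.Markers.appendEntries;

import java.util.Collections;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MarkerExample {
    private static final Logger LOG = LoggerFactory.getLogger(MarkerExample.class);

    public static void main(String[] args) {
        // Adds "orderId":42 as a field of this single JSON event.
        LOG.info(append("orderId", 42), "order received");

        // Adds every map entry as a separate JSON field of the event.
        LOG.info(appendEntries(Collections.singletonMap("userId", "u-123")), "user logged in");
    }
}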

Does logstash-logback-encoder work in gretty?

I've added a FileAppender to the /resources/logback.xml file in my war project, using:

<encoder class="net.logstash.logback.encoder.LogstashEncoder" />

I have no problem running the logstash-logback-encoder from a simple java/logback/slf4j project.
The result is a JSON/LogStash formatted log file.

When Gradle runs the unit tests, before starting up farmRun (version 1.1.3), the LogstashEncoder works and the STASH16.log file is filled with JSON/Logstash-formatted data.

But when farmRun/Gretty starts running, nothing is logged to the file. No hints/trace even when I added a console appender.

Any hints?

I've also used the logbackConfigFile option. It works the same way.

Thanks!

Logstash codec for TCP: json_lines

I had problems using the TCP connector. Long messages didn't arrive (unparseable JSON), and many short messages in a short time span didn't arrive either (unparseable JSON, again).

The documentation here says to use the json codec. I found that I had to use json_lines instead (with logstash 1.4.1).

Logmessage size limit

Hi,
I get some error messages which are quite large because the complete stack trace is appended.
Unfortunately there seems to be a size limit - the maximum character count I successfully transmitted to logstash was 8704. Bigger messages are truncated, which results in a malformed JSON object that logstash cannot parse and store (you will never notice unless you put logstash into debug mode).
According to the syslog protocol spec (http://tools.ietf.org/html/rfc5424#page-9) there is a maximum message length.

It would be nice if we could find out where the size limit comes from and try to prevent malformed JSON, for example by creating a warning or some special handling for bigger messages.

Output @timestamp is using local timezone offset instead of UTC.

This is a fairly minor issue, but I'd like the generated @timestamp field to use UTC (zero offset).

The easiest approach would be to simply change the LogstashAbstractFormatter class as follows

From:

protected static final FastDateFormat ISO_DATETIME_TIME_ZONE_FORMAT_WITH_MILLIS = FastDateFormat.getInstance("yyyy-MM-dd'T'HH:mm:ss.SSSZZ");

To:

protected static final FastDateFormat ISO_DATETIME_TIME_ZONE_FORMAT_WITH_MILLIS = FastDateFormat.getInstance("yyyy-MM-dd'T'HH:mm:ss.SSSZZ", TimeZone.getTimeZone("UTC"));
Alternatively, you could provide a "useUTCOffset" flag or something similar in the LogstashFormatter and LogstashLayout and allow the user to choose similar to the includeMdc flag. Thoughts?

I'd be happy to submit a pull request for either solution.

Confusing docs

Do I need to configure LogstashTcpSocketAppender with a LogstashEncoder? And how do I use LoggingEventAsyncDisruptorAppender? It's not clear in the readme.

Marker#and is not supported

net.logstash.logback.marker.LogstashMarker provides an and method, but it is not usable because the various derived markers do not override the add method defined in slf4j-api. The add from org.slf4j.helpers.BasicMarker assumes there is a list of markers, but that is not the case for the logstash markers, so add ends up adding extra fields to its referenceList, and that list is not used when generating JSON at serialisation time.

So either and should throw an UnsupportedOperationException, or the various add methods should be implemented. It does seem, though, that several of the logstash markers would have a difficult time implementing add in a consistent way.
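
For context, this is roughly how and is intended to be used (a sketch; in the affected versions the chained fields are silently dropped at serialisation time, exactly as described above, and the class name is illustrative):

import static net.logstash.logback.marker.Markers.append;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AndExample {
    private static final Logger LOG = LoggerFactory.getLogger(AndExample.class);

    public static void main(String[] args) {
        // and() is meant to combine several field-appending markers so that
        // both "a" and "b" end up on the same JSON event.
        LOG.info(append("a", 1).and(append("b", 2)), "combined marker fields");
    }
}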

Memory issue due to connection problems

Hi,

we faced a serious memory problem with the LogstashTcpSocketAppender in connection with logback-classic. It occurs in case of connection problems to the Logstash server. The reason is the queue holding references to log events waiting to be delivered. The queue stores by default up to 10000 ILoggingEvent instances. These log events hold references to arguments passed via formatted messages like LOG.warn("Error processing {}", myBigObject);. As you can imagine, these argument objects can have really big object trees linked to them.

Besides the amount of memory blocked by these objects, it is also a garbage collection issue, because there is no chance to clean up these hard-referenced object trees. In our case the application crashed at 2000 items in the queue, allocating 6 GB of heap!

Another issue is that if these argument objects are mutable, holding them for later processing will create log messages that no longer reflect the state of the input object!

I think passing such objects to the log events as a user is OK, since it is compliant with the interface ILoggingEvent; the appender is responsible for handling them with care.

This issue is also a problem of the original logback ch.qos.logback.core.net.SocketAppender; I just want to make you aware of it!

My fix for us was to

  • encode events directly to byte arrays when they come in
  • introduce a socket timeout for sockets (I was advised that this is always very important, because there are situations where the socket really gets stuck and will never come back)

Below I attached my patch of LogstashTcpSocketAppender for (maven) release version 3.5.

Best regards,
Axel

Index: trunk/proj/src/main/java/net/logstash/logback/appender/LogstashTcpSocketAppender.java
===================================================================
diff -u -N -r8288 -r8290
--- trunk/proj/src/main/java/net/logstash/logback/appender/LogstashTcpSocketAppender.java   (.../LogstashTcpSocketAppender.java)    (revision 8288)
+++ trunk/proj/src/main/java/net/logstash/logback/appender/LogstashTcpSocketAppender.java   (.../LogstashTcpSocketAppender.java)    (revision 8290)
@@ -14,11 +14,13 @@
 package net.logstash.logback.appender;

 import java.io.BufferedOutputStream;
+import java.io.ByteArrayOutputStream;
 import java.io.IOException;
 import java.io.OutputStream;
 import java.net.ConnectException;
 import java.net.InetAddress;
 import java.net.Socket;
+import java.net.SocketTimeoutException;
 import java.net.UnknownHostException;
 import java.util.concurrent.BlockingQueue;
 import java.util.concurrent.ExecutionException;
@@ -34,7 +36,6 @@
 import ch.qos.logback.classic.net.SocketAppender;
 import ch.qos.logback.classic.spi.ILoggingEvent;
 import ch.qos.logback.core.AppenderBase;
-import ch.qos.logback.core.CoreConstants;
 import ch.qos.logback.core.encoder.Encoder;
 import ch.qos.logback.core.net.DefaultSocketConnector;
 import ch.qos.logback.core.net.SocketConnector;
@@ -54,10 +55,16 @@
  * 
  * @author <a href="mailto:[email protected]">Mirko Bernardoni</a>
  * @since 11 Jun 2014 (creation date)
+ * 
+ * This is a patched version of the original in library  net.logstash.logback/logstash-logback-encoder/3.5. We faced a memory issue because the 
+ * blocking queue was holding (a large number of) object references in the parameters array of the LoggingEvent items in case of network problems.
+ * I implemented:
+ *   - serialization of the events directly to byte[] (free them for garbage collection)
+ *   - handling of socket timeout, this causes the fallback to the connection retry loop
  */
 public class LogstashTcpSocketAppender extends AppenderBase<ILoggingEvent>
         implements Runnable, SocketConnector.ExceptionHandler {
-    
+      
     private static final PreSerializationTransformer<ILoggingEvent> PST = new LoggingEventPreSerializationTransformer();

     /**
@@ -105,7 +112,7 @@

     private int queueSize = DEFAULT_QUEUE_SIZE;

-    private BlockingQueue<ILoggingEvent> queue;
+    private BlockingQueue<byte[]> queue;

     private String peerId;

@@ -211,11 +218,13 @@

         if (errorCount == 0) {
             encoder.start();
-            queue = new LinkedBlockingQueue<ILoggingEvent>(queueSize);
+            queue = new LinkedBlockingQueue<byte[]>(queueSize);
             peerId = "remote peer " + remoteHost + ":" + port + ": ";
             task = getContext().getExecutorService().submit(this);
             super.start();
         }
+        
+        addInfo("Started patched version of " + this.getClass());
     }

     /**
@@ -241,19 +250,31 @@
             return;

         try {
-            event.prepareForDeferredProcessing();
-            if (includeCallerData) {
-                event.getCallerData();
-            }
+            if (queue.remainingCapacity() > 0) {
+                event.prepareForDeferredProcessing();
+                if (includeCallerData) {
+                    event.getCallerData();
+                }

-            final boolean inserted = queue.offer(event,
-                    eventDelayLimit.getMilliseconds(), TimeUnit.MILLISECONDS);
-            if (!inserted) {
-                addInfo("Dropping event due to timeout limit of ["
-                        + eventDelayLimit + "] being exceeded");
+                // Serialize the event to bytes to make sure to not collect
+                // references to outdated classes in its internal parameters
+                // array
+                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
+                encoderInit(outputStream);
+                this.encoder.doEncode(event);
+
+                final boolean inserted = queue.offer(outputStream.toByteArray(), eventDelayLimit.getMilliseconds(),
+                        TimeUnit.MILLISECONDS);
+                if (!inserted) {
+                    addInfo("Dropping event due to timeout limit of [" + eventDelayLimit + "] being exceeded");
+                }
+            } else {
+                addInfo("Dropping event due to full queue");
             }
         } catch (InterruptedException e) {
             addError("Interrupted while appending event to SocketAppender", e);
+        } catch (IOException e) {
+            addError("Error serializing the log event", e);
         }
     }

@@ -307,7 +328,7 @@
             return s;
         } catch (ExecutionException e) {
             return null;
-        }
+        } 
     }

     /**
@@ -320,26 +341,21 @@
         try {
             socket.setSoTimeout(acceptConnectionTimeout);
             outputStream = new BufferedOutputStream(socket.getOutputStream());
-            encoderInit(outputStream);
-            socket.setSoTimeout(0);
+
             addInfo(peerId + "connection established");
-            int counter = 0;
             while (true) {
-                ILoggingEvent event = queue.take();
-                this.encoder.doEncode(event);
+                byte[] event = queue.take();
+                outputStream.write(event);
                 outputStream.flush();
-                if (++counter >= CoreConstants.OOS_RESET_FREQUENCY) {
-                    // Failing to reset the object output stream every now and
-                    // then creates a serious memory leak.
-                    outputStream.flush();
-                    counter = 0;
-                }
             }
+        } catch (SocketTimeoutException ex) {
+            addWarn("Socket time out during event dispatch: " + ex);
         } catch (IOException ex) {
             addInfo(peerId + "connection failed: " + ex);
         } finally {
             if (outputStream != null) {
                 encoderClose(outputStream);
+                CloseUtil.closeQuietly(outputStream);
             }
             CloseUtil.closeQuietly(socket);
             socket = null;
@@ -467,7 +483,7 @@
      * @param acceptConnectionTimeout
      *            timeout value in milliseconds
      */
-    void setAcceptConnectionTimeout(int acceptConnectionTimeout) {
+    public void setAcceptConnectionTimeout(int acceptConnectionTimeout) {
         this.acceptConnectionTimeout = acceptConnectionTimeout;
     }

Using LoggingEventAsyncDisruptorAppender + includeCallerInfo doesn't work

The documentation simply states that you can use

<includeCallerInfo>true</includeCallerInfo>

Inside an encoder, but when wrapped with net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender it doesn't work.

Through some basic debugging I found that you must set the encoder with

<includeCallerInfo>true</includeCallerInfo>

And the net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender config with (note the different naming as well)

<includeCallerData>true</includeCallerData>

A complete sample config that works for me is

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<configuration debug="false" scan="true">
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator"/>
  <appender name="logstash" class="net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender">
    <includeCallerData>true</includeCallerData>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
      <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <includeCallerInfo>true</includeCallerInfo>
      </encoder>
    </appender>
  </appender>
  <root level="all">
    <appender-ref ref="logstash"/>
  </root>
</configuration>

Global custom fields with dynamic values

Apologies if this is the wrong place; I'm new here.
For instance, we have the following custom conversion that captures the user name:

%d{HH:mm:ss.SSS} {%uid} [Thread-%t] %-5level SQL - %msg%n

The resulting JSON log message does not capture %uid. I've tried using the customFields option, but it outputs static values only.

Any advice?

Thanks,
Pavan
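
A hedged suggestion, not from the original thread: since the encoder can include MDC entries as JSON fields (see the includeMdc flag mentioned in other issues here), putting the dynamic value into the MDC is one way to get it into every event. The class and key names below are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class UidLogging {
    private static final Logger LOG = LoggerFactory.getLogger(UidLogging.class);

    public static void handleRequest(String uid) {
        // With MDC inclusion enabled, MDC entries such as "uid" are emitted
        // as fields of each JSON event logged while the entry is set.
        MDC.put("uid", uid);
        try {
            LOG.info("SQL - executing query");
        } finally {
            MDC.remove("uid");
        }
    }
}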

Unable to get ERROR messages

Info and debug are coming through fine but error messages aren't being created. Could this be related to the incompatibility with logback 1.1.2?

remoteHost is resolved only once

The TCP appender resolves the hostname in its start() method and then just uses remoteAddress.

This way a running application cannot adapt to changing circumstances, such as removing one server from a DNS round robin, or Amazon doing some reconfiguration in their ELBs (which, I assume, can lead to a change of the CNAMEs to which the ELB name resolves).

I would probably do the DNS resolution in openSocket() instead. Although this may lead to a successful start with an invalid hostname, I do not really see that as a problem - in the end, it will just log a hostname resolution exception 5 times and keep trying to resolve it every 30 seconds. That is not much worse than not starting the appender at all.

Using TCP Appenders with markers

I have a Java program using the logstash encoder. I am using the FileAppender and it is working fine. I can add additional fields using the map-based markers, and log lines are produced in JSON format compatible with logstash. But when I switch the logback.xml to the TCP socket appender to send JSON log lines to a SYSLOG server, I get the following error:
java.lang.TypeNotPresentException: Type ch.qos.logback.access.spi.IAccessEvent not present
at sun.reflect.generics.factory.CoreReflectionFactory.makeNamedType(CoreReflectionFactory.java:117)
at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:125)
at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
at sun.reflect.generics.visitor.Reifier.reifyTypeArguments(Reifier.java:68)
at sun.reflect.generics.visitor.Reifier.visitClassTypeSignature(Reifier.java:138)
at sun.reflect.generics.tree.ClassTypeSignature.accept(ClassTypeSignature.java:49)
at sun.reflect.generics.repository.ClassRepository.getSuperclass(ClassRepository.java:84)
at java.lang.Class.getGenericSuperclass(Class.java:696)
at com.sun.beans.TypeResolver.prepare(TypeResolver.java:308)
at com.sun.beans.TypeResolver.resolve(TypeResolver.java:185)
at com.sun.beans.TypeResolver.resolve(TypeResolver.java:218)
at com.sun.beans.TypeResolver.resolve(TypeResolver.java:169)
at com.sun.beans.TypeResolver.resolve(TypeResolver.java:218)
at com.sun.beans.TypeResolver.resolve(TypeResolver.java:169)
at com.sun.beans.TypeResolver.resolveInClass(TypeResolver.java:81)
at java.beans.FeatureDescriptor.getReturnType(FeatureDescriptor.java:370)
at java.beans.PropertyDescriptor.findPropertyType(PropertyDescriptor.java:661)
at java.beans.PropertyDescriptor.updateGenericsFor(PropertyDescriptor.java:636)
at java.beans.Introspector.addPropertyDescriptor(Introspector.java:595)
at java.beans.Introspector.addPropertyDescriptors(Introspector.java:604)
at java.beans.Introspector.getTargetPropertyInfo(Introspector.java:457)
at java.beans.Introspector.getBeanInfo(Introspector.java:418)
at java.beans.Introspector.getBeanInfo(Introspector.java:163)
at ch.qos.logback.core.joran.util.PropertySetter.introspect(PropertySetter.java:79)
at ch.qos.logback.core.joran.util.PropertySetter.getMethod(PropertySetter.java:393)
at ch.qos.logback.core.joran.util.PropertySetter.findAdderMethod(PropertySetter.java:203)
at ch.qos.logback.core.joran.util.PropertySetter.computeAggregationType(PropertySetter.java:177)
at ch.qos.logback.core.joran.action.NestedComplexPropertyIA.isApplicable(NestedComplexPropertyIA.java:61)
at ch.qos.logback.core.joran.spi.Interpreter.lookupImplicitAction(Interpreter.java:237)
at ch.qos.logback.core.joran.spi.Interpreter.getApplicableActionList(Interpreter.java:256)
at ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:144)
at ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:129)
at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:50)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:76)
at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:68)
at com.arrayentinsights.connecteddevices.ArrayentStats.main(ArrayentStats.java:61)
Caused by: java.lang.ClassNotFoundException: ch.qos.logback.access.spi.IAccessEvent
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)

My logback.xml is like this

host 10514
  <!-- encoder is required -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" />

Any help appreciated

Using LogstashTcpSocketAppender with a non-existent/unavailable remote host causes application hang when shutdown

I plan to use LogstashTcpSocketAppender in my application to send logs to a remote Logstash server.

The problem I'm facing right now is: when the remote Logstash server is not running or has been stopped, my application won't be able to shutdown properly and the process will hang forever.

The fact is: when my application is shutting down, it tries to stop the ch.qos.logback.classic.LoggerContext, which in turn tries to stop the LogstashTcpSocketAppender and hangs forever on net.logstash.logback.encoder.com.lmax.disruptor.dsl.Disruptor.shutdown().

I've also setup a very simple test app to illustrate the problem:

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>test</groupId>
    <artifactId>test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.8</version>
        </dependency>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>4.2</version>
        </dependency>
    </dependencies>

</project>

logback.xml

<configuration>

    <!-- log to console to show the logging msg -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date [%thread] %-5level %logger{25} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- log to a non-existing remote host to show the problem -->
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <remoteHost>192.168.255.255</remoteHost>
        <port>4000</port>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="stash" />
    </root>

</configuration>

Test.java

import ch.qos.logback.classic.LoggerContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {

    private static final Logger LOGGER = LoggerFactory.getLogger("testLogger");

    public static void main(String[] args) {
        LOGGER.info("Some logs here...");

        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // executing either one will hang
//        context.reset();
        context.stop();
    }
}

Run method never gets called

I downloaded the source code, built the jar, and deployed it on an IBM Domino server.
Then I configured it to use the LogstashTcpSocketAppender.

But no logs are sent to logstash.

It seems to be an issue in the run loop. I've added some comments and it never enters the while loop. It seems to be interrupted before.

Update for post 1.2 Logstash minimalistic JSON.

1.2+ removed most @ fields and brought everything top level. Re: Jira LOGSTASH-675

"Here's my proposal of a minimal schema including only two required fields - version and timestamp.
{
"@timestamp": "2012-12-18T01:01:46.092538Z".
"@Version": 1,
}
Removes all other '@-named' fields: @source_host, @source, @source_path, @type, @tags, @message.
The previous '@fields' namespace is gone, all "event fields" are now top-level."

I would say that the encoder should be updated to output the new format; otherwise the oldlogstashjson codec must be used.

Unstopped threads with net.logstash.logback.appender.LogstashTcpSocketAppender

Everything is working fine, but some threads aren't stopped when main returns. How do I stop them? (See the sketch after the configuration below.)

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    public final static Logger LOG = LoggerFactory.getLogger(App.class);
    public static void main(String[] args) {
        LOG.info("msg x2");
    }
}
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <remoteHost>127.0.0.1</remoteHost>
        <port>1324</port>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="all">
        <appender-ref ref="stash" />
    </root>
</configuration>
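
A hedged workaround sketch (assuming the appender's worker threads stop when the logger context stops; note the earlier issue above where stop() itself can hang if the remote host is unreachable): stop the LoggerContext explicitly before main returns.

import ch.qos.logback.classic.LoggerContext;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class App {
    public final static Logger LOG = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        LOG.info("msg x2");

        // Stopping the logger context stops all appenders, including the TCP
        // appender's background worker, so the JVM can exit.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();
    }
}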

Renaming HOSTNAME field

I'd like the existing 'HOSTNAME' field output by the LogstashTcpSocketAppender to be renamed to 'host'.

I've seen the documentation for customizing field names, and I haven't figured out the incantation to change the hostname.

Is it supported?

Thanks!

missing "@" in message field in JBoss web app

This is a sample line in Jboss web app:

{"@timestamp":"2014-03-24T10:40:26.628-04:00","@Version":1,"message":"A logstash enconder test","logger_name":"com.test.delegate.DelegateOne","thread_name":"MSC service thread 1-1","level":"INFO","level_value":20000,"HOSTNAME":"L421X2","tags":null}

Appender configuration:

<appender name="STASH"
    class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.path}/${log.logstash.name}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${log.path}/%d{yyyyMMddHH}_${log.logstash.name}.gz
        </fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>

<logger name="com.test" level="DEBUG" additivity="false">
    <appender-ref ref="TEST" />
    <appender-ref ref="STASH" />
</logger>

Technical info:
-Ubuntu 13.10
-JBoss EAP 6.0
-logback 1.0.13 & slf4j 1.7.5
-logstash-logback-encoder 2.5
-Java 1.6.0_43

Using the same appender config (on the same machine) in a standalone web app works as expected:

{"@timestamp":"2014-03-24T11:05:20.451-04:00","@message":"Standalone test","@fields":{"logger_name":"com.test.Standalone","thread_name":"main","level":"INFO","level_value":20000,"caller_class_name":"com.test.Standalone","caller_method_name":"stdTest","caller_file_name":"Standalone.java","caller_line_number":96,"HOSTNAME":"L421X2"},"@tags":null}

Json arguments do not work as example

Calling

logger.info(MarkerFactory.getMarker("JSON"), "Message {}", "{\"field1\":\"value1\",\"field2\": \"value2\",\"field3\": {\"subfield1\": \"subvalue1\"}}");

should produce a json blob with

"json_message": {
"field1": "value1",
"field2": "value2",
"field3": {
"subfield1": "subvalue1"
}
}

as the appended json field; instead it produces an array of strings rather than an object with fields:

"json_message": [
"{\"field1\":\"value1\",\"field2\":\"value2\",\"field3\":{\"subfield1\":\"subvalue1\"}}"
]

Also, the tests only appear to check that the json node is inserted with the correct name, and not its sub-fields. So either my understanding is wrong, or the documentation on the front page is wrong, or this could be a bug.
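
A hedged alternative, not from the thread: if the goal is to embed an existing JSON string as a real nested object rather than an escaped string, the raw-JSON marker in net.logstash.logback.marker.Markers is one option (a sketch; the appendRaw method and its behaviour are assumptions based on later releases, and the class name is illustrative):

import static net.logstash.logback.marker.Markers.appendRaw;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RawJsonExample {
    private static final Logger LOG = LoggerFactory.getLogger(RawJsonExample.class);

    public static void main(String[] args) {
        String json = "{\"field1\":\"value1\",\"field2\":\"value2\",\"field3\":{\"subfield1\":\"subvalue1\"}}";

        // appendRaw writes the string verbatim into the JSON output under
        // "json_message", so it appears as a nested object, not as an escaped string.
        LOG.info(appendRaw("json_message", json), "Message with raw JSON payload");
    }
}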

AsyncDisruptorAppender won't shutdown when backlog can't be cleared

If the Disruptor has a backlog, the AsyncDisruptorAppender won't shut down.

AsyncDisruptorAppender calls this.disruptor.shutdown(); without a timeout, so it will wait until the backlog is empty. If the logstash host is not reachable (and never will be), the backlog will never be empty, but the Disruptor will busy-wait (!) until it is.

So I think there should be a timeout, possibly configurable.

Disruptor wait strategy causing high CPU load

Since the logstash encoder uses a disruptor/ring buffer for performance reasons, it constantly uses 20% CPU on our systems. The reason for this is the SleepingWaitStrategy used by the disruptor: a YourKit sample of 34 seconds shows 9 seconds in the disruptor (this is wall time, I think, but the parkNanos(1) call is not really the greatest option on high-resolution systems, and our application was doing nothing else).

See http://grokbase.com/t/gg/graylog2/12bw0tv8vq/disruptor-and-cpu-usage for the same issue in graylog2.

I see the WaitStrategy can be set in AsyncDisruptorAppender, but this is probably not easily done from the logback config (is it?), and I am wondering if the default wait strategy shouldn't be different.

v4.2 with SSL is not pushing events continuously

Hi,
we're trying to send our log events via an encrypted channel. The setup per se seems all right, as events do come across occasionally, but not in real time as with plain TCP.
So the behaviour after a while is this:
You generate log events. Nothing appears in ES (or in the receiving logstash's stdout). When we bounce the receiving logstash service, most of the time a whole bunch of old messages comes in. So breaking the connection fixes it temporarily, but obviously this isn't a permanent solution.

The setup:

<appender name="LOGSTASH_ERROR" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>ERROR</level>
    </filter>   
    <remoteHost>${logstash-server}</remoteHost>
    <port>5000</port>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <Pattern>
            %d{HH:mm:ss.SSS} %X{cluster} %X{req.requestURI} %X{batchsize} %X{username} - %msg%n
        </Pattern>
    </encoder>
    <!-- Enable SSL using the JVM's default keystore/truststore -->
    <ssl>
        <trustStore>
            <location>classpath:pricefx-logstash.jks</location>
            <password>pricefx</password>
        </trustStore>
    </ssl>  
</appender> 

The receiving end:

input {
  tcp {
    port => 5000
    codec => json
    type => "logs"
    ssl_cert => "/etc/logstash/tls/logstash.crt"
    ssl_key => "/etc/logstash/tls/logstash.key"
    ssl_enable => true
    data_timeout => 300
  }
}

I have already played with a few config options, but due to the lack of a clear error message I am a bit stuck. Happy to provide logs or debug dumps if needed.

Cheers
Chris

NPE when MDCPropertyMap is null

I'm getting an NPE whenever ILoggingEvent.getMDCPropertyMap() returns null. Here's the offending code:

Map<String, String> mdc = event.getMDCPropertyMap();

for (Entry<String, String> entry : mdc.entrySet()) {
    String key = entry.getKey();
    String value = entry.getValue();
    fieldsNode.put(key, value);
}

We need to add a null check before iterating through mdc. All the other (built-in) encoders I've looked at do a null check before iterating. Thanks!
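
A minimal sketch of the suggested guard (assuming fieldsNode in the snippet above is a Jackson ObjectNode; the wrapping class and method are illustrative):

import java.util.Map;

import ch.qos.logback.classic.spi.ILoggingEvent;
import com.fasterxml.jackson.databind.node.ObjectNode;

class MdcNullGuard {

    // Only iterate the MDC when getMDCPropertyMap() actually returns a map.
    static void writeMdc(ILoggingEvent event, ObjectNode fieldsNode) {
        Map<String, String> mdc = event.getMDCPropertyMap();
        if (mdc != null) {
            for (Map.Entry<String, String> entry : mdc.entrySet()) {
                fieldsNode.put(entry.getKey(), entry.getValue());
            }
        }
    }
}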

ClassNotFoundException: ch.qos.logback.core.joran.spi.ElementSelector when used with logback-classic:1.1.2

Hi! I tried using logstash-logback-encoder in a basic spring-boot 1.2.0 application, but the web deploy to Tomcat failed with the following message:

Failed to instantiate [ch.qos.logback.classic.LoggerContext]
Reported exception:
java.lang.NoClassDefFoundError: ch/qos/logback/core/joran/spi/ElementSelector
        at ch.qos.logback.classic.joran.JoranConfigurator.addInstanceRules(JoranConfigurator.java:43)
        at ch.qos.logback.core.joran.GenericConfigurator.buildInterpreter(GenericConfigurator.java:116)
        at ch.qos.logback.core.joran.JoranConfiguratorBase.buildInterpreter(JoranConfiguratorBase.java:103)
Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.joran.spi.ElementSelector
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1711)
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1556)
        ... 55 more

This might be because Spring Platform / Spring Boot comes with logback-classic 1.1.2, something in which I have little choice. Would it be possible for you to support logback-classic 1.1.2?

Ability to set syslog port

There is no ability to set the syslog port, only the host. This would be useful when logstash is running under an unprivileged user and cannot listen on the default syslog port.

Missing response headers when using LogstashAccessTcpSocketAppender

I've been having this issue, which I can't seem to sort out. I've narrowed it down to somewhere in my logback-access.xml, but with no debugging information I have no real idea what to do with this.

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="true">                                                                                                                                                                                                                  
  <statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
  <appender name="stash" class="net.logstash.logback.appender.LogstashAccessTcpSocketAppender">
    <remoteHost>logstash.coldlight.corp</remoteHost>
    <port>4561</port>
    <encoder class="net.logstash.logback.encoder.LogstashAccessEncoder">
      <fieldNames>
        <fieldsRequestHeaders>@fields.request_headers</fieldsRequestHeaders>
        <fieldsResponseHeaders>@fields.response_headers</fieldsResponseHeaders>
      </fieldNames>
    </encoder>
  </appender>

  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>access.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>access.%d{yyyy-MM-dd}.log.zip</fileNamePattern>
    </rollingPolicy>
    <encoder>
      <pattern>combined</pattern>
    </encoder>
  </appender>
  <appender-ref ref="stash" />
</configuration>

This is my file; if I comment out the part about the TCP socket appender the server starts up fine, but otherwise it dies miserably:

INFO: Starting Servlet Engine: Apache Tomcat/7.0.57
10:24:05,336 |-INFO in ch.qos.logback.access.tomcat.LogbackValve[localhost] - filename property not set. Assuming [/home/scarman/software/tomcat/conf/logback-access.xml]
10:24:05,662 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [net.logstash.logback.appender.LogstashAccessTcpSocketAppender]
10:24:05,690 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [stash]
Jan 20, 2015 10:24:05 AM org.apache.catalina.core.ContainerBase startInternal
SEVERE: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost]]
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:300)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:443)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:739)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:321)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:455)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [Pipeline[StandardEngine[Catalina].StandardHost[localhost]]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1137)
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:816)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 6 more
Caused by: org.apache.catalina.LifecycleException: Failed to start component [ch.qos.logback.access.tomcat.LogbackValve[localhost]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:185)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 9 more
Caused by: java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/util/ByteArrayBuilder
    at net.logstash.logback.encoder.LogstashAccessEncoder.<init>(LogstashAccessEncoder.java:34)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at java.lang.Class.newInstance(Class.java:438)
    at ch.qos.logback.core.joran.action.NestedComplexPropertyIA.begin(NestedComplexPropertyIA.java:122)
    at ch.qos.logback.core.joran.spi.Interpreter.callBeginAction(Interpreter.java:275)
    at ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:147)
    at ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:129)
    at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:50)
    at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:149)
    at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:135)
    at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:99)
    at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:76)
    at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:68)
    at ch.qos.logback.access.tomcat.LogbackValve.startInternal(LogbackValve.java:138)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 11 more
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.core.util.ByteArrayBuilder
    at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 29 more

Jan 20, 2015 10:24:05 AM org.apache.catalina.startup.Catalina start
SEVERE: The required Server component failed to start so Tomcat is unable to start.
org.apache.catalina.LifecycleException: Failed to start component [StandardServer[8005]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:689)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:321)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:455)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardService[Catalina]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:739)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 7 more
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina]]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:443)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 9 more
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1131)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:300)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    ... 11 more

Does anyone have any insight into what is causing this?

Also my server.xml if that helps

<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost"  appBase="webapps" unpackWARs="true" autoDeploy="true">
        <Valve className="ch.qos.logback.access.tomcat.LogbackValve"/>
      </Host>
    </Engine>
  </Service>
</Server> 

Include LogstashTcpSocketAppender example in the docs

The readme lacks a full example of using the LogstashTcpSocketAppender.

For example, how do I specify the hostname for the destination server? (Figured it out: <remoteHost>)

It would also be nice to see an example logstash.conf that accepts events from the LogstashTcpSocketAppender and outputs them to Elasticsearch.

LogstashTcpSocketAppender debugging

I am trying to use LogstashTcpSocketAppender to send logs into logstash (short: ls), but for some reason this does not work. I use Wireshark to see the packets:
Java -> [SYN] -> ls
ls -> [SYN, ACK] -> Java
Java -> [ACK] -> ls
(After 7 sec.) Java -> [FIN, ACK] -> ls
Both are on localhost and Ubuntu is used as the OS. If I use telnet to check the localhost connection, I see the data pass in Wireshark, i.e. logstash is set up correctly.

I want to see any error and warning messages (if any), but see nothing in the console. To check the output, I set up the appender without a port and still get no errors:

"< appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender" >
< RemoteHost >localhost</ RemoteHost >


</ appender >"

According to the code, all errors go via the addError() function, which is declared in ch.qos.logback.core.spi.ContextAwareBase. How do I see the errors/warnings? What could be the issue in my case?

Thank you for help.
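
One way to surface those addError()/addWarn() messages is to print logback's internal status (a sketch using standard logback APIs; adding an OnConsoleStatusListener or <configuration debug="true"> in logback.xml achieves the same thing):

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.core.util.StatusPrinter;

import org.slf4j.LoggerFactory;

public class PrintLogbackStatus {
    public static void main(String[] args) {
        // Dumps logback's internal status messages, including appender
        // connection errors reported via addError/addWarn, to the console.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        StatusPrinter.print(context);
    }
}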

shaded libraries not removed from pom

Hi,

commons-io, commons-lang and jackson are shaded, however their dependencies remain in the release pom. The default shading behaviour (I thought) is to rewrite the pom and remove these dependencies. For some reason this is not happening.

Async AccessEvent logging to OutputStreamAppenders has some issues with Jetty

The problem occurs because the Jetty web server "recycles" org.eclipse.jetty.server.Request objects: when the server is done with a given Request object, it calls org.eclipse.jetty.server.Request.recycle().

When using AccessEventAsyncDisruptorAppender, the async behavior creates a race condition when trying to read anything that the Request.recycle() method has cleared out, for example an HttpServletRequest attribute configured in a JsonEncoder as %requestAttribute{MY_PARAM}.

I haven't found a way to configure Jetty to modify its behavior to make this workable for myself while still using an async appender, so there might be no solution here.

At the very least the documentation should outline this

How to output logstash.logback.marker.Markers to console

I'm using the appendEntries() function in Markers to add custom fields to the JSON output. This works, but I'd also like this data output to the console with logback's ConsoleAppender. Is this possible? The closest I've gotten is adding a %marker token to the ConsoleAppender's encoder pattern, but that just outputs LS_MAP_FIELDS.

Thanks,

Mike

No MDC is logged

When LogstashTcpSocketAppender.append() queues an event for later processing by the background thread, the MDC context is lost. This is because the MDC context is stored in thread-local storage of the logging thread, to which the background thread does not have access.

Logback's ILoggingEvent has a method called getMDCPropertyMap() which retrieves the MDC from thread-local data and caches it in a private field, so if you call the method a second time it does not need to go to thread-local data again. This allows a quick-and-dirty fix for this issue: add event.getMDCPropertyMap() as the first line of append(). That way it retrieves and caches the MDC while still in the context of the logging thread, and the cached value is later visible to dispatchEvents(). (Or, if we know the type of the encoder used, we could ask the encoder whether it needs the MDC and only do that forced caching if it does.)

Another workaround is to have another appender request the MDC property map of the event before dispatchEvents() sees it (e.g. we could put a FileAppender writing to /dev/null as the first one, before the logstash appender). But that does not seem nice either.
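
A hedged sketch of the quick-and-dirty fix described above (it assumes append(ILoggingEvent) is overridable in the version in use; the subclass name is illustrative):

import ch.qos.logback.classic.spi.ILoggingEvent;
import net.logstash.logback.appender.LogstashTcpSocketAppender;

public class MdcCapturingTcpAppender extends LogstashTcpSocketAppender {

    @Override
    protected void append(ILoggingEvent event) {
        // getMDCPropertyMap() reads the thread-local MDC and caches it inside
        // the event, so the background dispatch thread later sees the same values.
        event.getMDCPropertyMap();
        super.append(event);
    }
}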

Pattern to replace sensitive information in the message

Hi,

I want to replace sensitive information in the log using the layout pattern's replace method. While I was using the rolling appender, I was able to use a pattern with %replace to do this:

%d{HH:mm:ss.SSS} [%X{IPAddress}] [%thread] %-5level %logger{36} - %replace(%msg){'"sensitive":"[\w\s\S]+?"', '"sensitive":"**********"'}%n

Now I am using net.logstash.logback.encoder.LogstashEncoder and I am not sure how to do the same. When I look at the JSON, the sensitive information is clearly visible.

please advise.

Regards
Kamal

Replacing line feed characters to another one

I have started using your library with another library's appender. Their socket appender replaces the linefeed character with a unicode character. However, Jackson replaces \n with \\n. Is it possible to configure this?

Custom Certificate for SSLLogstashTcpSocketAppender

Recently the SSLLogstashTcpSocketAppender (https://github.com/logstash/logstash-logback-encoder/blob/master/src/main/java/net/logstash/logback/appender/SSLLogstashTcpSocketAppender.java) was added; however, it seems that you are unable to set a keystore (or cert file) non-programmatically using an XML config, without having to mess with the global keystore for the Java app (which would obviously break other SSL connections).

Note that this may be a documentation issue (I am fairly new to configuring things non-programmatically in Java); however, typically web apps that require a key for a specific connection require you to explicitly specify cert details for that connection (in this case, you would need a config specifically for SSLLogstashTcpSocketAppender where you specify your keytool, keypass, etc.). If it's not a documentation issue, then the SSLLogstashTcpSocketAppender seems fairly useless for most use cases, as typically people use self-signed certificates to log over SSL.

There is also a Stack Overflow question for this issue, posted here: http://stackoverflow.com/questions/26906029/setting-up-jetty-runner-with-custom-cert-in-docker-build-file

UDP Appender to have pattern configurability

Referencing #72 :
"I realize it's a little confusing. I'd like to make the UDP appender be configured just like the TCP appender (and most every other logback appender). Meaning, you would have to configure a layout/encoder within the appender. However, I can't change it due to backwards compatibility issues."

How about introducing a new UDP appender that extends LogstashSocketAppender with the above functionality? I know that it hurts symmetry, but it would be really useful. We would like to customize the output with the power of CompositeJsonEncoder, yet not give up the speed of UDP.

If this is not on your roadmap, how would you suggest implementing it so that it remains in line and compatible with your future releases?
