manigandham / serilog-sinks-fastconsole

Serilog sink that writes to console with high-performance non-blocking output.

License: MIT License

Languages: C# 99.19%, Batchfile 0.81%
Topics: serilog-sink, serilog, logging, manigandham


serilog-sinks-fastconsole's People

Contributors

astryia, manigandham, sungam3r


serilog-sinks-fastconsole's Issues

Cannot provide ITextFormatter directly

With the current implementation I can only provide a different template via "outputTemplate", but for my project I need to pass an ElasticsearchJsonFormatter directly. For the regular Console sink it looks like this:
.WriteTo.Console(formatter: new ElasticsearchJsonFormatter(inlineFields: true, formatStackTraceAsArray: false))
It would be nice to be able to provide a formatter directly, since "outputTemplate" does not really work for me.
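
For reference, the desired configuration could look like the sketch below. The formatter parameter on FastConsole is hypothetical here and simply mirrors the standard console sink's signature; it is not an existing overload of this package.

using Serilog;
using Serilog.Formatting.Elasticsearch;

// Hypothetical API sketch: a FastConsole overload accepting an ITextFormatter,
// mirroring the standard console sink. Not part of the current package API.
var logger = new LoggerConfiguration()
    .WriteTo.FastConsole(formatter: new ElasticsearchJsonFormatter(
        inlineFields: true,
        formatStackTraceAsArray: false))
    .CreateLogger();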

Provide Option 'restrictedToMinimumLevel'

  • Useful for, e.g., logging only warnings to the console and all debug output to a file.
  • The standard console sink already supports this:
let logger =
        LoggerConfiguration()
            .MinimumLevel.Debug()
            .WriteTo.File(logfileName, shared = true, restrictedToMinimumLevel = LogEventLevel.Debug)
            .WriteTo.Console(restrictedToMinimumLevel = LogEventLevel.Warning)
            .CreateLogger()

Combination With File Logger Results in No Output

No console output:

 let logger =
        LoggerConfiguration()
            .MinimumLevel.Debug()
            .Destructure.FSharpTypes()
            .WriteTo.File(logfileName, shared = true, restrictedToMinimumLevel = LogEventLevel.Debug)
            .WriteTo.FastConsole()
            .CreateLogger()

Also, no console output at all:

 let logger =
        LoggerConfiguration()
            .MinimumLevel.Debug()
            .Destructure.FSharpTypes()
            .WriteTo.FastConsole()
            .WriteTo.File(logfileName, shared = true, restrictedToMinimumLevel = LogEventLevel.Debug)
            .CreateLogger()

Console output only without File logger:

 let logger =
        LoggerConfiguration()
            .MinimumLevel.Debug()
            .Destructure.FSharpTypes()
            .WriteTo.FastConsole()
            .CreateLogger()

The normal console sink works in combination with the file sink in both the first and second scenarios.


Incomplete log output

Description

The sink produces missing or partial console output when the pending log data is smaller than the buffer of the StreamWriter used to write to the console's stdout stream.

Expected behavior

Each log message should appear in the console in its entirety shortly after it has been generated by the application.

Actual behavior

The last line in the console output contains a partial log message. Generating more log messages completes the line.

Steps to reproduce

  1. Start the DemoApp
  2. Notice that no output appears in the console
  3. Send an HTTP GET request to http://localhost:5000
  4. Notice that the last line in the console output shows an incomplete log message
  5. Send another GET request to http://localhost:5000
  6. The incomplete log message now appears in its entirety

The screenshot below shows an example of how it might look in step 4:

[Screenshot: serilog-sinks-fastconsole incomplete log output]

Known workarounds

Generating a constant stream of log messages does not expose this bug.
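
For context, a minimal sketch of the kind of fix this implies, assuming the sink writes through a buffered StreamWriter over the process stdout stream (the names and setup below are assumptions, not the sink's actual code): either enable AutoFlush, or flush once the pending queue has been drained, so a short burst of events never sits in a partially filled buffer.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;

// Sketch only: the writer setup and method name are assumptions, not the sink's real implementation.
var stdout = Console.OpenStandardOutput();
var writer = new StreamWriter(stdout, Encoding.UTF8, bufferSize: 4096)
{
    // Option 1: flush after every write so short bursts never linger in the buffer.
    AutoFlush = true
};

// Option 2: if AutoFlush is too costly, flush once after draining the pending queue.
async Task WriteBatchAsync(IEnumerable<string> renderedEvents)
{
    foreach (var line in renderedEvents)
        await writer.WriteLineAsync(line);

    await writer.FlushAsync(); // pushes any partially filled buffer out to the console
}

await WriteBatchAsync(new[] { "example log line" });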

Getting proof of how fast this sink is

Hey @manigandham,
First of all, thanks for the great package; this is amazing work. But can I ask for some benchmarks to prove that this sink is faster than the standard one? I think it would be useful to put them in the description, and it would answer many questions. Right now everyone who wants to evaluate this sink has to benchmark it on their own, so it would be nice to have proof in the package description.

P.S. Personally, I have had trouble benchmarking this package because it behaves very strangely in my benchmarks: it outputs too many log entries and sometimes gets stuck.
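
As a possible starting point (an assumed setup, not an official benchmark shipped with the package), a BenchmarkDotNet comparison of the two sinks could look like the sketch below. Note that FastConsole queues events to a background writer, so per-call numbers mostly measure enqueue cost rather than end-to-end output, and redirecting Console.Out may not affect a sink that writes to the raw stdout stream.

using System;
using System.IO;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Serilog;

// Assumed benchmark setup for comparing the standard console sink with FastConsole.
[MemoryDiagnoser]
public class ConsoleSinkBenchmarks
{
    private ILogger _standard = null!;
    private ILogger _fast = null!;

    [GlobalSetup]
    public void Setup()
    {
        // Redirect Console.Out so terminal rendering does not dominate the measurement
        // (assumption: the standard sink honors this; FastConsole may write to stdout directly).
        Console.SetOut(TextWriter.Null);

        _standard = new LoggerConfiguration().WriteTo.Console().CreateLogger();
        _fast = new LoggerConfiguration().WriteTo.FastConsole().CreateLogger();
    }

    [Benchmark(Baseline = true)]
    public void StandardConsole() => _standard.Information("Hello {Name}, number {Number}", "world", 42);

    [Benchmark]
    public void FastConsole() => _fast.Information("Hello {Name}, number {Number}", "world", 42);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ConsoleSinkBenchmarks>();
}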

Add option to block if queue is full?

It would be nice to have an option to block when the log queue is full, to avoid losing log messages when blocking is preferred. Like this:

    public void Emit(LogEvent logEvent)
    {
        if (_options.BlockWhenFull)
        {
            // Block the caller until the queue has room so no events are dropped.
            _writeQueue.Writer.WriteAsync(logEvent).GetAwaiter().GetResult();
        }
        else
        {
            // Drop the event silently if the queue is full.
            _writeQueue.Writer.TryWrite(logEvent);
        }
    }

`BlockWhenFull=True` quickly produces a lot of thread-pool threads when blocked

With BlockWhenFull=True, when many tasks try to write logs simultaneously and QueueLimit is exhausted, the number of threads grows rapidly and massively.
It looks like new thread creation is not even being throttled by the thread pool (not sure, though).

At the same time, this is not the case for Serilog.Sinks.Async (with the same settings), which uses a BlockingCollection.

I think this line of code is the reason.

I also ran some experiments that support my suggestion:

using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading.Channels;

var lockObject = new object();
var blockingCollection = new BlockingCollection<int>(10);
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(10) { SingleReader = true });

var cts = new CancellationTokenSource();
cts.CancelAfter(30000);

// Spawn producers as fast as possible for 30 seconds; nothing consumes the queue,
// so every producer blocks once the bounded capacity (10) is reached.
while (!cts.IsCancellationRequested)
{
    Task.Run(() =>
    {
        // Uncomment exactly one line to measure the thread count for that strategy:
        //channel.Writer.WriteAsync(0).AsTask().GetAwaiter().GetResult(); // Result - 147 threads
        //blockingCollection.Add(0);  // Result - 52 threads
        //lock (lockObject) { channel.Writer.WriteAsync(0).AsTask().GetAwaiter().GetResult(); } // Result - 56 threads
        //var j = 1 + 1; // Result - 15 threads
    });
}
Console.WriteLine(Process.GetCurrentProcess().Threads.Count);

Why is this a problem? (just in case)

  1. A thread is an expensive OS resource with its own ID, stack, context, etc.
  2. The total number of threads can be limited in some environments (OpenShift: 1024). Exceeding this limit can cause the application to crash.
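
For comparison, here is a sketch of the BlockingCollection-based strategy mentioned above (the approach Serilog.Sinks.Async takes); the type and member names are illustrative, not the sink's actual implementation.

using System.Collections.Concurrent;
using Serilog.Events;

// Illustrative sketch: blocking on a BlockingCollection keeps the producer on its
// current thread instead of doing a sync-over-async wait on a channel write,
// which matches the experiment above (52 threads vs 147).
public sealed class BlockingQueueSketch
{
    private readonly BlockingCollection<LogEvent> _queue;
    private readonly bool _blockWhenFull;

    public BlockingQueueSketch(int queueLimit, bool blockWhenFull)
    {
        _queue = new BlockingCollection<LogEvent>(queueLimit);
        _blockWhenFull = blockWhenFull;
    }

    public void Emit(LogEvent logEvent)
    {
        if (_blockWhenFull)
        {
            // Blocks the calling thread until the bounded queue has room.
            _queue.Add(logEvent);
        }
        else
        {
            // Returns false and drops the event when the queue is full.
            _queue.TryAdd(logEvent);
        }
    }
}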
