
Comments (18)

BrunoBonacci commented on September 4, 2024

Hi Peter,

it is certainly possible, and fairly easy, to do both: to send traditional logs to µ/log, or to send µ/log events to traditional logging systems like Log4j/Logback/Timbre etc. Obviously, there is no point in doing both at the same time, otherwise you could create a spiralling loop between the two loggers.

I thought long about the possibility of including an appender for traditional log systems, or a backend for SLF4J, to publish existing logs into µ/log, and I'm not too sure there is value in doing so.
The essential problem with traditional loggers is that the fundamental logging unit is a message, and such a message should contain the information, meaningful in a specific context, that the operator knows how to decode.

The fundamental problem of decoding the message into data points that a machine can aggregate is the job that tools like Logstash and Splunk perform with varying degrees of efficiency and error. Instead, if you take the humans out of the picture, you should be able to quantitatively analyze and react to telemetry events without the need for decoding the message.

Certainly, µ/log can do the heavy lifting of collecting and shipping your traditional logs to ElasticSearch directly, but you still need to decode the message.
My recommendation is to use µ/log to add telemetry with the mindset that no human will be reading the raw data. The process should therefore be to instrument the system in such a way that you send the data you want to have, rather than collecting the data you simply happen to have.

To make a (contrived) analogy: it is like people who migrate their datacenters to the cloud by simply lifting off the existing software and machines and expecting it to be cheaper and faster. To really leverage the cloud you should design for the cloud. Similarly, to really leverage µ/log you should design for data-first telemetry.

I personally keep the two systems separate as I don't want to pollute the good-quality telemetry that I get from µ/log instrumentation with a flood of meaningless and hard-to-analyse log messages. I consider traditional logs a sort of technical debt.

I personally use this ElasticSearch appender, and index the data separately.

However, if you want to write an appender that takes traditional logs and adds them into µ/log, you could do this with two different approaches:

  1. pick a backend like Log4j or Logback and write an appender (for example a Log4j2 appender; a Logback sketch follows below)
  2. write a backend for SLF4J (a good starting point could be SLF4J-simple)

In either case, make sure that when you log events into µ/log, all arguments are immutable Clojure values and not mutable Java objects.
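
For approach 1, here is a minimal sketch of such a bridge written against Logback, assuming logback-classic is on the classpath. The namespace name and the install! helper are hypothetical; only u/log and the standard Logback classes are real:

(ns mulog-bridge.logback
  (:require [com.brunobonacci.mulog :as u])
  (:import [ch.qos.logback.classic Logger LoggerContext]
           [ch.qos.logback.classic.spi ILoggingEvent]
           [ch.qos.logback.core AppenderBase]
           [org.slf4j LoggerFactory]))

(defn mulog-appender
  "Returns a Logback appender that republishes every logging event
  into µ/log. Only immutable values (strings/keywords) are passed on."
  []
  (proxy [AppenderBase] []
    (append [^ILoggingEvent event]
      (u/log ::log-event
             :log/logger  (.getLoggerName event)
             :log/level   (keyword (.. event getLevel toString toLowerCase))
             :log/message (.getFormattedMessage event)))))

(defn install!
  "Attaches the µ/log appender to the root logger, alongside any
  appenders already configured."
  []
  (let [^LoggerContext ctx (LoggerFactory/getILoggerFactory)
        ^Logger root       (.getLogger ctx Logger/ROOT_LOGGER_NAME)
        appender           (doto (mulog-appender)
                             (.setContext ctx)
                             (.start))]
    (.addAppender root appender)))

Calling (install!) at startup attaches the bridge alongside any appenders you already have, so the existing logging behaviour is unchanged.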

Either way, if you need to extract/decode the message and produce more meaningful data events, the approach I would take would be to send the traditional logging to µ/log, then configure µ/log to send the logs to Kafka or Kinesis, have a streaming application that decodes the messages into datapoints, and finally index the datapoints into ElasticSearch:

YourAPP -> SLF4j -> Log4J2 -> µ/log -> Kafka-topic -> streaming-decoder -> kafka-topic -> Kafka-connect -> ElasticSearch
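
For reference, the µ/log-to-Kafka hop in the middle of that pipeline is a single publisher configuration, assuming the mulog-kafka module is on the classpath (the broker address below is illustrative):

;; requires the com.brunobonacci/mulog-kafka dependency
(u/start-publisher!
  {:type  :kafka
   ;; illustrative broker address; topic and serialization are configurable
   :kafka {:bootstrap.servers "kafka1:9092"}})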

Whether you choose to keep them separate and use the Logback appender listed above, or you choose to unify them and write your own appender, I would be interested in hearing your feedback on how it works for you, and I would be glad to help.

xificurC commented on September 4, 2024

Hi Bruno,

I haven't forgotten about this, there's just too much work on my plate at the moment. It's on my todo list though :)

Here's what I'd boil your answer down to:

  • you think of mulog output as something that can be easily scripted to search, aggregate, monitor, alert etc.
  • you think the textual logs of the traditional loggers don't provide enough value to include them

It's the second point that has no correct answer - for some users it might provide benefits, even if it would be considered an anti-pattern. For others it would just mess up their data.

In Elasticsearch it's relatively easy to find anything with a good query, but that's the human, non-scripted way. Still, once you find those queries by hand you can reuse them, unless the log message changes.

If there was an SLF4J bridge I'd just try that out and see for myself. This way I'd need to implement it, for which I'd need to get acquainted with the SLF4J interfaces. Once the bridge is done it could be useful for others as well, though. I'll see if I find some time to tinker with it. Otherwise I'll just keep the setup as-is and try to slowly move my stuff into mulog.

BrunoBonacci commented on September 4, 2024

Hi Peter,

the value proposition of µ/log is to provide telemetry that can be queried, and from which you can extract value, in ways other tools won't allow.
For example, if you instrument only your webservice request handlers with µ/log you could ask the following questions:

  • how many requests I've received in the last week
  • how many requests by day/hour/minute/second
  • how many requests by user over time
  • how many requests by endpoint over time
  • how many requests were failures (4xx or 5xx)
  • of the failed requests, how many were for a specific endpoint
  • which users issued the failing requests
  • how do they differ from the successful requests
  • what's the latency distribution of the successful requests vs the failed requests
  • which content-type/content-encoding was used
  • what's the distribution of the failures by host/jvm
  • what are the JVM metrics (GC/memory/etc.) of the failing hosts during that time
  • what's the breakdown of the latencies between internal processing and external connections (db queries, caches, etc.)

...and much more. All this from one single, good log instrumentation (see the sketch below).
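
As a sketch of what that single instrumentation point could look like, here is a minimal Ring middleware built on µ/trace; the namespace and the :http/* keys are illustrative choices, not a prescribed schema:

(ns myapp.middleware
  (:require [com.brunobonacci.mulog :as u]))

(defn wrap-telemetry
  "Ring middleware: wraps each request in a µ/trace, which records
  the duration and outcome of the call plus the given key/value pairs."
  [handler]
  (fn [request]
    (u/trace ::http-request
      [:http/method (:request-method request)
       :http/uri    (:uri request)]
      (handler request))))

µ/trace times the body and records failures automatically, and it can also capture attributes of the result (such as the response status) via its :capture option.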

Elasticsearch is an amazing tool to slice and dice the data the way you need. Combined with the richness of the µ/log data, it is hardly comparable to blunt log messages without contextual information (see the examples here, which don't mean anything on their own).

I can understand that you can still draw benefit from old logging for a variety of reasons, so I would say let's try to implement it and see what you get out of it. I would suggest using a concrete logger rather than the facade (SLF4J).
If you pick a concrete logger like Log4j or Logback, in most cases you only need to implement one interface and 1-2 methods, and record the data into µ/log.
The other advantage is that adding an appender won't affect what you are currently doing. So you can keep writing the logs to the filesystem and have Filebeat send them over to Logstash, and at the same time try the new appender to µ/log, without one interfering with the other. Then, if it works the way you expect, you can drop the console appender and just keep the µ/log appender.

With SLF4J it is more complicated because you can't have more than one bridge active at the same time. So you would need to commit to your new implementation all at once. I think that is riskier.

xificurC commented on September 4, 2024

Thanks. I understand the value proposition, as I have already partially implemented this on top of timbre. My logs are written as JSON lines with additional keys, so I was already able to reason about my logs better. The rest of the services usually log rich text which is parsed in logstash into structured data. Some services are planning to log JSON lines as well. mulog's way is far simpler and richer though, and a natural extension to what I was already bolting on top of timbre.

Thanks for the log4j/logback tip, I'll see what to pick then.

BrunoBonacci commented on September 4, 2024

Great, if you use Timbre adding a µ/log appender is a "one-liner".

Here is an example:

(ns timbre-mulog.core
  (:require [taoensso.timbre :as timbre]
            [com.brunobonacci.mulog :as u]
            [clojure.string :as str]))


;; init μ/log global context (maybe add hostname and pid)
(u/set-global-context! {:app-name "myapp" :version "0.2.0" :env "local"})
;; init μ/log publisher
(u/start-publisher! {:type :console :pretty? true})
;; init timbre
(timbre/set-config!
  {:level :debug
   :appenders
   {:println (timbre/println-appender {:stream :auto})
    ;; example μ/log appender
    :mulog {:enabled? true
            :fn (fn [{:keys [level ?ns-str vargs] :as d}]
                  (u/log ::log-event :level level :mulog/namespace ?ns-str :message (str/join " " vargs)))}
    }})

;; now when you log something via timbre
(timbre/info "Test" "message")

;; the console output will look like this
20-07-09 12:49:43 host.lan INFO [timbre-mulog.core:24] - Test message
{:mulog/event-name :timbre-mulog.core/log-event,
 :mulog/timestamp 1594298983538,
 :mulog/trace-id #mulog/flake "4X-LTcCx50c3x2YeJCNK6zMXnRLetnS9",
 :mulog/namespace "timbre-mulog.core",
 :app-name "myapp",
 :env "local",
 :level :info,
 :message "Test message",
 :version "0.2.0"}

xificurC commented on September 4, 2024

Yeah, if I'm to use mulog I wanted to get rid of timbre too, to drop yet another layer. It is getting a bit off topic so just briefly - the mssql jdbc driver uses java.util.logging and I was unable to force it to log at debug level. The j.u.l -> slf4j -> timbre path just somehow didn't work for me. One needs a master's degree in java logging to get these things going :)

BrunoBonacci commented on September 4, 2024

I see, so probably the easiest is to clone slf4j-timbre and replace timbre with µ/log.

BrunoBonacci commented on September 4, 2024

I will close this issue for now. If you need help with the SLF4J backend feel free to open another issue.
If you create a backend and you are happy to publish it as open source, please put a link here; I would be happy to link it in the official documentation.

xificurC commented on September 4, 2024

Sure thing. I am preoccupied with other, more burning features right now, but I have already chosen some lower-priority ones to solve with mulog, so I'll definitely start using it. And once I start using it I'll revisit this single-logging-library conundrum.

xificurC commented on September 4, 2024

Hi Bruno,

since I wanted to get a little more familiar with SLF4J anyway (it does take a master's degree to log in Java), I found this to be a good exercise.

https://gitlab.com/nonseldiha/slf4j-mulog

clj -Sdeps '{:deps {com.brunobonacci/mulog {:mvn/version "0.3.1"} nonseldiha/slf4j-mulog {:mvn/version "0.2.0"}}}'
(require '[com.brunobonacci.mulog :as u])
(u/start-publisher! {:type :console})
(def logger (org.slf4j.LoggerFactory/getLogger "TEST"))
(.info logger "hi")
=> {:mulog/trace-id #mulog/flake "4XMgpDmW0Dwjke6MNF2kLya4OAFAxPoh", :mulog/timestamp 1595942042220, :mulog/event-name :TEST/log, :mulog/namespace "org.slf4j.impl.mulog", :log/message "hi", :log/level "info"}

There's a clj namespace org.slf4j.impl.mulog with a var set-level! that sets the minimum logging level one would like to get logged as an event:

(org.slf4j.impl.mulog/set-level! :debug) ; now logger.debug gets logged too

The default is :info.

Please let me know your thoughts, especially on the default info level and the keywords in the final log event (:log/level, :log/message and the event name). Once we clear up the details I will write up the README for the library.

BrunoBonacci commented on September 4, 2024

Hi @xificurC,

That's awesome!!!

  • as :mulog/namespace I would use the logger name as-is (a string)
  • as :mulog/event-name you could use a fixed :log/message
  • you could add :mulog/origin :slf4j-mulog (this is a convention when events are added by a library)
  • :log/level makes more sense as a keyword
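
Putting those suggestions together, a published event would look roughly like this (a sketch based on the earlier example; the values are illustrative):

{:mulog/event-name :log/message
 :mulog/timestamp  1595942042220
 :mulog/trace-id   #mulog/flake "4XMgpDmW0Dwjke6MNF2kLya4OAFAxPoh"
 :mulog/namespace  "TEST"          ;; the logger name, as-is
 :mulog/origin     :slf4j-mulog    ;; convention for library-added events
 :log/level        :info           ;; a keyword rather than a string
 :log/message      "hi"}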

Regarding the

(org.slf4j.impl.mulog/set-level! :debug)

it makes sense to have it there too, as some logging could be very noisy, but remember that you can always filter it in µ/log as well, as follows:

(u/start-publisher!
  {:type :console
   ;; :transform is (fn [events]) -> events
   ;; `where` comes from the where library: (require '[where.core :refer [where]])
   :transform (partial filter (where :log/level :in? [:warning :error :critical]))})

This example uses the where library, but you can write any filter predicate you want.

When you have it ready, let me know and I will add a link to it from the µ/log documentation.

xificurC commented on September 4, 2024

Hi Bruno,

:mulog/namespace came from mulog, not me. If you're merging in the correct order I will be able to change it, otherwise not.

The rest I understand and will incorporate. Thank you for your feedback!

BrunoBonacci commented on September 4, 2024

Yes you can overwrite it!

xificurC commented on September 4, 2024

good :) I'll probably follow up tomorrow then!

xificurC commented on September 4, 2024

I incorporated the changes in 0.2.1 and will try to whip up the readme tomorrow. Will let you know.

xificurC commented on September 4, 2024

Hi Bruno, I finished the first readme, here's the link again - https://gitlab.com/nonseldiha/slf4j-mulog

Awaiting your review.

BrunoBonacci commented on September 4, 2024

Hi @xificurC,

that's great. If I can make an observation, I would put the section named "So what should I do?" after you have shown them how to use it, but your point is spot-on with the whole idea of µ/log.

Finally, although SLF4J has the ability to support MDC, clojure.logging doesn't. However, if you are using the with-context macro, the context will be propagated to the logs of slf4j-mulog as well, so maybe there is no need to add support for it.
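
For illustration, reusing the logger from the earlier REPL session, a µ/log local context set around a legacy logging call flows into the bridged event (the context key is illustrative):

(u/with-context {:request-id "abc-123"}
  ;; this SLF4J call is handled by slf4j-mulog, so the resulting
  ;; µ/log event will also carry :request-id "abc-123"
  (.info logger "processing request"))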

I'm adding a section in the readme for your slf4j-mulog, excellent work!

xificurC commented on September 4, 2024

Hi Bruno,

If I can make an observation, I would put the section named "So what should I do?" after you have shown them how to use it

done.

Finally, although SLF4J has the ability to support MDC, clojure.logging doesn't. However, if you are using the with-context macro, the context will be propagated to the logs of slf4j-mulog as well, so maybe there is no need to add support for it.

I was actually thinking of the other way around: if someone's been using MDC or Markers in their Java codebase, they might like these concepts to propagate to mulog. Maybe that's the minority of the minority though :)

I'm adding a section in the readme for your slf4j-mulog, excellent work!

Thank you for your time, feedback, and the mention.
