pinojs / pino-socket
🌲 A transport for sending pino logs to network sockets
Currently, we instantiate a new Backoff and BackoffStrategy every time reconnect is called, via backoff.fibonacci, which is equivalent to new Backoff(new FibonacciBackoffStrategy(options)).
pino-socket/lib/TcpConnection.js
Line 95 in 4d79b0e
This is problematic because when the socket is closed (for instance, when we get an ECONNREFUSED), the backoff strategy is instantiated again and, as a result, the backoff value is reset to its initial value, defeating the purpose of using an incremental backoff.
See #57 and the other PRs.
I'm a bit strapped for time, so it would be awesome if somebody else could take a look.
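A self-contained sketch (no external dependencies, illustrative names only, not the real backoff library) of why this matters: a minimal Fibonacci strategy whose delay only grows if the same instance is reused across failures.

```javascript
// Minimal Fibonacci backoff for illustration: each call returns the next delay.
function fibonacciBackoff (initialDelay = 100) {
  let prev = 0
  let curr = initialDelay
  return {
    next () {
      const delay = curr
      ;[prev, curr] = [curr, prev + curr]
      return delay
    }
  }
}

// Reusing one instance: the delay grows across consecutive failures.
const strategy = fibonacciBackoff(100)
const growing = [strategy.next(), strategy.next(), strategy.next()] // 100, 100, 200

// Re-instantiating on every reconnect (the current behaviour): the delay
// always starts over at the initial value.
const resetting = [fibonacciBackoff(100).next(), fibonacciBackoff(100).next()] // 100, 100
```

This is why the fix should create the Backoff once and only reset it after a successful connection.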
pino-abstract-transport uses split2 to read NDJSON. However, this module does not require NDJSON, as it needs to be compatible with pino-syslog.
Currently pino-socket does not handle backpressure over TCP. It's not a big deal, but memory might grow quite a bit.
I'm not sure what we should do in that case: at some point the data will be buffered up, and we probably do not want that to happen.
We should also be able to specify a "sliding window" of data that we can buffer while there is no connection.
Given the following code:
const pino = require('pino')

const tcpStream = pino.transport({
  level: "info",
  target: "pino-socket",
  options: {
    address: "localhost",
    port: "13370",
    mode: "tcp",
    reconnect: true,
    onSocketClose: () => { console.log('') }
  }
})
tcpStream.on('error', () => { /* ignore */ })

const pinoStreams = [
  {
    level: "debug",
    stream: pino.transport({
      target: "pino/file"
    })
  },
  {
    stream: tcpStream
  }
]

const logger = pino({
  level: "info",
}, pino.multistream(pinoStreams))

logger.info("Keep alive!")
The following exception is thrown:
this[kPort].postMessage({
^
DOMException [DataCloneError]: () => { console.log('') } could not be cloned.
at new Worker (node:internal/worker:235:17)
at createWorker (/path/to/project/node_modules/thread-stream/index.js:50:18)
at new ThreadStream (/path/to/project/node_modules/thread-stream/index.js:209:19)
at buildStream (/path/to/project/node_modules/pino/lib/transport.js:21:18)
at Function.transport (/path/to/project/node_modules/pino/lib/transport.js:110:10)
at Object.<anonymous> (/path/to/project/index.js:3:25)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
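The error above can be reproduced without pino at all: transport options travel to the worker via postMessage, which uses the structured clone algorithm, and that algorithm cannot clone functions. A minimal demonstration (Node 17+, where structuredClone is a global):

```javascript
// Returns null when the options object could cross a worker-thread boundary,
// or the error name when it could not.
function tryClone (options) {
  try {
    structuredClone(options)
    return null
  } catch (err) {
    return err.name
  }
}

tryClone({ address: 'localhost', port: 13370, mode: 'tcp' }) // null: plain data clones fine
tryClone({ onSocketClose: () => { console.log('') } })       // 'DataCloneError'
```

The practical workaround is to keep only plain data in options and attach listeners (e.g. for socketClose or socketError) on the stream returned by pino.transport instead of passing callbacks.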
I have the following test code:
'use strict';

const pino = require('pino');

const transports = pino.transport({
  targets: [
    {
      target: 'pino-socket',
      options: {
        address: '127.0.0.1',
        port: 5140,
        mode: 'udp',
      },
    },
  ],
});

const logger = pino(transports);
logger.error('Test message');
If I start up netcat to dump the messages, I don't get anything:
$ nc -klu 5140
However, if I switch mode to tcp and use netcat to dump TCP messages, the log shows up:
$ nc -kl 5140
{"level":50,"time":1632855456630,"pid":216531,"hostname":"workstation","msg":"Test message"}
I'm using these versions:
"pino": "^7.0.0-rc.7",
"pino-socket": "^4.0.0",
Thanks!
The test suite should be converted to tap to be consistent with other organization modules.
As mentioned in #77, the onSocketClose callback function cannot be serialized across worker thread boundaries.
I think it would be better to emit an event on the stream with an optional Error:
outputStream.emit('socketClose', hadError && socketError)
To be consistent, we should probably do the same when we cannot reconnect:
retry.on('fail', (err) => {
  outputStream.emit('reconnectFailure', err) // or socketReconnectFailure?
})
I'm trying to use pino-socket as a transport to log to syslog via /dev/log on my Ubuntu 22.04 system using Node 18.
import pino from 'pino';

const logger = pino({
  level: 'info',
  transport: {
    targets: [
      {
        level: 'trace',
        target: 'pino-socket',
        options: {
          unixsocket: '/dev/log',
        },
      },
      { level: 'trace', target: 'pino-pretty', options: {} },
    ],
  },
});

logger.info(`pino logger instantiated `);

export { logger };
Here's the output I see in my console:
[15:46:00.992] INFO (501708): pino logger instantiated
Error: send ECONNREFUSED
at doSend (node:dgram:716:16)
at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
at afterDns (node:dgram:662:5)
at Socket.send (node:dgram:672:5)
at Writable.write [as _write] (/home/ty/Documents/rtr/rtr-webapps/node_modules/pino-socket/lib/UdpConnection.js:28:16)
at doWrite (node:internal/streams/writable:411:12)
at clearBuffer (node:internal/streams/writable:572:7)
at onwrite (node:internal/streams/writable:464:7)
at processTicksAndRejections (node:internal/process/task_queues:82:21)
And of course no logs show up when I run tail -f /var/log/syslog | grep pino
winston-syslog seems to work, but I would much rather use pino:
import { config, createLogger, transports } from 'winston';
import { Syslog, SyslogTransportOptions } from 'winston-syslog';

const opt: SyslogTransportOptions = {
  protocol: 'unix',
  path: '/dev/log',
  facility: 'user',
  localhost: '',
  app_name: 'my-great-app',
};

const Logger = createLogger({
  levels: config.syslog.levels,
  transports: [new transports.Console()],
});

const sysLogTransport = new Syslog(opt);
Logger.add(sysLogTransport);
Logger.info(`Logger instantiated `);

export { Logger };
I get this line in /var/log/syslog:
May 10 15:59:32 my-ubuntu-4 my-great-app[504122]: {"level":"info","message":"Logger instantiated "}
I'm sure I'm doing something wrong. Could you please provide an example?
Since functions cannot be serialized across worker thread boundaries, the following won't work when using pino-socket with multistream/ThreadStream:
const { ExponentialStrategy } = require('backoff')

const socketStream = pino.transport({
  target: "pino-socket",
  options: {
    address: tcpAddress,
    port: tcpPort,
    mode: "tcp",
    reconnect: true,
    reconnectTries: 2,
    backoffStrategy: new ExponentialStrategy({})
  }
})
I think it might be useful to configure the backoff strategy using primitive data types. The backoff library provides two backoff strategies:
- exponential
- fibonacci (default)
Exponential backoff has the following options:
Fibonacci backoff has the following options:
I think we can keep the backoffStrategy option for when you want to pass a custom strategy and/or when you are not using pino-socket with multistream/ThreadStream.
I propose to add the following options:
- backoffStrategyName (string): name of the backoff strategy, either exponential or fibonacci (default: fibonacci)
- backoffStrategyOptions (object): backoff strategy options
If both backoffStrategyName/backoffStrategyOptions AND backoffStrategy are defined, then backoffStrategy will prevail (i.e., backoffStrategyName and backoffStrategyOptions will be ignored).
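A sketch of the proposed usage (hypothetical option names from this proposal, not a released API; initialDelay and maxDelay are example strategy options, not a confirmed schema). Because every value is a primitive or a plain object, the options survive serialization to the worker thread:

```javascript
// Hypothetical options object for the proposal above.
const options = {
  address: 'localhost',
  port: 13370,
  mode: 'tcp',
  reconnect: true,
  reconnectTries: 2,
  backoffStrategyName: 'exponential', // 'exponential' | 'fibonacci' (default)
  backoffStrategyOptions: { initialDelay: 100, maxDelay: 10000 }
}

// Unlike a strategy instance, this round-trips through serialization intact.
const roundTripped = JSON.parse(JSON.stringify(options))
```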
It seems this library only works with IP addresses and not hostnames. Could we get Hostname/DNS support?
If we try to send a UDP packet that is over the OS MTU limit, we will face an uncaught exception, 'Error: send EMSGSIZE', from dgram.
To prevent this from happening, we could use some kind of custom limit that can be defined manually in the transport options. I created a draft of this limit in #90, but I also see a possible problem with this approach:
As can be seen in the test case 'udp packet overflow - skip packet when using max packet size option', before sending another packet we either need to wait or flush the transport for the test to work correctly; otherwise no packet is received at all and the test fails on a timeout. As far as I can see, this happens because multiple logs can be joined into one buffer and passed to socket.send() together (later processed internally by dgram?).
Therefore, if we skip one packet, this can theoretically cancel the send of others that were already joined by that time. So we need to know whether there is a way to remove only the 'overflow' packet and, most importantly, to do it without modifying logic outside of the pino-socket transport. We also must keep in mind that we are operating on a stream of data.
A test case that reproduces this error is here.
I would appreciate any suggestions on how we should handle this, so I can work on the optimal fix.
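A minimal sketch of the guard idea (hypothetical helper name; the actual draft lives in #90): decide per datagram whether it would exceed the configured limit before handing it to socket.send().

```javascript
// Returns true when the datagram should be skipped because it exceeds the
// user-configured maximum packet size (0 or undefined disables the check).
function shouldSkipPacket (buf, maxPacketSize) {
  return Boolean(maxPacketSize) && buf.length > maxPacketSize
}

shouldSkipPacket(Buffer.alloc(9000), 1400) // true: larger than a typical MTU budget
shouldSkipPacket(Buffer.alloc(100), 1400)  // false: small enough to send
shouldSkipPacket(Buffer.alloc(9000), 0)    // false: limit disabled
```

Note this does not solve the batching problem described above: if several log lines were already joined into one buffer, skipping that buffer drops all of them.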
Hi, we are trying to send our logs from a Lambda function directly to our Logstash.
From what I read, transports are designed to run as a separate thread for performance purposes.
In Lambda we only have one process.
So I was thinking that maybe this transport could be split into a pino destination or a transport.
I may be completely mistaken, but I don't see how else we could achieve our Lambda logging...
Any advice would be welcome.
Since version 4.0.0, ./node_modules/.bin/pino-socket -h returns nothing.
It's probably due to the following code fragment in psock.js:
if (require.main === module) {
// used as cli
cli()
}
The if expression is always false because psock.js is never the directly invoked entry module when run from the CLI.
The goal is to avoid "losing" data/logs when the TCP server is unavailable (for a short period of time). We should configure a max size to prevent unsafe unbounded storage (i.e. exhausting the memory).
To give you an idea of what I'm after, here's a naive implementation:
write (data, encoding, callback) {
  socket.write(data, encoding, (err) => {
    if (err) {
      // unable to write data, will try again later when the server becomes available
      queue.push({ data, encoding })
    }
  })
  callback()
}
And in the reconnect/retry:
retry.on('ready', () => {
  connect((err) => {
    if (connected === false) {
      return retry.backoff(err)
    } else if (!queue.isEmpty()) {
      const items = queue.items()
      while (items.length > 0) {
        const { data, encoding } = items.shift()
        outputStream.write(data, encoding)
      }
    }
  })
})
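The naive queue above is unbounded. A sketch of the max-size idea (illustrative class, not part of the module): drop the oldest entries once the cap is hit so memory stays bounded while the TCP server is unavailable.

```javascript
// FIFO queue with a hard cap: when full, the oldest entry is dropped.
class BoundedQueue {
  constructor (maxSize) {
    this.maxSize = maxSize
    this.entries = []
  }

  push (entry) {
    if (this.entries.length >= this.maxSize) this.entries.shift() // drop oldest
    this.entries.push(entry)
  }

  isEmpty () { return this.entries.length === 0 }
  shift () { return this.entries.shift() }
}

const queue = new BoundedQueue(2)
queue.push({ data: 'log 1' })
queue.push({ data: 'log 2' })
queue.push({ data: 'log 3' }) // 'log 1' is dropped to stay under the cap
```

Dropping the oldest entries favours recent logs; dropping the newest instead would be an equally valid policy and could itself be an option.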
I may be reading this incorrectly but wanted to double-check with an issue before opening a PR.
Looking at the creation of the Writable stream in both the TCP and UDP connections I am wondering if the correct option is being used here:
https://github.com/pinojs/pino-socket/blob/master/lib/TcpConnection.js#L44
https://github.com/pinojs/pino-socket/blob/master/lib/UdpConnection.js#L17
It does not appear that close is a valid option for a Writable stream:
https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_new_stream_writable_options
Are these meant to be destroy or final?
Thank you for any thoughts or clarifications.
var pino = require('pino')()

pino.info('hello world');

setTimeout(() => {
  throw new Error('ohhhhh');
}, 2000);

// process.on('uncaughtException', (err) => {
//   pino.fatal({ err });
//   process.exit(1)
// });
pino:
node testPino.js | pino
[2016-08-10T09:06:22.675Z] INFO (6869 on adrianos-mbp.dom01.kaba.grp): hello world
[2016-08-10T09:06:24.679Z] FATAL (6869 on adrianos-mbp.dom01.kaba.grp):
err: {}
pino-socket:
this hangs forever (or until ^C)
node testPino.js | pino-socket -a localhost -p 1234 -m tcp -ne
Hello! I'm using pino-socket to transport data to Logstash, and everything works great except that it seems to disconnect after about 15 minutes of no use. Enabling reconnect fixed the need to reconnect after those 15 minutes, but the first message that triggers the reconnect never gets retried and never makes it to Logstash. Any thoughts on how I can fix this?
pino v8.11.0
pino-socket v7.3.0
Pino Transport Config
const logstashTransport = pino.transport({
  target: 'pino-socket',
  options: {
    address: lsHost,
    port: lsPort,
    mode: 'tcp',
    reconnect: true,
    recovery: true,
  },
})

logstashTransport.on('socketError', (err: Error) => {
  console.log('socketError', err)
})
logstashTransport.on('socketClose', (err: Error) => {
  console.log('socketClose', err)
})
logstashTransport.on('open', (address: string) => {
  console.log('open', address)
})
Logs
Feb 28 02:43:31 PM message: "END: [POST] /api/functions/example"
Feb 28 03:17:33 PM socketError Error: read ECONNRESET
Feb 28 03:17:33 PM at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
Feb 28 03:17:33 PM socketClose Error: read ECONNRESET
Feb 28 03:17:33 PM at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
Feb 28 03:17:33 PM open { address: 'some.ip.addr', family: 'IPv4', port: 43230 }
Feb 28 03:17:33 PM log.level: "info"
Feb 28 03:17:33 PM @timestamp: "2023-02-28T20:17:32.881Z"
Feb 28 03:17:33 PM process: {
Feb 28 03:17:33 PM "pid": 81
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM host: {
Feb 28 03:17:33 PM "hostname": "srv-123"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM ecs: {
Feb 28 03:17:33 PM "version": "1.6.0"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM service: {
Feb 28 03:17:33 PM "name": "render-server"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM event: {
Feb 28 03:17:33 PM "dataset": "render-server.log"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM trace: {
Feb 28 03:17:33 PM "id": "0b3fdd23de1b96bd9b0ef9b07dcd319c"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM transaction: {
Feb 28 03:17:33 PM "id": "bf0b2258bb3dc3f0"
Feb 28 03:17:33 PM }
Feb 28 03:17:33 PM message: "START: [POST] /api/functions/example"
Feb 28 03:17:34 PM log.level: "info"
Feb 28 03:17:34 PM @timestamp: "2023-02-28T20:17:34.022Z"
Feb 28 03:17:34 PM process: {
Feb 28 03:17:34 PM "pid": 81
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM host: {
Feb 28 03:17:34 PM "hostname": "srv-123"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM ecs: {
Feb 28 03:17:34 PM "version": "1.6.0"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM service: {
Feb 28 03:17:34 PM "name": "render-server"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM event: {
Feb 28 03:17:34 PM "dataset": "render-server.log"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM trace: {
Feb 28 03:17:34 PM "id": "0b3fdd23de1b96bd9b0ef9b07dcd319c"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM transaction: {
Feb 28 03:17:34 PM "id": "bf0b2258bb3dc3f0"
Feb 28 03:17:34 PM }
Feb 28 03:17:34 PM message: "END: [POST] /api/functions/example"
NOTE: The line after open never makes it to Logstash, but the next message (starting at Feb 28 03:17:34 PM) does.
It seems that pino-socket can reconnect to a TCP server (using the reconnect option), but if the TCP server wasn't available when the logger (multistream?) was initialized, then the server won't receive any data.
I've created a simple reproduction case because it's hard to explain but relatively straightforward to reproduce: https://github.com/Mogztter/pino-socket-reconnect
My observations:
- pino-socket successfully reconnects to the TCP server and stops logging connect ECONNREFUSED 127.0.0.1:13370 as soon as the server is started
- stream.write(data) in multistream is called
- ThreadStream.write is called, but writeSync is not; we keep calling nextFlush
- the write function defined on the outputStream in TcpConnection is never called
Hopefully, I didn't miss something obvious!
It seems this tool blindly passes stdin to the UDP port without formatting messages for syslog. Is that correct?
Syslog format: <PRIO>TIMESTAMP HOST MESSAGE
When we feed pino-socket output into Logstash, it says the format is invalid.
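That matches my reading: pino-socket forwards its input verbatim. A sketch of the RFC 3164-style framing a syslog receiver expects (hand-rolled for illustration; in practice, piping through pino-syslog before pino-socket adds this header):

```javascript
// Builds '<PRIO>TIMESTAMP HOST MESSAGE', where PRI = facility * 8 + severity.
function toSyslogLine ({ facility, severity, timestamp, host, message }) {
  const prio = facility * 8 + severity
  return `<${prio}>${timestamp} ${host} ${message}`
}

// facility 1 (user) and severity 6 (info) give PRI 14.
toSyslogLine({
  facility: 1,
  severity: 6,
  timestamp: 'May 10 15:59:32',
  host: 'my-ubuntu-4',
  message: '{"level":30,"msg":"hello"}'
})
// => '<14>May 10 15:59:32 my-ubuntu-4 {"level":30,"msg":"hello"}'
```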
Article for the use case:
Fluent Bit is a very lightweight, zero-dependency log collector. Its best use case is to collect logs at the edge, tag them, and flush them at intervals to the next level of log aggregation.
How can I pass logs through to Fluentd via pino?
Here's my file storage method:
import { createWriteStream, existsSync, mkdirSync } from 'fs';
import { tmpdir } from 'os';
import * as pino from 'pino';

export default () => {
  const dir = `${tmpdir()}/logs`;
  if (!existsSync(dir)) {
    mkdirSync(dir);
  }
  const stream = createWriteStream(`${dir}/aasaam-app.log`, {
    flags: 'a+',
  });
  const logger = pino(stream);
  return logger;
};
For the example above I can use the tail input plugin for Fluent Bit, but it would be awesome if logs could go directly to the Fluent Bit TCP server.
I need to do this from a script (not using terminal piping). How do I send JSON logs to the Fluent Bit server?
https://fluentbit.io/documentation/0.14/input/tcp.html
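One possible wiring (a sketch under assumptions: a Fluent Bit tcp input listening on 127.0.0.1:5170, its documented default port, with JSON parsing enabled; pino already emits the newline-delimited JSON that input can parse):

```javascript
// Sketch: point pino-socket at the Fluent Bit TCP input instead of a file.
const transportConfig = {
  target: 'pino-socket',
  options: {
    address: '127.0.0.1', // Fluent Bit host (assumed)
    port: 5170,           // default port of the Fluent Bit `tcp` input (assumed)
    mode: 'tcp'
  }
}

// Usage (requires pino v7+ transports):
//   const pino = require('pino')
//   const logger = pino(pino.transport(transportConfig))
//   logger.info('goes straight to Fluent Bit')
```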
I think this could possibly be used with the DataDog TCP log API. One thing that would be required is the ability to prepend the API key to each log message.
Having a callback such as onBeforeWriteMessage(msg: Buffer), with the return type being the message to send out (Buffer?), would allow me to do this kind of manipulation.
Is it possible to exclude test data from the final package, to decrease the amount of downloaded data and avoid shipping test code and test data to production?
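One common way to do this (a sketch; the exact file list for this package is an assumption) is an explicit files allow-list in package.json, so npm publishes only what is listed plus the standard files:

```json
{
  "files": [
    "lib/",
    "index.js",
    "psock.js"
  ]
}
```

README, LICENSE, and package.json are always included by npm, so test directories and fixtures are then excluded automatically; an .npmignore file is the alternative approach.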
pino-socket exits gracefully even if the underlying node process exited with an error.
Is it possible to propagate the exit status so that we can recognize a failure externally?