
pino-socket's Introduction

pino.io project

Running Locally

git clone https://github.com/pinojs/pinojs
cd pinojs
npm install
npm start

pino-socket's People

Contributors

asc11cat, awwit, dependabot[bot], eomm, fdawgs, ggrossetie, guid75, jimmiehansson, jsumners, mcollina, theogravity


pino-socket's Issues

Backoff strategy should be instantiated only once and reset when the connection is established again

Currently, we instantiate a new Backoff and BackoffStrategy every time reconnect is called, via backoff.fibonacci, which is equivalent to new Backoff(new FibonacciBackoffStrategy(options)).

const retry = backoff.fibonacci()

This is problematic because when the socket is closed (for instance when we get an ECONNREFUSED), the backoff strategy is instantiated again and, as a result, the backoff value resets to its initial value, defeating the purpose of an incremental backoff.
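To illustrate the intended fix, here is a stand-in for the backoff module's behaviour (my own illustrative code, not the library's implementation): the delay state lives in a single object that is created once and reset on a successful connect, instead of being re-created on every reconnect attempt.

```javascript
// Illustrative stand-in for an incremental (Fibonacci) backoff whose state
// survives across reconnect attempts and is reset only on success.
class FibonacciBackoff {
  constructor (initialDelay = 100, maxDelay = 10000) {
    this.initialDelay = initialDelay
    this.maxDelay = maxDelay
    this.reset()
  }

  reset () {
    this.prev = 0
    this.next = this.initialDelay
  }

  nextDelay () {
    const delay = Math.min(this.next, this.maxDelay)
    const sum = this.prev + this.next
    this.prev = this.next
    this.next = sum
    return delay
  }
}

const retry = new FibonacciBackoff() // instantiated once, at startup
// on each failed connect: wait retry.nextDelay() ms, then try again
// on a successful connect: retry.reset()
```

With this shape, an outage produces growing delays (100, 100, 200, 300, ...) and a successful reconnect restarts the sequence for the next outage.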

CI is failing

See #57 and the other PRs.
I'm a bit pressed for time, so it would be awesome if somebody else could take a look.

backpressure on TCP

Currently pino-socket does not handle backpressure over TCP. It is not a big deal, but memory can grow quite a bit.

I'm not sure what we should do in that case: at some point the data will be buffered up, and we probably do not want that to happen.
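One conventional approach (a sketch, not what pino-socket currently does): honour the boolean returned by socket.write() and pause the source until the destination's 'drain' event fires.

```javascript
// Sketch of conventional TCP backpressure handling: write() returning false
// means the destination's buffer is full, so pause the source until 'drain'.
function pipeWithBackpressure (source, socket) {
  source.on('data', (chunk) => {
    if (!socket.write(chunk)) {
      source.pause() // destination buffer is full; stop reading
      socket.once('drain', () => source.resume()) // resume once it flushes
    }
  })
}
```

In a transport context the trade-off is that pausing the source delays log delivery instead of growing memory, which may or may not be the preferred failure mode.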

onSocketClose option cannot be used with ThreadStream (multistream)

Given the following code:

const pino = require('pino')

const tcpStream = pino.transport({
  level: "info",
  target: "pino-socket",
  options: {
    address: "localhost",
    port: "13370",
    mode: "tcp",
    reconnect: true,
    onSocketClose: () => { console.log('') }
  }
})

tcpStream.on('error', () => { /* ignore */ })
const pinoStreams = [
  {
    level: "debug",
    stream: pino.transport({
      target: "pino/file"
    })
  },
  {
    stream: tcpStream
  }
]

const logger = pino({
  level: "info",
}, pino.multistream(pinoStreams))

logger.info("Keep alive!")

The following exception is thrown:

    this[kPort].postMessage({
                ^

DOMException [DataCloneError]: () => { console.log('') } could not be cloned.
    at new Worker (node:internal/worker:235:17)
    at createWorker (/path/to/project/node_modules/thread-stream/index.js:50:18)
    at new ThreadStream (/path/to/project/node_modules/thread-stream/index.js:209:19)
    at buildStream (/path/to/project/node_modules/pino/lib/transport.js:21:18)
    at Function.transport (/path/to/project/node_modules/pino/lib/transport.js:110:10)
    at Object.<anonymous> (/path/to/project/index.js:3:25)
    at Module._compile (node:internal/modules/cjs/loader:1105:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
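The failure can be reproduced without pino at all: worker thread options travel through the structured clone algorithm, which rejects functions. A minimal demonstration (structuredClone has been a Node global since v17):

```javascript
// Any object containing a function fails the structured clone algorithm,
// which is the same serialization step that pino.transport options go
// through on their way to the worker thread.
const options = { onSocketClose: () => {} }

try {
  structuredClone(options)
} catch (err) {
  console.log(err.name) // DataCloneError
}
```

This is why only primitive, serializable values can appear in transport options.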

Unable to get udp working

I have the following test code:

'use strict';

const pino = require('pino');
const transports = pino.transport({
  targets: [
    {
      target: 'pino-socket',
      options: {
        address: '127.0.0.1',
        port: 5140,
        mode: 'udp',
      },
    },
  ],
});
const logger = pino(transports);

logger.error('Test message');

If I start up netcat to dump the messages, I don't get anything:

$ nc -klu 5140

However, if I switch mode to tcp and use netcat to dump tcp messages, the log shows up:

$ nc -kl 5140
{"level":50,"time":1632855456630,"pid":216531,"hostname":"workstation","msg":"Test message"}

I'm using these versions:

    "pino": "^7.0.0-rc.7",
    "pino-socket": "^4.0.0",

Thanks!

Emit a socketClose event instead of defining onSocketClose callback

As mentioned in #77, the onSocketClose callback function cannot be serialized across worker thread boundaries.

I think it would be better to emit an event on the stream with an optional Error:

 outputStream.emit('socketClose', hadError && socketError)

To be consistent, we should probably do the same when we cannot reconnect:

retry.on('fail', (err) => {
  outputStream.emit('reconnectFailure', err) // or socketReconnectFailure?
})


Unix socket example

The Issue

I'm trying to use pino-socket as a transport to log to syslog via /dev/log on my Ubuntu 22.04 system using node 18.

import pino from 'pino';

const logger = pino({
  level: 'info',
  transport: {
    targets: [
      {
        level: 'trace',
        target: 'pino-socket',
        options: {
          unixsocket: '/dev/log',
        },
      },
      { level: 'trace', target: 'pino-pretty', options: {} },
    ],
  },
});

logger.info(`pino logger instantiated `);

export { logger };

Here's the output I see in my console:

[15:46:00.992] INFO (501708): pino logger instantiated 
Error: send ECONNREFUSED
    at doSend (node:dgram:716:16)
    at defaultTriggerAsyncIdScope (node:internal/async_hooks:464:18)
    at afterDns (node:dgram:662:5)
    at Socket.send (node:dgram:672:5)
    at Writable.write [as _write] (/home/ty/Documents/rtr/rtr-webapps/node_modules/pino-socket/lib/UdpConnection.js:28:16)
    at doWrite (node:internal/streams/writable:411:12)
    at clearBuffer (node:internal/streams/writable:572:7)
    at onwrite (node:internal/streams/writable:464:7)
    at processTicksAndRejections (node:internal/process/task_queues:82:21)

And of course no logs show up when I run tail -f /var/log/syslog | grep pino

winston-syslog seems to work

But I would much rather use pino

import { config, createLogger, transports } from 'winston';
import { Syslog, SyslogTransportOptions } from 'winston-syslog';

const opt: SyslogTransportOptions = {
  protocol: 'unix',
  path: '/dev/log',
  facility: 'user',
  localhost: '',
  app_name: 'my-great-app',
};

const Logger = createLogger({
  levels: config.syslog.levels,
  transports: [new transports.Console()],
});
const sysLogTransport = new Syslog(opt);
Logger.add(sysLogTransport);

Logger.info(`Logger instantiated `);
export { Logger };

I get this line in /var/log/syslog :

May 10 15:59:32 my-ubuntu-4 my-great-app[504122]: {"level":"info","message":"Logger instantiated "}

The Ask

I'm sure I'm doing something wrong. Could you please provide an example?

Allow to configure the backoff strategy using primitive data types

Since functions cannot be serialized across worker thread boundaries, the following won't work when using pino-socket with multistream/ThreadStream:

const socketStream = pino.transport({
    target: "pino-socket",
    options: {
        address: tcpAddress,
        port: tcpPort,
        mode: "tcp",
        reconnect: true,
        reconnectTries: 2,
        backoffStrategy: new ExponentialStrategy({})
    }
})

I think it might be useful to configure the backoff strategy using primitive data types.
The backoff library provides two backoff strategies:

  • exponential
  • fibonacci (default)

Exponential backoff has the following options:

  • randomisationFactor: defaults to 0, must be between 0 and 1
  • initialDelay: defaults to 100 ms
  • maxDelay: defaults to 10000 ms
  • factor: defaults to 2, must be greater than 1

Fibonacci backoff has the following options:

  • randomisationFactor: defaults to 0, must be between 0 and 1
  • initialDelay: defaults to 100 ms
  • maxDelay: defaults to 10000 ms

I think we can keep the backoffStrategy option for when you want to pass a custom strategy and/or when you are not using pino-socket with multistream/ThreadStream.

I propose to add the following options:

  • backoffStrategyName (string): Name of the backoff strategy, either exponential or fibonacci (default fibonacci)
  • backoffStrategyOptions (object): Backoff strategy options

If both backoffStrategyName/backoffStrategyOptions AND backoffStrategy are defined then backoffStrategy will prevail (i.e., backoffStrategyName and backoffStrategyOptions will be ignored).
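To make the difference between the two strategies concrete, here is an illustrative computation of the delay sequences (my own code approximating the backoff library's documented semantics, ignoring randomisationFactor):

```javascript
// Illustrative only: the first few delays each strategy would produce.
function delays (name, { initialDelay = 100, maxDelay = 10000, factor = 2 } = {}, count = 5) {
  const out = []
  if (name === 'exponential') {
    let delay = initialDelay
    for (let i = 0; i < count; i++) {
      out.push(Math.min(delay, maxDelay))
      delay *= factor
    }
  } else { // fibonacci (the default)
    let prev = 0
    let cur = initialDelay
    for (let i = 0; i < count; i++) {
      out.push(Math.min(cur, maxDelay))
      const sum = prev + cur
      prev = cur
      cur = sum
    }
  }
  return out
}

// delays('exponential') -> [ 100, 200, 400, 800, 1600 ]
// delays('fibonacci')   -> [ 100, 100, 200, 300, 500 ]
```

Both sequences are capped by maxDelay, so the choice mainly affects how quickly retries back off early in an outage.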

Support DNS

It seems this library only works with IP addresses, not hostnames. Could we get hostname/DNS support?

Uncaught exception 'send EMSGSIZE'

If we try to send a UDP packet that exceeds the OS MTU limit, we get the uncaught exception 'Error: send EMSGSIZE' from dgram.

To prevent this from happening, we could use a custom limit that can be defined manually in the transport options. I created a draft of this limit in #90, but I also see a possible problem with this approach:

As can be seen in the test case 'udp packet overflow - skip packet when using max packet size option', before sending another packet we either need to wait or flush the transport for the test to work correctly; otherwise no packet is received at all and the test fails on timeout. As far as I can tell, this happens because multiple logs can be joined into one buffer and passed to socket.send() together (later processed internally by dgram?).

Therefore, if we skip one packet, that can theoretically cancel the send of others that were already joined by then. So we need to know whether there is a way to drop only the 'overflow' packet, and most importantly, to do it without modifying logic outside of the pino-socket transport. We also must keep in mind that we are operating on a stream of data.

Testcase that reproduces this error - here

I would appreciate any suggestions on how we should handle this, so I can work on the optimal fix
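The basic shape of such a guard is small (a sketch; maxUdpPacketSize is a hypothetical option name, and this sidesteps rather than solves the joined-buffer problem described above):

```javascript
// Skip oversized datagrams instead of letting dgram throw EMSGSIZE.
function writeUdp (socket, data, maxUdpPacketSize, callback) {
  if (Buffer.byteLength(data) > maxUdpPacketSize) {
    // drop the packet; a warning could be surfaced here instead
    return callback()
  }
  socket.send(data, callback)
}
```

The open question from the issue remains: when several log lines have already been coalesced into one buffer, this check sees (and drops) the whole buffer, not just the offending line.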

Possibility to use transport as a destination (Lambda context)

Hi, we are trying to send our logs from a Lambda function directly to our Logstash.
From what I read, transports are designed to run in a separate thread for performance reasons.
In Lambda we only have one process.
So I was thinking that maybe this transport could be split into a pino destination or a transport.
I may be completely misled, but I don't see how else we could achieve our Lambda logging...
Any advice would be welcome.

CLI seems broken

Since version 4.0.0, ./node_modules/.bin/pino-socket -h prints nothing.

It's probably due to the following code fragment in psock.js

if (require.main === module) {
  // used as cli
  cli()
}

The if expression is always false because psock.js is never directly called from the CLI.

[Feature idea] Introduce a FIFO queue with a max size to store data when the TCP socket write is unsuccessful

The goal is to avoid "losing" data/logs when the TCP server is unavailable for a short period of time. We should support a configurable max size to prevent unbounded memory growth (i.e., exhausting the memory).

To give you an idea of what I'm after, here's a naive implementation:

write (data, encoding, callback) {
  socket.write(data, encoding, (err) => {
    if (err) {
      // unable to write data, will try later when the server becomes available again
      queue.push({data, encoding})
    }
  })
  callback()
}

And in the reconnect/retry:

retry.on('ready', () => {
  connect((err) => {
    if (connected === false) {
      return retry.backoff(err)
    } else if (!queue.isEmpty()) {
      const items = queue.items()
      while (items.length > 0) {
        const { data, encoding } = items.shift()
        outputStream.write(data, encoding)
      }
    }
  })
})
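The bounded queue itself could be as simple as the following sketch. Dropping the oldest entry on overflow is one possible policy; dropping the newest incoming entry is equally defensible, depending on which logs matter more.

```javascript
// Bounded FIFO matching the queue API used in the naive implementation
// above (push / isEmpty / items).
class BoundedQueue {
  constructor (maxSize) {
    this.maxSize = maxSize
    this._items = []
  }

  push (item) {
    if (this._items.length >= this.maxSize) {
      this._items.shift() // full: drop the oldest entry
    }
    this._items.push(item)
  }

  isEmpty () {
    return this._items.length === 0
  }

  items () {
    return this._items.splice(0) // drain the queue
  }
}
```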

Is `close` a valid Writable stream option?

I may be reading this incorrectly but wanted to double-check with an issue before opening a PR.

Looking at the creation of the Writable stream in both the TCP and UDP connections I am wondering if the correct option is being used here:

https://github.com/pinojs/pino-socket/blob/master/lib/TcpConnection.js#L44

https://github.com/pinojs/pino-socket/blob/master/lib/UdpConnection.js#L17

It does not appear that close is a valid option for a Writable stream.
https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_new_stream_writable_options

Are these meant to be destroy or final?

Thank you for any thoughts or clarifications.

hanging process on uncaughtException

var pino = require('pino')()

pino.info('hello world');

setTimeout(() => {
  throw new Error('ohhhhh');
}, 2000);

// process.on('uncaughtException', (err) => {
//   pino.fatal({ err });
//   process.exit(1)
// });

pino:

node testPino.js | pino
[2016-08-10T09:06:22.675Z] INFO (6869 on adrianos-mbp.dom01.kaba.grp): hello world
[2016-08-10T09:06:24.679Z] FATAL (6869 on adrianos-mbp.dom01.kaba.grp):
    err: {}

pino-socket:

this hangs forever (or until ^C)

node testPino.js | pino-socket -a localhost -p 1234 -m tcp -ne

Data lost on reconnect even with recovery enabled

Hello! I'm using pino-socket to transport data to Logstash, and everything works great except that it seems to disconnect after about 15 minutes of inactivity. Enabling reconnect fixed the need to reconnect after those 15 minutes, but the first message that triggers the reconnect never gets retried and never makes it to Logstash. Any thoughts on how I can fix this?

pino v8.11.0
pino-socket v7.3.0

Pino Transport Config

const logstashTransport = pino.transport({
    target: 'pino-socket',
    options: {
        address: lsHost,
        port: lsPort,
        mode: 'tcp',
        reconnect: true,
        recovery: true,
    },
})

logstashTransport.on('socketError', (err: Error) => {
    console.log('socketError', err)
})
logstashTransport.on('socketClose', (err: Error) => {
    console.log('socketClose', err)
})
logstashTransport.on('open', (address: string) => {
    console.log('open', address)
})

Logs

Feb 28 02:43:31 PM      message: "END: [POST] /api/functions/example"
Feb 28 03:17:33 PM  socketError Error: read ECONNRESET
Feb 28 03:17:33 PM      at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
Feb 28 03:17:33 PM  socketClose Error: read ECONNRESET
Feb 28 03:17:33 PM      at TCP.onStreamRead (node:internal/stream_base_commons:217:20)
Feb 28 03:17:33 PM  open { address: 'some.ip.addr', family: 'IPv4', port: 43230 }
Feb 28 03:17:33 PM      log.level: "info"
Feb 28 03:17:33 PM      @timestamp: "2023-02-28T20:17:32.881Z"
Feb 28 03:17:33 PM      process: {
Feb 28 03:17:33 PM        "pid": 81
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      host: {
Feb 28 03:17:33 PM        "hostname": "srv-123"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      ecs: {
Feb 28 03:17:33 PM        "version": "1.6.0"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      service: {
Feb 28 03:17:33 PM        "name": "render-server"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      event: {
Feb 28 03:17:33 PM        "dataset": "render-server.log"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      trace: {
Feb 28 03:17:33 PM        "id": "0b3fdd23de1b96bd9b0ef9b07dcd319c"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      transaction: {
Feb 28 03:17:33 PM        "id": "bf0b2258bb3dc3f0"
Feb 28 03:17:33 PM      }
Feb 28 03:17:33 PM      message: "START: [POST] /api/functions/example"
Feb 28 03:17:34 PM      log.level: "info"
Feb 28 03:17:34 PM      @timestamp: "2023-02-28T20:17:34.022Z"
Feb 28 03:17:34 PM      process: {
Feb 28 03:17:34 PM        "pid": 81
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      host: {
Feb 28 03:17:34 PM        "hostname": "srv-123"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      ecs: {
Feb 28 03:17:34 PM        "version": "1.6.0"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      service: {
Feb 28 03:17:34 PM        "name": "render-server"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      event: {
Feb 28 03:17:34 PM        "dataset": "render-server.log"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      trace: {
Feb 28 03:17:34 PM        "id": "0b3fdd23de1b96bd9b0ef9b07dcd319c"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      transaction: {
Feb 28 03:17:34 PM        "id": "bf0b2258bb3dc3f0"
Feb 28 03:17:34 PM      }
Feb 28 03:17:34 PM      message: "END: [POST] /api/functions/example"

NOTE: The line after open never makes it to Logstash but the next message (starting at Feb 28 03:17:34 PM) does.

Logs are not sent if the TCP server wasn't available when the logger was initialized

It seems that pino-socket can reconnect to a TCP server (using the reconnect option), but if the TCP server wasn't available when the logger (multistream?) was initialized, the server won't receive any data.

I've created a simple reproduction case because it's hard to explain but relatively straightforward to reproduce: https://github.com/Mogztter/pino-socket-reconnect

My observations:

  • pino-socket successfully reconnects (connects) to the TCP server and stops emitting connect ECONNREFUSED 127.0.0.1:13370 as soon as the server is started
  • stream.write(data) in multistream is called
  • ThreadStream.write is called but writeSync is not; we keep calling nextFlush
  • the write function defined on the outputStream in TcpConnection is never called

Hopefully, I didn't miss something obvious 😅

Syslog Format Support

It seems this tool blindly passes stdin to the UDP port without formatting messages for syslog - is that correct?

Syslog format: <PRIO>TIMESTAMP HOST MESSAGE

When we feed pino-socket output into logstash it is saying the format is invalid.

How to send to fluentbit directly in application / no stdout piping

Article for use case
Fluent Bit is a very lightweight, zero-dependency log collector; its best use case is collecting logs at the edge, tagging them, and flushing them at intervals to the next level of log aggregation.

How can I pass logs through to Fluent Bit via pino?
Here's my file storage method:

import { createWriteStream, existsSync, mkdirSync } from 'fs';
import { tmpdir } from 'os';

import * as pino from 'pino';

export default () => {
  const dir = `${tmpdir()}/logs`;
  if (!existsSync(dir)) {
    mkdirSync(dir);
  }
  const stream = createWriteStream(`${dir}/aasaam-app.log`, {
    flags: 'a+',
  });
  const logger = pino(stream);
  return logger;
};

For the example above I can use Fluent Bit's input-tail plugin, but it would be awesome if logs could go directly to the Fluent Bit TCP server 😄

I need to do this from the script (not via terminal piping): how can I send JSON logs to the Fluent Bit TCP server?
https://fluentbit.io/documentation/0.14/input/tcp.html
