alibaba / ilogtail

Fast and Lightweight Observability Data Collector

Home Page: https://ilogtail.gitbook.io/ilogtail-docs

License: Apache License 2.0

Makefile 0.10% Go 37.16% Dockerfile 0.16% C 1.20% Shell 0.80% HTML 0.03% Starlark 0.08% Python 0.03% CMake 1.43% C++ 58.87% Batchfile 0.13% TSQL 0.02%
observability aliyun cloud-native apm sls ebpf

ilogtail's Introduction

Alibaba iLogtail - Fast and Lightweight Observability Data Collector | Chinese User Manual

ilogtail logo

iLogtail was built for observability scenarios. It offers many production-grade features, such as a lightweight footprint, high performance, and automated configuration, and it is widely used inside Alibaba Group and by tens of thousands of external Alibaba Cloud customers. You can deploy it on physical machines, in Kubernetes, and in other environments to collect telemetry data such as logs, traces, and metrics.


Abstract

The core advantages of iLogtail:

  • Supports collecting a variety of logs, traces, and metrics, with first-class support for container and Kubernetes environments.
  • Very low resource cost for data collection: 5-20x better performance than comparable telemetry collection agents.
  • High stability: used in production at Alibaba and by tens of thousands of Alibaba Cloud customers, collecting dozens of petabytes of observability data every day across nearly tens of millions of deployments.
  • Supports plugin extension for collection, processing, aggregation, and sending modules.
  • Supports remote configuration management through multiple channels, such as the SLS console, SDK, and a Kubernetes Operator.
  • Supports advanced features such as self-monitoring, flow control, resource control, alarms, and statistics collection.

iLogtail can collect a variety of telemetry data and ship it to a variety of backends, such as the SLS observability platform. The supported data types are mainly as follows:

  • Logs
    • Collect static log files
    • Dynamically collect container log files in containerized environments
    • Dynamically collect container stdout in containerized environments
  • Traces
    • OpenTelemetry protocol
    • Skywalking V2 protocol
    • Skywalking V3 protocol
    • ...
  • Metrics
    • Node metrics
    • Process metrics
    • GPU metrics
    • NGINX metrics
    • Prometheus metrics (scraping)
    • Telegraf metrics (relay)
    • ...
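As an illustration of the Prometheus case above: a scrape boils down to an HTTP GET of a target's /metrics endpoint followed by line-by-line parsing of the text exposition format. The sketch below is a deliberately simplified, self-contained approximation (it is not iLogtail's actual plugin code, and it ignores labels, escaping, and histogram/summary series):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMetrics parses Prometheus text exposition lines of the form
// "name value [timestamp]", skipping comment (# HELP / # TYPE) and
// blank lines. Fetching is just an http.Get of the /metrics URL,
// with the response body fed to this parser.
func parseMetrics(text string) map[string]float64 {
	metrics := map[string]float64{}
	for _, line := range strings.Split(text, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // comments and blanks carry no samples
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		if v, err := strconv.ParseFloat(fields[1], 64); err == nil {
			metrics[fields[0]] = v
		}
	}
	return metrics
}

func main() {
	body := "# HELP up Target up\n# TYPE up gauge\nup 1\nprocess_cpu_seconds_total 12.5\n"
	m := parseMetrics(body)
	fmt.Println(m["up"], m["process_cpu_seconds_total"]) // 1 12.5
}
```

A real collector would loop this over each configured target on a scrape interval and attach target labels before flushing.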

Quick Start

Because of the complexity of the C++ dependencies, compiling iLogtail requires Docker to be installed. If you want to build iLogtail from source, start with the following commands.

  1. Start locally
make
cp -r example_config/quick_start/* output
cd output
./ilogtail
# Now ilogtail is collecting data from output/simple.log and printing the results to stdout


Documentation

Our official User Manual is located here:

Homepage

Download

Installation

Configuration

All Plugins

Getting Started

Developer Guide

Benchmark

Contribution

There are many ways to contribute:

Contact Us

You can report bugs, make suggestions, or join discussions through GitHub Issues and GitHub Discussions, or contact us in the following ways:

Our Users

Tens of thousands of companies use iLogtail on Alibaba Cloud, in IDCs, and on other clouds. For more details, please see here.

License

Apache 2.0 License

ilogtail's People

Contributors

7y-9, abingcbc, alph00, chaolee50, co63oc, codingdancer, cyshallchan, dragonyang200, evanljp, gsakun, haoruilee, henryzhx8, hongweipeng, liangry, linrunqi08, liuhaoyang, messixukejia, oldthreefeng, panawala, pj1987111, qiansheng91, quzard, shalousun, shunjiazhu, snakorse, takuka0311, timchenxiaoyu, urnotsally, yonghua-sun, yyuuttaaoo


ilogtail's Issues

[FEATURE]: Add ClickHouse flusher

Concisely describe the proposed feature

I would like to add a ClickHouse flusher plugin.

Describe the solution you'd like (if any)

We can use the github.com/ClickHouse/clickhouse-go package to implement a ClickHouse flusher plugin.
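To make the proposal concrete, the core of such a flusher is batching log events into one multi-row INSERT per flush (ClickHouse performs poorly with many tiny inserts). The sketch below is a simplified, self-contained illustration: the `LogEvent` type, table schema, and `exec` hook are assumptions, not iLogtail's real pipeline interface, and a real implementation would use clickhouse-go's batched inserts:

```go
package main

import (
	"fmt"
	"strings"
)

// LogEvent is a simplified stand-in for an iLogtail log group entry.
type LogEvent struct {
	Time    int64
	Content string
}

// ClickHouseFlusher batches events into multi-row INSERT statements.
// exec is injected so the SQL-building logic can be tested without a
// running ClickHouse server.
type ClickHouseFlusher struct {
	Table string
	exec  func(sql string, args []interface{}) error
}

func (f *ClickHouseFlusher) Flush(events []LogEvent) error {
	if len(events) == 0 {
		return nil
	}
	// One INSERT per flush keeps round-trips low, which suits ClickHouse.
	placeholders := make([]string, 0, len(events))
	args := make([]interface{}, 0, len(events)*2)
	for _, e := range events {
		placeholders = append(placeholders, "(?, ?)")
		args = append(args, e.Time, e.Content)
	}
	sql := fmt.Sprintf("INSERT INTO %s (ts, content) VALUES %s",
		f.Table, strings.Join(placeholders, ", "))
	return f.exec(sql, args)
}

func main() {
	f := &ClickHouseFlusher{
		Table: "logs",
		exec: func(sql string, args []interface{}) error {
			fmt.Println(sql) // stub: print instead of hitting ClickHouse
			return nil
		},
	}
	f.Flush([]LogEvent{{1, "a"}, {2, "b"}})
}
```

The real plugin would additionally implement the flusher lifecycle (init, readiness, stop) required by iLogtail's plugin system.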

[QUESTION]: Why does the plugin system create a set of goroutines for each LogStoreConfig?

According to the officially shared Logtail documents, Logtail was originally designed to avoid Filebeat's brute-force approach of creating a set of goroutines for every collected file. However, in the plugin system, each LogStoreConfig currently gets essentially its own set of goroutines (similar to running a complete pipeline per config).

I'd like to ask: is performance no longer a priority in the Golang plugin system? Could you explain the relationship between the C++ part and the plugin system in more detail?

[QUESTION]:Can you specify which version of ilogtail the usecases apply to?

I see the use cases refer to the binary from the release page. Is this the pure open-source version or the ilogtail-C version?

I'm confused because the configuration structure in the use cases differs from the one in the setup section.

There is no input section in the configuration file in the Kafka example, and there is a config_server_address. So the configuration is not for the open-source version, right?

I want to test a plaintext log file. Which input plugin should I use?

[FEATURE]: Reload disabled configs after they have been stopped successfully

Code links:

Once a config has been added to the disabled list, it cannot be loaded again until it has stopped, which can cause the following case:

  1. iLogtail begins a config update.
  2. Unload stage: stop all configs; one of them (named cfg) times out and is added to the disabled list.
  3. Load stage: load all configs; config cfg fails to load because it has been disabled.
  4. Config cfg is later stopped successfully and removed from the disabled list, but it never gets another chance to load.

Suggestion: iLogtail should handle this case and reload such "slow stop" configs.
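The suggested behavior can be sketched as follows. This is a minimal, self-contained illustration of the idea (the type and method names are illustrative, not iLogtail's actual internals): when a slow-stopping config finally finishes, it is removed from the disabled list AND its previously failed load is retried, instead of being forgotten.

```go
package main

import "fmt"

// configManager tracks configs that were disabled because stopping
// them timed out, and reloads them once they really stop.
type configManager struct {
	disabled map[string]bool
	loaded   map[string]bool
}

// loadConfig refuses configs that are still on the disabled list.
func (m *configManager) loadConfig(name string) bool {
	if m.disabled[name] {
		return false
	}
	m.loaded[name] = true
	return true
}

// onStopTimeout is called when stopping a config times out.
func (m *configManager) onStopTimeout(name string) {
	m.disabled[name] = true
}

// onStopped is called when the config finally stops: remove it from
// the disabled list and retry the load that previously failed.
func (m *configManager) onStopped(name string) {
	delete(m.disabled, name)
	m.loadConfig(name)
}

func main() {
	m := &configManager{disabled: map[string]bool{}, loaded: map[string]bool{}}
	m.onStopTimeout("cfg")
	fmt.Println(m.loadConfig("cfg")) // false: still disabled
	m.onStopped("cfg")
	fmt.Println(m.loaded["cfg"]) // true: reloaded after the slow stop
}
```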

need issue templates

We need some issue templates so that issues for different situations can be created quickly.

for example:

  • Bug reports
  • Doubts or suggestions about docs
  • Suggestions for new features
  • General questions
  • Contributor Guideline
  • ...

[DOC]:Missing key elements in plugin dev guides

In docs/zh/guides/How-to-write-input-plugins.md, the requirement that plugin names must start with metric_ or service_ is not mentioned.
In plugins/processor/json/README.md, the example config does not match the example output: s_key should not be kept.
In docs/zh/guides/How-to-write-processor-plugins.md, the requirement that plugin names must start with processor_ is not mentioned.
In docs/zh/guides/How-to-write-aggregator-plugins.md, "类似于 input 插件的 Input 接口" should be "类似于 input 插件的 Init 接口", and the requirement that plugin names must start with aggregator_ is not mentioned. It is also unclear what the purpose of the AddWithWait method in LogGroupQueue is.
In docs/zh/guides/How-to-write-flusher-plugins.md, the requirement that plugin names must start with flusher_ is not mentioned, and the purpose of SetUrgent is unclear. Shouldn't the Stop method be enough?
Finally, none of these guides mention the init function that must be implemented to register a plugin.

[FEATURE]: change policy of GTID checkpoint

Concisely describe the proposed feature
The current policy for GTID checkpoints can introduce potential bugs:

// OnGTID reports the GTID of the following event (OnRow, OnDDL).
// So we can not update checkpoint here, just record GTID and update in OnRow.
//
// This strategy brings a potential problem, checkpoint will only be updated
// when OnRow is called, however, because of IncludeTables and ExcludeTables,
// calls to OnRow will be filtered. So, if plugin restarts before the next
// OnRow call comes, it will rerun from a old checkpoint.
// But this should be trivial for cases that valid data comes continuously.
func (sc *ServiceCanal) OnGTID(s mysql.GTIDSet) error {
	// logger.Debug("OnGTID", s)
	sc.xgidCounter.Add(1)
	sc.nextRowEventGTID = s.String()
	return nil
}

Describe the solution you'd like (if any)
Consider handling the case where valid data does not arrive continuously, so the checkpoint can still be updated.

Additional comments
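One possible direction can be sketched as follows. Since OnGTID reports the GTID of the NEXT transaction, the arrival of a new GTID implies the previous transaction has been fully delivered (or fully filtered by IncludeTables/ExcludeTables), so the previous GTID can then be persisted even if OnRow was never called for it. This is a simplified, self-contained illustration, not the actual service_canal code:

```go
package main

import "fmt"

// gtidCheckpointer advances the checkpoint on the NEXT OnGTID call
// rather than only on OnRow, so filtered transactions still move the
// checkpoint forward.
type gtidCheckpointer struct {
	pending string // GTID whose transaction is still in flight
	saved   string // last persisted checkpoint
}

func (c *gtidCheckpointer) OnGTID(gtid string) {
	if c.pending != "" {
		c.saved = c.pending // previous transaction is complete
	}
	c.pending = gtid
}

func main() {
	c := &gtidCheckpointer{}
	c.OnGTID("uuid:1") // transaction 1 starts; all its rows filtered
	c.OnGTID("uuid:2") // transaction 1 done: checkpoint advances anyway
	fmt.Println(c.saved) // uuid:1
}
```

The trade-off is that the very last pending GTID is only persisted when the next one arrives; a periodic flush would cover long idle periods.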

[FEATURE]: Drop labels for metrics

iLogtail merges metric labels into a single fixed field called __labels__:

__labels__:cluster#$#sls-mall|endpoint#$#http-metrics|instance#$#192.168.32.71:10255|job#$#kubelet|le#$#0.256|namespace#$#kube-system|node#$#cn-beijing.192.168.32.71|service#$#kubelet|url#$#https://192.168.32.45:6443/api/v1/nodes/%7Bname%7D|verb#$#GET

We want to be able to drop labels with a processor.
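The core of such a processor is just rewriting the serialized __labels__ value, which uses "#$#" between key and value and "|" between pairs (as shown above). A minimal, self-contained sketch of the dropping logic (the plugin name processor_drop_labels is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// dropLabels removes the given keys from a serialized __labels__
// value of the form "k1#$#v1|k2#$#v2|...". A hypothetical
// processor_drop_labels plugin would apply this per log entry.
func dropLabels(labels string, drop map[string]bool) string {
	kept := []string{}
	for _, pair := range strings.Split(labels, "|") {
		kv := strings.SplitN(pair, "#$#", 2)
		if len(kv) == 2 && drop[kv[0]] {
			continue // drop this label
		}
		kept = append(kept, pair)
	}
	return strings.Join(kept, "|")
}

func main() {
	in := "cluster#$#sls-mall|le#$#0.256|verb#$#GET"
	fmt.Println(dropLabels(in, map[string]bool{"le": true}))
	// cluster#$#sls-mall|verb#$#GET
}
```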

[BUG]: Collect docker logs fail

Describe the bug
Collecting Docker log files fails. This is not on a cloud platform.

To Reproduce
Please post your configurations and environments to reproduce the bug.

start docker command:

docker run -v /var/run/docker.sock:/var/run/docker.sock  -v /:/logtail_host -p 18689:18689  -d aliyun/ilogtail:latest

plugin.json

curl 127.0.0.1:18689/loadconfig -X POST -d '[{"project":"e2e-test-project","logstore":"e2e-test-logstore","config_name":"test-case_0","logstore_key":1,"json_str":"{\n\"inputs\":[{\n \"type\": \"metric_docker_file\",\n \"detail\": {\n  \"LogPath\":\"/var/log/nginx/*-.log\"\n}\n}],\n \"processors\":[\n  {\n   \"type\":\"processor_default\"\n  }\n ],\n \"flushers\":[\n  {\n   \"type\":\"flusher_stdout\",\n   \"detail\":{\n    \"FileName\":\"quickstart.stdout\"\n   }\n  }\n ]\n}"}]'

The developer team will put a higher priority on bugs that can be reproduced. If you want a prompt reply, please keep your descriptions detailed.

  • Your ilogtail version:
    latest

  • Your platform:
    os: centos7
    docker : 20.10.11

  • Your configuration:

  • Your ilogtail logs:

2021-12-23 10:59:52 [INF] [metric_docker_file.go:242] [updateMapping] [test-case_0,e2e-test-logstore]	container mapping:added	source host path:/var/lib/docker/overlay2/fe45badd7e5641e7abcaaa2befa470c7f7cd57f45961e40dd4c55235340cc196/diff	destination container path:	destination log path:/logtail_host/var/lib/docker/overlay2/fe45badd7e5641e7abcaaa2befa470c7f7cd57f45961e40dd4c55235340cc196/diff/var/log/nginx/moa.log	id:612d1426c69f5e77ca05b65094c30aa7e30be502f68c3760b78cc955a5fb7010	name:/suspicious_ardinghelli
2021-12-23 10:59:52 [INF] [metric_docker_file.go:242] [updateMapping] [test-case_0,e2e-test-logstore]	container mapping:added	source host path:/var/lib/docker/overlay2/e4900386461373b874c1f24aeb01efee63fe29f57c8a23a172f216a1e74ac526/diff	destination container path:	destination log path:/logtail_host/var/lib/docker/overlay2/e4900386461373b874c1f24aeb01efee63fe29f57c8a23a172f216a1e74ac526/diff/var/log/nginx/moa.log	id:aed032f9731e03c661885d084066da69f622921cf76a00fd65bc9dcb94f9b0fd	name:/elegant_rhodes
2021-12-23 10:59:52 [INF] [metric_docker_file.go:221] [updateAll] [test-case_0,e2e-test-logstore]	update all:2
2021-12-23 10:59:52 [ERR] [metric_docker_file.go:224] [updateAll] [test-case_0,e2e-test-logstore]	AlarmType:DOCKER_FILE_MAPPING_ALARM	cmd:[123 34 65 108 108 67 109 100 34 58 91 123 34 73 68 34 58 34 54 49 50 100 49 52 50 54 99 54 57 102 53 101 55 55 99 97 48 53 98 54 53 48 57 52 99 51 48 97 97 55 101 51 48 98 101 53 48 50 102 54 56 99 51 55 54 48 98 55 56 99 99 57 53 53 97 53 102 98 55 48 49 48 34 44 34 80 97 116 104 34 58 34 47 108 111 103 116 97 105 108 95 104 111 115 116 47 118 97 114 47 108 105 98 47 100 111 99 107 101 114 47 111 118 101 114 108 97 121 50 47 102 101 52 53 98 97 100 100 55 101 53 54 52 49 101 55 97 98 99 97 97 97 50 98 101 102 97 52 55 48 99 55 102 55 99 100 53 55 102 52 53 57 54 49 101 52 48 100 100 52 99 53 53 50 51 53 51 52 48 99 99 49 57 54 47 100 105 102 102 47 118 97 114 47 108 111 103 47 110 103 105 110 120 47 109 111 97 46 108 111 103 34 44 34 84 97 103 115 34 58 91 34 95 99 111 110 116 97 105 110 101 114 95 110 97 109 101 95 34 44 34 115 117 115 112 105 99 105 111 117 115 95 97 114 100 105 110 103 104 101 108 108 105 34 44 34 95 99 111 110 116 97 105 110 101 114 95 105 112 95 34 44 34 49 55 50 46 49 55 46 48 46 50 34 44 34 95 105 109 97 103 101 95 110 97 109 101 95 34 44 34 97 108 105 121 117 110 47 105 108 111 103 116 97 105 108 58 108 97 116 101 115 116 34 93 125 44 123 34 73 68 34 58 34 97 101 100 48 51 50 102 57 55 51 49 101 48 51 99 54 54 49 56 56 53 100 48 56 52 48 54 54 100 97 54 57 102 54 50 50 57 50 49 99 102 55 54 97 48 48 102 100 54 53 98 99 57 100 99 98 57 52 102 57 98 48 102 100 34 44 34 80 97 116 104 34 58 34 47 108 111 103 116 97 105 108 95 104 111 115 116 47 118 97 114 47 108 105 98 47 100 111 99 107 101 114 47 111 118 101 114 108 97 121 50 47 101 52 57 48 48 51 56 54 52 54 49 51 55 51 98 56 55 52 99 49 102 50 52 97 101 98 48 49 101 102 101 101 54 51 102 101 50 57 102 53 55 99 56 97 50 51 97 49 55 50 102 50 49 54 97 49 101 55 52 97 99 53 50 54 47 100 105 102 102 47 118 97 114 47 108 111 103 47 110 103 105 110 120 47 109 
111 97 46 108 111 103 34 44 34 84 97 103 115 34 58 91 34 95 99 111 110 116 97 105 110 101 114 95 110 97 109 101 95 34 44 34 101 108 101 103 97 110 116 95 114 104 111 100 101 115 34 44 34 95 99 111 110 116 97 105 110 101 114 95 105 112 95 34 44 34 49 55 50 46 49 55 46 48 46 51 34 44 34 95 105 109 97 103 101 95 110 97 109 101 95 34 44 34 110 103 105 110 120 58 108 97 116 101 115 116 34 93 125 93 125]	error:execute cmd error -1
2021-12-23 11:04:52 [INF] [docker_center.go:1117] [func1] docker fetch all:start
2021-12-23 11:04:52 [INF] [docker_center.go:1125] [func1] docker fetch all:stop
2021-12-23 11:09:52 [INF] [docker_center.go:1117] [func1] docker fetch all:start
2021-12-23 11:09:52 [INF] [docker_center.go:1125] [func1] docker fetch all:stop

Additional comments

[BUG]: field is missing

After enabling the processor_split_log_regex plugin in the configuration file, the file path field __tag__:__path__ is missing from the logs.

[QUESTION]: How to test locally

Question: I want to test iLogtail locally for data duplication and data loss. Since ingesting into SLS incurs costs, is there a way to test locally?

[BUG]: Logtail misses some logs when runc and rund runtimes are mixed

Describe the bug
When runc and rund runtimes are mixed in Kubernetes, logs from runc containers cannot be collected.

To Reproduce
Please post your configurations and environments to reproduce the bug.

The developer team will put a higher priority on bugs that can be reproduced. If you want a prompt reply, please keep your descriptions detailed.

  • Your ilogtail version: 1.0.29

  • Your platform:

  • Your configuration:

  • Your ilogtail logs:

Additional comments
If possible, please also consider attaching the output of ilogtail diagnose tool. This produces detailed environment information and hopefully helps us diagnose faster.

If you have local commits (e.g. compile fixes before you reproduce the bug), please make sure you first make a PR to fix the build errors and then report the bug.

[QUESTION]: How do I set the Kafka client version?

I configured flusher_kafka in the ilogtail-1.0.28 collection configuration as follows:
"flushers": [ { "type": "flusher_kafka", "detail": { "Brokers": [ "172.16.0.202:15386" ], "Topic": "logtail-flusher-kafka2" } } ]
Sending works when the Kafka server is 3.0.0; after switching Kafka to 0.10.0.1, sending fails with the following errors:
[2022-04-20 19:17:07.167815] [warning] [018314] /build/logtail/plugin/LogtailPlugin.cpp:259 process raw log V2 error:kafka_output_/logs/sysm-data-invilid/ result:-1
[2022-04-20 19:17:07.388856] [warning] [018314] /build/logtail/plugin/LogtailPlugin.cpp:259 process raw log V2 error:kafka_output_/logs/sysm-data-invilid/ result:-1
[2022-04-20 19:17:07.388918] [warning] [018314] /build/logtail/plugin/LogtailPlugin.cpp:259 process raw log V2 error:kafka_output_/logs/sysm-data-invilid/ result:-1

[BUG]: Logtail causes containerd containers to hang for a long time before exiting

Describe the bug
A clear and concise description of what the bug is, ideally within 20 words.

To Reproduce
Please post your configurations and environments to reproduce the bug.

The developer team will put a higher priority on bugs that can be reproduced. If you want a prompt reply, please keep your descriptions detailed.

  • Your ilogtail version:

  • Your platform:

  • Your configuration:

  • Your ilogtail logs:

Additional comments
If possible, please also consider attaching the output of ilogtail diagnose tool. This produces detailed environment information and hopefully helps us diagnose faster.

If you have local commits (e.g. compile fixes before you reproduce the bug), please make sure you first make a PR to fix the build errors and then report the bug.

[FEATURE]: Log to Metrics support

Concisely describe the proposed feature

For NGINX or SLB logs, we can convert logs to metrics to reduce the data size. How about implementing a feature that converts raw logs into metrics?
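The reduction this feature would perform can be sketched as aggregating parsed log lines into counters per dimension. The line format below ("METHOD PATH STATUS LATENCY_MS") is an assumption for illustration, not iLogtail's parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// logToMetrics aggregates simplified access-log lines of the form
// "METHOD PATH STATUS LATENCY_MS" into request counts and latency
// sums per status code, turning many log lines into a few metrics.
func logToMetrics(lines []string) (count map[string]int, latencySum map[string]float64) {
	count = map[string]int{}
	latencySum = map[string]float64{}
	for _, line := range lines {
		f := strings.Fields(line)
		if len(f) != 4 {
			continue // skip malformed lines
		}
		status := f[2]
		count[status]++
		if ms, err := strconv.ParseFloat(f[3], 64); err == nil {
			latencySum[status] += ms
		}
	}
	return
}

func main() {
	lines := []string{
		"GET /index 200 3.5",
		"GET /index 200 2.5",
		"POST /login 500 11.0",
	}
	count, lat := logToMetrics(lines)
	fmt.Println(count["200"], count["500"], lat["200"]) // 2 1 6
}
```

A production version would emit these aggregates on a window interval and support configurable dimensions (path, method, upstream, etc.).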

[FEATURE]: Prometheus scrape support multi instances

Concisely describe the proposed feature

The Prometheus scrape plugin only supports one instance, so we can't scrape different metrics into different metric stores.

Describe the solution you'd like (if any)

I think the Prometheus scrape plugin should support multiple instances.

Additional comments
Add any other context or screenshots about the feature request here.
For example, the ideal input and output logs.

[QUESTION]:How to collect trace data from skywalking and flush the data to the default collector?

I use the following config file; if I add flusher_grpc, ilogtail panics:

{
  "inputs": [
    {
      "detail": {
        "Address": "localhost:11801"
      },
      "type": "service_skywalking_agent_v3"
    }
  ],
  "processors": [
    {
      "type": "processor_default"
    }
  ],
  "flushers": [
    {
      "type": "flusher_stdout",
      "detail": {
        "FileName": "quickstart_1.stdout"
      }
    },
    {
      "type": "flusher_grpc",
      "detail": {
        "Address": "localhost:11800"
      }
    }
  ]
}
load config ./global.json bin/sw.json ./default_flusher.json
load log config /private/var/folders/lt/gstgm3fd0jj5hlcz0ryw6fxm0000gn/T/GoLand/plugin_logger.xml 
panic: protobuf tag not enough fields in JVMMetricCollection.state: 

goroutine 57 [running]:
github.com/gogo/protobuf/proto.(*unmarshalInfo).computeUnmarshalInfo(0xc0004b8be0)
        /Users/tuhao/go/pkg/mod/github.com/gogo/[email protected]/proto/table_unmarshal.go:341 +0x219e
github.com/gogo/protobuf/proto.(*unmarshalInfo).unmarshal(0xc0004b8be0, {0xc000c90240}, {0xc00070a480, 0x226, 0x226})
        /Users/tuhao/go/pkg/mod/github.com/gogo/[email protected]/proto/table_unmarshal.go:138 +0x8c
github.com/gogo/protobuf/proto.(*InternalMessageInfo).Unmarshal(0xc000092240, {0xe7af328, 0xc000c90240}, {0xc00070a480, 0x226, 0x226})
        /Users/tuhao/go/pkg/mod/github.com/gogo/[email protected]/proto/table_unmarshal.go:63 +0x165
github.com/gogo/protobuf/proto.(*Buffer).Unmarshal(0xc0000392f0, {0xe7af328, 0xc000c90240})
        /Users/tuhao/go/pkg/mod/github.com/gogo/[email protected]/proto/decode.go:424 +0x3a5
github.com/gogo/protobuf/proto.Unmarshal({0xc00070a480, 0x226, 0x226}, {0xe7af328, 0xc000c90240})
        /Users/tuhao/go/pkg/mod/github.com/gogo/[email protected]/proto/decode.go:342 +0x213
github.com/alibaba/ilogtail/pkg/protocol.Codec.Unmarshal({}, {0xc00070a480, 0x226, 0x226}, {0x64e96e0, 0xc000c90240})
        /Users/tuhao/dev/golang/ilogtail/pkg/protocol/sls_logs.pb.helper.go:38 +0xdb
google.golang.org/grpc.(*Server).processUnaryRPC.func2({0x64e96e0, 0xc000c90240})
        /Users/tuhao/go/pkg/mod/google.golang.org/[email protected]/server.go:1274 +0x128
github.com/alibaba/ilogtail/plugins/input/skywalkingv3/skywalking/network/language/agent/v3._JVMMetricReportService_Collect_Handler({0x639fca0, 0xc00041e930}, {0x691f940, 0xc0000392c0}, 0xc000c901e0, 0x0)
        /Users/tuhao/dev/golang/ilogtail/plugins/input/skywalkingv3/skywalking/network/language/agent/v3/JVMMetric_grpc.pb.go:70 +0x95
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0004a76c0, {0x6955140, 0xc0001c2480}, 0xc000138ea0, 0xc00030a2d0, 0x7d10ee0, 0x0)
        /Users/tuhao/go/pkg/mod/google.golang.org/[email protected]/server.go:1297 +0x14c9
google.golang.org/grpc.(*Server).handleStream(0xc0004a76c0, {0x6955140, 0xc0001c2480}, 0xc000138ea0, 0x0)
        /Users/tuhao/go/pkg/mod/google.golang.org/[email protected]/server.go:1626 +0x85e
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        /Users/tuhao/go/pkg/mod/google.golang.org/[email protected]/server.go:941 +0x11d
created by google.golang.org/grpc.(*Server).serveStreams.func1
        /Users/tuhao/go/pkg/mod/google.golang.org/[email protected]/server.go:939 +0x345
Exiting.

[QUESTION]: Build iLogtail project failed

Environment:
Centos 7
go version go1.17.5 linux/amd64

Operation:

  1. Pull iLogtail code
  2. build the project with command: make vendor && make build
  3. Got the build error below; could you help check the reason?
    =============================================
    root@node2:~/workingcui/iLogtail/git/ilogtail(main)]make vendor && make build
    rm -rf license_coverage.txt
    ...
    rm -rf vendor
    go mod vendor
    ./external/sync_vendor.py
    execute cmd cp -r /root/workingcui/iLogtail/git/ilogtail/external/github.com /root/workingcui/iLogtail/git/ilogtail/external/../vendor/: 0
    execute cmd cp -r /root/workingcui/iLogtail/git/ilogtail/external/README.md /root/workingcui/iLogtail/git/ilogtail/external/../vendor/: 0
    rm -rf license_coverage.txt
    ...
    rm -rf find_licenses
    ./scripts/build.sh vendor default
    Linux

vendor/github.com/coreos/go-systemd/sdjournal/journal.go:27:33: fatal error: systemd/sd-journal.h: No such file or directory
// #include <systemd/sd-journal.h>
^
compilation terminated.
make: *** [build] Error 2

=============================================

[FEATURE]: support collecting the client IP in input/syslog

Concisely describe the proposed feature

The syslog collector does not collect the client IP:

type parseResult struct {
	hostname string
	program  string
	priority int
	facility int
	severity int
	time     time.Time
	content  string
	// RFC5424
	procID         *string
	msgID          *string
	structuredData *map[string]map[string]string
}

If a client IP field is added, it will be possible to distinguish different clients when collecting from multiple devices.

Describe the solution you'd like (if any)
Add a new field to the syslog collector.

[QUESTION]: Where is the retry logic for the Kafka flusher?

Reading through iLogtail, I cannot find any retry logic for writing to Kafka; even writes to SLS are not retried. Could you explain how retry-on-failure is designed here?
flusher_kafka.go simply prints an error log on failure. Does that mean failed writes are just dropped and the logs are lost?
