License: Apache License 2.0

sumologic-otel-collector's Introduction

Sumo Logic Distribution for OpenTelemetry Collector


Sumo Logic Distribution for OpenTelemetry Collector is a Sumo Logic-supported distribution of the OpenTelemetry Collector. It is a single agent to send logs, metrics and traces to Sumo Logic.

Our aim is to extend and not to replace the OpenTelemetry Collector.

To learn more, please see the purpose of Sumo Logic Distribution for OpenTelemetry Collector.

Supported OS and architectures

Linux: amd64 (x86_64), arm64
MacOS: amd64 (x86_64), arm64 (Apple M1)
Windows: amd64 (x86_64)

Components

This section lists the components that are included in Sumo Logic Distribution for OpenTelemetry Collector.

The highlighted components are delivered by Sumo Logic.

The rest of the components in the table are pure upstream OpenTelemetry components.

The ⚠️ strikethrough ⚠️ components are deprecated.

Receivers:
active_directory_ds, active_directory_inv, aerospike, apache, awscloudwatch, awscontainerinsight, awsecscontainermetrics, awsfirehose, awsxray, azureeventhub, bigip, carbon, chrony, cloudflare, cloudfoundry, collectd, couchdb, datadog, docker_stats, elasticsearch, expvar, filelog, filestats, flinkmetrics, fluentforward, googlecloudpubsub, googlecloudspanner, haproxy, hostmetrics, httpcheck, iis, influxdb, jaeger, jmx, journald, k8s_cluster, k8s_events, k8sobjects, kafka, kafkametrics, kubeletstats, loki, memcached, mongodb, mongodbatlas, mysql, nginx, nop, nsxt, opencensus, oracledb, otlp, otlpjsonfile, podman_stats, postgresql, prometheus_simple, prometheus, pulsar, purefa, purefb, rabbitmq, raw_k8s_events, receiver_creator, redis, riak, saphana, sapm, signalfx, skywalking, snowflake, snmp, solace, splunk_hec, sqlquery, sqlserver, sshcheck, statsd, syslog, tcplog, telegraf, udplog, vcenter, wavefront, windowseventlog, windowsperfcounters, zipkin, zookeeper

Processors:
attributes, batch, cascading_filter, cumulativetodelta, deltatorate, experimental_metricsgeneration, filter, groupbyattrs, groupbytrace, k8s_tagger, k8sattributes, logstransform, memory_limiter, metric_frequency, metricstransform, probabilistic_sampler, redaction, remotetap, resource, resourcedetection, routing, schema, source, span, sumologic, ⚠️ sumologic_schema ⚠️, sumologic_syslog, tail_sampling, transform

Exporters:
awss3, carbon, debug, file, kafka, loadbalancing, ⚠️ logging ⚠️, nop, otlp, otlphttp, prometheus, sumologic, syslog

Extensions:
asapclient, awsproxy, basicauth, bearertokenauth, db_storage, docker_observer, ecs_observer, ecs_task_observer, file_storage, headerssetter, health_check, host_observer, http_forwarder, jaegerremotesampling, k8s_observer, ⚠️ memory_ballast ⚠️, oauth2client, oidc, pprof, sigv4auth, sumologic, zpages

Connectors:
forward, count, routing, servicegraph, spanmetrics

sumologic-otel-collector's People

Contributors

aboguszewski-sumo, amdprophet, andrzej-stencel, c-kruse, ccressent, dependabot[bot], dmolenda-sumo, drduke1, echlebek, eddieeldridge, fguimond, git-johnson, gourav2906, igorzi84, jspaleta, kkujawa-sumo, mat-rumian, pdelewski, perk-sumo, pmalek, pmatyjasek-sumo, pmm-sumo, portertech, ppawelecsumo, rnishtala-sumo, sumo-drosiek, sumoanema, sumologic-sanyaku-apps, swiatekm-sumo, wolodija


sumologic-otel-collector's Issues

Drop custom changes in filterprocessor

Drop the custom changes in filterprocessor, according to the deprecation message:

2022/07/13 08:26:12 proto: duplicate proto type registered: jaeger.api_v2.PostSpansRequest
2022/07/13 08:26:12 proto: duplicate proto type registered: jaeger.api_v2.PostSpansResponse
2022-07-13T08:26:12.060+0200    info    service/telemetry.go:103        Setting up own telemetry...
2022-07-13T08:26:12.060+0200    info    service/telemetry.go:138        Serving Prometheus metrics      {"address": ":8888", "level": "basic"}
2022-07-13T08:26:12.061+0200    info    pipelines/pipelines.go:341      Component is under development. {"kind": "exporter", "data_type": "logs", "name": "logging", "stability": "in development"}
2022-07-13T08:26:12.061+0200    warn    filterprocessor@<version>/filter_processor_logs.go:221
*********************************************************************************************************************************************************
***    Support for "expr" language is deprecated and is going to be dropped soon. Please see the migration document:                                  ***
***    https://github.com/SumoLogic/sumologic-otel-collector/blob/v0.55.0-sumo-0/docs/Upgrading.md#filter-processor-drop-support-for-expr-language.   ***
*********************************************************************************************************************************************************
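
For context, a hedged sketch of the migration the message points to: the deprecated "expr" conditions (first block) are replaced with OTTL-style conditions in newer releases (second block). The exact syntax should be checked against the linked Upgrading.md; this is an illustration only.

# before: deprecated "expr" language
processors:
  filter:
    logs:
      include:
        match_type: expr
        expressions:
          - Body matches "error"

# after: OTTL-based conditions (assumed syntax)
processors:
  filter:
    logs:
      log_record:
        - 'IsMatch(body, ".*error.*")'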

K8s Tagger processor - missing attributes in case of filter by node name

Environment:

  • EKS 1.21
  • OTel Collector 0.52.0

Action:

  • Configure Collector agent pod to run in K8s as described here

Result:
[screenshot omitted]

Expected result:
[screenshot omitted]

OTel Distro log:

2022-07-04T09:17:21.434Z info kube/owner.go:91 Staring K8S resource informers {"kind": "processor", "name": "k8s_tagger", "pipeline": "metrics", "#infomers": 6}
W0704 09:17:21.456225 1 reflector.go:324] k8s.io/client-go@<version>/tools/cache/reflector.go:167: failed to list *v1.ReplicaSet: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
E0704 09:17:21.456267 1 reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
W0704 09:17:21.456886 1 reflector.go:324] k8s.io/client-go@<version>/tools/cache/reflector.go:167: failed to list *v1.Namespace: field label not supported: spec.nodeName
E0704 09:17:21.456913 1 reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: field label not supported: spec.nodeName
W0704 09:17:21.457062 1 reflector.go:324] k8s.io/client-go@<version>/tools/cache/reflector.go:167: failed to list *v1.ReplicaSet: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
E0704 09:17:21.457122 1 reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
W0704 09:17:21.457253 1 reflector.go:324] k8s.io/client-go@<version>/tools/cache/reflector.go:167: failed to list *v1.Deployment: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
E0704 09:17:21.457322 1 reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.Deployment: failed to list *v1.Deployment: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
W0704 09:17:21.457623 1 reflector.go:324] k8s.io/client-go@<version>/tools/cache/reflector.go:167: failed to list *v1.Deployment: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
E0704 09:17:21.457687 1 reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.Deployment: failed to list *v1.Deployment: "spec.nodeName" is not a known field selector: only "metadata.name", "metadata.namespace"
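
For reference, a sketch of the kind of configuration that appears to trigger this, assuming the upstream-style node filter is used (the exact config is not included in the report):

processors:
  k8s_tagger:
    passthrough: false
    owner_lookup_enabled: true
    extract:
      metadata:
        - deploymentName
        - namespace
    filter:
      # this is the "spec.nodeName" field selector the errors above complain about
      node_from_env_var: KUBE_NODE_NAME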

Metrics aren't being exported to Sumologic

I'm new to OpenTelemetry and Sumo Logic, so I'm sure the issue is on my side; I'm just having trouble pinning down what that issue is. I've verified that logs are sent to Sumo Logic, as they show up in a _collector=... query.

I'm pretty certain otelcol-sumo is receiving metrics, since the logging exporter is printing information about them to the console. I can't find any metrics in my Sumo Logic account, though. I'm using the same _collector=... query in a metrics search and it's empty. I've waited a couple of hours to make sure it isn't just a delay in the data appearing. Do I have something configured incorrectly? Is there additional logging I can turn on to find more information?
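
One way to gather more information, assuming this distribution exposes the standard collector telemetry settings, is to raise the collector's own log verbosity:

service:
  telemetry:
    logs:
      level: debug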

config.yaml

exporters:
  sumologic:
    sending_queue:
      enabled: true
      persistent_storage_enabled: true
  logging:
    loglevel: debug

extensions:
  file_storage:
    directory: /etc/otelcol-sumo/storage
  sumologic:
    install_token: $TOKEN
    collector_name: $COLLECTORNAME

receivers:
  otlp:
    protocols:
      grpc:
        tls:
          cert_file: $CERTFILE
          key_file: $CERTKEY

service:
  extensions: [sumologic, file_storage]
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [sumologic]
    metrics:
      receivers: [otlp]
      exporters: [sumologic, logging]
    traces:
      receivers: [otlp]
      exporters: [sumologic]

otelcol-sumo console logs for metrics

2022-08-02T21:06:11.996Z        info    service/collector.go:215        Starting otelcol-sumo...        {"Version": "v0.56.0-sumo-0", "NumCPU": 2}
2022-08-02T21:06:11.996Z        info    service/collector.go:128        Everything is ready. Begin running and processing data.
2022-08-02T21:06:33.659Z        info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "#metrics": 1}
2022-08-02T21:06:33.659Z        info    ResourceMetrics #0
Resource SchemaURL:
Resource labels:
     -> service.name: STRING(MyProduct)
     -> environment: STRING(development)
     -> telemetry.sdk.name: STRING(opentelemetry)
     -> telemetry.sdk.language: STRING(dotnet)
     -> telemetry.sdk.version: STRING(1.3.0.519)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope OpenTelemetry.Instrumentation.AspNetCore 1.0.0.0
Metric #0
Descriptor:
     -> Name: http.server.duration
     -> Description: measures the duration of the inbound HTTP request
     -> Unit: ms
     -> DataType: Histogram
     -> AggregationTemporality: AGGREGATION_TEMPORALITY_CUMULATIVE
HistogramDataPoints #0
Data point attributes:
     -> http.flavor: STRING(HTTP/2)
     -> http.host: STRING(localhost:5001)
     -> http.method: STRING(GET)
     -> http.scheme: STRING(https)
     -> http.status_code: STRING(200)
StartTimestamp: 2022-08-02 21:04:33.4550738 +0000 UTC
Timestamp: 2022-08-02 21:06:33.4374684 +0000 UTC
Count: 4
Sum: 2796.478800
ExplicitBounds #0: 0.000000
ExplicitBounds #1: 5.000000
ExplicitBounds #2: 10.000000
ExplicitBounds #3: 25.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 75.000000
ExplicitBounds #6: 100.000000
ExplicitBounds #7: 250.000000
ExplicitBounds #8: 500.000000
ExplicitBounds #9: 1000.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 0
Buckets #4, Count: 1
Buckets #5, Count: 1
Buckets #6, Count: 0
Buckets #7, Count: 1
Buckets #8, Count: 0
Buckets #9, Count: 0
Buckets #10, Count: 1
        {"kind": "exporter", "data_type": "metrics", "name": "logging"}
[six further export cycles, logged at 21:07:33 through 21:12:33, are identical to the block above apart from the Timestamp field]
^C2022-08-02T21:13:23.155Z      info    service/collector.go:159        Received signal from OS {"signal": "interrupt"}
2022-08-02T21:13:23.155Z        info    service/collector.go:231        Starting shutdown...
2022-08-02T21:13:23.155Z        info    pipelines/pipelines.go:118      Stopping receivers...
2022-08-02T21:13:23.187Z        info    pipelines/pipelines.go:125      Stopping processors...
2022-08-02T21:13:23.188Z        info    pipelines/pipelines.go:132      Stopping exporters...
2022-08-02T21:13:23.188Z        info    extensions/extensions.go:56     Stopping extensions...

Document api_base_url values

Until we have redirection implemented so users do not have to supply it, we should document each api_base_url value customers would use.

Register collector if invalid collector credentials were found on startup

If the collector detects invalid locally stored collector credentials, it should try registering itself with the API (perhaps with a configuration option allowing users to retain the current behavior, which is to exit with an error).

This can happen in a number of scenarios, e.g.:

  • when the collector gets removed from the collector management page
  • when the collector gets removed because it was defined as ephemeral and no data was sent for 12h
  • when the collector credentials file gets corrupted or changed in any way
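
A sketch of what the opt-out could look like (the option name is hypothetical, not an existing setting):

extensions:
  sumologic:
    install_token: ${SUMOLOGIC_INSTALL_TOKEN}
    # hypothetical option: keep the current behavior of exiting with an error
    # instead of attempting re-registration
    exit_on_invalid_credentials: true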

Originally posted by @sumo-drosiek in #119 (comment)

Send X-Sumo-Host, X-Sumo-Category and X-Sumo-Name headers with metrics

For reference, with fluentd-output-sumologic we send metrics with the following headers and attributes

--> POST /receiver HTTP/1.1
--> accept: */*
--> content-encoding: gzip
--> x-sumo-host: 
--> x-sumo-client: k8s_2.2.0-dev.0
--> content-length: 695
--> user-agent: HTTPClient/1.0 (2.8.3, ruby 2.6.7 (2021-04-05))
--> host: receiver-mock.receiver-mock:3000
--> date: Tue 28 Sep 2021 12:02:16 GMT
--> x-sumo-category: 
--> x-sumo-name: 
--> content-type: application/vnd.sumologic.prometheus

coredns_cache_hits_total{
_origin="kubernetes"
cluster="kubernetes-pmalek-vagrant-otc-collector-1"
container="coredns"
deployment="coredns"
endpoint="http-metrics"
instance="10.1.34.69:9153"
job="coredns"
namespace="kube-system"
node="sumologic-kubernetes-collection"
pod_labels_k8s-app="kube-dns"
pod_labels_pod-template-hash="588fd544bf"
pod="coredns-588fd544bf-xwxzc"
prometheus_replica="prometheus-collection-kube-prometheus-prometheus-0"
prometheus_service="collection-kube-prometheus-coredns"
prometheus="sumologic/collection-kube-prometheus-prometheus"
replicaset="coredns-588fd544bf"
server="dns://:53"
service="collection-kube-prometheus-coredns_kube-dns"
type="denial"
} 31105.0 1632830530313

whereas with the OT distro we do the following (using sumologicexporter and the prometheus metric format):

--> POST /receiver HTTP/1.1
--> content-encoding: gzip
--> x-sumo-client: otelcol
--> accept-encoding: gzip
--> content-length: 1142
--> host: receiver-mock.receiver-mock:3000
--> user-agent: Go-http-client/1.1
--> content-type: application/vnd.sumologic.prometheus
coredns_cache_hits_total{
_collector="kubernetes-pmalek-vagrant-otc-1"
_origin="kubernetes"
_sourceCategory="kubernetes/kube/system/coredns/588fd544bf"
_sourceHost="undefined"
_sourceName="kube-system.coredns-588fd544bf-xwxzc.undefined"
cluster="microk8s"
container="coredns"
deployment="coredns"
endpoint="http-metrics"
host="collection-sumologic-otelcol-metrics-0"
instance="10.1.34.69:9153"
job="coredns"
namespace="kube-system"
pod_labels_k8s-app="kube-dns"
pod_labels_pod-template-hash="588fd544bf"
pod="coredns-588fd544bf-xwxzc"
prometheus_replica="prometheus-collection-kube-prometheus-prometheus-0"
prometheus_service="collection-kube-prometheus-coredns"
prometheus="sumologic/collection-kube-prometheus-prometheus"
replicaset="coredns-588fd544bf"
server="dns://:53"
service="collection-kube-prometheus-coredns_kube-dns"
type="denial"
} 30717 1632829330313

One might note that with otelcol we do not send the source-related headers. We do send them with fluentd, but there they are set to empty values.

This would be a nice addition, which could for instance be used in the Data Volume App, where the user is only allowed to filter based on source-related metadata.
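
A sketch of how this might look from the configuration side, assuming the exporter's existing source fields are reused to populate the headers (the option names and %{attr} templates follow the sumologicexporter conventions; sending them as headers is the proposed addition):

exporters:
  sumologic:
    source_category: kubernetes/kube/system
    source_name: "%{k8s.namespace.name}.%{k8s.pod.name}"
    source_host: "%{k8s.node.name}"
    # proposed: send these as X-Sumo-Category / X-Sumo-Name / X-Sumo-Host
    # headers on metrics requests, as fluentd-output-sumologic does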

cc: @frankreno

Document running as a systemd service

Notably, systemd services are often run as users without a home directory, which prevents the extension from saving local credentials. At a minimum we should make it clear in the documentation that the user needs a home directory, or that the credentials store location should be explicitly set to some other location.
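
Until the documentation is updated, a minimal sketch of the workaround, using the extension's existing option for relocating the credentials store (the same option appears in configs elsewhere in this document):

extensions:
  sumologic:
    install_token: ${SUMOLOGIC_INSTALL_TOKEN}
    # store credentials in a directory the service user can write to,
    # instead of relying on a home directory existing
    collector_credentials_directory: /var/lib/otelcol-sumo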

[sourceprocessor,sumologicexporter] Remove source templates

Source templates are implemented in two places:

  • sourceprocessor (with additional support for pod_name and the possibility of being replaced by pod annotations)
  • sumologicexporter

This violates DRY; to resolve it, I propose the following:

  • remove source templates from sourceprocessor
  • add the ability to define source templates via an attribute in sumologicexporter, with a fallback to the current approach

With this solution, sourceprocessor will be able to prepare the template based on pod annotations, and the exporter will take care of the rest.
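
A hypothetical sketch of the proposed split (the source_category_attribute option is illustrative only, not an existing setting):

processors:
  source:
    # sourceprocessor renders the template (pod annotations, pod_name, ...)
    # and stores the result in an attribute
    source_category: "%{k8s.namespace.name}/%{k8s.pod.name}"

exporters:
  sumologic:
    # hypothetical: read the final value from an attribute, falling back
    # to the current template-based behavior when the attribute is absent
    source_category_attribute: _sourceCategory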

sumologicexporter: carbon2 exporter doesn't serialize data point attributes

It seems that the carbon2 data format in sumologicexporter doesn't take data point attributes into account.

i.e. the data point is being serialized here:

switch record.metric.DataType() {
case pdata.MetricDataTypeGauge:
	dps := record.metric.Gauge().DataPoints()
	nextLines = make([]string, 0, dps.Len())
	for i := 0; i < dps.Len(); i++ {
		nextLines = append(nextLines, carbon2NumberRecord(record, dps.At(i)))
	}
case pdata.MetricDataTypeSum:
	dps := record.metric.Sum().DataPoints()
	nextLines = make([]string, 0, dps.Len())
	for i := 0; i < dps.Len(); i++ {
		nextLines = append(nextLines, carbon2NumberRecord(record, dps.At(i)))
	}
// Skip complex metrics
case pdata.MetricDataTypeHistogram:
case pdata.MetricDataTypeSummary:
}
and here
switch dataPoint.ValueType() {
case pdata.MetricValueTypeDouble:
	return fmt.Sprintf("%s %g %d",
		carbon2TagString(record),
		dataPoint.DoubleVal(),
		dataPoint.Timestamp()/1e9,
	)
case pdata.MetricValueTypeInt:
	return fmt.Sprintf("%s %d %d",
		carbon2TagString(record),
		dataPoint.IntVal(),
		dataPoint.Timestamp()/1e9,
	)
}
but its attributes are never taken into account, as they are in e.g. the prometheus formatter:
// doubleValueLine returns prometheus line with given value
func (f *prometheusFormatter) doubleValueLine(name string, value float64, dp dataPoint, attributes pdata.AttributeMap) string {
	return f.doubleLine(
		name,
		f.tags2String(attributes, dp.Attributes()),
		value,
		dp.Timestamp(),
	)
}

// uintValueLine returns prometheus line with given value
func (f *prometheusFormatter) uintValueLine(name string, value uint64, dp dataPoint, attributes pdata.AttributeMap) string {
	return f.uintLine(
		name,
		f.tags2String(attributes, dp.Attributes()),
		value,
		dp.Timestamp(),
	)
}
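
A rough sketch of a fix, merging data point attributes into the tags before carbon2 serialization, mirroring what prometheusFormatter.tags2String does (metricPair and the pdata calls are taken from the surrounding exporter code; the helper itself is illustrative):

// sketch: build the attribute set for carbon2 from both the record-level
// attributes and the data point attributes, letting the data point win on conflicts
func carbon2MergedAttributes(record metricPair, dataPoint pdata.NumberDataPoint) pdata.AttributeMap {
	merged := pdata.NewAttributeMap()
	record.attributes.CopyTo(merged)
	dataPoint.Attributes().Range(func(k string, v pdata.AttributeValue) bool {
		merged.Upsert(k, v)
		return true
	})
	return merged
}

carbon2TagString would then serialize this merged map instead of only the record attributes.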

Update documentation regarding systemd logs

Update the documentation about systemd logs. It should focus on permissions. See the following snippets for more context:

sudo -u opentelemetry --shell /usr/bin/bash
bash-5.1$ echo $HOME
/home/opentelemetry
bash-5.1$ journalctl -u sensu-agent
Hint: You are currently not seeing messages from other users and the system.
      Users in groups 'adm', 'systemd-journal', 'wheel' can see all messages.
      Pass -q to turn off this notice.
No journal files were opened due to insufficient permissions.
sudo usermod -a -G wheel opentelemetry
sudo -u opentelemetry --shell /usr/bin/bash
bash-5.1$ groups
opentelemetry wheel
bash-5.1$ journalctl -u sensu-agent -n 10
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon sensu-agent[661517]: {"level":"warning","msg":"Error reading from socket: read udp 127.0.0.1>
Aug 10 22:13:38 carbon systemd[1]: sensu-agent.service: Deactivated successfully.
Aug 10 22:13:38 carbon systemd[1]: Stopped sensu-agent.service - The Sensu Agent process..
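
Based on the hint in the output above, the documented fix could use the dedicated journal group rather than wheel:

# grant the service user read access to the journal
sudo usermod -a -G systemd-journal opentelemetry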

Fix modules and git tags so that plugins are importable

We didn't really cover the use case of using our plugins externally, i.e. that was never tested. To support it we'll need to do two things:

  • add tags with module directory prefixes as part of our release process so that users (like yourself) can pull them into the build; this is needed because golang/go#34055 was accepted but not implemented 😞. So, for the above to work with our exporter's line uncommented, we'd need a tag like pkg/exporter/sumologicexporter/v0.0.43-beta.0, which could then be used in the builder config like so (see also the tagging sketch after this list):

    exporters:
      - gomod: "github.com/SumoLogic/sumologic-otel-collector/pkg/exporter/sumologicexporter v0.0.43-beta.0"
    
  • point to proper versions of our internal dependencies that we rely on (e.g. extension <=> exporter), I'm talking about this line for instance
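
For the first point, the release step could look roughly like this (a sketch; the tag value is the example from above):

# tag the module with its directory prefix so go tooling can resolve it
git tag pkg/exporter/sumologicexporter/v0.0.43-beta.0
git push origin pkg/exporter/sumologicexporter/v0.0.43-beta.0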

Related thread on slack: https://sumodojo.slack.com/archives/G011R8ZMUEB/p1640018349031800

Metadata for pods with specific names is not added

Metadata for pods with specific names is not added by k8sprocessor

For example, when the pod name is collection-sumologic-otelcol-logs-1, some metadata (e.g. the statefulset name) is not available in Sumo.

There is a constant list of pod name patterns to ignore:

	// TODO: move these to config with default values
	podNameIgnorePatterns = []*regexp.Regexp{
		regexp.MustCompile(`jaeger-agent`),
		regexp.MustCompile(`jaeger-collector`),
		regexp.MustCompile(`otel-collector`),
		regexp.MustCompile(`otel-agent`),
		regexp.MustCompile(`collection-sumologic-otelcol`),
	}

podNameIgnorePatterns is then used in shouldIgnorePod.
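
Following the TODO, a sketch of moving the patterns into configuration (the field name and helper are illustrative, not the actual processor internals):

import (
	"fmt"
	"regexp"
)

type Config struct {
	// PodNameIgnorePatterns replaces the hardcoded podNameIgnorePatterns list;
	// the defaults would mirror the current constants.
	PodNameIgnorePatterns []string `mapstructure:"pod_name_ignore_patterns"`
}

func compileIgnorePatterns(cfg Config) ([]*regexp.Regexp, error) {
	compiled := make([]*regexp.Regexp, 0, len(cfg.PodNameIgnorePatterns))
	for _, p := range cfg.PodNameIgnorePatterns {
		re, err := regexp.Compile(p)
		if err != nil {
			return nil, fmt.Errorf("invalid pod_name_ignore_patterns entry %q: %w", p, err)
		}
		compiled = append(compiled, re)
	}
	return compiled, nil
}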

Unable to restart agent when stopped

I have encountered a reproducible error that occurs when I stop and attempt to restart the collector, as follows:

2022-07-27T22:21:27.944Z        info    adapter/receiver.go:54  Starting stanza receiver        {"kind": "receiver", "name": "filelog", "pipeline": "logs"}                                                                          
Error: cannot start pipelines: start stanza: read known files from database: stat: invalid argument
2022/07/27 22:21:27 collector server run finished with error: cannot start pipelines: start stanza: read known files from database: stat: invalid argument

Here's my config.yaml file:

extensions:
  file_storage:
    directory: /var/lib/otelcol-sumo/file_storage
  sumologic:
    install_token: ${SUMO_OTC_INSTALL_TOKEN}
    collector_credentials_directory: /var/lib/otelcol-sumo
    collector_name: ${HOSTNAME}
    collector_category: mission-s3m
    collector_description: Sumo Logic OTC Distro Demo
    collector_fields:
      mission: s3m
    clobber: true
    ephemeral: true

receivers:
  filelog:
    include_file_name: false
    include_file_path_resolved: true
    start_at: end
    include:
      - /tmp/sumologic-otc-example.log

exporters:
  logging:
    loglevel: info
  sumologic:

service:
  extensions: [file_storage, sumologic]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [sumologic, logging]

For more information about my environment, please see this document, which is slightly modified from the Sumo Logic OTC Distro installation and configuration guides.

Define configuration via globbed paths

Problem

It's often desirable to split OT configuration into multiple files. In particular, this makes configuration management much easier if pipelines are largely independent. If a user wants to collect telemetry data from logically separate sources, the following is a natural way of organizing configuration:

./
├─ shared.yaml
├─ integrations/
│  ├─ nginx.yaml
│  ├─ mysql.yaml
│  ├─ hostmetrics.yaml

This is currently possible by passing multiple config files to OT's command line, like so:

otelcol --config shared.yaml --config integrations/nginx.yaml --config integrations/mysql.yaml --config integrations/hostmetrics.yaml

The above works, but is awkward to maintain and requires changes to the otel invocation whenever files are added or removed. It's natural to want to do the following instead:

otelcol --config shared.yaml --config integrations/*.yaml

but this doesn't work with the built-in file provider.

Solution: The Glob config provider

OT core has an extensible configuration handling subsystem, which, in particular, allows defining configuration providers for different URI schemes.

I propose that we add a configuration provider for paths defined by glob expressions. This would be used like so:

otelcol --config shared.yaml --config "glob:integrations/*.yaml"

This would be a pretty straightforward addition without a need to patch upstream, and the necessary functionality is either covered by upstream (configuration handling and merging), or by libraries.
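
A sketch of what the provider could look like, assuming the confmap.Provider interface from recent collector versions (all names here are illustrative, not a final implementation):

package globprovider

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"

	"go.opentelemetry.io/collector/confmap"
	"gopkg.in/yaml.v3"
)

type provider struct{}

func New() confmap.Provider { return &provider{} }

func (*provider) Scheme() string { return "glob" }

func (*provider) Retrieve(_ context.Context, uri string, _ confmap.WatcherFunc) (*confmap.Retrieved, error) {
	paths, err := filepath.Glob(strings.TrimPrefix(uri, "glob:"))
	if err != nil {
		return nil, err
	}
	sort.Strings(paths) // deterministic, alphabetical merge order
	merged := confmap.New()
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			return nil, err
		}
		var raw map[string]any
		if err := yaml.Unmarshal(b, &raw); err != nil {
			return nil, fmt.Errorf("%s: %w", p, err)
		}
		// reuse OT's default deep merging semantics
		if err := merged.Merge(confmap.NewFromStringMap(raw)); err != nil {
			return nil, err
		}
	}
	return confmap.NewRetrieved(merged.ToStringMap())
}

func (*provider) Shutdown(context.Context) error { return nil }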

Behaviour

The configuration files would be loaded and merged using OT's default deep merging semantics, in some deterministic order - I am inclined to simply use alphabetical order here.

It'd be possible to supply multiple glob sources - in that case, each source would be resolved separately, and then they'd be merged in the order they were provided.

Hot reloading

We could hot reload the configuration whenever any of the files change. This can be a bit complicated, so it'd have to be on a best-effort basis with some caveats around symlinks and network file systems. It's not entirely clear to me whether this is necessary or even a good idea.

Open questions

  • Do we want hot reloading?
  • What should happen if some of the matched files are invalid? We could either error completely or print a warning and skip the invalid files.

Stability level of sumologic extension is not defined

I'm seeing the following message logged when starting otelcol-sumo version 0.56.0-sumo-0 with info-level logging enabled:

2022-07-27T22:57:32.063Z        info    components/components.go:30     Stability level of component is undefined       {"kind": "exporter", "data_type": "logs", "name": "sumologic", "stability": "undefined"}

I suspect we're missing an attribute in our extension to set the stability level. Somehow, seeing "undefined" here is even worse than "alpha" or "experimental". Given that we've announced the general availability of our OTC distribution and we are offering commercial support for our distribution (and exporter extension) to our customers, it would be great if we could set this to "beta" or better. 😊

I got this message on an amd64 linux host running Debian Linux.

Revisit pkg/processor/cascadingfilterprocessor/processor_test.go after bump to 0.28.0

#107 bumps the OT upstream version dependency to 0.28.0 which introduced the following breaking change:

Move BigEndian helper functions in tracetranslator to an internal package

This in turn makes it impossible to use the helpers, e.g. tracetranslator.UInt64ToSpanID(), because they were moved to an internal package.

Those tests need to be revisited, re-enabled (remove the t.Skip() calls), and adjusted for this change.

Make it easier for new users to configure their first (example) OTC pipeline

Related: #586

The Sumo Logic OTC Distro Configuration documentation is pretty good, but I was able to spend over 30 minutes reading through the docs without being instructed to actually start the collector with a functional configuration. We should offer users a complete example configuration that moves them one step closer to configuring real workloads using Sumo Logic with the Sumo Logic OTC Distro.

If I could suggest a minor revision, I'd make the following changes to the Configuration documentation:

  1. Add a new "Example configuration" heading (above the "Basic configuration" heading, or in place of it).

    This section should provide a complete config.yaml example that users can copy and paste without modification, and this configuration should provide a fully functional pipeline so that users can start the collector and observe data being delivered to their Sumo Logic account (a sketch of such a configuration follows this list).

    I would suggest that this example configuration use a filelog receiver that is configured to read log data from a non-standard log file (e.g. /tmp/sumologic-otc-example.log).

    Why use a non-standard log file? Two reasons: 1) users might not be comfortable sending actual log data (yet), perhaps for fear of leaking sensitive data, or for other reasons. 2) choosing the right log file is tricky – we would want to avoid a system log file that could potentially be inactive at the moment the user is testing OTC (otherwise they might think something isn't working), and we might want to avoid a noisy log file like /var/log/messages because you don't necessarily want new users to drink directly from the fire hose. A known fake log file sets good expectations for the user – they wouldn't expect anything to happen without their intervention, and they get to control exactly when data is written to the file and thus get to see exactly how fast the collector sends their data to Sumo Logic!

  2. Use environment variables!

    When I saw that the Configuration documentation provided an example config.yaml file with a placeholder for my installation token (e.g. install_token: <token>), I started wondering if the OpenTelemetry Collector config file supported environment variables. It turns out that it does!

    "The use and expansion of environment variables is supported in the Collector configuration."

    Source: https://opentelemetry.io/docs/collector/configuration/#configuration-environment-variables

    If we provided example config.yaml files with configuration parameters that reference environment variables, it's one less chance for users to misconfigure their collector – they can just copy and paste and set one or more environment variables:

    extensions:
      sumologic:
        install_token: ${SUMOLOGIC_INSTALL_TOKEN}
  3. Instruct users to verify the OTC pipeline by visiting their Sumo Logic account and starting a LiveTail, then run a command to append data to the configured log file. For example:

    echo "$(date) $(hostname) INFO: Hello, Sumo Logic OpenTelemetry Collector Distro!" >> /tmp/sumologic-otc-example.log
    

    Do they see the message in LiveTail? SUCCESS!! 🎉
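
Putting points 1 and 2 together, a minimal sketch of the proposed example configuration, assembled from options that already appear elsewhere in this repository:

extensions:
  sumologic:
    install_token: ${SUMOLOGIC_INSTALL_TOKEN}

receivers:
  filelog:
    include:
      - /tmp/sumologic-otc-example.log
    start_at: beginning

exporters:
  sumologic:

service:
  extensions: [sumologic]
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [sumologic]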

From here there are a number of ways the Configuration documentation could guide users to take very productive next steps:

  • Configure service management for OTC (e.g. systemd unit files)
  • Configure log monitoring (e.g. get them to hook up OTC pipelines for actual log files)
  • Configure metrics collection (e.g. provide a simple example to enable host monitoring)
  • Configure tracing

/cc @pmm-sumo

Consistent order of configuration sections in docs/examples

It would be helpful for new users if we organized the configuration documentation and examples logically.

It has been helpful in my own exploration of OpenTelemetry to arrange my configuration files in the same order that the data flows, i.e. receivers => processors => exporters => [service.]pipelines. In this same model, I place extensions as modules to be loaded before configuring the observability pipelines.

---
# /etc/otelcol-sumo/config.yaml
extensions: {} # enable OT extensions

receivers: {} # collect data
processors: {} # filter, aggregate/correlate, and enrich data
exporters: {} # send data

service: {} # daemon runtime instructions

In most examples in our configuration documentation we organize these sections in order of exporters, extensions, receivers, and service (with no empty processors section). In other examples we use other orders, which is inconsistent at best and confusing for new users at worst.

It would be great if we restructured the examples in our documentation to consistently order configuration sections. 👌 🙏

Research if changes in builder config are necessary after deprecation of includeCore flag

Given that open-telemetry/opentelemetry-collector#4087 was merged, we might need to adjust the builder config to get rid of this warning:

2021-12-29T10:23:30.822+0100    WARN    builder/config.go:107   IncludeCore is deprecated. Starting from v0.41.0, you need to include all components explicitly.
go.opentelemetry.io/collector/cmd/builder/internal/builder.(*Config).Validate
        go.opentelemetry.io/collector/cmd/builder/internal/builder/config.go:107
go.opentelemetry.io/collector/cmd/builder/internal.Command.func1
        go.opentelemetry.io/collector/cmd/builder/internal/command.go:42
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@<version>/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@<version>/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@<version>/command.go:902
main.main
        go.opentelemetry.io/collector/cmd/builder/main.go:26
runtime.main
        runtime/proc.go:255
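
If changes are needed, the builder config would have to list core components explicitly; a hedged sketch based on the builder's module syntax of that era (module versions and import paths are assumptions to be verified):

# instead of relying on includeCore, declare core components explicitly
receivers:
  - gomod: go.opentelemetry.io/collector v0.41.0
    import: go.opentelemetry.io/collector/receiver/otlpreceiver
exporters:
  - gomod: go.opentelemetry.io/collector v0.41.0
    import: go.opentelemetry.io/collector/exporter/otlpexporter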

docs/Configuration.md#command-line-configuration-options describes non existing flags

Configuration.md#command-line-configuration-options is outdated. It describes flags that no longer exist.

This section of documentation states:

Usage:
  otelcol-sumo [flags]

Flags:
      --add-instance-id             Flag to control the addition of 'service.instance.id' to the collector metrics. (default true)
      --config string               Path to the config file
  -h, --help                        help for otelcol-sumo
      --log-format string           Format of logs to use (json, console) (default "console")
      --log-level Level             Output level of logs (DEBUG, INFO, WARN, ERROR, DPANIC, PANIC, FATAL) (default info)
      --log-profile string          Logging profile to use (dev, prod) (default "prod")
      --mem-ballast-size-mib uint   Flag to specify size of memory (MiB) ballast to set. Ballast is not used when this is not specified. default settings: 0
      --metrics-addr string         [address]:port for exposing collector telemetry. (default ":8888")
      --metrics-level Level         Output level of telemetry metrics (none, basic, normal, detailed) (default basic)
      --metrics-prefix string       Prefix to the metrics generated by the collector. (default "otelcol")
      --set stringArray             Set arbitrary component config property. The component has to be defined in the config file and the flag has a higher precedence. Array config properties are overridden and maps are joined, note that only a single (first) array property can be set e.g. -set=processors.attributes.actions.key=some_key. Example --set=processors.batch.timeout=2s (default [])
  -v, --version                     version for otelcol-sumo

But the actual otelcol-sumo flags differ:

$ otelcolbuilder/cmd/otelcol-sumo --help
Usage:
  otelcol-sumo [flags]

Flags:
      --config -config=file:/path/to/first --config=file:path/to/second   Locations to the config file(s), note that only a single location can be set per flag entry e.g. -config=file:/path/to/first --config=file:path/to/second. (default [])
      --feature-gates Flag                                                Comma-delimited list of feature gate identifiers. Prefix with '-' to disable the feature.  '+' or no prefix will enable the feature.
  -h, --help                                                              help for otelcol-sumo
      --set stringArray                                                   Set arbitrary component config property. The component has to be defined in the config file and the flag has a higher precedence. Array config properties are overridden and maps are joined, note that only a single (first) array property can be set e.g. -set=processors.attributes.actions.key=some_key. Example --set=processors.batch.timeout=2s (default [])
  -v, --version                                                           version for otelcol-sumo

Move metadata attribute translation from sumologicexporter to sumologicschemaprocessor

Currently, our exporter takes care of translating attribute names from the OpenTelemetry schema to the Sumo schema used by apps. See the definitions here.

However, exporters shouldn't modify data, as this causes race conditions and memory corruption when the same records are multiplexed into more than one exporter. Originally, the intent of this feature was to make it easier for users to send data with correct metadata to Sumo without having to configure multiple components, but in hindsight, this wasn't worth the problems it causes.

We've since introduced a separate processor for the purpose of translating metadata to what the Sumo backend and apps expect. The attribute translation should live in this processor instead.

This would be a breaking change and the fix should come with migration instructions.
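
After the move, users would enable the translation in the processor instead; a sketch, assuming the processor keeps its current option naming:

processors:
  sumologic_schema:
    # attribute translation moved here from the sumologic exporter
    translate_attributes: true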

Add a nice arch/OS support matrix

Right now we only have instructions for all the different OS/arch combinations in the README. We should have that information organized in a support matrix so it's easy for users to tell at first glance what exactly is supported.

Make it easier to validate Sumo Logic OTC Distro installation

The Sumo Logic OTC Distro Installation documentation in this repository is pretty great. Installation of the OpenTelemetry Collector is easy thanks to statically compiled binaries — just download the binary for your platform (operating system + arch), add the executable bit, add the binary to your $PATH, and run a command to verify which version you have installed (i.e. otelcol-sumo --version). Nice and easy!

However, in my experience that's where the ease-of-use ended, and the confusion started. The second to last step in the installation documentation linked me to the Configuration documentation (should I start configuring the collector now, or finish the install?), both of which skip over the critical step of obtaining an installation token and verifying that the collector can communicate with the Sumo Logic platform.

If I could suggest a minor revision, I'd make a few small changes to the Installation documentation:

  1. Add a link from the Installation documentation to the installation token documentation, and encourage users to configure an installation token and set it as an environment variable in their shell; e.g.:

    export SUMOLOGIC_INSTALL_TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    
  2. Add the minimum viable CLI arguments to run otelcol-sumo such that it will successfully connect to a Sumo Logic account.

    I'm assuming this will be possible via some combination of --set flags, for example:

    $ otelcol-sumo \
      --set exporters.sumologic \
      --set extensions.sumologic.install_token ${SUMOLOGIC_INSTALL_TOKEN} \
      --set ...
    
  3. Instruct the user to verify that they have successfully installed otelcol-sumo by visiting the Collection page in their Sumo Logic account and confirming that the collector is now registered. Success! 🎉

    Once verified they should be able to stop the collector (e.g. via Ctrl-C).

  4. Make the last instruction in the installation documentation a link to the Configuration documentation.

/cc @pmm-sumo


[k8sprocessor] implicit dependencies between owner metadata

OT distro version: 0.0.57-beta.0

Steps to reproduce

  1. Configure the k8s_tagger processor with owner_lookup_enabled: true and with an extract metadata section that includes deploymentName but not replicaSetName, and cronJobName but not jobName:
processors:
  k8s_tagger:
    owner_lookup_enabled: true
    extract:
      metadata:
       - deploymentName
       - cronJobName
  2. Prepare a Kubernetes cluster that includes a Deployment and a CronJob.
  3. Run the collector in the Kubernetes cluster with log and/or metrics collection from pods, and observe the metadata on the logs and/or metrics.

Actual result

  • Data from pods that belong to a Deployment is not tagged with deployment name
  • Data from pods that belong to a CronJob is not tagged with the cronjob's name

Expected result

  • Data from pods that belong to a Deployment is tagged with deployment name k8s.deployment.name
  • Data from pods that belong to a CronJob is tagged with the cronjob's name k8s.cronjob.name

How to fix

This is because Pods that belong to a Deployment actually belong to a ReplicaSet, and that ReplicaSet belongs to the Deployment. With the above configuration, the ReplicaSet information is currently not retrieved from the k8s API server, but it should be whenever Deployment extraction is configured.

Same with CronJob - Job - Pod hierarchy.
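
A minimal sketch of the fix: expand the requested metadata set with its implicit dependencies before the informers are set up (the function and map shape are illustrative, not the actual processor internals):

// expandOwnerMetadata adds owner kinds that are needed transitively:
// a Deployment owns Pods through a ReplicaSet, a CronJob through a Job.
func expandOwnerMetadata(requested map[string]bool) {
	if requested["deploymentName"] {
		requested["replicaSetName"] = true
	}
	if requested["cronJobName"] {
		requested["jobName"] = true
	}
}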
