connectrpc / otelconnect-go

OpenTelemetry tracing and metrics for Connect.

Home Page: https://connectrpc.com

License: Apache License 2.0

Languages: Go 97.40%, Makefile 2.60%
Topics: golang, grpc, metrics, opentelemetry, rpc, tracing, prometheus, connectrpc

otelconnect-go's Introduction

otelconnect


connectrpc.com/otelconnect adds support for OpenTelemetry tracing and metrics collection to Connect servers and clients.

For more on Connect, OpenTelemetry, and otelconnect, see the Connect announcement blog post and the observability documentation on connectrpc.com.

An example

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"connectrpc.com/connect"
	"connectrpc.com/otelconnect"
	// Generated from your protobuf schema by protoc-gen-go and
	// protoc-gen-connect-go.
	pingv1 "connectrpc.com/otelconnect/internal/gen/observability/ping/v1"
	"connectrpc.com/otelconnect/internal/gen/observability/ping/v1/pingv1connect"
)

func main() {
	mux := http.NewServeMux()
	otelInterceptor, err := otelconnect.NewInterceptor()
	if err != nil {
		log.Fatal(err)
	}

	// otelconnect.NewInterceptor provides an interceptor that adds tracing and
	// metrics to both clients and handlers. By default, it uses OpenTelemetry's
	// global TracerProvider and MeterProvider, which you can configure by
	// following the OpenTelemetry documentation. If you'd prefer to avoid
	// globals, use otelconnect.WithTracerProvider and
	// otelconnect.WithMeterProvider.
	mux.Handle(pingv1connect.NewPingServiceHandler(
		&pingv1connect.UnimplementedPingServiceHandler{},
		connect.WithInterceptors(otelInterceptor),
	))

	http.ListenAndServe("localhost:8080", mux)
}

func makeRequest() {
	otelInterceptor, err := otelconnect.NewInterceptor()
	if err != nil {
		log.Fatal(err)
	}

	client := pingv1connect.NewPingServiceClient(
		http.DefaultClient,
		"http://localhost:8080",
		connect.WithInterceptors(otelInterceptor),
	)
	resp, err := client.Ping(
		context.Background(),
		connect.NewRequest(&pingv1.PingRequest{}),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}

Configuration for internal services

By default, instrumented servers are conservative and behave as though they're internet-facing. They don't trust any tracing information sent by the client, and will create new trace spans for each request. The new spans are linked to the remote span for reference (using OpenTelemetry's trace.Link), but tracing UIs will display the request as a new top-level transaction.

If your server is deployed as an internal service, configure otelconnect to trust the client's tracing information using otelconnect.WithTrustRemote. With this option, servers will create child spans for each request.
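
As a rough sketch (reusing the imports, mux, and generated pingv1connect package from the example above), an internal-facing server might be configured like this:

	// Sketch for an internal service: trust the client's tracing information
	// so server spans become children of the caller's span instead of new
	// top-level transactions linked to it.
	otelInterceptor, err := otelconnect.NewInterceptor(
		otelconnect.WithTrustRemote(),
	)
	if err != nil {
		log.Fatal(err)
	}
	mux.Handle(pingv1connect.NewPingServiceHandler(
		&pingv1connect.UnimplementedPingServiceHandler{},
		connect.WithInterceptors(otelInterceptor),
	))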

Reducing metrics and tracing cardinality

By default, the OpenTelemetry RPC conventions produce high-cardinality server-side metric and tracing output. In particular, servers tag all metrics and trace data with the server's IP address and the remote port number. To drop these attributes, use otelconnect.WithoutServerPeerAttributes. For more customizable attribute filtering, use otelconnect.WithFilter.
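
As a minimal sketch (again reusing the setup from the example above), dropping the peer attributes looks like this:

	// Sketch: drop the high-cardinality server peer attributes from metrics
	// and traces.
	otelInterceptor, err := otelconnect.NewInterceptor(
		otelconnect.WithoutServerPeerAttributes(),
	)
	if err != nil {
		log.Fatal(err)
	}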

Status

Unary Streaming Client Streaming Handler
Metrics
Tracing

Ecosystem

Support and Versioning

otelconnect supports:

Legal

Offered under the Apache 2 license.

otelconnect-go's People

Contributors

akshayjshah, charithe, chrispine, cyinma, dependabot[bot], dragon3, drice-buf, emcfarlane, gvacaliuc, jhump, jipperinbham, joshcarp, nicksnyder, pkwarren, rubensf, smallsamantha, smaye81, tochemey, ytnsym, zchee


otelconnect-go's Issues

Excessive span.AddEvent for request/response size

We just switched to using the module and have been very pleased with the auto-instrumented requests, but we've also seen the number of events we send to our provider (Honeycomb) increase quite a bit. Because their usage is based on the number of events, and span.AddEvent is called to attach the request and response, a single request between two services now sends 5 events instead of 2.

Is there a specific otel convention for using AddEvent to include the request/response sizes? Could we have an option to include them as attributes on the main request span?
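
For reference, a hedged sketch of what such an option might look like, recording the sizes as attributes on the request span instead of as events (the attribute keys, ctx, and size variables here are illustrative, not otelconnect's actual output):

    // Hypothetical sketch: attach message sizes to the request span as
    // attributes rather than emitting separate span events.
    span := trace.SpanFromContext(ctx) // go.opentelemetry.io/otel/trace
    span.SetAttributes(
        attribute.Int("rpc.request.uncompressed_size", requestBytes),   // go.opentelemetry.io/otel/attribute
        attribute.Int("rpc.response.uncompressed_size", responseBytes),
    )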

example doesn't work

	cs := myconsumer.NewConsumerServer()
	api := http.NewServeMux()
	api.Handle(myconsumerv1connect.NewMyServiceHandler(cs, connect.WithInterceptors(
		otelconnect.NewInterceptor())))

gives me the following error:

cannot use otelconnect.NewInterceptor() (value of type error) as connect.Interceptor value in argument to connect.WithInterceptors: error does not implement connect.Interceptor (missing method WrapStreamingClient)

Server spans incorrectly marked as errors for client faults

Per the OTel spec for gRPC, only errors indicating a server fault (like INTERNAL) must result in the span being marked as an error.
https://opentelemetry.io/docs/specs/semconv/rpc/grpc/#grpc-status

Errors indicating a client fault (like INVALID_ARGUMENT) must not result in spans being marked as errors. This makes sense: for instance, user errors should not trigger alerts.

Otelconnect incorrectly reports all failed RPCs as errors here:
https://github.com/connectrpc/otelconnect-go/blob/main/interceptor.go#L357-L368
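
For reference, a hedged sketch of the distinction the spec draws. The code-to-fault mapping below follows the gRPC semantic conventions; the helper name is illustrative and not part of otelconnect's API:

    import "connectrpc.com/connect"

    // isServerFault reports whether a Connect error code indicates a server
    // fault under the OTel gRPC conventions; only these should mark the
    // server span as an error.
    func isServerFault(code connect.Code) bool {
        switch code {
        case connect.CodeUnknown,
            connect.CodeDeadlineExceeded,
            connect.CodeUnimplemented,
            connect.CodeInternal,
            connect.CodeUnavailable,
            connect.CodeDataLoss:
            return true
        default:
            return false
        }
    }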

Baggage propagation

OpenTelemetry interceptors should propagate baggage; currently, connect-opentelemetry-go does not.
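
For context, registering a global propagator that extracts and injects baggage alongside W3C trace context is a small amount of OpenTelemetry SDK setup (a sketch of application-side configuration, not otelconnect code):

    // Register a composite propagator so both trace context and baggage
    // headers are propagated on outgoing requests and read on incoming ones.
    otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
        propagation.TraceContext{},
        propagation.Baggage{},
    ))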

Typo in docs

In the docs linked here:
https://connectrpc.com/docs/go/observability/#enabling-opentelemetry-for-connect

the WithInterceptor func is missing an s; it should be WithInterceptors.

Including the packages to import would also make the example a little easier to integrate with:

import (
"connectrpc.com/connect"
"connectrpc.com/otelconnect"
)

func main() {
    mux := http.NewServeMux()
    mux.Handle(pingv1connect.NewPingServiceHandler(
        &pingv1connect.UnimplementedPingServiceHandler{},
        connect.WithInterceptors(
            otelconnect.NewInterceptor(),
        ),
    ))

    http.ListenAndServe("localhost:8080", mux)
}

func makeRequest() {
    client := pingv1connect.NewPingServiceClient(
        http.DefaultClient,
        "http://localhost:8080",
        connect.WithInterceptors(
            otelconnect.NewInterceptor(),
        ),
    )
    resp, err := client.Ping(
        context.Background(),
        connect.NewRequest(&pingv1.PingRequest{}),
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Print(resp)
}

Problems updating dependencies

There seems to be a problem when trying the following command in projects that import this library. This is the case for all projects I've tried so far.

go get -u ./...

github.com/<user>/<project> imports
	github.com/bufbuild/connect-opentelemetry-go imports
	go.opentelemetry.io/otel/metric/global: cannot find module providing package go.opentelemetry.io/otel/metric/global

The issue seems to be that the upstream opentelemetry.io library moved the metric directory in an update and then moved it back in another, creating a conflict somewhere.

Can you please take a look and update the dependencies of this library so it pulls in the correct SDK version?

Setting otelconnect.WithTrustRemote() does not add child spans

I have a Connect client and a Connect server, and both instances use the otelconnect interceptor. Both of the services have the following setup:

	otelInterceptor, err := otelconnect.NewInterceptor(otelconnect.WithTrustRemote())
	if err != nil {
		logger.Fatal().Err(err).Send()
	}

	interceptors := connect.WithInterceptors(
		otelInterceptor,
		logging.NewInterceptor(logger),
		validateInterceptor,
	)
	// -------------------------------------------------------
	organizationAuth := vanguard.NewService(authapiconnect.NewOrganizationAuthApiHandler(api, interceptors))

With Jaeger, the traces are split. The docs mention that traces will be split between instances by default unless otelconnect.WithTrustRemote() is added, but adding this option didn't fix the issue.

Could this also be because I'm using Vanguard?

Add custom tag to request metric

The service is developed using connectrpc and plugs in this package to get metrics like rpc_server_request_size_bytes_bucket{otel_scope_name="connectrpc.com/otelconnect",otel_scope_version="semver:0.6.0-dev",rpc_grpc_status_code="0",rpc_method="GetProject",rpc_service="projectService",rpc_system="grpc",le="0"} 0

I want to read specific fields from the request headers and add them to the metrics (for example: orgId). Is this operation supported?

the init code:

	exporter, err := prometheus.New()
	if err != nil {
		log.Fatalf("unable to create promethe expoter")
	}
	// add custom label to all metrics

	meterProvider := metric.NewMeterProvider(metric.WithReader(exporter),
		metric.WithResource(
			resource.NewWithAttributes(
				semconv.SchemaURL,
				semconv.ServiceNameKey.String("your-service-name"),
				attribute.String("environment", "demo"),
			),
		),
	)
	otelInterceptor, err := otelconnect.NewInterceptor(
		otelconnect.WithMeterProvider(meterProvider),
		otelconnect.WithoutServerPeerAttributes(),
	)

Trace context isn't propagated through the interceptor

Hello,

I'm trying to receive a trace which seems to be picked up by the otel package; however, it's missed by the interceptor, which then provides a new trace ID to the context.

I've added this to my project before creating the interceptor:

tc := propagation.NewCompositeTextMapPropagator(
  propagation.TraceContext{},
  propagation.Baggage{},
)
otel.SetTextMapPropagator(tc)

I'm passing a traceparent header and a tracestate header in my http request, from a Nodejs server.

The headers are received by my Go server in the request headers. When I step through the Extract function in otel/propagation/trace_context.go, the header is found and the correct IDs are returned, but they are then not picked up by the interceptor.

Any guidance would be helpful!
Thanks

How to get metrics?

I took the example from the documentation, but I can't figure out how to actually get metrics or traces out of it.
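
A minimal sketch of one way to surface the metrics (assuming a Prometheus setup; mux is your HTTP mux, and the import paths are noted in comments):

    // Create an OpenTelemetry Prometheus reader and a MeterProvider backed by it.
    exporter, err := prometheus.New() // go.opentelemetry.io/otel/exporters/prometheus
    if err != nil {
        log.Fatal(err)
    }
    meterProvider := metric.NewMeterProvider(metric.WithReader(exporter)) // go.opentelemetry.io/otel/sdk/metric

    // Point the interceptor at that MeterProvider instead of the global one.
    otelInterceptor, err := otelconnect.NewInterceptor(
        otelconnect.WithMeterProvider(meterProvider),
    )
    if err != nil {
        log.Fatal(err)
    }

    // Attach otelInterceptor to your handlers/clients, then expose the
    // scrape endpoint.
    mux.Handle("/metrics", promhttp.Handler()) // github.com/prometheus/client_golang/prometheus/promhttp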

Make span available in http.Request Context

I don't know if this is an otelconnect issue or a general connectrpc issue (I'm new to both).

I have a mix of interceptors (including this one) and http.Handlers. It'd be very helpful if this interceptor added the request span to the http.Request context.

Right now, to have tracing in the http.Handler middleware, I have to use otelhttp (which adds another span to the trace) or do similar work to "recapture" the traceparent from the incoming headers.
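
For what it's worth, a sketch of the otelhttp workaround described above; the extra HTTP-level span it creates is exactly the downside mentioned (mux and the request variable r are assumed):

    // Wrap the whole mux with otelhttp so plain http.Handler middleware can
    // read the span from the request context.
    handler := otelhttp.NewHandler(mux, "server") // go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
    http.ListenAndServe("localhost:8080", handler)

    // Inside any handler or middleware:
    span := trace.SpanFromContext(r.Context()) // go.opentelemetry.io/otel/trace
    _ = span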

Data Race: concurrent access of error state during duplexHTTPCall

I'm having difficulty determining the provenance of this particular race, but it's adjacent to the closeSpan call alongside a send func (similar to #170):

WARNING: DATA RACE
Read at 0x00c00fc5b408 by goroutine 34631:
  connectrpc.com/connect.(*Error).Unwrap()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/error.go:210 +0x38
  errors.is()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/errors/wrap.go:63 +0x24c
  errors.Is()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/errors/wrap.go:50 +0xfc
  connectrpc.com/connect.IsNotModifiedError()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/error.go:264 +0x68
  connectrpc.com/otelconnect.statusCodeAttribute()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/attributes.go:110 +0x5e4
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.2()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:234 +0x120

Previous write at 0x00c00fc5b408 by goroutine 34378:
  connectrpc.com/connect.NewError()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/error.go:133 +0xcc
  connectrpc.com/connect.wrapIfContextError()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/error.go:306 +0x114
  connectrpc.com/connect.(*duplexHTTPCall).makeRequest()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/duplex_http_call.go:310 +0x2f8
  connectrpc.com/connect.(*duplexHTTPCall).Send.gowrap1()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/duplex_http_call.go:108 +0x38

Goroutine 34631 (running) created at:
  context.(*afterFuncCtx).cancel.func1()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:348 +0x44
  sync.(*Once).doSlow()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/sync/once.go:74 +0x150
  sync.(*Once).Do()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/sync/once.go:65 +0x58
  context.(*afterFuncCtx).cancel()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:347 +0x174
  context.(*cancelCtx).cancel()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:558 +0x348
  context.(*cancelCtx).cancel()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:558 +0x348
  context.(*cancelCtx).cancel()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:558 +0x348
  context.WithCancel.func1()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/context/context.go:237 +0x78
  main.run.func1()
      main.go:53 +0x20c

Goroutine 34378 (finished) created at:
  connectrpc.com/connect.(*duplexHTTPCall).Send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/duplex_http_call.go:108 +0x33c
  connectrpc.com/connect.(*envelopeWriter).write()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/envelope.go:212 +0x88
  connectrpc.com/connect.(*envelopeWriter).Write()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/envelope.go:157 +0x37c
  connectrpc.com/connect.(*envelopeWriter).marshalAppend()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/envelope.go:195 +0x564
  connectrpc.com/connect.(*envelopeWriter).Marshal()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/envelope.go:143 +0x114
  connectrpc.com/connect.(*connectStreamingClientConn).Send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/protocol_connect.go:589 +0x54
  connectrpc.com/connect.(*errorTranslatingClientConn).Send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/protocol.go:206 +0x7c
  connectrpc.com/otelconnect.(*streamingState).send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:102 +0x68
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.5()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:262 +0x10c
  connectrpc.com/otelconnect.(*streamingClientInterceptor).Send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/payloadinterceptor.go:34 +0xa8
  connectrpc.com/connect.(*ClientStreamForClient[ ... ]).Send()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/client_stream.go:69 +0x110

Fix streaming concurrency issue

On this line there is a race condition, because createSpanOnce.Do(createSpan) doesn't ensure that Do has returned and the span has been initialized before the span is used elsewhere.
This means that the span can be nil in the case where OnClose and Send are called simultaneously.

should compression be measured?

The following is a trace of an API request. Note the "uncovered" time at the end. If I turn compression off, this uncovered time shrinks by quite a bit (around 2 ms total).
[trace screenshot]

I'd love to see exactly what's taking this response that much longer. It seems like ~10K of data should compress in less than 2 ms, but I'm actually not sure how long that takes.

Clean up `net.peer.{ip,port,name}` attribute handling

Let's follow up on https://github.com/bufbuild/connect-opentelemetry-go/pull/37#issuecomment-1358073237. We shouldn't use the net.peer.ip attribute at all, and servers and clients should both emit net.peer.name and net.peer.port by default. This is high cardinality, but matches the OTel specification.

As @joshcarp notes on #37, the AttributeFilter option introduced in #41 lets handlers strip the client port and IP from metrics to reduce cardinality. This seems likely to be a common request, so we should export an option like WithLowerCardinality that does this automatically.

Data Race: Closing a stream accesses streamingState data without acquiring a lock

When a stream receive is processed concurrently with a call to close, a race can occur when accessing the counters and attribute list on the streamingState, because the closeSpan func does not acquire a lock before accessing the data:

WARNING: DATA RACE
Read at 0x00c010498438 by goroutine 34631:
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.2()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:234 +0x100

Previous write at 0x00c010498438 by goroutine 34630:
  connectrpc.com/otelconnect.(*streamingState).receive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:85 +0x230
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.4()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:258 +0xe0
  connectrpc.com/otelconnect.(*streamingClientInterceptor).Receive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/payloadinterceptor.go:30 +0xa8
  connectrpc.com/connect.receiveUnaryMessage[ ... ]()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/connect.go:419 +0x22c
  connectrpc.com/connect.receiveUnaryResponse[ ... ]()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/connect.go:376 +0xe4
  connectrpc.com/connect.(*ClientStreamForClient[ ... ]).CloseAndReceive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/client_stream.go:82 +0x1d4
Similar races
WARNING: DATA RACE
Write at 0x00c010498420 by goroutine 34631:
  connectrpc.com/otelconnect.(*streamingState).addAttributes()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:73 +0x238
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.2()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:235 +0x2b8

Previous read at 0x00c010498420 by goroutine 34630:
  connectrpc.com/otelconnect.(*streamingState).receive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:97 +0x628
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.4()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:258 +0xe0
  connectrpc.com/otelconnect.(*streamingClientInterceptor).Receive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/payloadinterceptor.go:30 +0xa8
WARNING: DATA RACE
Read at 0x00c01056c540 by goroutine 34631:
  runtime.slicecopy()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/runtime/slice.go:325 +0x0
  go.opentelemetry.io/otel/metric.WithAttributes()
      /Users/rodaine/pkg/mod/go.opentelemetry.io/otel/[email protected]/instrument.go:366 +0xb8
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.2()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:242 +0x504

Previous write at 0x00c01056c540 by goroutine 34630:
  runtime.slicecopy()
      /Users/rodaine/pkg/mod/golang.org/[email protected]/src/runtime/slice.go:325 +0x0
  connectrpc.com/otelconnect.(*streamingState).addAttributes()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:73 +0x20c
  connectrpc.com/otelconnect.(*streamingState).receive()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/streaming.go:89 +0x460
  connectrpc.com/otelconnect.(*Interceptor).WrapStreamingClient.func1.4()
      /Users/rodaine/pkg/mod/connectrpc.com/[email protected]/interceptor.go:258 +0xe0
  connectrpc.com/otelconnect.(*streamingClientInterceptor).Receive()

The fix should be for closeSpan to acquire the lock, from what I can tell.
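
A hypothetical sketch of that fix; the streamingState fields and method below are paraphrased for illustration, not the actual source:

    // closeSpan should take a snapshot of the shared state while holding the
    // mutex, so concurrent Receive/Send calls can't race with it.
    type streamingState struct {
        mu         sync.Mutex
        attributes []attribute.KeyValue // go.opentelemetry.io/otel/attribute
        sentTotal  int64
        rcvdTotal  int64
    }

    func (s *streamingState) snapshot() ([]attribute.KeyValue, int64, int64) {
        s.mu.Lock()
        defer s.mu.Unlock()
        attrs := make([]attribute.KeyValue, len(s.attributes))
        copy(attrs, s.attributes)
        return attrs, s.sentTotal, s.rcvdTotal
    }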

Let handlers decide whether to trust remote spans

With the changes in #41, I think we're currently assuming that handlers are internal: if the client has already created a trace, we add new spans to it. (After the holiday break, everything's a little hazy - I could be wrong about this!)

This is fine for RPCs within a trusted system, but it's not the right approach for external-facing servers. External-facing handlers should instead create a new span and attach any span sent by the client as a link (using trace.LinkFromContext). We should look at otelgrpc and otelhttp to make sure we get this right.

I lean toward making the external API behavior the default, since it's safest, but I'm not sure.

Beta Release

Once all the remaining issues are completed, this repo will be ready for a beta release.

Migrate off deprecated span attributes

The following attributes have been deprecated:

  • net.peer.name -> replaced with server.address on client spans and client.address on server spans.
  • net.peer.port -> replaced with server.port on client spans and client.port on server spans.
  • http.status_code -> replaced with http.response.status_code.

Empty metrics description causes Prometheus exporter to fail

This is a somewhat strange case. We have an application where a legacy gRPC component is instrumented using the OTel gRPC instrumentation and a newer component uses Connect and is instrumented with otelconnect-go. As expected, both instrumentations produce rpc_client_* metrics that are nearly identical. However, because the Connect instrumentation does not add descriptions to the metrics, the Prometheus exporter takes issue with that and fails, complaining that the descriptions don't match. Here's a trimmed-down example without all the labels cluttering the error message:

collected metric rpc_client_duration_milliseconds ... has help "Measures the duration of inbound RPC." but should have ""

This seems to come from a consistency check in the Prometheus client. Unfortunately, it does the check on the metric family rather than on individual metrics, so having the rpc_system: connect label doesn't help.

I am not sure what the best solution for this is. Copying the descriptions from the OTel gRPC instrumentation seems to be the quickest fix, but if they ever change upstream, the problem would surface again 🤷🏽‍♂️
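
For illustration, a hedged sketch of how an instrument gets its description in the OTel metric API; the instrument name and description below mirror the help text quoted in the error above, and meterProvider is assumed to be an SDK MeterProvider:

    // An instrument created with a description. If another library registers
    // the same metric family with a different (or empty) description, the
    // Prometheus client's consistency check rejects the collection.
    meter := meterProvider.Meter("connectrpc.com/otelconnect")
    duration, err := meter.Float64Histogram(
        "rpc.client.duration",
        otelmetric.WithDescription("Measures the duration of inbound RPC."), // go.opentelemetry.io/otel/metric
        otelmetric.WithUnit("ms"),
    )
    if err != nil {
        log.Fatal(err)
    }
    _ = duration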

Add logging support

OpenTelemetry logs are now in beta, which is probably good enough to look into supporting them here.

References

Package godocs
Stability

I am currently on a tight schedule, but if nobody picks this up, I can try to contribute it myself once I have some time.

trace not recording using WithTrustRemote

I'm running into a weird issue setting up otelconnect. When I add the WithTrustRemote option to the interceptor, traces are not exported. When I remove the option, traces are exported and I can see the parent trace successfully marked as a trace.Link

My client is passing a traceparent header that looks like this:

newHeaders.set('traceparent', `00-${activeSpan.spanContext().traceId}-${activeSpan.spanContext().spanId}-00`)

And my server setup is almost identical to the docs:

otelInterceptor := connect_go.WithInterceptors(
    otelconnect.NewInterceptor(otelconnect.WithTrustRemote()),
)

// register rpc handlers
rpc.handle(prodRPC.NewProductionServiceHandler(productionService, otelInterceptor))

func setupOTelSDK(ctx context.Context, serviceName, serviceVersion string, logger *slog.Logger) (shutdown func(context.Context) error, err error) {
	var shutdownFuncs []func(context.Context) error

	// shutdown calls cleanup functions registered via shutdownFuncs.
	// The errors from the calls are joined.
	// Each registered cleanup will be invoked once.
	shutdown = func(ctx context.Context) error {
		var err error
		for _, fn := range shutdownFuncs {
			err = errors.Join(err, fn(ctx))
		}
		shutdownFuncs = nil
		return err
	}

	// handleErr calls shutdown for cleanup and makes sure that all errors are returned.
	handleErr := func(inErr error) {
		logger.Info(fmt.Sprintf("OTEL ERROR: %s", inErr.Error()))
		err = errors.Join(inErr, shutdown(ctx))
	}

	// Set up resource.
	res, err := newResource(serviceName, serviceVersion)
	if err != nil {
		handleErr(err)
		return
	}

	// Set up propagator.
	prop := newPropagator()
	otel.SetTextMapPropagator(prop)

	// Set up trace provider.
	tracerProvider, err := newTraceProvider(res)
	if err != nil {
		handleErr(err)
		return
	}
	shutdownFuncs = append(shutdownFuncs, tracerProvider.Shutdown)
	otel.SetTracerProvider(tracerProvider)

	return
}

func newResource(serviceName, serviceVersion string) (*resource.Resource, error) {
	return resource.Merge(resource.Default(),
		resource.NewWithAttributes(semconv.SchemaURL,
			semconv.ServiceName(serviceName),
			semconv.ServiceVersion(serviceVersion),
		))
}

func newPropagator() propagation.TextMapPropagator {
	return propagation.NewCompositeTextMapPropagator(
		propagation.TraceContext{},
		propagation.Baggage{},
	)
}

func newTraceProvider(res *resource.Resource) (*trace.TracerProvider, error) {
	traceExporter, err := otlptracegrpc.New(context.Background())
	if err != nil {
		return nil, err
	}

	traceProvider := trace.NewTracerProvider(
		trace.WithBatcher(traceExporter,
			// Default is 5s. Set to 1s for demonstrative purposes.
			trace.WithBatchTimeout(time.Second)),
		trace.WithResource(res),
	)
	return traceProvider, nil
}
