
Golang gRPC Middlewares: interceptor chaining, auth, logging, retries and more.

License: Apache License 2.0


go-grpc-middleware's Introduction

Go gRPC Middleware


This repository holds gRPC Go Middlewares: interceptors, helpers and utilities.

Middleware

gRPC Go has support for "interceptors", i.e. middleware that is executed either on the gRPC server, before the request is passed on to the user's application logic, or on the gRPC client, around the user's call. Interceptors are a perfect way to implement common patterns such as auth, logging, tracing, metrics, validation, retries and rate limiting, which become generic building blocks that make it easy to build multiple microservices.

Especially for observability signals (logging, tracing, metrics), interceptors offer semi-automatic instrumentation that improves the consistency of your observability and enables powerful correlation techniques (e.g. exemplars and trace IDs in logs). This is demonstrated in the examples.
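For illustration, this is the general shape of a hand-rolled unary server interceptor, a minimal sketch using only the stock google.golang.org/grpc API (the interceptor name here is made up for the example); the middlewares in this repository plug into the same extension point:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
)

// loggingUnaryInterceptor runs code before and after the handler, which is the
// hook point used for auth, logging, tracing, metrics, retries, etc.
func loggingUnaryInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	start := time.Now()
	resp, err := handler(ctx, req) // calls the next interceptor or the user's handler
	log.Printf("method=%s duration=%s err=%v", info.FullMethod, time.Since(start), err)
	return resp, err
}

func main() {
	_ = grpc.NewServer(grpc.ChainUnaryInterceptor(loggingUnaryInterceptor))
}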

This repository offers ready-to-use middlewares that implement gRPC interceptors, with examples. In some cases dedicated projects already offer great interceptors, so this repository skips those and links them in the interceptors list.

NOTE: Some middlewares are quite simple to write, so feel free to use this repo as a template if you need to. It's OK to copy the simpler interceptors if you need more flexibility; this repo can't support all the edge cases you might have.

Another great feature of interceptors is that they can be chained. For example, below is a server-side chain of interceptors with full observability correlation, auth and panic recovery:

	grpcSrv := grpc.NewServer(
		grpc.ChainUnaryInterceptor(
			// Order matters, e.g. the tracing interceptor has to create the span first for the later exemplars to work.
			otelgrpc.UnaryServerInterceptor(),
			srvMetrics.UnaryServerInterceptor(grpcprom.WithExemplarFromContext(exemplarFromContext)),
			logging.UnaryServerInterceptor(interceptorLogger(rpcLogger), logging.WithFieldsFromContext(logTraceID)),
			selector.UnaryServerInterceptor(auth.UnaryServerInterceptor(authFn), selector.MatchFunc(allButHealthZ)),
			recovery.UnaryServerInterceptor(recovery.WithRecoveryHandler(grpcPanicRecoveryHandler)),
		),
		grpc.ChainStreamInterceptor(
			otelgrpc.StreamServerInterceptor(),
			srvMetrics.StreamServerInterceptor(grpcprom.WithExemplarFromContext(exemplarFromContext)),
			logging.StreamServerInterceptor(interceptorLogger(rpcLogger), logging.WithFieldsFromContext(logTraceID)),
			selector.StreamServerInterceptor(auth.StreamServerInterceptor(authFn), selector.MatchFunc(allButHealthZ)),
			recovery.StreamServerInterceptor(recovery.WithRecoveryHandler(grpcPanicRecoveryHandler)),
		),
	)

This pattern offers clean and explicit shared functionality for all your gRPC methods. Full, buildable examples can be found in the examples directory.
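The chain above references a few small helpers (interceptorLogger, allButHealthZ, grpcPanicRecoveryHandler) that are defined in the full examples. As a rough sketch of what they can look like, assuming the v2 logging.LoggerFunc adapter, the selector match function receiving an interceptors.CallMeta, and recovery.RecoveryHandlerFunc, with slog as the backing logger:

import (
	"context"
	"log/slog"

	"github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors"
	"github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors/logging"
	"google.golang.org/grpc/codes"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
	"google.golang.org/grpc/status"
)

// interceptorLogger adapts an *slog.Logger to the logging.Logger interface.
func interceptorLogger(l *slog.Logger) logging.Logger {
	return logging.LoggerFunc(func(ctx context.Context, lvl logging.Level, msg string, fields ...any) {
		switch lvl {
		case logging.LevelDebug:
			l.DebugContext(ctx, msg, fields...)
		case logging.LevelInfo:
			l.InfoContext(ctx, msg, fields...)
		case logging.LevelWarn:
			l.WarnContext(ctx, msg, fields...)
		default:
			l.ErrorContext(ctx, msg, fields...)
		}
	})
}

// allButHealthZ applies the wrapped auth interceptor to every service except health checks.
func allButHealthZ(_ context.Context, callMeta interceptors.CallMeta) bool {
	return callMeta.Service != healthpb.Health_ServiceDesc.ServiceName
}

// grpcPanicRecoveryHandler converts a panic into a gRPC Internal error instead of crashing the server.
func grpcPanicRecoveryHandler(p any) error {
	return status.Errorf(codes.Internal, "recovered from panic: %v", p)
}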

Interceptors

This list covers known interceptors that users use for their Go microservices (both in this repo and external). Click on each to see extended examples in examples_test.go (also available on pkg.go.dev).

All paths should work with go get <path>.

Auth

Observability

Client

Server

Filtering Interceptor

Prerequisites

  • Go: any one of the three latest major releases is supported.

Structure of this repository

The main interceptors are available in the subdirectories of the interceptors directory, e.g. interceptors/validator, interceptors/auth or interceptors/logging.

Some interceptors, or utilities for interceptors, require opinionated code that pulls in a larger number of dependencies. Those are placed in the providers directory as separate Go modules with separate versioning. For example, providers/prometheus offers the metrics middleware (there is no "interceptors/metrics" at the moment). A separate module might be a little harder to discover and version in your go.mod, but it allows the core interceptors to stay ultra slim in terms of dependencies.

The interceptors directory also holds generic interceptors that accept a Reporter interface, which makes it easy to create your own middlewares.

As you might notice, this repository contains multiple modules with different versions (a Go Modules specific). Refer to versions.yaml for the current modules. We have the main module at version 2.x.y and provider modules at lower versions. Since the main module is v2, its module path ends with v2:

go get github.com/grpc-ecosystem/go-grpc-middleware/v2/<package>

For provider modules and packages, since they are v1, no version suffix is added to the path, e.g.

go get github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus

Changes compared to v1

go-grpc-middleware v1 was created around 2015 and became a popular choice for gRPC users. However, much has changed since then. The main changes in v2 compared to v1:

  • Support for separate, multiple Go modules in "providers". This makes it possible to add specific providers for certain middlewares in the future if needed, and lets interceptors be extended without pulling dependency hell into the core framework (e.g. if you use some other metrics provider, do you really want to import Prometheus?). This allows greater extensibility.
  • Loggers are removed. The interceptors/logging package got simplified, and writing an adapter for each logger is straightforward. For convenience, we maintain examples for popular providers in interceptors/logging/examples, but those are meant to be copied, not imported.
  • The grpc_opentracing interceptor was removed, because tracing instrumentation has evolved: OpenTracing is deprecated and OpenTelemetry now has a superior tracing interceptor.
  • The grpc_ctxtags interceptor was removed. Custom tags can be added to logging fields using logging.InjectFields (see the sketch after this list). The proto option to add a logging field was clunky in practice and we don't see any use of it nowadays, so it was removed.
  • One of the most powerful interceptors was imported from https://github.com/grpc-ecosystem/go-grpc-prometheus (that repo is now deprecated). This consolidation allows easier maintenance, easier use and a consistent API.
  • The chain interceptors were removed, because grpc implemented chaining itself.
  • Moved to the new proto API (google.golang.org/protobuf).
  • All "deciders", so functions that decide what to do based on gRPC service name and method (aka "fullMethodName") are removed (!). Use github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors/selector interceptor to select what method, type or service should use what interceptor.
  • No more snake_case package names. We now have single-word, meaningful package names. If you have a collision in package names we recommend adding a grpc prefix, e.g. grpcprom "github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus".
  • All the options (if any) are in the form of <package_name>.With<Option Name>, with extensibility to add more of them.
  • v2 is the main (default) development branch.
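
As a minimal sketch of the grpc_ctxtags replacement mentioned above (assuming logging.Fields is a flat key/value list and that this tagging interceptor is chained before the logging interceptor; the interceptor name is made up):

import (
	"context"

	"github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors/logging"
	"google.golang.org/grpc"
)

// tagUserInterceptor attaches extra fields to the context; the logging
// interceptor later in the chain emits them on every log line for this RPC.
func tagUserInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	ctx = logging.InjectFields(ctx, logging.Fields{"user_id", "12345"})
	return handler(ctx, req)
}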

For Maintainers: Release Process

This assumes we want to release a minor version of one of the modules:

  1. Understand what has changed and which version groups have to be updated.
  2. Update the group version on the v2 branch accordingly.
  3. Create a new tag for each module that has to be released. For the main module github.com/grpc-ecosystem/go-grpc-middleware/v2 the tag has no prefix (e.g. v2.20.1). For providers (sub-modules), the tag has the form providers/<provider>/v1.2.3. See https://github.com/golang/go/wiki/Modules#faqs--multi-module-repositories for details.
  4. Once all tags are pushed, draft and create a release on the GitHub releases page, mentioning all changed tags in the title. Use the auto-generated notes and remove the entries that are not relevant for users (e.g. docs fixes).

License

go-grpc-middleware is released under the Apache 2.0 license. See the LICENSE file for details.

go-grpc-middleware's People

Contributors

adam-26, aimuz, amenzhinsky, ash2k, bufdev, bwplotka, dependabot[bot], devnev, dmitris, domgreen, drewwells, iamrajiv, jkawamoto, johanbrandhorst, kartlee, khasanovbi, marcwilson-g, metalmatze, mkorolyov, nvx, ogimenezb, olivierlemasle, peczenyj, rahulkhairwar, stanhu, surik, takp, tegk, xsam, yashrsharma44


go-grpc-middleware's Issues

Question about cancelling stream methods

I have custom code in a stream interceptor, right after calling handler(srv, wrapped). When the client cancels the context, the code after the handler is not executed.
Is this a gRPC bug or a middleware bug?
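
(For reference, the shape being described is roughly the following sketch, using the stock stream interceptor signature and the v1 grpc_middleware.WrapServerStream helper; the log call stands in for the custom code.)

import (
	"log"

	grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
	"google.golang.org/grpc"
)

func myStreamInterceptor(srv interface{}, ss grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
	wrapped := grpc_middleware.WrapServerStream(ss)
	err := handler(srv, wrapped)
	// Custom code placed here; the question is whether this point is reached
	// when the client cancels the context mid-stream.
	log.Printf("stream %s finished: %v", info.FullMethod, err)
	return err
}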

Downstream interceptors are not re-executed on retry

Any interceptors chained after the retry interceptor are not re-executed on subsequent retry attempts.
For example, if we have:

grpc_middleware.ChainUnaryClient(
  grpc_retry.UnaryClientInterceptor(),
  grpc_prometheus.UnaryClientInterceptor)

and want the grpc_prometheus interceptor to see and time each retry independently, then currently it will intercept only the first attempt.

(This was possibly a regression in 5d4723c)

I have a pull request, with tests, in:
#100

grpc_auth: AuthFuncOverride without dummy interceptor

Currently, to use ServiceAuthFuncOverride one needs to introduce a dummy interceptor:

func dummyInterceptor(ctx context.Context) (context.Context, error) {
	return ctx, nil
}

...

s := grpc.NewServer(
		grpc.StreamInterceptor(grpc_auth.StreamServerInterceptor(dummyInterceptor)),
		grpc.UnaryInterceptor(grpc_auth.UnaryServerInterceptor(dummyInterceptor)),
	)

Is there a better way to do this?
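
(For context, the override is implemented on the service itself; a minimal sketch, assuming the grpc_auth.ServiceAuthFuncOverride interface with its AuthFuncOverride(ctx, fullMethodName) method.)

import "context"

type myService struct{}

// AuthFuncOverride causes grpc_auth to skip the chain-level AuthFunc for this
// service and call this method instead.
func (s *myService) AuthFuncOverride(ctx context.Context, fullMethodName string) (context.Context, error) {
	// Custom (or no-op) auth for this particular service.
	return ctx, nil
}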

Duplicate peer.address in zap logger output

Hi,

I'm using the zap logging interceptor with the ctxtags interceptor and running into an issue where the peer.address key appears twice.

Wondering if anyone else has run into this issue?

2017-12-24T22:46:05.822-0800	INFO	zap/server_interceptors.go:40	finished unary call	{"peer.address": "[::1]:63312", "grpc.start_time": "2017-12-24T22:46:05-08:00", "system": "grpc", "span.kind": "server", "grpc.service": "...", "grpc.method": "...", "peer.address": "[::1]:63312", "grpc.code": "OK", "grpc.time_ms": 398.6860046386719}
        ...
	logger, err := zap.NewDevelopment()
	if err != nil {
		log.Fatalf("failed to initialize zap logger: %v", err)
	}

	grpc_zap.ReplaceGrpcLogger(logger)

	kaParams := keepalive.ServerParameters{
		MaxConnectionIdle: 60 * time.Minute,
		Time:              60 * time.Minute,
	}

	s := grpc.NewServer(
		grpc.KeepaliveParams(kaParams),
		grpc_middleware.WithUnaryServerChain(
			grpc_ctxtags.UnaryServerInterceptor(),
			grpc_zap.UnaryServerInterceptor(logger),
			grpc_recovery.UnaryServerInterceptor(),
		),
	)
        ...

grpc_logrus multithreading concerns

In the sample code it is encouraged to create a log entry, store it in the context and use it for logging in the rest of the request. As far as I can tell this is not thread safe. What is the thought process behind this example? I would like to use this issue as a place to have a discussion on logging in this manner.

grpc_logrus: Use FieldLogger

Haven't checked the code yet, but I saw in the documentation that the logrus middleware expects a logrus.Entry. Wouldn't it be better to rely on the logrus.FieldLogger interface?

build a proxy to NATS with this

I was checking out https://github.com/mwitkow/grpc-proxy, because I want to pass incoming gRPC messages on to NATS - it makes microservices so much easier, I find.
However, in the issues, the problem was that the gRPC team would not accept the customisation you did to get access to the raw binary data in the stream.

So, would the new interceptors allow this proxying to be achieved?

support other language clients?

As the title asks: I have a server written in Go, but the client may be C#, so do these middlewares support C# gRPC clients?

Client Logging Interceptor defaults behave incorrectly

The client variants of the logging interceptors were added in #33. Following this change, the logging interceptors do not behave correctly in their default configurations, i.e. without specifying Options as overrides.

Logrus (server & client interceptors):

Unless WithLevels(...) is specified as an option, calls to the logging interceptor panic, as o.levelFunc is nil

Zap (client interceptor only):

Unless WithLevels(...) is specified as an option, calls to the logging interceptor are logged at incorrect levels, as the default is DefaultCodeToLevel for both server and client interceptors.

go get gives compilation errors

go get is giving errors: does anybody else get the same error?

% go get github.com/grpc-ecosystem/go-grpc-middleware/retry
# github.com/grpc-ecosystem/go-grpc-middleware/util/metautils
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:21: undefined: metadata.FromIncomingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:33: undefined: metadata.FromOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:69: undefined: metadata.NewOutgoingContext
../../../../grpc-ecosystem/go-grpc-middleware/util/metautils/nicemd.go:76: undefined: metadata.NewIncomingContext
% go get github.com/grpc-ecosystem/go-grpc-middleware
# github.com/grpc-ecosystem/go-grpc-middleware
../../grpc-ecosystem/go-grpc-middleware/chain.go:77: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:81: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:87: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:88: undefined: grpc.UnaryInvoker
../../grpc-ecosystem/go-grpc-middleware/chain.go:106: undefined: grpc.StreamClientInterceptor
../../grpc-ecosystem/go-grpc-middleware/chain.go:110: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:116: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: undefined: grpc.Streamer
../../grpc-ecosystem/go-grpc-middleware/chain.go:117: too many errors

Prometheus Metrics not provisionned with grpc_auth

Hi,
I use grpc_auth and grpc_prometheus in a gRPC server like this:

server := grpc.NewServer(
		grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
		grpc.UnaryInterceptor(
			grpc_middleware.ChainUnaryServer(
				middleware.ServerLoggingInterceptor(true),
				grpc_auth.UnaryServerInterceptor(authenticate),
				grpc_prometheus.UnaryServerInterceptor,
				otgrpc.OpenTracingServerInterceptor(tracer, otgrpc.LogPayloads()))),
	)

And auth :

func authenticate(ctx context.Context) (context.Context, error) {
	glog.V(2).Info("Check authentication")
	token, err := grpc_auth.AuthFromMD(ctx, "basic")
	if err != nil {
		return nil, err
	}
	userID, err := auth.CheckBasicAuth(token)
	if err != nil {
		return nil, grpc.Errorf(codes.Unauthenticated, err.Error())
	}
	newCtx := context.WithValue(ctx, transport.UserID, userID)
	return newCtx, nil
}

When I try to access services without credentials, I get this response:

rpc error: code = 16 desc = Unauthorized

So it works fine.
But in the /metrics exported for Prometheus, I don't see any metrics with code = Unauthenticated:

grpc_server_handled_total{grpc_code="Unauthenticated", grpc_method="xxxxx",grpc_service="xxxxxxx",grpc_type="unary"} 0

Any idea?

Support grpc's WithDetails and errdetails for validator middleware

Google has a bunch of error detail protobuf messages here: https://godoc.org/google.golang.org/genproto/googleapis/rpc/errdetails

I am currently performing validations by hand like so:

s, _ := status.Newf(codes.InvalidArgument, "invalid input").WithDetails(&errdetails.BadRequest{
	FieldViolations: []*errdetails.BadRequest_FieldViolation{
		{
			Field:       "SomeRequest.email_address",
			Description: "INVALID_EMAIL_ADDRESS",
		},
		{
			Field:       "SomeRequest.username",
			Description: "INVALID_USER_NAME",
		},
	},
})

return s.Err()

I am wondering if you could consider the following:

  • Support validation of the whole request and return all bad fields in the error message.
  • Support using errdetails.BadRequest to report which field is invalid.
  • Allow customization of the description for each field violation.

grpc_retry.WithMax(), how to use it properly?

In this example, we see the following code:

func Example_deadlinecall() error {
	client := pb_testproto.NewTestServiceClient(cc)
	pong, err := client.Ping(
		newCtx(5*time.Second),
		&pb_testproto.PingRequest{},
		grpc_retry.WithMax(3),
		grpc_retry.WithPerRetryTimeout(1*time.Second))
	if err != nil {
		return err
	}
	fmt.Printf("got pong: %v", pong)
	return nil
}

But when I use one of those "option modifiers", such as grpc_retry.WithMax(), in a client call, it fails with a nil pointer panic. This is because the callOption wasn't set.

It seems to work well when I pass it as a modifier to the constructor of grpc_retry.UnaryClientInterceptor, such as in the test.

I do not yet fully understand the code architecture, and I don't know how a grpc.CallOption is meant to be used. My question is: am I overlooking something? Is the example just wrong? Or is the implementation wrong?
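
(For reference, the per-call options are interpreted by the retry interceptor itself, so it has to be installed on the client connection; a minimal sketch of that wiring, assuming the v1 grpc_retry API and the same pb_testproto client used above.)

import (
	"context"
	"time"

	grpc_retry "github.com/grpc-ecosystem/go-grpc-middleware/retry"
	"google.golang.org/grpc"
)

func pingWithRetry(ctx context.Context, address string) error {
	cc, err := grpc.Dial(
		address,
		grpc.WithInsecure(), // plaintext, sketch only
		grpc.WithUnaryInterceptor(grpc_retry.UnaryClientInterceptor()),
	)
	if err != nil {
		return err
	}
	defer cc.Close()

	client := pb_testproto.NewTestServiceClient(cc)
	// Per-call options below are read by the retry interceptor installed on cc.
	_, err = client.Ping(ctx, &pb_testproto.PingRequest{},
		grpc_retry.WithMax(3),
		grpc_retry.WithPerRetryTimeout(1*time.Second),
	)
	return err
}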

Need a full example

Could you give a full example of logging with zap?

The readme has confused me for several days...

Custom errors

Hi,

When using the logging middleware, it's difficult to use custom errors in handlers/other interceptors because the logging middleware expects an rpcError type to determine the gRPC error code.

By adding another func (one that extracts a gRPC error code from an error) to the logging options, it would be easy to enable the use of custom error types. This can be done without changing the current default behavior.

I'll submit a PR.
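
(A sketch of what such an extractor could look like, using the standard status package; the custom error type here is hypothetical.)

import (
	"errors"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// myDomainError is a hypothetical custom error type carrying its own gRPC code.
type myDomainError struct {
	GRPCCode codes.Code
	Msg      string
}

func (e *myDomainError) Error() string { return e.Msg }

// errorToCode would be passed to the logging options; it maps custom errors to
// a code and falls back to status.Code, which handles regular status errors.
func errorToCode(err error) codes.Code {
	var de *myDomainError
	if errors.As(err, &de) {
		return de.GRPCCode
	}
	return status.Code(err)
}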

grpc_logging payloads embedded in the classic log output

Hi,

I am using most of go-grpc-middleware and I have a question regarding logs. I wish to include the payload in the standard output, or to have a way to correlate the two.

Ideally, in order to get this kind of output:

{
    "level": "info",
    "msg": "finished unary call",
    "grpc.code": "OK",
    "grpc.method": "Ping",
    "grpc.service": "mwitkow.testproto.TestService",
    "grpc.start_time": "2006-01-02T15:04:05Z07:00",
    "grpc.request.deadline": "2006-01-02T15:04:05Z07:00",
    "grpc.request.value": "something",
    "grpc.time_ms": 1.345,
    "peer.address": {
        "IP": "127.0.0.1",
        "Port": 60216,
        "Zone": ""
    },
    "span.kind": "server",
    "system": "grpc",
    "grpc.request.content": {
        "msg": {
            "value": "something",
            "sleepTimeMs": 9999
        }
    },
    "custom_field": "custom_value",
    "custom_tags.int": 1337,
    "custom_tags.string": "something"
}

Below is an extract of my interceptor setup.

	grpc_zap.UnaryServerInterceptor(logger.Zap, opts...),
	grpc_zap.PayloadUnaryServerInterceptor(logger.Zap, alwaysLoggingDeciderServer),	

One simple way could be to pass a GUID in the context and flag both logs with it. Then we could aggregate both results in Grafana. It's not optimal, but it would work.

By the way, thanks for the amazing work!

[Question] grpc_ctxtags without interceptor

Prior to looking into the grpc_ctxtags middleware I was using the context myself to propagate tags. I have grpc_ctxtags wired up for my gRPC server interceptors, but I also have a separate worker that is not a gRPC server, and I'm not seeing an easy way to share a common chunk of code for tags and logging: grpc_ctxtags returns a no-op tag when it was not initialized by the interceptor, and I'm not seeing any other way to initialize it.

[grpc_logrus] how to add error stack trace to log output

I'm trying to figure out how to insert the stack trace of an error into my logrus generated messages. The errors are constructed using the github.com/pkg/errors package, for example: errors.Wrapf(err, "Failed to execute query"). Ideally, I'd like this to show up in my JSON log as follows:

{
  "timestamp": "2017-12-29T03:29:26Z",
  "system": "grpc",
  "message": "finished unary call",
  "level": "info",
  "error": "rpc error: code = Internal desc = invalid UpdatePartyName request. Expected a Person or Organization",
  "action": "CreateParty",
  "stacktrace": "github.com/myrepo/myproj/manager.init\ngithub.com/myrepo/myproj/manager/manager.go:84\ngithub.com/myrepo/myproj/server.init\n\u003cautogenerated\u003e:1\ngithub.com/myrepo/myproj/cmd.init\n\u003cautogenerated\u003e:1\nmain.init\n\u003cautogenerated\u003e:1\nruntime.main\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/proc.go:183\nruntime.goexit\n/usr/local/Cellar/go/1.9.2/libexec/src/runtime/asm_amd64.s:2337"
}

I've been able to use the WithCodes function to change the error codes, but this doesn't allow me to hook into the log message to insert any new details. Can anyone point me in the right direction?

grpc.request.deadline and grpc.start_time precision is in seconds

grpc.request.deadline and grpc.start_time use d.Format(time.RFC3339), which means the maximum precision is seconds. I believe it would be useful to use d.Format(time.RFC3339Nano) at least.

Best would be a configuration option that lets me format all logged timestamps as desired.

I can work on a PR for this change if it makes sense.
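
(A quick illustration of the precision difference between the two formats:)

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2017, 12, 29, 3, 29, 26, 123456789, time.UTC)
	fmt.Println(t.Format(time.RFC3339))     // 2017-12-29T03:29:26Z
	fmt.Println(t.Format(time.RFC3339Nano)) // 2017-12-29T03:29:26.123456789Z
}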

grpc_logrus SystemField

I'm confused about what SystemField is intended to represent, as it seems to be used as both a key and a value (grpclog.SetLogger(logger.WithField("system", SystemField))).

My assumption would be that it would represent the value of the "system" field in the log message. Could I get some clarification here? I'll happily submit a pull request to correct it, if this is indeed an oversight. Thanks!

Performance suggestion for chain builders

Pardon the lackluster issue instead of a proper pull request, but I'm under time pressure and would otherwise just forget this.

While I was rolling my own middleware for gRPC, I noticed https://github.com/grpc-ecosystem/go-grpc-middleware/blob/master/chain.go#L18

My suggestion is somewhat self-explanatory and looks as follows:

func ChainUnaryServer(interceptors ...grpc.UnaryServerInterceptor) grpc.UnaryServerInterceptor {
	n := len(interceptors)

	if n > 1 {
		curI := 0
		lastI := n-1
		return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
			var chainHandler grpc.UnaryHandler
			chainHandler = func(currentCtx context.Context, currentReq interface{}) (interface{}, error) {
				if curI == lastI {
					return handler(currentCtx, currentReq)
				}
				curI++
				return interceptors[curI](currentCtx, currentReq, info, chainHandler)
			}

			return interceptors[0](ctx, req, info, chainHandler)
		}
	}

	if n == 1 {
		return interceptors[0]
	}

	// n == 0
	return func(ctx context.Context, req interface{}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		return handler(ctx, req)
	}
}

Avoids a loop, n lambda constructions and n additional function calls. Adds n if conditions and n increments, but that should still be considerably cheaper. Branches are ordered by most likely occurrence - it's a chain after all, so I assume n is > 1. The built lambdas end up in the hot path, so I think a little bit of micro-optimization won't hurt. Coincidentally, I think it's easier to reason about.

Admittedly I have not actually benchmarked it against the code in grpc_middleware (apologies - really low on time) but it should (cough, cough) be quite a bit faster going by common sense. I have however been using this approach in a deployment - with no issues.

If someone wants to pick it up and work it into the interceptor factories, please go ahead. Otherwise I'll work on a proper PR, but that won't be sooner than in 2-3 weeks.

Race in metautils

I'm not sure if I'm using your library correctly, but I'm getting a race with this code:

package main

import (
	"context"

	"github.com/mwitkow/go-grpc-middleware/util/metautils"
	"google.golang.org/grpc/metadata"
)

func main() {
	md := metadata.Pairs("key", "value")
	parent := metadata.NewContext(context.Background(), md)
	for i := 0; i < 1000; i++ {
		go func(parent context.Context) {
			ctx, cancel := context.WithCancel(parent)
			defer cancel()
			metautils.SetSingle(ctx, "key", "val")
		}(parent)
	}
}

The idea is that I receive a gRPC request to service A, which then concurrently calls multiple services (let's say B, C and D). I re-use the parent context but set a timeout for those requests. Connections between A and B-D use the retry logic from this repository (5 retries, 1 second timeout). So the race is in metautils.SetSingle(), where multiple writes are performed on the metadata map (storing the x-retry-attempty header). Is it intended not to work concurrently, or am I doing something wrong? The above example is narrowed down to calling metautils.SetSingle(), as the race is not easy to reproduce, but I can prepare a more adequate example if needed.

Question, binary file streaming

Hi, do you know of any existing Go library that provides support for file and blob streaming over gRPC?

Example use cases include microservices providing an API for converting media formats (image, audio, video), or microservices providing file compression, text-to-speech, etc.

I found this thread: grpc/grpc-go#414. This seems to be a pretty common need, so I figure others must have dealt with it.

zap example links fail in readme

Examples

Package (HandlerUsageUnaryPing)
Package (Initialization)
Package (InitializationWithDurationFieldOverride)

These links are not working.

[zap] Let me choose the field name.

From the code, we can see:

ServerField = zap.String("span.kind", "server")

zap.String("grpc.code", code.String()),

I use ELK; when the parsed log is put into ES, it fails.

Please let me customize the grpc.code field name, for example to use grpc_code instead.

grpc_ctxtags from metadata

I'm trying to log some per-request metadata. The code below gets it done, producing:

grpc.code=OK grpc.method=Ping user=98765 peer.address=127.0.0.1:53878 span.kind=server system=grpc

I was wondering if there's a better way?

On the client side doing:


import "google.golang.org/grpc/metadata"

md := metadata.Pairs("user", "98765")
ctx := metadata.NewOutgoingContext(context.Background(), md)
client.Ping(ctx, &pb_testproto.PingRequest{})

And on the server, something like:

func UnaryServerMetadataTagInterceptor(fields ...string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		if ctxMd, ok := metadata.FromIncomingContext(ctx); ok {
			for _, field := range fields {
				if values, present := ctxMd[field]; present {
					tags := grpc_ctxtags.Extract(ctx)
					tags = tags.Set(field, strings.Join(values, ","))
				}
			}
		}
		return handler(ctx, req)
	}
}

myServer := grpc.NewServer(
	grpc_middleware.WithUnaryServerChain(
		grpc_ctxtags.UnaryServerInterceptor(),
		UnaryServerMetadataTagInterceptor("user"),
		grpc_logrus.UnaryServerInterceptor(logrusEntry, logrusOpts...),
	),
	...
)

Relatedly, is there a reason grpc_ctxtags.RequestFieldExtractorFunc doesn't get access to the per-request context? Would the addition of a MetadataExtractorFunc in grpc_ctxtags.options be welcomed?

zap log needs to log a struct.

Currently the value can only be a string.

Is it possible to save a struct (object) as the value?

{"level":"info","ts":1498443458.2312229,"caller":"servant.git/main.go:95","msg":"{\"Item1\":\"aaa\",\"Item2\":222}","system":"grpc","span.kind":"server","grpc.service":"pb.Greeter","grpc.method":"SayHello"}

I'd like the msg to be an object.

Configure logrus level for proto messages

We would like to be able to use the logrus logging interceptor to:

  • Log all messages at info level
  • Log the full json dump of the payload at debug level

I was able to configure the log level for the server messages (e.g. grpc_logrus.UnaryServerInterceptor) with grpc_logrus.WithLevels. However, for payload messages it looks to be hardcoded (code link).

Is there interest in adding something similar for the payload levels? I'm open to submitting a pull request, but I'm not sure of the best place to set the payload level. Some ideas: add an option such as grpc_logrus.WithPayloadLevels? Pass it directly into the Payload*Interceptor function? Have it returned from the decider function? Other?

Improve docs around grpc_zap.WithDecider

When creating a client interceptor, despite selecting an opt with WithDecider set to a function that always returns false, I have noticed I am still getting DEBUG entries in zap; for example, unary client calls get a "finished client unary call" entry for every call.

Should there be something like

		err := invoker(ctx, method, req, reply, cc, opts...)
+		if !o.shouldLog(method, err) {
+			return err
+		}
		logFinalClientLine(o, logger.With(fields...), startTime, err, "finished client unary call")

in client_interceptors.go's Unary/StreamClientInterceptor functions, to prevent the call to logFinalClientLine, similar to how this appears to be handled on the server side?

vendored package breaks public API

It looks like google.golang.org/grpc/metadata is now being vendored by this project. This breaks the public API of this package, which might not be your intention?

I don't believe google.golang.org/grpc/metadata needs to be vendored by this package. It kinda breaks type compatibility between this package and other packages using metadata.

somefile.go:74:46: cannot use stream (type "google.golang.org/grpc".ServerStream) as type "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream in argument to grpc_middleware.WrapServerStream:
        "google.golang.org/grpc".ServerStream does not implement "github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc".ServerStream (wrong type for SendHeader method)
                have SendHeader("google.golang.org/grpc/metadata".MD) error
                want SendHeader("github.com/grpc-ecosystem/go-grpc-middleware/vendor/google.golang.org/grpc/metadata".MD) error

Support grpc_glog?

Does it sound reasonable to support a logging implementation using glog? We currently use glog and would like to use it as the gRPC logger.

Rate Limit Interceptor

Hi,

What do you guys think of a new server interceptor that would stall API calls for a given duration in order to prevent DoS / brute force?

Use case: delay every API call by 200 ms to prevent API DoS.

I know this can be done easily using an auth interceptor with a custom sleep-based function, but that doesn't feel right because it has nothing to do with auth.
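
(A minimal sketch of what such a standalone delay interceptor could look like, hand-rolled on top of the stock gRPC API; the fixed-delay approach and names are just for illustration.)

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/status"
)

// delayUnaryInterceptor stalls every unary call by a fixed duration,
// independently of any auth logic.
func delayUnaryInterceptor(delay time.Duration) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return nil, status.FromContextError(ctx.Err()).Err()
		}
		return handler(ctx, req)
	}
}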

Could you clean up merged branches?

Hello there,

Recently I have been working on migrating our project from glide to dep. Our project depends on this repository, but when dep tries to solve dependencies it iterates over the many branches this project has. After checking, a lot of them are already merged into master, which means they are no longer needed and can be removed. I am writing this to notify you and to ask whether you could remove the unused branches.
Thanks in advance.

Ctx_tags to populate both Request & Response Logging

I like being able to define tags in proto and have the ctx_tags middleware extract them. These can then be used by request logging interceptors later down the middleware chain.

Can I do this for response logging too? It seems that by default the logrus middleware, when used with ctx_tags, just logs the request. If I add a payload interceptor that is always true, then it logs the response in grpc.response.content as a full JSON struct.

I was hoping it would log like the request, i.e. use the opentrace format and populate only the tagged fields.

Any ideas?

These are the sort of logs I'm getting:

{"app":"ticket_svc","grpc.code":"OK","grpc.method":"GetTicket","grpc.request.id":"e5179cd4-4c03-41f8-bc07-52d9fcf7bc85","grpc.service":"actourex.core.service.ticket.Command","grpc.time_ms":42,"level":"info","msg":"finished unary call","peer.address":{"IP":"::1","Port":52958,"Zone":""},"severity":"INFO","span.kind":"server","system":"grpc","time":"2017-07-05T18:14:10Z"}

# it would be nice if I could figure out how to have this print with keys like: grpc.response.id = blah, grpc.response.some_other_tagged_field = blah2
{"app":"ticket_svc","grpc.response.content":{ ... the full json payload... },"level":"info","msg":"server response payload logged as grpc.request.content field","severity":"INFO","time":"2017-07-05T18:14:10Z"}

Adding JSON to jsonPayload with grpc_ctxtags

I am trying to add a JSON object as part of my grpc_ctxtags (which show up as jsonPayload in Stackdriver). I am shoving the proto object in as-is, and it is being converted to a "string" instead of a JSON object. For more detail, see this:

jsonPayload: {
caller: "zap/server_interceptors.go:66"
conf: "encoding:LINEAR16 sample_rate_hertz:44100 header:"RIFF\277\377\377\377WAVEfmt \020\000\000\000\001\000\001\000D\254\000\000\210X\001\000\002\000\020\000data\233\377\377\377" language_code:"en-US" session_id:"102k0KxT9EISz-G1IVynLmIUg" session_owner_id:"24f80fa2-2a82-4e7b-a1e1-ab3c2b2dcfd9" stream_start_time:<seconds:1513280382 nanos:649000000 > context:<view:SCHEDULE > "
....

I want the conf object to be JSON instead of a raw string. What is the advice on this?
