
docker-protoc's Introduction

gRPC/Protocol Buffer Compiler Containers


This repository contains Docker images that wrap the protoc, prototool, and grpc_cli commands with gRPC support for a variety of languages, removing the need to install and manage these tools locally. Usage relies on mounting a simple volume into the Docker container, usually mapping the current directory to /defs, and specifying the file and language you want to generate.

Features

  • Docker images for:
    • protoc with namely/protoc (automatically includes /usr/local/include)
    • Uber's Prototool with namely/prototool
    • A custom generation script to facilitate common use-cases with namely/protoc-all (see below)
    • grpc_cli with namely/grpc-cli
    • gRPC Gateway using a custom go-based server with namely/gen-grpc-gateway
  • Google APIs included in /opt/include/google
  • Protobuf library artifacts included in /opt/include/google/protobuf. NOTE: protoc only needs the base of that path, i.e. -I /opt/include, if you import well-known types (WKTs) like so:
import "google/protobuf/empty.proto";
...
  • Support for all C-based gRPC libraries, as well as the native Go and Java libraries

If you're having trouble, see Docker troubleshooting below.

Note: throughout this document, commands for bash are prefixed with $ and commands for PowerShell on Windows are prefixed with PS>. Using "Windows Subsystem for Linux" (WSL) is not required except for development work on docker-protoc itself.

Tag Conventions

For protoc, grpc_cli, and prototool, all images follow the pattern <GRPC_VERSION>_<CONTAINER_VERSION> (or <GRPC_VERSION>_<CONTAINER_VERSION>-rc.<PRERELEASE_NUMBER> for pre-releases). For example, namely/protoc-all:1.15_0 is built against gRPC version 1.15 (and namely/protoc-all:1.15_0-rc.1 is a pre-release). The latest tag always points to the most recent version.
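To pin your builds to a specific gRPC version, pull by tag rather than relying on latest; for example, using the tag from above:

$ docker pull namely/protoc-all:1.15_0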

Usage

Pull the container:

$ docker pull namely/protoc-all

After that, change your working directory to the one that contains your .proto definition files.

So if you have a directory ~/my_project/protobufs/ that contains myproto.proto, you'd run:

$ cd ~/my_project/protobufs
$ docker run -v $PWD:/defs namely/protoc-all -f myproto.proto -l ruby #or go, csharp, etc
PS> cd ~/my_project/protobufs
PS> docker run -v ${pwd}:/defs namely/protoc-all -f myproto.proto -l ruby #or go, csharp, etc

The container automatically puts the compiled files into a gen directory with language-specific sub-directories. For Go, the files go into ./gen/pb-go; for Ruby, the directory is ./gen/pb-ruby.

Options

You can use the -o flag to specify an output directory, which will be created automatically. For example, add -o my-gen to write all generated output to the my-gen directory. In this case, pb-* subdirectories will not be created.

You can use the -d flag to generate code for all proto files in a directory (see the example below). It cannot be combined with the -f option.
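For instance, assuming your proto files live in a protos/ sub-directory (the paths here are illustrative), you could generate Go code for all of them into a my-gen directory with:

$ docker run -v $PWD:/defs namely/protoc-all -d protos -l go -o my-gen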

You can also use -i to add extra include directories. This can be helpful for lifting proto files up a directory level when generating. As an example, say you have a file protorepo/catalog/catalog.proto. By default this outputs to gen/pb-go/protorepo/catalog/, because protorepo is part of the input file path. To drop the protorepo segment, add it as an include and reference the file relative to it (adjusting imports accordingly):

$ docker run ... namely/protoc-all -i protorepo -f catalog/catalog.proto -l go
# instead of
$ docker run ... namely/protoc-all -f protorepo/catalog/catalog.proto -l go
# which will generate files in a `protorepo` directory.

Ruby-specific options

--with-rbi to generate Ruby Sorbet type definition .rbi files
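For example, to generate Ruby stubs together with Sorbet .rbi files (the file name is illustrative):

$ docker run -v $PWD:/defs namely/protoc-all -f myproto.proto -l ruby --with-rbi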

node/web-specific options

--js-out <string> to modify the js_out= options for node and web code generation

--grpc-web-out <string> to modify the grpc-web_out= options for web code generation

--grpc-out <string> to modify the grpc_out= options for node and web code generation. See https://www.npmjs.com/package/grpc-tools for more details.
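As a sketch of how these pass-through options are used, the following web invocation forwards grpc-web plugin options through --grpc-web-out; the option values (import_style, mode) come from the grpc-web plugin and are shown here as an illustrative assumption, not a verified default:

$ docker run -v $PWD:/defs namely/protoc-all -f myproto.proto -l web --grpc-web-out import_style=typescript,mode=grpcwebtext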

gRPC Gateway

This repo also provides a Docker image, namely/gen-grpc-gateway, to generate a grpc-gateway server. By annotating your proto (see the grpc-gateway documentation), you can generate a server that accepts HTTP traffic and proxies it as a gRPC client to your gRPC service.

Generate a gRPC Gateway docker project with

docker run -v `pwd`:/defs namely/gen-grpc-gateway -f path/to/your/proto.proto -s Service

where Service is the name of your gRPC service defined in the proto. This creates a folder containing a simple Go server. By default, it goes into the gen/grpc-gateway folder. You can then build the contents of this folder into a runnable grpc-gateway server.

Build your gRPC Gateway server with

docker build -t my-grpc-gateway gen/grpc-gateway/

NOTE: If your service does not contain any (google.api.http) annotations, this build will fail with an error like ...HandlerFromEndpoint is undefined. You need at least one annotated rpc method to build a gRPC Gateway, or you can use the --generate-unbound-methods option to expose all the methods in your proto file.

Run this image with

docker run my-grpc-gateway --backend=grpc-service:50051

where --backend refers to your actual gRPC server's address. The gRPC gateway listens on port 80 for HTTP traffic.
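Because the gateway listens on port 80 inside the container, you will typically publish that port when running it locally; for example (the host port 8080 is illustrative):

docker run -p 8080:80 my-grpc-gateway --backend=grpc-service:50051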

Configuring grpc-gateway

The gateway is configured using spf13/viper, see gwy/templates/config.yaml.tmpl for configuration options.

To configure your gateway to run under a prefix, set proxy.api-prefix to that prefix. For example, if you have (google.api.http) = '/foo/bar' and set proxy.api-prefix to '/api/', your gateway will listen for requests on '/api/foo/bar'. This can also be set with the environment variable <SERVICE>_PROXY_API-PREFIX, where <SERVICE> is the name of the service generating the gateway.
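For example, for a hypothetical service named myservice, the prefix could be supplied at run time via the environment (the names and values here are illustrative):

docker run -p 8080:80 -e MYSERVICE_PROXY_API-PREFIX=/api/ my-grpc-gateway --backend=grpc-service:50051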

See gwy/test.sh for an example of how to set the prefix with an environment variable.

HTTP Headers

The gateway will turn any HTTP headers that it receives into gRPC metadata. Any permanent HTTP headers will be prefixed with grpcgateway- in the metadata, so that your server receives both the HTTP client-to-gateway headers and the gateway-to-gRPC-server headers.

Any headers starting with Grpc- will be prefixed with an X-; this is because grpc- is a reserved metadata prefix.

All other headers will be converted to metadata as-is.
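As an illustration (the header names, port, and path are hypothetical), a request like the one below would have its X-Request-Id header forwarded to your gRPC server as metadata as-is, while a permanent header such as Authorization would arrive prefixed, e.g. as grpcgateway-authorization:

curl -H 'X-Request-Id: 123' -H 'Authorization: Bearer token' http://localhost:8080/api/foo/bar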

CORS Configuration

You can configure CORS for your gateway through the configuration. This will allow your gateway to receive requests from different origins.

There are four values:

  • cors.allow-origin: Value to set for Access-Control-Allow-Origin header.
  • cors.allow-credentials: Value to set for Access-Control-Allow-Credentials header.
  • cors.allow-methods: Value to set for Access-Control-Allow-Methods header.
  • cors.allow-headers: Value to set for Access-Control-Allow-Headers header.

For CORS, you will want to set cors.allow-methods to the HTTP verbs used in your proto (e.g. GET, PUT, etc.), as well as OPTIONS, so that your service can handle the preflight request.

If you are not using CORS, you can leave these configuration values at their default, and your gateway will not accept CORS requests.
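Following the environment-variable convention described under Environment Variables below, a hypothetical service named myservice could have its CORS values set at run time like this (all values are illustrative):

docker run -p 8080:80 \
    -e MYSERVICE_CORS_ALLOW-ORIGIN='*' \
    -e MYSERVICE_CORS_ALLOW-METHODS='GET, PUT, OPTIONS' \
    my-grpc-gateway --backend=grpc-service:50051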

GRPC Client Configuration

  • grpc.max-call-recv-msg-size: Sets the maximum message size in bytes the client can receive.

  • grpc.max-call-send-msg-size: Sets the maximum message size in bytes the client can send.
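These can be set in the configuration file or, following the environment-variable convention below, at run time; for a hypothetical service named myservice, an 8 MiB receive limit might look like this (the value is illustrative):

docker run -e MYSERVICE_GRPC_MAX-CALL-RECV-MSG-SIZE=8388608 my-grpc-gateway --backend=grpc-service:50051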

Other Response Headers

You can configure additional headers to be sent in the HTTP response. Set an environment variable with the prefix <SERVICE>_RESPONSE-HEADERS_ (e.g. SOMESERVICE_RESPONSE-HEADERS_SOME-HEADER-KEY). You can also set headers in your configuration file (e.g. response-headers.some-header-key).

Marshalling options

Setting Marshaler version

By default, gen-grpc-gateway will use a marshaler/unmarshaler based on jsonpb. You can change this behavior by setting gateway.use-jsonpb-v2-marshaler: true, which switches to protojson, a newer implementation that is more closely aligned with the canonical proto <=> JSON mapping.

Proto names format

By default, gen-grpc-gateway will return proto names as they are in the proto messages. You can change this behavior by setting gateway.use-json-names: true and the gateway will use camelCase JSON names.

Unpopulated fields

By default, gen-grpc-gateway will not emit unpopulated fields. You can change this behavior by setting gateway.emit-unpopulated: true and the gateway will populate these fields with default values.

Unknown fields

By default, gen-grpc-gateway will discard unknown fields from requests. You can change this behavior by setting gateway.keep-unknown: true and the gateway will keep these fields in the requests.

Environment Variables

The gateway project uses spf13/viper for configuration. The generated gateway code includes a config file that can be overridden with CLI flags or environment variables. For environment-variable overrides, use a <SERVICE>_ prefix, upcase the setting, and replace . with _.
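For example, for a hypothetical service named catalog, the setting gateway.use-json-names would be overridden with an environment variable like this (the service name is illustrative):

docker run -e CATALOG_GATEWAY_USE-JSON-NAMES=true my-grpc-gateway --backend=grpc-service:50051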

grpc_cli

This repo also contains a Dockerfile for building a grpc_cli image.

Run it with

docker run -v `pwd`:/defs --rm -it namely/grpc-cli call docker.for.mac.localhost:50051 \
LinkShortener.ResolveShortLink "short_link:'asdf'" --protofiles=link_shortener.proto

You can pass multiple files to --protofiles by separating them with commas, for example --protofiles=link_shortener.proto,foo/bar/baz.proto,biz.proto. All of the proto files must be relative to pwd, since pwd is mounted into the container.

See the grpc_cli documentation for more information. You may find it useful to bind this to an alias:

alias grpc_cli='docker run -v $PWD:/defs --rm -it namely/grpc-cli'

Note the use of single quotes in the alias, which avoids expanding the $PWD parameter when the alias is created.

Now you can call it with

grpc_cli call docker.for.mac.localhost:50051 LinkShortener.ResolveShortLink "short_link:'asdf'" --protofiles=link_shortener.proto

Contributing

Thank you for considering a contribution to namely/docker-protoc!

If you'd like to make an enhancement, or add a container for another language compiler, you will need to run one of the build scripts in this repo. You will also need to be running Mac, Linux, or WSL 2, and have Docker installed.

Build

From the repository root, run this command to build all the known containers:

make build

Note the version tag in Docker's console output - this image tag is required to run the tests using the container with your changes.

You can change some environment variables relevant to the build by setting them as prefixes to the make command. For example, this would build the containers using Node.js 15 and gRPC 1.35. See some interesting variables in variables.sh and entrypoint.sh.

NODE_VERSION=15 GRPC_VERSION=1.35 make build

Test

Running tests for protoc-all image

To run the tests, identify your image tag from the build step and run make test as below:

CONTAINER=namely/protoc-all:VVV make test

(VVV is your version from the tag in the console output when running make build.) Running this will demonstrate that your new image can successfully build containers for each language.

Note that testing currently requires Go to be locally installed.

Adding tests

The tests for protoc-all are written in Go.
To add or modify tests, use the testCase struct in all_test.go to set the language, the arguments to invoke the image with, the expected generated files, and any additional assertions.

gRPC Gateway test

CONTAINER=namely/gen-grpc-gateway:VVV make test-gwy

Auto Dependency Management

Dependencies are managed in this repo via Renovate. Please read more here.

Release

  • A new gRPC-version-based release is handled automatically by CI (GitHub Actions) once the relevant Renovate branch is merged to master.
  • A patch or release-candidate release should be created manually in GitHub, and the CI workflow will take care of the rest.

Contributors

Open a PR and ping one of the Namely employees who have worked on this repo recently. We will take a look as soon as we can.
Thank you!!

Namely Employees

Namely employees can merge PRs and cut a release/pre-release by drafting a new GitHub release and publishing it.
The release name should follow the same tag conventions described in this doc, and the gRPC version in the release name
must match the GRPC_VERSION configured in variables.sh.
A valid release/pre-release will be of the form v${GRPC_VERSION}_${BUILD_VERSION} or v${GRPC_VERSION}_${BUILD_VERSION}-rc.${RC_VERSION}, respectively,
e.g. 1.37_2 or 1.38_0-rc.3.
Once a new valid GitHub release is published, new images will be published to Docker Hub via CI.

Docker Troubleshooting

Docker must be configured to use Linux containers.

If on Windows, you must have your C: drive shared with Docker. Open the Docker settings (right-click Docker icon in notification area) and pick the Shared Drives tab. Ensure C: is listed and the box is checked. If you are still experiencing trouble, click "Reset credentials..." on that tab and re-enter your local Windows username and password.

docker-protoc's People

Contributors

aitjcize, aldelucca1, alex-namely, bobbytables, brantc-namely, cristianovagos, esilkensen, gxb5443, heliuchuan, ido-namely, jcburley, kevinsamoei, macat, maxjakob, mdkess, mhamrah, michaltrmac, nycdotnet, petermcneely, ppiccolo, renovate[bot], scastro01, seiji-thirdbridge, shiena, tnir, tomykaira, trietsch, tsexton6689, vtopc, wespen


docker-protoc's Issues

Not found when using http.StripPrefix

Hi.
I created the API as follows:

service ServiceA {
  rpc Ping (service.common.proto.MessagePing) returns (service.common.proto.MessagePong) {
    option (google.api.http) = {
      get: "/core/serviceA/ping/{timestamp}"
    };
  }
}

Then I configured config.yaml as follows:

backend: "localhost:8000"
cors:
  allow-origin: "*"
  allow-credentials: 
  allow-methods: "POST, GET, OPTIONS, PUT, DELETE"
  allow-headers: "Accept, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization"
proxy:
  port: 9000
  api-prefix: "/core"
swagger:
  file: "..swagger.json"

When I test the API http://localhost:9000/core/serviceA/ping/900000 with Postman, the returned result is Not Found.

I tried modifying the code in one place:

//mux.Handle(prefix, handlers.CustomLoggingHandler(os.Stdout, http.StripPrefix(prefix[:len(prefix)-1], addRespHeaders(cfg, gwmux)), formatter))

mux.Handle(prefix, addRespHeaders(cfg, gwmux))

After that, the API call was successful.
I do not understand why the original line of code causes the Not Found error. Can you help explain it?

`touch ./__init__.py: Permission denied` errors on generate

I'm trying to use namely/protoc-all to generate pb classes for Python. It seems like a great tool, but I'm getting permission errors about the Python __init__.py module files every time I run it. Oddly, the generation seems to succeed, and only the touch fails.

My intention is to have the generated files and directories owned by the shell user as if they'd run everything locally, so I'm using roughly this make rule:

PROJECT=$(shell git rev-parse --show-toplevel 2> /dev/null)
DIR=$(shell pwd)
DOCKER_PROTOC=namely/protoc-all:1.23_0
UID=$(shell id -u)
GID=$(shell id -g)

.PHONY: types-python
types-python:
	cd proto && \
	docker run \
			-u $(UID):$(GID) \
			-v $(DIR)/proto:/defs \
			-v $(DIR)/pose_detection/protobuf:/pose_detection/protobuf \
		$(DOCKER_PROTOC) -l python -f poses.proto -o /pose_detection/protobuf

...when I run the above, it succeeds in generating the Python classes, but I get an error message too:

 make types-python 
cd proto && \
docker run \
		-u 1001:1001 \  
		-v /home/larry/projects/pose-detection/protobuf/proto:/defs \
		-v /home/larry/projects/pose-detection/protobuf/pose_detection/protobuf:/pose_detection/protobuf \
	namely/protoc-all:1.23_0 -l python -f poses.proto -o /pose_detection/protobuf
touch: /pose_detection/__init__.py: Permission denied
Makefile:9: recipe for target 'types-python' failed
make: *** [types-python] Error 1

I'm assuming generating compiled protobuf classes to a local directory is a common purpose for this tool, so maybe I'm just doing something wrong? If that's the case and there's a correction, I would volunteer to update the README with the details.

Thanks,
Larry

Building grpc-gateway for many services

Hi,
First of all, thank you; your project has helped me a lot.
When I use the grpc-gateway feature with multiple services spread across different proto files, the generated grpc-gateway will only connect to one of those services.

Example:
I have file serviceA.proto

service ServiceA {
  rpc Ping (service.common.proto.MessagePing) returns (service.common.proto.MessagePong) {
    option (google.api.http) = {
      get: "/core/serviceA/ping/{timestamp}"
    };
  }
}

And I have file serviceB.proto

service ServiceB {
  rpc Ping (service.common.proto.MessagePing) returns (service.common.proto.MessagePong) {
    option (google.api.http) = {
      get: "/core/serviceA/ping/{timestamp}"
    };
  }
}

Then I run cmd:

$ docker run -v `pwd`:/defs namely/gen-grpc-gateway -f . -s ServiceA

I checked in the function SetupMux and saw that only ServiceA is registered, because on the command line it is only possible to pass the option -s ServiceA.

I found a way to fix this by modifying the function SetupMux(...) as follows:

func SetupMux(ctx context.Context, cfg proxyConfig) *http.ServeMux {
...
err := gw.RegisterServiceAHandlerFromEndpoint(ctx, gwmux, cfg.backend, opts)
	if err != nil {
		logrus.Fatalf("Could not register gateway: %v", err)
	}

	err = gw.RegisterServiceBHandlerFromEndpoint(ctx, gwmux, cfg.backend, opts)
	if err != nil {
		logrus.Fatalf("Could not register gateway: %v", err)
	}
}

I have a solution for this problem. Can I create a pull request for this project?
@gxb5443 @mdkess

Support header whitelists

By default, "HTTP headers that start with 'Grpc-Metadata-' are mapped to gRPC metadata (prefixed with grpcgateway-)".

As someone building a gRPC gateway, I would like to be able to specify which headers are forwarded in the config, and change the prefixing rules.

The matching occurs in the DefaultHeaderMatcher
https://github.com/grpc-ecosystem/grpc-gateway/blob/b2423da79386df5bb9d02f5d3d5793042050a0d9/runtime/mux.go#L53

Which is set here (and can be overridden): https://github.com/grpc-ecosystem/grpc-gateway/blob/b2423da79386df5bb9d02f5d3d5793042050a0d9/runtime/mux.go#L130

This task is to

  1. Determine how to configure headers.
  2. Update the automatically generated grpc-gateway code to support this new header structure. This will be done in a backward compatible way, so that if the header whitelist is not specified, clients get the default behavior described above.

Inconsistent package names

Passing multiple different packages in *.proto files currently fails when using protoc-gen-go.

Consequently, the -d option in gen-proto does not work with multiple packages.

/defs # entrypoint.sh  -d . -l go
Generating go files for . in gen/pb-go
2018/09/06 15:15:28 protoc-gen-go: error:inconsistent package names: package_one, package_two
--go_out: protoc-gen-go: Plugin failed with status code 1.

As a workaround, we can iterate over each proto file and call the protoc compiler on each one in the entrypoint.sh script:

for filename in ${PROTO_DIR}/*.proto; do
    filename=${filename##*/} # remove path
    filename=${filename%.*} # remove extension

    rm -rf ${GO_OUTDIR}/${filename}
    mkdir ${GO_OUTDIR}/${filename}
    
    ${PROTOC} \
    --go_out=plugins=grpc:${GO_OUTDIR}/${filename} \
    -I ${PROTO_DIR} ${filename}.proto
done

Not sure if this should be a pull request or an issue, but I was wondering how you manage this kind of thing at Namely?

No service type definitions generated when using --with-typescript

Thank you for adding the --with-typescript flag in PR #125.

When using --with-typescript it seems no service type definitions are generated.

In the ts-protoc-gen docs it's mentioned to add service=true before the OUT_DIR, like so:

--ts_out="service=true:${OUT_DIR}"

See https://github.com/improbable-eng/ts-protoc-gen#generating-grpc-service-stubs-for-use-with-grpc-web for more info.

@esilkensen, as you worked on this recently, would you care to extend your current implementation with this service=true option?

Thanks in advance!

grpc-gateway: swagger files not found in GATEWAY_IMPORT_DIR

Hi,

I just tried the generation of a grpc-gateway with gen-grpc-gateway:1.20_2.

Our protobufs use a go_package option with a URL (e.g. our.domain.com/path/to/go/code), so the Go code gets generated into the folder gen/grpc-gateway/src/gen/pb-go/our.domain.com/path/to/go/code.
The swagger files get generated directly into gen/grpc-gateway/src/gen/pb-go/.

So the ADD instruction in https://github.com/namely/docker-protoc/blob/master/gwy/templates/Dockerfile.tmpl#L25 cannot copy these.

Maybe directly copying from $OUT_PATH/ will work here?

lang=gogo: cannot set custom mapping for validator

My code

PROTOC_MAP ?= Mmy/shared/proto/files.proto=github.com/my/shared/proto/files,Mmy/shared/proto/files.proto=github.com/my/shared/proto/files

PROTOC ?= docker run --rm -u ${shell id -u} \
	-v ${PWD}:/defs namely/protoc-all:1.29_1 \
	-l gogo \
	--go-package-map ${PROTOC_MAP} \
	--with-validator \
	-o ./ \
	-i ./proto

$(PROTOC) \
		-d ./proto/path/to/proto/file

I think this is the problem: it is not possible to set the mapping for the validator here:

GEN_STRING="$GEN_STRING --govalidators_out=gogoimport=true:$OUT_DIR"

My quick workaround is to replace the generated local import paths with global paths, using code similar to this:

find ${PWD}/path/to/gen/proto/files \
		-name '*.pb.validate.go' \
		-exec \
			sed \
				-i.bak \
				's/generatedlocalpath/globalpath/g' \
				{} +
	find ${PWD}/path/to/gen/proto/files \
		-name '*.bak' \
		-exec \
			rm {} +

Feature Request: Add typescript definition support for node (node-ts) with protoc-gen-ts plugin

It would be nice to add typescript support for node via the protoc-gen-ts plugin.

I'm happy to take this on as a PR.

Potential Changes to Dockerfile

RUN set -ex && apk --update --no-cache add \
 ...
    nodejs \
    nodejs-npm \
    && npm i -g ts-protoc-gen

ENV PROTOC_GEN_TS_PATH /usr/local/lib/node_modules/protoc-gen-ts

Potential Changes to all/Entrypoint.sh

case $GEN_LANG in
...
"node(-ts)*") ...
...
esac
....
if [[ $GEN_LANG == "node-ts" ]]; then
    protoc \
        --plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
        --ts_out=$OUT_DIR \
        ${PROTO_FILES[@]}
fi

Go Protocol Buffers APIv2

The Go Team announced a new api for protobuf in March. Since then, a large scale migration has taken place that has left https://github.com/golang/protobuf as a legacy v1 implementation.

We are currently using namely's protoc-all docker image and would like to begin generating proto using the new v2 api, so I was wondering if you had any plans to upgrade this repository - or if we should fork and implement the changes just for us.

It will be a little tricky because the recommended install path is now:

go install google.golang.org/protobuf/cmd/protoc-gen-go

"make build" failing locally and in the pipeline

When running make build locally the build fails at about 15 minutes for the same reason the build is failing in the azure pipeline. Any idea what's going on here?

...
[CXX]     Compiling third_party/abseil-cpp/absl/base/internal/spinlock_wait.cc
In file included from third_party/abseil-cpp/absl/base/internal/spinlock_wait.cc:27:
third_party/abseil-cpp/absl/base/internal/spinlock_linux.inc:17:10: fatal error: linux/futex.h: No such file or directory
 #include <linux/futex.h>
          ^~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:3169: /tmp/grpc/objs/opt/third_party/abseil-cpp/absl/base/internal/spinlock_wait.o] Error 1
The command '/bin/sh -c /tmp/install-protobuf.sh ${grpc} ${grpc_java}' returned a non-zero code: 2

https://dev.azure.com/namely/protoc-all/_build/results?buildId=48&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9&t=f8ed7bd8-2a7f-56f6-9385-7fc29a8b5b7b&l=21032

Gateway container should log traffic

Right now the gateway container logs some useful info when it boots, e.g.:

Proxying requests to gRPC service at 'companies:50051'
hit Ctrl-C to shutdown
launching http server on :80
2018/06/19 22:00:35 Creating grpc-gateway proxy with config: {companies:50051 service.swagger.json     /api/companies/}
2018/06/19 22:00:35 API prefix is /api/companies/

However, it never logs anything after that. It would be great to see some info logged for each request and response.

cc @mdkess @mhamrah

Option to specify output path

Currently the scripts default to outputting generated files to the current directory. Would it be possible to have the scripts output the generated files to a fixed directory in the container, which could then be mapped at run time to a different folder?

Something like:
docker run -v `pwd`:/defs -v '/some/other/path':/gen namely/protoc-ruby

Passing a `.jar` file as the output produces a directory

Conveniently, the java_out can handle a .jar extension and automatically jar up the .java files for you.

https://developers.google.com/protocol-buffers/docs/reference/java-generated#invocation

When outputting Java code, the protocol buffer compiler's ability to output directly to JAR archives is particularly convenient, as many Java tools are able to read source code directly from JAR files. To output to a JAR file, simply provide an output location ending in .jar. Note that only the Java source code is placed in the archive; you must still compile it separately to produce Java class files.

However, the entrypoint always assumes that the -o parameter is a directory:

if [[ ! -d $OUT_DIR ]]; then
mkdir -p $OUT_DIR
fi

So if you invoke with a jar parameter:

-o build/java/my.jar

It will fail with this error message:

Generating java files for proto in build/java/foo.jar
build/java/my.jar: Is a directory%

I think the solution is to check and only mkdir -p through to the nearest directory here:

if [[ ! -d $OUT_DIR ]]; then
mkdir -p $OUT_DIR
fi

grpc-web plugin error

I am getting the following error message when using the language web:

/usr/local/bin/grpc_web_plugin: program not found or is not executable
--grpc-web_out: protoc-gen-grpc-web: Plugin failed with status code 1.

I am using the image in version 1.23_0:

docker run -v `pwd`:/defs namely/protoc-all -d /defs/protos --with-docs -l web

With 1.22_1 it runs without errors.

I also found out that the following command produces the same X_pb.js and X_pb.d.ts files that web should produce. Of course, it also generates the X_grpc_pb.js files.

docker run -v `pwd`:/defs namely/protoc-all -d /defs/protos --with-typescript --with-docs -l node

So maybe the web language is obsolete?

Google Protobufs not found

I am having issues when trying to use Google Protobufs.

I have a proto file that looks like this.

syntax = "proto3";
import "google/protobuf/timestamp.proto";

package test.protobuf;

message Auditing {
    int32 createdByUserId = 1;
    google.protobuf.Timestamp createdDate = 2;
    int32 lastModifiedByUserId = 3;
    google.protobuf.Timestamp lastModifiedDate = 4;
}

But when I run the command

docker run  -v `pwd`:/defs namely/protoc-all \
	-d proto/dto -o ./src/main/java \
	-l java

I receive the response

google/protobuf/timestamp.proto: File not found.
auditing.proto:2:1: Import "google/protobuf/timestamp.proto" was not found or had errors.

Looking at the homepage Readme.md to this repository it says this protobuf library artifact is included in the docker image. Do I need to alter my command to include Google Protobufs?

--with-gateway only works for go

With csharp for example, you end up with

gen/pb-csharp
gen/pb-csharp/FunTimes.cs
gen/pb-csharp/FunTimesGrpc.cs
gen/pb-csharp/gateway
gen/pb-csharp/gateway/FunTimes.pb.gw.go
gen/pb-csharp/gateway/FunTimes.swagger.json

But the gateway needs the Go gRPC client to work.

Allow configurable prefix for the API endpoints

Imagine if we wanted to host all of our APIs under the /api endpoint.

A service's gRPC gateway may expose an endpoint such as /my-service/foo. Clients would access it by going to /api/my-service/foo. A rewrite rule at the edge proxy (i.e. NGINX) would redirect requests to /api/my-service to the grpc-gateway.

However, with the current implementation of grpc-gateway, the grpc-gateway wouldn't understand the path.

This task is to make the grpc-gateway aware, via a configuration value, of the prefix under which it's running. That is, /my-service/foo defined in the proto would be accessible via /api/my-service/foo.

docker file pulls github.com/golang/protobuf/protoc-gen-go from master and not a tag/release

Currently the docker image pulls the github.com/golang/protobuf/protoc-gen-go from master:
https://github.com/namely/docker-protoc/blob/master/Dockerfile#L42

Doing this with the latest released version of protoc 3.6.1 causes:

vendor/github.com/faceit/protos-generated/push-service/v1.0.0/go/push-service.pb.go:23:11: undefined: proto.ProtoPackageIsVersion3

The issue was discussed here is quite some detail:
golang/protobuf#763

Is anyone getting around this right now? I'm about to open a PR to fix it otherwise, following the install instructions in the README of https://github.com/golang/protobuf#installation.

Support node as a target language

Currently only the following languages are supported.

go ruby csharp java python objc

This is just a feature request to support node.

Allow passing multiple "includes"

As implemented today, you can't pass multiple includes via a -i parameter. Each invocation of -i overwrites the last.

Here's the code that does that:

-i) shift
EXTRA_INCLUDES="-I$1"
shift
;;

So, invocations like this don't work

docker run ... namely/protoc-all:1.11 -i protorepo -i something_else

It would be nice if each -i added onto the EXTRA_INCLUDES.

Python import issue

I generated Python files with this command (docker run --rm -v `pwd`:/defs namely/protoc-all:1.11 -d ../proto/ -o my_project/my_project -l python) into a folder, and then I got the following file structure:

my_project
    my_project
        __init__.py
        a_pb2.py
        a_pb2_grpc.py
    __init__.py
    setup.py

In the a_pb2_grpc.py file, it does import a_pb2 as a__pb2, which doesn't work because in Python, if you want to import a module from the same folder, you need to do from . import a_pb2 as a__pb2. Not sure if this is a bug. By the way, I'm using Python 3.6.1.

namely/protoc-all:1.31_0 container on docker hub produces errors.

I just noticed that our CI jobs are failing with the latest build of the protoc-all Docker container on Docker Hub. We were using the latest tag (which is now 1.31_0), and it looks like the /opt/include directory in the container is missing files that were present in the previous container:

google/protobuf/timestamp.proto: File not found.
Import "google/protobuf/timestamp.proto" was not found or had errors.
"google.protobuf.Timestamp" is not defined.

As a short-term fix, I've tagged our container back to a previous release. Just thought you might want to know.

Thanks

Generating python files results in an `__init__.py` at the root of the `gen` directory

With a simple file structure that looks like this:

.
└── foo.proto

If I run:

 docker run -v `pwd`:/defs namely/protoc-all:1.9 -f foo.proto -l python

I get the following output:

.
├── foo.proto
└── gen
    ├── __init__.py          <- this is unexpected, I think
    └── pb_python
        ├── __init__.py
        ├── foo_pb2.py
        └── foo_pb2_grpc.py

It's an empty file. Haven't done much digging yet, just wanted to get the ticket open first.

cat gen/__init__.py
# nothin'

grpc-gateway with gogo

When using the gogo language, the --with-gateway option doesn't work.

I get:

Generating grpc-gateway is Go specific.

Make __init__.py generation optional in protoc-all

As of Python 3.6, __init__.py files are not required. This task is to make it optional to generate those files. This will change the behavior around

if [[ $GEN_LANG == "python" ]]; then

This should be done by passing a command line flag to entrypoint.sh. The default should be to generate those files, to maintain backward compatibility with existing users.

Directory flag specifies maxdepth 1

Today, pulling proto files using the -d flag specifies a max-depth of 1.

PROTO_FILES=(`find ${PROTO_DIR} -maxdepth 1 -name "*.proto"`)

Is there a specific reason for this? Can it be configurable?

We have a directory structure that looks like this:

src/main/proto/company_pb
├── billing
│   └── events.proto
├── commons
│   ├── sqs
│   │   ├── sqs_attribute.proto
│   │   ├── sqs_attributes.proto
│   │   └── sqs_message.proto
│   └── timestamp.proto

So the maxdepth flag causes the command to fail with this error:

Missing input file.

because no files are passed to the underlying method.

support dart-lang

Currently Dart is not supported.

Is there any plans to support dart-lang as well?

Documentation generation doesn't recognize `markdown`

--with-docs FORMAT Generate documentation (FORMAT is optional - see https://github.com/pseudomuto/protoc-gen-doc#invoking-the-plugin)

according to the link above:
The format may be one of the built-in ones ( docbook, html, markdown or json) or the name of a file containing a custom Go template.

but when I try to generate Java classes with documentation (docker run --rm -v `pwd`:/defs namely/protoc-all -d . -o ./out/test --lint --with-docs markdown -l java), I get an error instead of documentation:

Generating java files for . in ./out/test
2018/10/25 07:59:37 Invalid parameter: markdown
--doc_out: protoc-gen-doc: Plugin failed with status code 1. 

How could this be fixed?

Generates files under `root` user on Linux (Ubuntu 16.04)

I ran the docker run command on Ubuntu 16.04. However, it generates the files under the root user. Is there any workaround to avoid generating the files as root? I believe this happens because the Dockerfile doesn't specify a user and defaults to root.
