hyperium / tonic
A native gRPC client & server implementation with async/await support.
Home Page: https://docs.rs/tonic
License: MIT License
Building the examples in tonic-examples in this repository crashes.
I just checked out this repository, so it's just master of tonic.
$ git rev-parse HEAD
af5754bd437ffbb5c7a9fbe36af14cd182d48c53
$ uname -a
Linux think5 4.19.80 #1-NixOS SMP Thu Oct 17 20:45:44 UTC 2019 x86_64 GNU/Linux
$ rustc --version
rustc 1.39.0-beta.7 (23f8f652b 2019-10-26)
$ cargo --version
cargo 1.39.0-beta (1c6ec66d5 2019-09-30)
Following tonic-examples/README.md, I tried to run helloworld-client.
This resulted in the following build error:
$ cargo run --bin helloworld-client
[...]
Compiling tonic-examples v0.1.0 (/home/leo/Code/other/tonic/tonic-examples)
error: failed to run custom build command for `tonic-examples v0.1.0 (/home/leo/Code/other/tonic/tonic-examples)`
Caused by:
process didn't exit successfully: `/home/leo/Code/other/tonic/target/debug/build/tonic-examples-c5600591329ec7f8/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1165:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:76
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:60
4: core::fmt::write
at src/libcore/fmt/mod.rs:1030
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:64
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:196
9: std::panicking::default_hook
at src/libstd/panicking.rs:210
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:477
11: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:380
12: rust_begin_unwind
at src/libstd/panicking.rs:307
13: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
14: core::result::unwrap_failed
at src/libcore/result.rs:1165
15: core::result::Result<T,E>::unwrap
at /rustc/23f8f652bcea053b70c0030008941f5f8476b5a0/src/libcore/result.rs:933
16: build_script_build::main
at tonic-examples/build.rs:2
17: std::rt::lang_start::{{closure}}
at /rustc/23f8f652bcea053b70c0030008941f5f8476b5a0/src/libstd/rt.rs:64
18: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:49
19: std::panicking::try::do_call
at src/libstd/panicking.rs:292
20: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:80
21: std::panicking::try
at src/libstd/panicking.rs:271
22: std::panic::catch_unwind
at src/libstd/panic.rs:394
23: std::rt::lang_start_internal
at src/libstd/rt.rs:48
24: std::rt::lang_start
at /rustc/23f8f652bcea053b70c0030008941f5f8476b5a0/src/libstd/rt.rs:64
25: main
26: __libc_start_main
27: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
└── tonic v0.1.0-alpha.3
└── tonic-build v0.1.0-alpha.3
Linux cccjlx 5.3.7-arch1-1-ARCH #1 SMP PREEMPT Fri Oct 18 00:17:03 UTC 2019 x86_64 GNU/Linux
tonic-build
Proto buffers definition:
syntax = "proto3";
package foo;
service Foo {
  rpc Foo(stream FooRequest) returns (stream FooResponse) {}
}
message FooRequest {}
message FooResponse {}
The following error occurs when executing cargo build:
error[E0404]: expected trait, found struct `Foo`
--> /xxx/target/debug/build/xxx-16b05b8476663e11/out/foo.rs:126:35
|
126 | struct Foo<T: Foo>(pub Arc<T>);
| ^^^ not a trait
help: possible better candidate is found in another module, you can import it into scope
|
67 | use crate::proto::foo::server::Foo;
|
error[E0404]: expected trait, found struct `Foo`
--> /xxx/target/debug/build/xxx-16b05b8476663e11/out/foo.rs:127:29
|
127 | impl<T: Foo> tonic::server::StreamingService<super::FooRequest> for Foo<T> {
| ^^^ not a trait
help: possible better candidate is found in another module, you can import it into scope
|
67 | use crate::proto::foo::server::Foo;
And I notice there is a new struct named Foo in the generated RPC method, which conflicts with the previously defined trait Foo.
The complete generated code:
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct FooRequest {}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct FooResponse {}
#[doc = r" Generated client implementations."]
pub mod client {
#![allow(unused_variables, dead_code, missing_docs)]
use tonic::codegen::*;
pub struct FooClient<T> {
inner: tonic::client::Grpc<T>,
}
impl FooClient<tonic::transport::Channel> {
#[doc = r" Attempt to create a new client by connecting to a given endpoint."]
pub fn connect<D>(dst: D) -> Result<Self, tonic::transport::Error>
where
D: std::convert::TryInto<tonic::transport::Endpoint>,
D::Error: Into<StdError>,
{
tonic::transport::Endpoint::new(dst).map(|c| Self::new(c.channel()))
}
}
impl<T> FooClient<T>
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T::ResponseBody: Body + HttpBody + Send + 'static,
T::Error: Into<StdError>,
<T::ResponseBody as HttpBody>::Error: Into<StdError> + Send,
<T::ResponseBody as HttpBody>::Data: Into<bytes::Bytes> + Send,
{
pub fn new(inner: T) -> Self {
let inner = tonic::client::Grpc::new(inner);
Self { inner }
}
#[doc = r" Check if the service is ready."]
pub async fn ready(&mut self) -> Result<(), tonic::Status> {
self.inner.ready().await.map_err(|e| {
tonic::Status::new(
tonic::Code::Unknown,
format!("Service was not ready: {}", e.into()),
)
})
}
pub async fn foo<S>(
&mut self,
request: tonic::Request<S>,
) -> Result<tonic::Response<tonic::codec::Streaming<super::FooResponse>>, tonic::Status>
where
S: Stream<Item = Result<super::FooRequest, tonic::Status>> + Send + 'static,
{
self.ready().await?;
let codec = tonic::codec::ProstCodec::new();
let path = http::uri::PathAndQuery::from_static("/foo.Foo/Foo");
self.inner.streaming(request, path, codec).await
}
}
impl<T: Clone> Clone for FooClient<T> {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
}
}
}
}
#[doc = r" Generated server implementations."]
pub mod server {
#![allow(unused_variables, dead_code, missing_docs)]
use tonic::codegen::*;
#[doc = "Generated trait containing gRPC methods that should be implemented for use with FooServer."]
#[async_trait]
pub trait Foo: Send + Sync + 'static {
#[doc = "Server streaming response type for the Foo method."]
type FooStream: Stream<Item = Result<super::FooResponse, tonic::Status>> + Send + 'static;
async fn foo(
&self,
request: tonic::Request<tonic::Streaming<super::FooRequest>>,
) -> Result<tonic::Response<Self::FooStream>, tonic::Status> {
Err(tonic::Status::unimplemented("Not yet implemented"))
}
}
#[derive(Clone, Debug)]
pub struct FooServer<T: Foo> {
inner: Arc<T>,
}
#[derive(Clone, Debug)]
#[doc(hidden)]
pub struct FooServerSvc<T: Foo> {
inner: Arc<T>,
}
impl<T: Foo> FooServer<T> {
#[doc = "Create a new FooServer from a type that implements Foo."]
pub fn new(inner: T) -> Self {
let inner = Arc::new(inner);
Self::from_shared(inner)
}
pub fn from_shared(inner: Arc<T>) -> Self {
Self { inner }
}
}
impl<T: Foo> FooServerSvc<T> {
pub fn new(inner: Arc<T>) -> Self {
Self { inner }
}
}
impl<T: Foo, R> Service<R> for FooServer<T> {
type Response = FooServerSvc<T>;
type Error = Never;
type Future = Ready<Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, _: R) -> Self::Future {
ok(FooServerSvc::new(self.inner.clone()))
}
}
impl<T: Foo> Service<http::Request<HyperBody>> for FooServerSvc<T> {
type Response = http::Response<tonic::body::BoxBody>;
type Error = Never;
type Future = BoxFuture<Self::Response, Self::Error>;
fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: http::Request<HyperBody>) -> Self::Future {
let inner = self.inner.clone();
match req.uri().path() {
"/foo.Foo/Foo" => {
struct Foo<T: Foo>(pub Arc<T>);
impl<T: Foo> tonic::server::StreamingService<super::FooRequest> for Foo<T> {
type Response = super::FooResponse;
type ResponseStream = T::FooStream;
type Future =
BoxFuture<tonic::Response<Self::ResponseStream>, tonic::Status>;
fn call(
&mut self,
request: tonic::Request<tonic::Streaming<super::FooRequest>>,
) -> Self::Future {
let inner = self.0.clone();
let fut = async move { inner.foo(request).await };
Box::pin(fut)
}
}
let inner = self.inner.clone();
let fut = async move {
let method = Foo(inner);
let codec = tonic::codec::ProstCodec::new();
let mut grpc = tonic::server::Grpc::new(codec);
let res = grpc.streaming(method, req).await;
Ok(res)
};
Box::pin(fut)
}
_ => Box::pin(async move {
Ok(http::Response::builder()
.status(200)
.header("grpc-status", "12")
.body(tonic::body::BoxBody::empty())
.unwrap())
}),
}
}
}
}
tonic
I have a dependency on alpha.4 internally, and would like to be able to use tonic with it.
This can be addressed by bumping the version. I'm not entirely sure what the drawbacks of a bump like this would be. Looking into the differences between 3 and 4 now.
In the meantime, the alternative is to downgrade our version of hyper to alpha.3.
Nice to have: some users may wish to use tonic-build as a separate step, so:
Handled in this neighborhood:
15: core::result::Result<T,E>::unwrap
at /rustc/fa5c2f3e5724bce07bf1b70020e5745e7b693a57/src/libcore/result.rs:933
16: tonic_build::fmt
at /home/john/code/tonic/tonic-build/src/lib.rs:200
17: tonic_build::Builder::compile
tonic_build::compile_protos("proto/helloworld/helloworld.proto")?;
out: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
out: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
out: Output { status: ExitStatus(ExitStatus(0)), stdout: "", stderr: "" }
It would be helpful if it detailed the generated files.
Alternatively, for 1 above - update the README to reflect the OUT_DIR requirement.
tonic-benches v0.1.0 (/home/john/code/tonic/tonic-benches)
├── tonic v0.1.0-alpha.2 (/home/john/code/tonic/tonic)
└── tonic-build v0.1.0-alpha.2 (/home/john/code/tonic/tonic-build)
├── tonic v0.1.0-alpha.2 (/home/john/code/tonic/tonic) (*)
├── tonic-build v0.1.0-alpha.2 (/home/john/code/tonic/tonic-build) (*)
Linux tribble 5.0.0-31-generic #33~18.04.1-Ubuntu SMP Tue Oct 1 10:20:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
tonic-build
Running from 1987a40.
Linux nexus 5.3.1 #1-NixOS SMP Sat Sep 21 05:19:47 UTC 2019 x86_64 GNU/Linux
Affects tonic.
When running these lines I occasionally get an error message from prost informing me that a buffer underflow has occurred:
Error: Status { code: Internal, message: "failed to decode Protobuf message: buffer underflow" }
Sometimes this happens, other times it doesn't. It seems to be related to the number of bytes that I try to push through from the server to the client.
After digging around for quite some time, the failing requests all seem to be missing 3 additional bytes after the 5-byte header. When testing the blob-client with 10000 bytes, a request ends up having a total length of 10005. After the buffer is advanced 5 bytes, the length is now 10000. The prost::encoding::bytes::merge function then reads off 3 bytes to get the length of the byte array from the proto, which is 10000, and then checks for underflow.
I'm stumped as to how these bytes are missing in some cases and are present in many others.
This prost issue (https://github.com/danburkert/prost/issues/98) suggests using {encode|decode}_length_delimited, but that didn't seem to have any effect when I swapped those calls in in prost.rs.
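For reference, the 5-byte header discussed above is the standard gRPC length-prefixed message framing. A minimal, self-contained sketch (an illustration, not tonic's actual decoder) of how the length is read and the underflow detected:

```rust
// Standard gRPC length-prefixed framing: 1-byte compression flag followed by
// a 4-byte big-endian payload length, then the payload itself.
fn decode_frame(buf: &[u8]) -> Result<&[u8], String> {
    if buf.len() < 5 {
        return Err("incomplete 5-byte frame header".to_string());
    }
    let _compressed = buf[0] == 1;
    let len = u32::from_be_bytes([buf[1], buf[2], buf[3], buf[4]]) as usize;
    let remaining = buf.len() - 5;
    if remaining < len {
        // The situation described above: the header promises more bytes
        // than the buffer actually contains.
        return Err(format!("buffer underflow: need {} bytes, have {}", len, remaining));
    }
    Ok(&buf[5..5 + len])
}
```

A frame whose length field claims 10000 bytes but whose buffer only holds 9997 after the header would fail this check, matching the symptom described.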
Documentation or examples on how to use nested message types from Rust, e.g. for something like:
message SearchResponse {
  message Result {
    string url = 1;
    string title = 2;
    repeated string snippets = 3;
  }
  repeated Result results = 1;
}
Just to make getting started with using Tonic a bit easier, since it's not as obvious how nested data structures might be handled in Rust.
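For context, prost's convention (as I understand it; derives and `#[prost(...)]` field attributes are omitted in this sketch) is to place nested messages in a snake_case submodule named after the outer message, so `SearchResponse.Result` becomes `search_response::Result`:

```rust
// Local sketch only: real prost output also derives Clone/PartialEq/Message
// and tags each field with #[prost(...)] attributes.
pub struct SearchResponse {
    pub results: Vec<search_response::Result>,
}

// Nested types from `message SearchResponse { message Result { ... } }`
// land in a submodule named after the outer message.
pub mod search_response {
    pub struct Result {
        pub url: String,
        pub title: String,
        pub snippets: Vec<String>,
    }
}
```

User code then names the nested type as `search_response::Result` when constructing or matching values.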
0.1.0-alpha.4
Linux 5.0.0-29-generic Ubuntu SMP Thu Sep 12 13:05:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
tonic-build
tonic-build fails to compile with the "transport" feature disabled, throwing the following error:
error[E0061]: this function takes 0 parameters but 1 parameter was supplied
--> /home/.../.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-build-0.1.0-alpha.4/src/client.rs:10:19
|
10 | let connect = generate_connect(&service_ident);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected 0 parameters
...
69 | fn generate_connect() -> TokenStream {
| ------------------------------------ defined here
I took a quick glance at the affected code, and it looks like it should be a quick fix :)
The official protocol definition mentions the "grpc-timeout" header, which is a way for the client to tell the server what deadline it has.
As I understand it, the timeout should be converted into a deadline as soon as the header is read (before the rest of the payload).
Using this, when the server is overloaded, requests can just be dropped with DEADLINE_EXCEEDED or CANCELLED (not sure which one should be picked on the server side) if it takes too long until they are processed.
E.g. in C++, there is the ServerContext::IsCancelled() function which can be used to exit early: Blog post on gRPC deadlines
How can this be achieved with tonic?
Are there features in tower/hyper which could be used?
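For anyone exploring this: per the gRPC-over-HTTP/2 spec, the header value is ASCII digits followed by a one-letter unit (H, M, S, m, u, n). A minimal sketch of converting it to a std Duration (an illustration, not tonic code):

```rust
use std::time::Duration;

// "grpc-timeout" is digits followed by a single unit letter, e.g. "5S"
// (5 seconds) or "100m" (100 milliseconds).
fn parse_grpc_timeout(value: &str) -> Option<Duration> {
    if value.len() < 2 || !value.is_ascii() {
        return None;
    }
    let (digits, unit) = value.split_at(value.len() - 1);
    let n: u64 = digits.parse().ok()?;
    match unit {
        "H" => Some(Duration::from_secs(n * 3600)),
        "M" => Some(Duration::from_secs(n * 60)),
        "S" => Some(Duration::from_secs(n)),
        "m" => Some(Duration::from_millis(n)),
        "u" => Some(Duration::from_micros(n)),
        "n" => Some(Duration::from_nanos(n)),
        _ => None,
    }
}
```

The resulting Duration could then be turned into a deadline (e.g. `Instant::now() + timeout`) as soon as the headers are read.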
tonic
Currently a small subset of the configuration options available for OpenSSL or Rustls is exposed by the builder API, via the openssl_tls and rustls_tls functions (openssl_tls for servers and rustls_tls for clients). Clearly many more configuration points exist in each of the supported TLS libraries which are not exposed. If these are needed, users lose the benefit of the 'batteries included' nature of the transport implementations which are part of Tonic.
Currently the TLS library in use can be switched out in Tonic by enabling the appropriate Cargo feature, and changing the method name called from the builder - the arguments are compatible for each.
Wrapping every available option in a cross-TLS library compatible way in this way could lead to a sprawling API, and there may still be divergence between libraries. Instead, it would be better to allow the native configuration structure from each library to be passed into the builders:
- rustls::ServerConfig and rustls::ClientConfig for Rustls
- openssl::ssl::SslAcceptorBuilder and openssl::ssl::SslConnectorBuilder for OpenSSL

This does add considerable complexity though, and there are some cases where the existing mechanism is sufficient - so it is desirable to keep the simpler interface for those that prefer that.
It's probably better for these different mechanisms for configuring TLS to be mutually exclusive with one another - that is, it should not be possible to pass in both a ServerConfig and an Identity. Consequently, I propose making the existing methods on the builder take an enumeration instead, with separate variants for each configuration mechanism.
An alternative to the enumeration is to have builder methods for each configuration mechanism and ensure via some other mechanism that they are not used together. This seems more complex to me, though there is currently not much in the way of precedent for the style suggested above in the builder API so this might be preferable.
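A hypothetical sketch of the proposed enumeration, with local stand-in types in place of the real rustls/openssl configuration structs (the names are assumptions, not an actual tonic API):

```rust
// All three payload types are local stand-ins, not the real library types.
pub struct Identity;               // tonic's existing certificate + key pair
pub struct RustlsServerConfig;     // stand-in for rustls::ServerConfig
pub struct OpensslAcceptorBuilder; // stand-in for openssl::ssl::SslAcceptorBuilder

// One variant per configuration mechanism makes the mechanisms mutually
// exclusive by construction: the builder can accept exactly one of them.
pub enum ServerTlsConfig {
    Identity(Identity),
    Rustls(RustlsServerConfig),
    Openssl(OpensslAcceptorBuilder),
}

pub fn describe(config: &ServerTlsConfig) -> &'static str {
    match config {
        ServerTlsConfig::Identity(_) => "identity",
        ServerTlsConfig::Rustls(_) => "rustls",
        ServerTlsConfig::Openssl(_) => "openssl",
    }
}
```

Because the enum carries exactly one payload, passing both a native config and an Identity is unrepresentable, which is the mutual-exclusion property argued for above.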
We should attempt to use feature flags to reduce dependencies that we don't use.
This is not only for tonic; tower-grpc and tower-web have the same issue when I tried all of them in a Docker container.
This could be related to a conflict between how Docker handles networking and how Rust handles it.
I do not know if I did something wrong, but the tonic server cannot be accessed from other Docker containers within the same bridge network.
Here is what I did:
So I am fairly sure it is related to the Rust implementation of gRPC.
I tried tower-web too, and found it has the same problem. So I guess it is related to Rust.
Running microservices is getting more and more popular. I think there must be some workaround I just do not know. Can someone help to figure it out?
tonic-examples v0.1.0 (/home/ccc/workspace/tonic/tonic-examples)
├── tonic v0.1.0-alpha.5 (/home/ccc/workspace/tonic/tonic)
└── tonic-build v0.1.0-alpha.5 (/home/ccc/workspace/tonic/tonic-build)
Linux cccjlx 5.3.7-arch1-2-ARCH #1 SMP PREEMPT @1572002934 x86_64 GNU/Linux
tonic
When I run the following command (the server was running in another terminal):
$ cd tonic/tonic-examples
$ cargo run --bin load-balance-client
And I got the error:
Compiling tonic-examples v0.1.0 (/home/ccc/workspace/tonic/tonic-examples)
Finished dev [unoptimized + debuginfo] target(s) in 2.98s
Running `/home/ccc/workspace/tonic/target/debug/load-balance-client`
thread 'tokio-runtime-worker-0' panicked at 'generator resumed after completion', tonic/src/transport/service/connection.rs:29:79
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Error: Status { code: Unknown, message: "Client: buffer\'s worker closed unexpectedly" }
It went well when I used version v0.1.0-alpha.4, so it may have been broken by recent changes. 😂
Tracking issue for request Routing
Tonic should allow a single socket/server to serve multiple services. Routing is the way to get there.
Current progress: https://github.com/hyperium/tonic/blob/master/tonic-interop/src/bin/server.rs#L73
See also: tower-rs/tower-grpc#2
tonic
Currently, Tonic's Client API requires users to write code like this:
let request = tonic::Request::new(HelloRequest {
    name: "hello".into(),
});
let response = client.say_hello(request).await?;
The need to wrap the request struct in a tonic::Request is a minor ergonomics issue that could be surprising. This is admittedly a pretty minor papercut, but it seems like it would reduce some friction in the APIs that I suspect a new user is most likely to use (and make the basic examples seem simpler!)...
What do you think about changing the client API so that users could just write:
let request = HelloRequest {
    name: "hello".into(),
};
let response = client.say_hello(request).await?;
We could add a trait like this:
pub trait IntoRequest {
    type ReqMessage;
    fn into_request(self) -> tonic::Request<Self::ReqMessage>
    where
        Self: Sized;
}
and generate impls for all the generated request messages, like
impl tonic::IntoRequest for #request {
    type ReqMessage = Self;
    fn into_request(self) -> tonic::Request<Self>
    where
        Self: Sized,
    {
        tonic::Request::new(self)
    }
}
Then, we could change the generated RPC methods on the client to be like this:
async fn #ident(&mut self, request: impl tonic::IntoRequest<ReqMessage = #request>)
We could add an impl of IntoRequest
for Request<T>
like this:
impl<T> IntoRequest for Request<T> {
    type ReqMessage = T;
    fn into_request(self) -> Request<T>
    where
        Self: Sized,
    {
        self
    }
}
which would allow users to still construct tonic::Request types if needed, such as when manually adding headers etc.
Something similar could probably be done for response messages.
We could also consider using From and Into here, rather than defining a new IntoRequest trait. This is what I originally suggested in #1 (comment). Ideally, I think it would be better to use the stdlib traits where possible, rather than defining a new one. However, @LucioFranco says that this causes some issues with type inference which a new trait could possibly avoid.
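Putting the pieces above together, here is a self-contained sketch of the proposed design, with local stand-ins for tonic::Request and the generated message/client types:

```rust
// Stand-in for tonic::Request<T>.
pub struct Request<T> {
    message: T,
}

impl<T> Request<T> {
    pub fn new(message: T) -> Self {
        Request { message }
    }
}

pub trait IntoRequest {
    type ReqMessage;
    fn into_request(self) -> Request<Self::ReqMessage>;
}

// Stand-in for a generated prost message; codegen would emit this impl.
pub struct HelloRequest {
    pub name: String,
}

impl IntoRequest for HelloRequest {
    type ReqMessage = Self;
    fn into_request(self) -> Request<Self> {
        Request::new(self)
    }
}

// An already-wrapped Request<T> passes through unchanged, so callers can
// still build one by hand when they need to set headers etc.
impl<T> IntoRequest for Request<T> {
    type ReqMessage = T;
    fn into_request(self) -> Request<T> {
        self
    }
}

// A generated client method then accepts either form.
pub fn say_hello(request: impl IntoRequest<ReqMessage = HelloRequest>) -> String {
    let req = request.into_request();
    format!("Hello, {}!", req.message.name)
}
```

Both `say_hello(HelloRequest { .. })` and `say_hello(Request::new(HelloRequest { .. }))` compile, which is the ergonomic win described above.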
tonic = { version = "0.1.0-alpha.4", features = ["rustls", "prost"] }
tonic-build = { version = "0.1.0-alpha.4" }
OS: macOS Catalina (10.15)
rustc version: 1.39.0-nightly (2019-09-10)
# output from: `uname -a`
Darwin MBP 19.0.0 Darwin Kernel Version 19.0.0: Wed Sep 25 20:18:50 PDT 2019; root:xnu-6153.11.26~2/RELEASE_X86_64 x86_64
Hello,
I was trying to write an abstraction library for using Google Cloud Platform APIs.
More specifically, I was writing the one for the Pub/Sub service of GCP.
So, I cloned the protobuf definitions of their services (https://github.com/googleapis/googleapis), and attempted to call one of their endpoints.
Here is the build.rs file I used:
fn main() {
    tonic_build::configure()
        .build_client(true)
        .build_server(false)
        .format(true)
        .out_dir("src/api")
        .compile(&["protos/google/pubsub/v1/pubsub.proto"], &["protos"])
        .unwrap();
    println!("cargo:rerun-if-changed=protos/google/pubsub/v1/pubsub.proto");
}
protos/ is the directory containing the clone of Google's protobufs.
Then, I attempt to connect and make a call to the service, like this:
use http::HeaderValue;
use tonic::transport::{Certificate, Channel, ClientTlsConfig};
use tonic::Request;
mod api {
    // Generated protobuf bindings
    include!("api/google.pubsub.v1.rs");
}

const ENDPOINT: &str = "https://pubsub.googleapis.com";
const SCOPES: [&str; 2] = [
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/pubsub",
];

#[tokio::main]
async fn main() {
    let certs = tokio::fs::read("roots.pem").await.unwrap();
    let mut tls_config = ClientTlsConfig::with_rustls();
    tls_config.ca_certificate(Certificate::from_pem(certs.as_slice()));
    tls_config.domain_name("pubsub.googleapis.com");
    let channel = Channel::from_static(ENDPOINT)
        .intercept_headers(|headers| {
            let token = "some-valid-google-oauth-token";
            let value = format!("Bearer {0}", token);
            let value = HeaderValue::from_str(value.as_str()).unwrap();
            headers.insert("authorization", value);
        })
        .tls_config(&tls_config)
        .channel();
    let mut service = api::client::PublisherClient::new(channel);
    let response = service.list_topics(Request::new(api::ListTopicsRequest {
        project: format!("projects/{0}", "some-gcp-project-name"),
        page_size: 10,
        page_token: String::default(),
    }));
    dbg!(response.await);
}
The roots.pem file I use here is the sample PEM file from Google Trust Services (https://pki.goog/roots.pem).
The dbg!(response.await) statement always yields:
[src/sample.rs:42] response.await = Err(
    Status {
        code: Internal,
        message: "Unexpected compression flag: 60",
    },
)
This happens regardless of the validity of the OAuth token.
To see if the connection flow was valid, I reproduced the experiment using Golang's google.golang.org/grpc and github.com/golang/protobuf packages:
package main

import (
    "context"
    "fmt"
    "os"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/metadata"

    pubsub "google.golang.org/genproto/googleapis/pubsub/v1" // generated protobuf bindings
)

func clientInterceptor(
    ctx context.Context,
    method string,
    req interface{},
    reply interface{},
    cc *grpc.ClientConn,
    invoker grpc.UnaryInvoker,
    opts ...grpc.CallOption,
) error {
    md := metadata.New(map[string]string{
        "authorization": "Bearer some-valid-google-oauth-token",
    })
    ctx = metadata.NewOutgoingContext(ctx, md)
    return invoker(ctx, method, req, reply, cc, opts...)
}

func main() {
    creds, err := credentials.NewClientTLSFromFile("roots.pem", "pubsub.googleapis.com")
    if err != nil {
        fmt.Fprintf(os.Stderr, "error: %s\n", err.Error())
    }
    conn, err := grpc.Dial("pubsub.googleapis.com:443", grpc.WithTransportCredentials(creds), grpc.WithUnaryInterceptor(clientInterceptor))
    if err != nil {
        fmt.Fprintf(os.Stderr, "error: %s\n", err.Error())
    }
    defer conn.Close()
    pub := pubsub.NewPublisherClient(conn)
    resp, err := pub.ListTopics(context.Background(), &pubsub.ListTopicsRequest{Project: "projects/some-gcp-project-name", PageSize: 10})
    if err != nil {
        fmt.Fprintf(os.Stderr, "error: %s\n", err.Error())
    }
    fmt.Printf("topics: %#v\n", resp)
}
This Go code, provided with valid credentials, works as expected, whereas the Rust code, with the same credentials, encounters the invalid compression flag error.
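One observation that may help debugging (an assumption, not something confirmed in the report): a gRPC compression flag must be 0 or 1, and 60 is ASCII '<', which is exactly what you would see if the client were decoding an HTML or text error body as a gRPC frame:

```rust
// Valid gRPC compression flags are 0 (uncompressed) and 1 (compressed).
// A printable ASCII byte in that position suggests the stream isn't gRPC
// framing at all -- 60 is '<', the first byte of an HTML document.
fn looks_like_text(compression_flag: u8) -> bool {
    compression_flag.is_ascii_graphic()
}
```

If that guess holds, capturing the raw response body would show what the server (or an intermediary) actually returned instead of a gRPC frame.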
Both the C++ gRPC client, and the Go client (and probably more) expose the peer address and other connection metadata such as the mTLS certificate used by the client - these features combined are great for certificate-based access control and request source filtering / audit logs / metrics / etc.
It looks like there was a brief discussion for the C++ implementation on how to expose the peer address in a way that could be used across multiple transports (both IP and unix domain sockets are particularly common) that would be wise to take into account too - they ended up returning a string URI for each transport. The Go implementation handles this nicely using the net.Addr abstraction in the standard library that covers various transports.
Add a peer-like field/method to the Request struct.
Perhaps we would express the different transports as enum variants with their associated addresses? Something like:
enum PeerAddr {
    Tcp(std::net::SocketAddr),
    Unix(std::os::unix::net::SocketAddr),
}
I'm not too sure what is best for the mTLS certificate. As #47 points out (and #48 fixes), attempting to abstract away the different TLS types for each backend into a common type is probably not the best direction to take - that said, maybe each TLS implementation could have a method on the Request that returns its native certificate type, placed behind feature flags in the same way the with_rustls/with_openssl methods are.
As noted above, the C++ implementation returns a string URI and leaves the caller to parse it according to the expected value. This requires unnecessarily verbose code for the caller to extract the address, I hope we'd be able to do something a little nicer!
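To illustrate the difference from the C++ string-URI approach, a small sketch of how the enum could be consumed (with a String standing in for the Unix socket address, since std's unix SocketAddr is platform-specific):

```rust
use std::net::SocketAddr;

pub enum PeerAddr {
    Tcp(SocketAddr),
    Unix(String), // stand-in for std::os::unix::net::SocketAddr
}

// Callers match on the typed variant instead of parsing a string; rendering
// a URI-ish form stays possible for logs and metrics.
pub fn peer_uri(peer: &PeerAddr) -> String {
    match peer {
        PeerAddr::Tcp(addr) => format!("tcp:{}", addr),
        PeerAddr::Unix(path) => format!("unix:{}", path),
    }
}
```

The typed match gives direct access to the address for access control, while the string form covers the audit-log use case the C++ API targets.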
I'm trying to use Tonic, but I keep hitting #43. I've confirmed that tagging master on Cargo.toml solves the issue.
Could a new release be cut?
We should expose the hyper set_nodelay option as a config option for the transport channel and server.
Main Crate
I'm trying to output a unidirectional tonic gRPC stream via SSE using warp. I'm using warp from seanmonstar/warp#265 to be able to use it with async/await. Changing warp to be compatible with async/await requires all responses handed over to warp (and in turn handed over to hyper) to be Sync. However, the stream returned from a streaming gRPC response is not Sync.
Rustc complains that the decoder specifically is the part that is not Sync:
tonic/tonic/src/codec/decode.rs, line 22 in 5d0a795
As far as I can see, decoder is not accessible from the outside, and thus it should be safe to mark the whole Streaming struct as Sync. I would be happy to file a PR for that.
I don't see any workarounds, because AFAIK the Sync bound stems from a layer deep down inside hyper, and I guess it's unfeasible to change that somehow to remove it.
If we have a simple /proto folder structure as follows:
└── proto
└── helloworld
├── helloworld.proto
└── hello.proto
Where helloworld.proto is:
syntax = "proto3";
package helloworld;
import "hello.proto";
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}
and hello.proto is:
syntax = "proto3";
package names;
// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}
// The response message containing the greetings
message HelloReply {
  string message = 1;
}
And we use the default tonic-build just like the example:
tonic_build::compile_protos("proto/helloworld/helloworld.proto")?;
It will successfully build both of the files into separate Rust files as expected, but then using tonic::include_proto! like so:
pub mod helloworld {
    tonic::include_proto!("helloworld");
}
it will attempt to only import the main package and will require the user to manually include each package (because there could be several) as follows:
pub mod hello {
    tonic::include_proto!("hello");
}
pub mod helloworld {
    tonic::include_proto!("helloworld");
}
I am pretty sure this is the current intended behavior, but my inquiry is whether we want to include another macro, like include_protos! for example, that would save the user from the boilerplate, or potentially bake this into tonic_build itself, because we have info about which proto files are imported within each proto file. If either of these is desirable, please inform me and I could try helping implement it!
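A hypothetical sketch of what an include_protos!-style convenience macro could look like; the PACKAGE constant here is a placeholder for the real per-package include that tonic::include_proto! performs, which this sketch does not attempt:

```rust
// Each listed package becomes a module; in a real implementation the module
// body would pull in the corresponding generated file instead of a constant.
macro_rules! include_protos {
    ($($package:ident),+ $(,)?) => {
        $(
            pub mod $package {
                pub const PACKAGE: &str = stringify!($package);
            }
        )+
    };
}

include_protos!(hello, helloworld);
```

The user-facing win is one invocation per build rather than one `pub mod` block per generated package.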
Thank you for the awesome work by the way! I love tonic ❤️
It would be handy to add benchmarks mostly around encoding and decoding.
Master
64-bit Windows 10
Cannot build on Windows.
From a fresh checkout I tried cargo run --bin helloworld-client and got:
error: failed to run custom build command for `openssl-sys v0.9.50`
Caused by:
process didn't exit successfully: `C:\projects\test\tonic\target\debug\build\openssl-sys-0c79ae8c49ce2a76\build-script-main` (exit code: 101)
--- stdout
cargo:rustc-cfg=const_fn
cargo:rerun-if-env-changed=X86_64_PC_WINDOWS_MSVC_OPENSSL_LIB_DIR
X86_64_PC_WINDOWS_MSVC_OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=X86_64_PC_WINDOWS_MSVC_OPENSSL_INCLUDE_DIR
X86_64_PC_WINDOWS_MSVC_OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=X86_64_PC_WINDOWS_MSVC_OPENSSL_DIR
X86_64_PC_WINDOWS_MSVC_OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_DIR
OPENSSL_DIR unset
note: vcpkg did not find openssl as libcrypto and libssl: Could not find Vcpkg tree: No vcpkg.user.targets found. Set the VCPKG_ROOT environment variable or run 'vcpkg integrate install'
note: vcpkg did not find openssl as ssleay32 and libeay32: Could not find Vcpkg tree: No vcpkg.user.targets found. Set the VCPKG_ROOT environment variable or run 'vcpkg integrate install'
--- stderr
thread 'main' panicked at 'Could not find directory of OpenSSL installation, and this -sys crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it, you can set the OPENSSL_DIR environment variable for the
compilation process.

Make sure you also have the development packages of openssl installed.
For example, libssl-dev on Ubuntu or openssl-devel on Fedora.

If you're in a situation where you think the directory should be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.

$HOST = x86_64-pc-windows-msvc
$TARGET = x86_64-pc-windows-msvc
openssl-sys = 0.9.50

It looks like you're compiling for MSVC but we couldn't detect an OpenSSL
installation. If there isn't one installed then you can try the rust-openssl
README for more information about how to download precompiled binaries of
OpenSSL: https://github.com/sfackler/rust-openssl#windows
', C:\Users\jkolb\.cargo\registry\src\github.com-1ecc6299db9ec823\openssl-sys-0.9.50\build\find_normal.rs:150:5
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
The http crate with its 0.2 release will move its builders to be from &mut self to self. I want to consider following suit with tonic. Mostly, I wanted to open this to start a discussion.
cc @jen20
I'd love for some of the methods within the codec module to be opened up to have pub instead of pub(crate) visibility.
tonic
I've been toying around with adding some of the awesome work done here to my own projects, and I had to fork tonic to expose some of the methods within the codec module, as well as a few elsewhere.
Move to pub permissions for:
codec::decode::new_request
codec::encode::encode_server
codec::encode::EncodeBody
request::from_http_parts
request::into_http
response::into_http
It would be nice if there was an easier way to plug into the hyper request -> proto and proto -> hyper response flows, but these were the fastest (read: not necessarily best) changes to make to get to that end for the time being. Basically, it might be worth allowing Tonic to be used outside of Tower, just over raw hyper requests and responses.
I would like to obtain parity with the go-grpc-middleware repo. By obtaining parity we can be more confident that tonic can accomplish anything grpc-go can.
We should create a new project, tonic-middleware, to provide a repository of both "official" middleware and links to other middleware.
https://github.com/grpc-ecosystem/go-grpc-middleware
grpc_auth - a customizable (via AuthFunc) piece of auth middleware
grpc_ctxtags - a library that adds a Tag map to context, with data populated from the request body
We need to be able to chain Interceptor functions for this to reach parity.
grpc_zap - integration of the zap logging library into gRPC handlers.
grpc_logrus - integration of the logrus logging library into gRPC handlers.
Should have a https://github.com/tokio-rs/tracing middleware which can output the following information: https://github.com/grpc-ecosystem/go-grpc-middleware/blob/master/logging/logrus/doc.go
grpc_prometheus - Prometheus client-side and server-side monitoring middleware
otgrpc - OpenTracing client-side and server-side interceptors
grpc_opentracing - OpenTracing client-side and server-side interceptors with support for streaming and handler-returned tags
grpc_retry - a generic gRPC response code retry mechanism, client-side middleware
grpc_validator - codegen inbound message validation from .proto options
grpc_recovery - turn panics into gRPC errors
ratelimit - gRPC rate limiting by your own limiter
Currently, when you attempt to create a channel, it does not connect when you would expect it to. https://docs.rs/tonic/0.1.0-alpha.3/tonic/transport/struct.Endpoint.html#method.channel is currently sync.
This causes issues when you want to make sure you're connected before you make the first RPC call. Due to how tower-reconnect is implemented, the channel becomes lazy and thus attempts to connect on the first RPC call, which is a bit annoying.
To change this, we should make our own version of the reconnect middleware that allows us to create a channel, check that it is open/valid, and then lazily reconnect if it fails.
Ensure the readme code compiles and is up to date.
I think we can call rustdoc directly and test the readme in its own CI stage.
prost is a great library and has already set a very good foundation for tonic. That said, we should look into enabling other types of codegen "drivers". This would allow us to have codegen that can create flatbuffers- and JSON-based gRPC services.
tonic-build?
cc @cpcloud
Currently tonic provides interceptor_fn, but this is a little lower-level than what a lot of other solutions provide.
Ideally I could chain middleware functions without manually building my own stack inside an interceptor_fn.
See: #69, tower-rs/tower-grpc#86, https://github.com/grpc-ecosystem/go-grpc-middleware
On: tonic-build v0.1.0-alpha.5 (/code/tonic/tonic-build)
We need API docs.
This method is also helpful for diagnosing protobuf build issues (pb namespace, permissions, rustfmt, toolchain, config, etc.).
https://docs.rs/tonic-build/0.1.0-alpha.5/tonic_build/fn.configure.html
Line 166 in 23f648b
Although I don't know much about the two libraries, on its surface rust-protobuf appears to be more maintained than prost. What was the reasoning behind that choice?
New users need a good on-boarding experience. Opening an issue in support of that.
A couple of scenarios for validation/enhancement:
1 - A user is familiar with using protobufs; they clone, build, and run the hello world example, then start crafting their own protobufs in tonic.
2 - A user is looking for a client/server architecture and is unfamiliar with the stack.
3 - A user is familiar with tokio/tower but new to gRPC.
The main crate's API needs to be audited. The transport module is still under development, so it's probably not worth giving it a thorough review right now, but a light review would be nice.
Hello, thanks for this library; it is so far working very well for me, a nearly complete beginner to Rust.
I've got the example hello world server/client up and running happily, and now I want to extend that server by adding the gRPC health check service next to the hello world service. I got the health check working correctly in place of the hello world server, but I'm having trouble getting them running side by side. I tried the following, which compiles and runs, but only serves the hello world service; the health check one simply returns ERR=Status { code: Unimplemented }
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let addr = "[::1]:50051".parse().unwrap();
let greeter = MyGreeter::default();
let greeter_server = Server::builder()
.serve(addr, GreeterServer::new(greeter));
let example = ExampleServer::default();
let health_server = Server::builder()
.serve(addr, HealthServer::new(example));
join!(greeter_server, health_server);
Ok(())
}
It seems that the second server is clobbering the endpoints of the first server? I'm not really sure where to go from here, so any pointers would be greatly appreciated!
It is common to need concurrent client requests, but it seems that the gRPC client dispatcher is implemented to handle one call at a time, so the following example does not work by design:
use futures::join;
pub mod hello_world {
tonic::include_proto!("helloworld");
}
use hello_world::{client::GreeterClient, HelloRequest};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let mut client = GreeterClient::connect("http://[::1]:50051")?;
let request1 = tonic::Request::new(HelloRequest { name: "hello".into() });
let res1 = client.say_hello(request1);
let request2 = tonic::Request::new(HelloRequest { name: "hello".into() });
let res2 = client.say_hello(request2);
println!("RESPONSES={:?}", join!(res1, res2));
Ok(())
}
error[E0499]: cannot borrow `client` as mutable more than once at a time
--> tonic-examples/src/helloworld/client.rs:17:16
|
14 | let res1 = client.say_hello(request1);
| ------ first mutable borrow occurs here
...
17 | let res2 = client.say_hello(request2);
| ^^^^^^ second mutable borrow occurs here
18 |
19 | println!("RESPONSES={:?}", join!(res1, res2));
| ---- first borrow later used here
It seems that there is a need for multiplexing and a pool of clients.
This arguably affects both tonic and tonic_build.
It is quite confusing when you write one line of code and it produces an absolute mountain of error messages. This is what happens when you generate code using tonic_build and then include that generated code using tonic::include_proto!.
This is because the generated code depends not only on tonic but also on prost, bytes, and prost-types.
One possible solution is to re-export those dependencies through tonic so that, as long as the user depends on tonic, the generated code will work. The re-exports could be left undocumented, discouraging their use anywhere except in the generated code.
Alternatively, things can be left how they are and simply documented better.
tonic_build
In all of my company's projects, we use #![deny(missing_docs)]. With protobuf codegen, we are able to document all the fields because prost turns // regular comments into /// rust docstrings. But I still get denied for the overall module, which I cannot figure out how to document:
missing documentation for module foo:
pub mod foo {
}
I'm not sure if this is already possible, or whether it would be a part of tonic, prost, or somewhere in the middle, but it'd be great to be able to generate code that is compatible with #![deny(missing_docs)]. One potential solution is to allow a top-level comment of the form //! that then gets transformed into a module docstring.
Any format to address the missing documentation would be valid.
After codegen, I can solve the problem by editing the code in my target directory. I've automated this in my build.rs for now to get around it, but it is suboptimal. I have tried every combination of //, //!, and /// in both my protobuf file and around tonic::include_proto!, but to no avail.
tonic
Currently the various TLS builders allow either raw configuration or configuration using the simple interfaces exposed by tonic. It would be nice if they could be used together, such that changes made via the simple configuration are reflected in the raw configuration.
Unclear at this stage.
This bug reports some usability issues with TLS in tonic.
tonic = "0.1.0-alpha.3"
tonic-build = "0.1.0-alpha.3"
Windows 64-bit
When building a gRPC client with tonic and the rustls feature flag, without specifying a certificate in the builder, I expected tonic to find my server's certificate in the root store, but instead I get some pretty cryptic error messages out of the webpki crate.
I did some digging, and even opened an issue on webpki. The short of it: tonic needs to be able to build a full certificate chain. That's about all I know about that topic! I really know nothing about it. What fixed it for me was adding a line like this when creating my client config:
tls.root_store.add_server_trust_anchors(&webpki_roots::TLS_SERVER_ROOTS);
I don't know exactly what this line does, but it does fix my problem. It does require a new dependency on the webpki_roots crate.
I could set the trust anchors manually, but that would circumvent a lot of the boilerplate tonic provides around rustls and ClientConfig.
Does it make sense to add this behavior and dependency to tonic? My first reaction is no, but this seems like something others might trip on, and it would be nice if tonic exposed this for free for users like me who need it. Perhaps behind a feature flag? I also don't know if OpenSSL operates the same way and would take advantage of the feature in the same way. I wasn't able to try it because OpenSSL linking is a bit awkward on Windows.
I'm happy to PR a change in!
master: afa9d9d
Linux RILEY-LT 4.19.43-microsoft-standard #1 SMP Mon May 20 19:35:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
cargo-build
Because include_proto! uses OUT_DIR, it doesn't know where to find the files when using tonic_build::configure().out_dir("src/proto").
Line 11 in c1db642
At the moment, any function that takes a Client as a parameter requires specifying some pretty scary-looking type bounds:
async fn foo<T>(mut client: MyClient<T>)
where
T: tonic::client::GrpcService<tonic::body::BoxBody>,
T::ResponseBody: tonic::codegen::Body + tonic::codegen::HttpBody + Send + 'static,
T::Error: Into<tonic::codegen::StdError>,
<T::ResponseBody as tonic::codegen::HttpBody>::Error: Into<tonic::codegen::StdError> + Send,
<T::ResponseBody as tonic::codegen::HttpBody>::Data: Into<bytes::Bytes> + Send,
{ ... }
It would be nice to somehow simplify these sorts of declarations.
I'm not entirely sure how to work around this issue!
A heavyweight solution might be to write a proc macro that automagically includes these type bounds on a function, e.g.:
#[tonic::clientbounds(T)]
async fn foo<T>(mut client: MyClient<T>) { ... }
There are probably other, simpler solutions, but I can't think of any great ones off the top of my head. That said, I don't have a thorough understanding of tonic internals, so maybe there is an easy fix.
tonic-0.1.0-alpha.1
Windows 10
Like this issue, #25, but with StringValue and others in google/protobuf/wrappers.proto this time.
tonic generates client APIs for HTTP/2 gRPC clients. However, I propose allowing the transport to be abstracted from the rest of the application logic so that alternative transports could be implemented.
This may add additional overhead in the transport, but ideally it would not.
Implementing specific clients and servers for these use cases is an alternative, but will contain a lot of duplicate logic.
git master
2734d3a
Linux RILEY-LT 4.19.43-microsoft-standard #1 SMP Mon May 20 19:35:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
tonic-build
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error("expected identifier")', src/libcore/result.rs:1165:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.37/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:76
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:60
4: core::fmt::write
at src/libcore/fmt/mod.rs:1030
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1412
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:64
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:196
9: std::panicking::default_hook
at src/libstd/panicking.rs:210
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:477
11: std::panicking::continue_panic_fmt
at src/libstd/panicking.rs:380
12: rust_begin_unwind
at src/libstd/panicking.rs:307
13: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
14: core::result::unwrap_failed
at src/libcore/result.rs:1165
15: core::result::Result<T,E>::unwrap
at /rustc/5752b6348ee6971573b278c315a02193c847ee32/src/libcore/result.rs:933
16: tonic_build::service::generate_unary
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/service.rs:219
17: tonic_build::service::generate_methods
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/service.rs:186
18: tonic_build::service::generate
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/service.rs:8
19: <tonic_build::ServiceGenerator as prost_build::ServiceGenerator>::generate
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/lib.rs:176
20: prost_build::code_generator::CodeGenerator::push_service::{{closure}}
at /home/riley/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.5.0/src/code_generator.rs:685
21: core::option::Option<T>::map
at /rustc/5752b6348ee6971573b278c315a02193c847ee32/src/libcore/option.rs:447
22: prost_build::code_generator::CodeGenerator::push_service
at /home/riley/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.5.0/src/code_generator.rs:682
23: prost_build::code_generator::CodeGenerator::generate
at /home/riley/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.5.0/src/code_generator.rs:103
24: prost_build::Config::generate
at /home/riley/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.5.0/src/lib.rs:566
25: prost_build::Config::compile_protos
at /home/riley/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.5.0/src/lib.rs:543
26: tonic_build::Builder::compile
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/lib.rs:98
27: tonic_build::compile_protos
at /home/riley/.cargo/git/checkouts/tonic-54a04bc36763b24d/8f80d0f/tonic-build/src/lib.rs:130
28: build_script_build::main
at rlbi_proto/build.rs:2
29: std::rt::lang_start::{{closure}}
at /rustc/5752b6348ee6971573b278c315a02193c847ee32/src/libstd/rt.rs:64
30: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:49
31: std::panicking::try::do_call
at src/libstd/panicking.rs:292
32: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:80
33: std::panicking::try
at src/libstd/panicking.rs:271
34: std::panic::catch_unwind
at src/libstd/panic.rs:394
35: std::rt::lang_start_internal
at src/libstd/rt.rs:48
36: std::rt::lang_start
at /rustc/5752b6348ee6971573b278c315a02193c847ee32/src/libstd/rt.rs:64
37: main
38: __libc_start_main
39: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
admin.proto:
syntax = "proto3";
package rlbi.admin.v1;
import "google/protobuf/empty.proto";
option java_package = "rlbi.admin.v1";
option java_outer_classname = "AdminProto";
option optimize_for = SPEED;
service Admin {
rpc AddApplication(AddApplicationRequest) returns (google.protobuf.Empty);
rpc RenameApplication(RenameApplicationRequest) returns (google.protobuf.Empty);
}
message AddApplicationRequest {
string name = 1;
}
message RenameApplicationRequest {
string old_name = 1;
string new_name = 2;
}
build.rs:
fn main() {
tonic_build::compile_protos("proto/rlbi/admin/v1/admin.proto").unwrap();
}
Cargo.toml:
[package]
name = "rlbi_proto"
version = "0.1.0"
authors = ["Riley Labrecque"]
edition = "2018"
[dependencies]
tonic = { git = "https://github.com/hyperium/tonic", features = ["rustls"] }
prost = "0.5"
bytes = "0.4"
[build-dependencies]
tonic-build = { git = "https://github.com/hyperium/tonic" }
Note: Compiling some other protos works fine, so it's something about mine, but I haven't quite been able to figure it out yet. Maybe the import or usage of the common types?
Hi,
I've written a trivial gRPC server that receives a request with an array of ints, adds them up, and returns the sum.
A test client does the following:
The client and the server communicate over the loopback device, so there should be very little latency in the network link.
I see a curious result in my system: the CPU usage of both the server, and the client, is approx. 10%.
I've implemented the same protocol in Go, and the server and the client each use 100% CPU. The request rate that the Go implementation handles is 90x the request rate of the Rust implementation. I've also tried "server in Rust" + "client in Go", and vice versa. In both cases the request rate is as low as with "server and client in Rust".
For now, let us disregard the 90x request rate difference. The interesting question is: in the combination "client in Go" + "server in Rust", why does the tonic-based gRPC server limit itself to 10% CPU? The Go client can generate many more requests per second than the Rust server handles. It feels like a bug in tokio or h2 that does not immediately wake a function blocked in recv().
I've tested with the following packages:
Please find the test code attached.
grpc-test-rust.tar.gz
grpc-test-go.tar.gz
What tools do Rust and tokio provide to trace the execution of a tokio-based server? How can I see where tonic sleeps, and where the request-handling latency comes from?
I am trying to run tonic on a Raspberry Pi. It failed first at
rustup component add rustfmt --toolchain beta
I know it runs very well on my Mac, but it's just not working on the Raspberry Pi, because it uses Armv7 and rustfmt is not ready for the Armv7 beta toolchain yet.
If I try to ignore that, I hit a hard stop when building...
Caused by:
process didn't exit successfully: /home/pi/github/trust-engine/docker-api/target/debug/build/docker-api-f5114b92b0e93bfc/build-script-build
(exit code: 101)
--- stdout
out: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "error: 'rustfmt' is not installed for the toolchain 'beta-armv7-unknown-linux-gnueabihf'\nTo install, run rustup component add rustfmt --toolchain beta-armv7-unknown-linux-gnueabihf
\n" }
--- stderr
thread 'main' panicked at 'assertion failed: out.status.success()', /home/pi/github/tonic/tonic-build/src/lib.rs:185:9
note: run with RUST_BACKTRACE=1
environment variable to display a backtrace.
I am not quite sure if rustfmt is a must-have for building tonic. If not, is there a workaround?
I am currently working on a project on the Raspberry Pi. If we do not find a solution for this, I will have to give up on tonic and go back to hyper. I like tonic so much, and hope we can find a solution soon.
0.1.0-alpha.1
Linux pc 4.19.69-1-MANJARO #1 SMP PREEMPT Thu Aug 29 08:51:46 UTC 2019 x86_64 GNU/Linux
It seems the decoder doesn't free its buffer in time, so it panics very soon when transmitting messages continuously.
Backtrace:
tokio-runtime-worker-4' panicked at 'assertion failed: self.remaining_mut() >= src.remaining()', /home/skye/.cargo/registry/src/github.com-1ecc6299db9ec823/bytes-0.4.12/src/buf/buf_mut.rs:230:9
stack backtrace:
0: 0x55b8f58d6be2 - backtrace::backtrace::libunwind::trace::hc13eee94dc9acf87
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.35/src/backtrace/libunwind.rs:88
1: 0x55b8f58d6be2 - backtrace::backtrace::trace_unsynchronized::heb95bad2d38970b7
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.35/src/backtrace/mod.rs:66
2: 0x55b8f58d6be2 - std::sys_common::backtrace::_print::hc7c53d16b772cd09
at src/libstd/sys_common/backtrace.rs:47
3: 0x55b8f58d6be2 - std::sys_common::backtrace::print::h1a5c96cc746359e4
at src/libstd/sys_common/backtrace.rs:36
4: 0x55b8f58d6be2 - std::panicking::default_hook::{{closure}}::h09e52b8c91cd08d3
at src/libstd/panicking.rs:200
5: 0x55b8f58d68c6 - std::panicking::default_hook::h683e3e2fe7c55791
at src/libstd/panicking.rs:214
6: 0x55b8f58d7335 - std::panicking::rust_panic_with_hook::h185be11cf56dd6c5
at src/libstd/panicking.rs:477
7: 0x55b8f5861545 - std::panicking::begin_panic::h26f44ad2be4189d7
at /rustc/9eae1fc0ea9b00341b8fe191582c4decb5cb86a3/src/libstd/panicking.rs:411
8: 0x55b8f4f3c1e6 - bytes::buf::buf_mut::BufMut::put::h1eda9ff079116541
at /home/skye/rusty-p4/benchmark/<::std::macros::panic macros>:3
9: 0x55b8f4e29730 - <tonic::codec::decode::Streaming<T> as futures_core::stream::Stream>::poll_next::hbf98f28ee14a6dcc
at /home/skye/.cargo/registry/src/github.com-1ecc6299db9ec823/tonic-0.1.0-alpha.1/src/codec/decode.rs:237
master: afa9d9d
Linux RILEY-LT 4.19.43-microsoft-standard #1 SMP Mon May 20 19:35:22 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
cargo-build
When using includes, it seems like the client/server code is generated into the included file's output. This only seems to happen under certain conditions, I believe related to compiling multiple protos at once via tonic_build::configure().compile(&["proto/health.proto", "proto/test.proto"], &["proto"]).unwrap();
Repro here:
https://github.com/rlabrecque/tonic_include_repro
Specifically see: https://github.com/rlabrecque/tonic_include_repro/blob/master/src/proto/google.protobuf.rs
This is a followup to the conversation in #81 (comment) - I decided to create a new ticket to raise awareness.
tonic = "0.1.0-alpha.5"
Darwin JayMBP-2.local 18.7.0 Darwin Kernel Version 18.7.0: Sat Oct 12 00:02:19 PDT 2019; root:xnu-4903.278.12~1/RELEASE_X86_64 x86_64
In older releases, we were able to use non-Sync futures, like hyper::client::ResponseFuture, to yield items in a streaming response. #84 added a Sync trait bound to the pinned Stream trait object returned by the service impl. The following code works with the previous version of tonic:
#[derive(Debug)]
pub struct RouteGuide {
client: Client<HttpConnector, Body>,
}
#[tonic::async_trait]
impl server::RouteGuide for RouteGuide {
type RouteChatStream =
Pin<Box<dyn Stream<Item = Result<RouteNote, Status>> + Send + 'static>>;
async fn route_chat(
&self,
request: Request<tonic::Streaming<RouteNote>>,
) -> Result<Response<Self::RouteChatStream>, Status> {
println!("RouteChat");
let stream = request.into_inner();
let client = self.client.clone();
let output = async_stream::try_stream! {
futures::pin_mut!(stream);
while let Some(note) = stream.next().await {
let _note = note?;
// Make a simple HTTP request. What could possibly go wrong?
let res = client.get(hyper::Uri::from_static("http://httpbin.org/get")).await;
// Receive the response as a byte stream
let mut body = res.unwrap().into_body();
let mut bytes = Vec::new();
while let Some(chunk) = body.next().await {
bytes.extend(chunk.map_err(|_| Status::new(tonic::Code::Internal, "Error"))?);
}
let message = String::from_utf8_lossy(&bytes).to_string();
let note = RouteNote {
location: None,
message,
};
yield note;
}
};
Ok(Response::new(Box::pin(output)
as Pin<
Box<dyn Stream<Item = Result<RouteNote, Status>> + Send + 'static>,
>))
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let addr = "[::1]:10000".parse().unwrap();
println!("Listening on: {}", addr);
let client = hyper::client::Client::new();
let route_guide = RouteGuide {
client,
};
let svc = server::RouteGuideServer::new(route_guide);
Server::builder().serve(addr, svc).await?;
Ok(())
}
And the updated code now fails:
#[derive(Debug)]
pub struct RouteGuide {
client: Client<HttpConnector, Body>,
}
#[tonic::async_trait]
impl server::RouteGuide for RouteGuide {
type RouteChatStream =
Pin<Box<dyn Stream<Item = Result<RouteNote, Status>> + Send + Sync + 'static>>;
async fn route_chat(
&self,
request: Request<tonic::Streaming<RouteNote>>,
) -> Result<Response<Self::RouteChatStream>, Status> {
println!("RouteChat");
let stream = request.into_inner();
let client = self.client.clone();
let output = async_stream::try_stream! {
futures::pin_mut!(stream);
while let Some(note) = stream.next().await {
let _note = note?;
// Make a simple HTTP request. What could possibly go wrong?
let res = client.get(hyper::Uri::from_static("http://httpbin.org/get")).await;
// Receive the response as a byte stream
let mut body = res.unwrap().into_body();
let mut bytes = Vec::new();
while let Some(chunk) = body.next().await {
bytes.extend(chunk.map_err(|_| Status::new(tonic::Code::Internal, "Error"))?);
}
let message = String::from_utf8_lossy(&bytes).to_string();
let note = RouteNote {
location: None,
message,
};
yield note;
}
};
Ok(Response::new(Box::pin(output)
as Pin<
Box<dyn Stream<Item = Result<RouteNote, Status>> + Send + Sync + 'static>,
>))
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let addr = "[::1]:10000".parse().unwrap();
println!("Listening on: {}", addr);
let client = hyper::client::Client::new();
let route_guide = RouteGuide {
client,
};
let svc = server::RouteGuideServer::new(route_guide);
Server::builder().add_service(svc).serve(addr).await?;
Ok(())
}
error[E0277]: `(dyn core::future::future::Future<Output = std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>> + std::marker::Send + 'static)` cannot be shared between threads safely
|
= help: the trait `std::marker::Sync` is not implemented for `(dyn core::future::future::Future<Output = std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>> + std::marker::Send + 'static)`
= note: required because of the requirements on the impl of `std::marker::Sync` for `std::ptr::Unique<(dyn core::future::future::Future<Output = std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>> + std::marker::Send + 'static)>`
= note: required because it appears within the type `std::boxed::Box<(dyn core::future::future::Future<Output = std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>> + std::marker::Send + 'static)>`
= note: required because it appears within the type `std::pin::Pin<std::boxed::Box<(dyn core::future::future::Future<Output = std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>> + std::marker::Send + 'static)>>`
= note: required because it appears within the type `hyper::client::ResponseFuture`
= note: required because it appears within the type `for<'r, 's, 't0, 't1, 't2, 't3, 't4, 't5, 't6, 't7, 't8, 't9, 't10, 't11, 't12, 't13, 't14> {tonic::codec::decode::Streaming<routeguide::RouteNote>, std::pin::Pin<&'r mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, &'s mut std::pin::Pin<&'t0 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, std::pin::Pin<&'t1 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, futures_util::stream::next::Next<'t2, std::pin::Pin<&'t3 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, futures_util::stream::next::Next<'t4, std::pin::Pin<&'t5 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, (), std::option::Option<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, tonic::status::Status, &'t6 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, impl core::future::future::Future, impl core::future::future::Future, (), routeguide::RouteNote, &'t7 hyper::client::Client<hyper::client::connect::http::HttpConnector>, hyper::client::Client<hyper::client::connect::http::HttpConnector>, &'t8 str, http::uri::Uri, hyper::client::ResponseFuture, hyper::client::ResponseFuture, (), std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>, hyper::body::body::Body, std::vec::Vec<u8>, &'t9 mut hyper::body::body::Body, hyper::body::body::Body, impl core::future::future::Future, impl core::future::future::Future, (), std::option::Option<std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, &'t12 mut std::vec::Vec<u8>, 
std::vec::Vec<u8>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, [closure@<::async_stream::try_stream macros>:8:25: 8:54], std::result::Result<hyper::body::chunk::Chunk, tonic::status::Status>, &'t13 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, (), std::string::String, routeguide::RouteNote, &'t14 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, ()}` = note: required because it appears within the type `[static generator@<::async_stream::try_stream macros>:7:10: 11:11 stream:tonic::codec::decode::Streaming<routeguide::RouteNote>, __yield_tx:async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, client:hyper::client::Client<hyper::client::connect::http::HttpConnector> for<'r, 's, 't0, 't1, 't2, 't3, 't4, 't5, 't6, 't7, 't8, 't9, 't10, 't11, 't12, 't13, 't14> {tonic::codec::decode::Streaming<routeguide::RouteNote>, std::pin::Pin<&'r mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, &'s mut std::pin::Pin<&'t0 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, std::pin::Pin<&'t1 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, futures_util::stream::next::Next<'t2, std::pin::Pin<&'t3 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, futures_util::stream::next::Next<'t4, std::pin::Pin<&'t5 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, (), std::option::Option<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, tonic::status::Status, &'t6 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, 
tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, impl core::future::future::Future, impl core::future::future::Future, (), routeguide::RouteNote, &'t7 hyper::client::Client<hyper::client::connect::http::HttpConnector>, hyper::client::Client<hyper::client::connect::http::HttpConnector>, &'t8 str, http::uri::Uri, hyper::client::ResponseFuture, hyper::client::ResponseFuture, (), std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>, hyper::body::body::Body, std::vec::Vec<u8>, &'t9 mut hyper::body::body::Body, hyper::body::body::Body, impl core::future::future::Future, impl core::future::future::Future, (), std::option::Option<std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, &'t12 mut std::vec::Vec<u8>, std::vec::Vec<u8>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, [closure@<::async_stream::try_stream macros>:8:25: 8:54], std::result::Result<hyper::body::chunk::Chunk, tonic::status::Status>, &'t13 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, (), std::string::String, routeguide::RouteNote, &'t14 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, ()}]` = note: required because it appears within the type `std::future::GenFuture<[static generator@<::async_stream::try_stream macros>:7:10: 11:11 stream:tonic::codec::decode::Streaming<routeguide::RouteNote>, __yield_tx:async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, client:hyper::client::Client<hyper::client::connect::http::HttpConnector> for<'r, 's, 't0, 't1, 't2, 't3, 't4, 't5, 't6, 't7, 't8, 't9, 't10, 't11, 't12, 't13, 't14> {tonic::codec::decode::Streaming<routeguide::RouteNote>, std::pin::Pin<&'r mut 
tonic::codec::decode::Streaming<routeguide::RouteNote>>, &'s mut std::pin::Pin<&'t0 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, std::pin::Pin<&'t1 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>, futures_util::stream::next::Next<'t2, std::pin::Pin<&'t3 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, futures_util::stream::next::Next<'t4, std::pin::Pin<&'t5 mut tonic::codec::decode::Streaming<routeguide::RouteNote>>>, (), std::option::Option<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, tonic::status::Status, &'t6 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, std::result::Result<routeguide::RouteNote, tonic::status::Status>, impl core::future::future::Future, impl core::future::future::Future, (), routeguide::RouteNote, &'t7 hyper::client::Client<hyper::client::connect::http::HttpConnector>, hyper::client::Client<hyper::client::connect::http::HttpConnector>, &'t8 str, http::uri::Uri, hyper::client::ResponseFuture, hyper::client::ResponseFuture, (), std::result::Result<http::response::Response<hyper::body::body::Body>, hyper::error::Error>, hyper::body::body::Body, std::vec::Vec<u8>, &'t9 mut hyper::body::body::Body, hyper::body::body::Body, impl core::future::future::Future, impl core::future::future::Future, (), std::option::Option<std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, &'t12 mut std::vec::Vec<u8>, std::vec::Vec<u8>, std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>, [closure@<::async_stream::try_stream macros>:8:25: 8:54], std::result::Result<hyper::body::chunk::Chunk, tonic::status::Status>, &'t13 mut 
async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, (), std::string::String, routeguide::RouteNote, &'t14 mut async_stream::yielder::Sender<std::result::Result<routeguide::RouteNote, tonic::status::Status>>, impl core::future::future::Future, ()}]>`
= note: required because it appears within the type `impl core::future::future::Future`
= note: required because it appears within the type `async_stream::async_stream::AsyncStream<std::result::Result<routeguide::RouteNote, tonic::status::Status>, impl core::future::future::Future>`
= note: required for the cast to the object type `dyn futures_core::stream::Stream<Item = std::result::Result<routeguide::RouteNote, tonic::status::Status>> + std::marker::Send + std::marker::Sync`