
PROST!

prost is a Protocol Buffers implementation for the Rust Language. prost generates simple, idiomatic Rust code from proto2 and proto3 files.

Compared to other Protocol Buffers implementations, prost:

  • Generates simple, idiomatic, and readable Rust types by taking advantage of Rust derive attributes.
  • Retains comments from .proto files in generated Rust code.
  • Allows existing Rust types (not generated from a .proto) to be serialized and deserialized by adding attributes.
  • Uses the bytes::{Buf, BufMut} abstractions for serialization instead of std::io::{Read, Write}.
  • Respects the Protobuf package specifier when organizing generated code into Rust modules.
  • Preserves unknown enum values during deserialization.
  • Does not include support for runtime reflection or message descriptors.

Using prost in a Cargo Project

First, add prost and its public dependencies to your Cargo.toml:

[dependencies]
prost = "0.12"
# Only necessary if using Protobuf well-known types:
prost-types = "0.12"

The recommended way to add .proto compilation to a Cargo project is to use the prost-build library. See the prost-build documentation for more details and examples.

See the snazzy repository for a simple start-to-finish example.

MSRV

prost follows the tokio-rs project's MSRV model and supports Rust 1.70. For more information on the Tokio MSRV policy, see the tokio-rs documentation.

Generated Code

prost generates Rust code from source .proto files using the proto2 or proto3 syntax. prost's goal is to make the generated code as simple as possible.

protoc

As of the prost-build v0.11 release, protoc is required to invoke compile_protos (unless skip_protoc is enabled). prost no longer provides a bundled protoc or attempts to compile protoc for users. For installation instructions, see the protobuf install instructions.

Packages

prost can generate code for .proto files that don't have a package specifier. When a package specifier is present, prost translates it into a Rust module. For example, given the package specifier:

package foo.bar;

All Rust types generated from the file will be in the foo::bar module.

Messages

Given a simple message declaration:

// Sample message.
message Foo {
}

prost will generate the following Rust struct:

/// Sample message.
#[derive(Clone, Debug, PartialEq, Message)]
pub struct Foo {
}

Fields

Fields in Protobuf messages are translated into Rust as public struct fields of the corresponding type.

Scalar Values

Scalar value types are converted as follows:

Protobuf Type   Rust Type
double          f64
float           f32
int32           i32
int64           i64
uint32          u32
uint64          u64
sint32          i32
sint64          i64
fixed32         u32
fixed64         u64
sfixed32        i32
sfixed64        i64
bool            bool
string          String
bytes           Vec<u8>

Enumerations

All .proto enumeration types convert to the Rust i32 type. Additionally, each enumeration type gets a corresponding Rust enum type. For example, this proto enum:

enum PhoneType {
  MOBILE = 0;
  HOME = 1;
  WORK = 2;
}

gets this corresponding Rust enum [1]:

pub enum PhoneType {
    Mobile = 0,
    Home = 1,
    Work = 2,
}

You can convert a PhoneType value to an i32 by doing:

PhoneType::Mobile as i32

The #[derive(::prost::Enumeration)] annotation added to the generated PhoneType adds these associated functions to the type:

impl PhoneType {
    pub fn is_valid(value: i32) -> bool { ... }
    #[deprecated]
    pub fn from_i32(value: i32) -> Option<PhoneType> { ... }
}

It also adds an impl TryFrom<i32> for PhoneType, so you can convert an i32 to its corresponding PhoneType value by doing, for example:

let phone_type = 2i32;

match PhoneType::try_from(phone_type) {
    Ok(PhoneType::Mobile) => ...,
    Ok(PhoneType::Home) => ...,
    Ok(PhoneType::Work) => ...,
    Err(_) => ...,
}

Additionally, wherever a proto enum is used as a message field, the message gets 'accessor' methods to get and set the field's value as the Rust enum type. For instance, this PhoneNumber message has a field named type of type PhoneType:

message PhoneNumber {
  string number = 1;
  PhoneType type = 2;
}

will become the following Rust type [2] with methods type and set_type:

pub struct PhoneNumber {
    pub number: String,
    pub r#type: i32, // the `r#` is needed because `type` is a Rust keyword
}

impl PhoneNumber {
    pub fn r#type(&self) -> PhoneType { ... }
    pub fn set_type(&mut self, value: PhoneType) { ... }
}

Note that the getter methods will return the Rust enum's default value if the field has an invalid i32 value.

The enum type isn't used directly as a field, because the Protobuf spec mandates that enumeration values are 'open', so decoding unrecognized enumeration values must be possible.
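This open-enum behavior can be illustrated with a hand-written stand-in for the generated code. The following is a sketch, not prost's actual output; the types and the TryFrom impl mirror what prost derives, but the names and bodies here are written by hand:

```rust
// Hand-written stand-in for prost-generated code (a sketch, not actual output).
#[derive(Clone, Copy, Debug, PartialEq, Default)]
enum PhoneType {
    #[default]
    Mobile = 0,
    Home = 1,
    Work = 2,
}

impl TryFrom<i32> for PhoneType {
    type Error = ();
    fn try_from(v: i32) -> Result<Self, Self::Error> {
        match v {
            0 => Ok(PhoneType::Mobile),
            1 => Ok(PhoneType::Home),
            2 => Ok(PhoneType::Work),
            _ => Err(()),
        }
    }
}

struct PhoneNumber {
    r#type: i32, // raw wire value; may be outside the known range
}

impl PhoneNumber {
    // The getter falls back to the enum's default variant on unknown
    // values, mirroring the behavior of prost's generated accessor.
    fn r#type(&self) -> PhoneType {
        PhoneType::try_from(self.r#type).unwrap_or_default()
    }
}

fn main() {
    let known = PhoneNumber { r#type: 2 };
    assert_eq!(known.r#type(), PhoneType::Work);

    // An unrecognized wire value (e.g. from a newer schema) is preserved
    // in the i32 field, but the getter returns the default variant.
    let unknown = PhoneNumber { r#type: 42 };
    assert_eq!(unknown.r#type, 42);
    assert_eq!(unknown.r#type(), PhoneType::Mobile);
}
```

Because the raw i32 is kept, a message can be re-encoded without losing an enum value it didn't recognize.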

Field Modifiers

Protobuf scalar value and enumeration message fields can have a modifier depending on the Protobuf version. Modifiers change the corresponding type of the Rust field:

.proto Version   Modifier   Rust Type
proto2           optional   Option<T>
proto2           required   T
proto3           default    T for scalar types, Option<T> otherwise
proto3           optional   Option<T>
proto2/proto3    repeated   Vec<T>

Note that in proto3 the default representation for all user-defined message types is Option<T>, and for scalar types just T (during decoding, a missing value is populated by T::default()). If you need a witness of the presence of a scalar type T, use the optional modifier to enforce an Option<T> representation in the generated Rust struct.
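These representations can be sketched with a hand-written struct (field names here are hypothetical, not generated by prost):

```rust
// A sketch of proto3 field representations (hand-written, names hypothetical).
#[derive(Debug, Default)]
struct Profile {
    // proto3 scalar without a modifier: plain T; a missing value
    // decodes to T::default().
    age: u32,
    // proto3 scalar marked `optional`: Option<T> witnesses presence.
    nickname: Option<String>,
    // repeated field: Vec<T>.
    emails: Vec<String>,
}

fn main() {
    let p = Profile::default();
    assert_eq!(p.age, 0);         // absent scalar -> default value
    assert_eq!(p.nickname, None); // absent optional scalar -> None
    assert!(p.emails.is_empty()); // absent repeated field -> empty Vec
}
```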

Map Fields

Map fields are converted to a Rust HashMap with key and value type converted from the Protobuf key and value types.
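As a sketch, a proto map<string, int32> field might correspond to the following (the message and field names are hypothetical):

```rust
use std::collections::HashMap;

// proto: map<string, int32> scores = 1;  corresponds to (a sketch):
#[derive(Debug, Default)]
struct Report {
    scores: HashMap<String, i32>,
}

fn main() {
    let mut r = Report::default();
    r.scores.insert("alice".to_string(), 10);
    assert_eq!(r.scores["alice"], 10);
}
```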

Message Fields

Message fields are converted to the corresponding struct type. The table of field modifiers above applies to message fields, except that proto3 message fields without a modifier (the default) will be wrapped in an Option. Typically message fields are unboxed. prost will automatically box a message field if the field type and the parent type are recursively nested in order to avoid an infinite sized struct.
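The boxing is necessary because a struct that contains itself directly would have infinite size. A hand-written sketch of the shape prost produces for a recursive message (type names hypothetical):

```rust
// Why recursive message fields must be boxed: without the indirection,
// TreeNode would contain itself and have infinite size.
#[derive(Debug, Default)]
struct TreeNode {
    value: i32,
    // prost boxes this automatically when it detects the recursion.
    child: Option<Box<TreeNode>>,
}

fn main() {
    let tree = TreeNode {
        value: 1,
        child: Some(Box::new(TreeNode { value: 2, child: None })),
    };
    assert_eq!(tree.child.as_ref().unwrap().value, 2);
}
```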

Oneof Fields

Oneof fields convert to a Rust enum. Protobuf oneof types are not named, so prost uses the name of the oneof field for the resulting Rust enum, and defines the enum in a module under the struct. For example, a proto3 message such as:

message Foo {
  oneof widget {
    int32 quux = 1;
    string bar = 2;
  }
}

generates the following Rust [3]:

pub struct Foo {
    pub widget: Option<foo::Widget>,
}
pub mod foo {
    pub enum Widget {
        Quux(i32),
        Bar(String),
    }
}

Oneof fields are always wrapped in an Option.
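Application code typically matches on the oneof enum through that Option. The following is a hand-written stand-in for the generated Foo/foo::Widget types above (derive annotations and the describe helper are illustrative, not generated):

```rust
// Hand-written stand-in for the generated `Foo` / `foo::Widget` types.
mod foo {
    #[derive(Debug, PartialEq)]
    pub enum Widget {
        Quux(i32),
        Bar(String),
    }
}

struct Foo {
    widget: Option<foo::Widget>,
}

// Matching handles both the set variants and the unset (None) case.
fn describe(msg: &Foo) -> String {
    match &msg.widget {
        Some(foo::Widget::Quux(n)) => format!("quux = {n}"),
        Some(foo::Widget::Bar(s)) => format!("bar = {s:?}"),
        None => "widget not set".to_string(),
    }
}

fn main() {
    let msg = Foo { widget: Some(foo::Widget::Quux(7)) };
    assert_eq!(describe(&msg), "quux = 7");
    assert_eq!(describe(&Foo { widget: None }), "widget not set");
}
```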

Services

prost-build allows a custom code-generator to be used for processing service definitions. This can be used to output Rust traits according to an application's specific needs.

Generated Code Example

Example .proto file:

syntax = "proto3";
package tutorial;

message Person {
  string name = 1;
  int32 id = 2;  // Unique ID number for this person.
  string email = 3;

  enum PhoneType {
    MOBILE = 0;
    HOME = 1;
    WORK = 2;
  }

  message PhoneNumber {
    string number = 1;
    PhoneType type = 2;
  }

  repeated PhoneNumber phones = 4;
}

// Our address book file is just one of these.
message AddressBook {
  repeated Person people = 1;
}

and the generated Rust code (tutorial.rs):

#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Person {
    #[prost(string, tag="1")]
    pub name: ::prost::alloc::string::String,
    /// Unique ID number for this person.
    #[prost(int32, tag="2")]
    pub id: i32,
    #[prost(string, tag="3")]
    pub email: ::prost::alloc::string::String,
    #[prost(message, repeated, tag="4")]
    pub phones: ::prost::alloc::vec::Vec<person::PhoneNumber>,
}
/// Nested message and enum types in `Person`.
pub mod person {
    #[derive(Clone, PartialEq, ::prost::Message)]
    pub struct PhoneNumber {
        #[prost(string, tag="1")]
        pub number: ::prost::alloc::string::String,
        #[prost(enumeration="PhoneType", tag="2")]
        pub r#type: i32,
    }
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, ::prost::Enumeration)]
    #[repr(i32)]
    pub enum PhoneType {
        Mobile = 0,
        Home = 1,
        Work = 2,
    }
}
/// Our address book file is just one of these.
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct AddressBook {
    #[prost(message, repeated, tag="1")]
    pub people: ::prost::alloc::vec::Vec<Person>,
}

Accessing the protoc FileDescriptorSet

The prost_build::Config::file_descriptor_set_path option can be used to emit a file descriptor set during the build & code generation step. When used in conjunction with the std::include_bytes macro and the prost_types::FileDescriptorSet type, applications and libraries using Prost can implement introspection capabilities requiring details from the original .proto files.

Using prost in a no_std Crate

prost is compatible with no_std crates. To enable no_std support, disable the std feature in prost and prost-types:

[dependencies]
prost = { version = "0.12", default-features = false, features = ["derive"] }
# Only necessary if using Protobuf well-known types:
prost-types = { version = "0.12", default-features = false }

Additionally, configure prost-build to output BTreeMaps instead of HashMaps for all Protobuf map fields in your build.rs:

let mut config = prost_build::Config::new();
config.btree_map(&["."]);

When using edition 2015, it may be necessary to add an extern crate core; directive to the crate which includes prost-generated code.

Serializing Existing Types

prost uses a custom derive macro to handle encoding and decoding types, which means that if your existing Rust type is compatible with Protobuf types, you can serialize and deserialize it by adding the appropriate derive and field annotations.

Currently, the best documentation on adding annotations is the generated code examples above.

Tag Inference for Existing Types

prost automatically infers tags for annotated structs.

Fields are tagged sequentially in the order they are declared, starting with 1.

You may skip tags that have been reserved, or leave gaps between tag values, by specifying the tag number to skip to with the tag attribute on the first field after the gap. Subsequent fields are then tagged sequentially from that number.

use prost::{Enumeration, Message};

#[derive(Clone, PartialEq, Message)]
struct Person {
    #[prost(string, tag = "1")]
    pub id: String, // tag=1
    // NOTE: Old "name" field has been removed
    // pub name: String, // tag=2 (Removed)
    #[prost(string, tag = "6")]
    pub given_name: String, // tag=6
    #[prost(string)]
    pub family_name: String, // tag=7
    #[prost(string)]
    pub formatted_name: String, // tag=8
    #[prost(uint32, tag = "3")]
    pub age: u32, // tag=3
    #[prost(uint32)]
    pub height: u32, // tag=4
    #[prost(enumeration = "Gender")]
    pub gender: i32, // tag=5
    // NOTE: Skip to less commonly occurring fields
    #[prost(string, tag = "16")]
    pub name_prefix: String, // tag=16  (eg. mr/mrs/ms)
    #[prost(string)]
    pub name_suffix: String, // tag=17  (eg. jr/esq)
    #[prost(string)]
    pub maiden_name: String, // tag=18
}

#[derive(Clone, Copy, Debug, PartialEq, Eq, Enumeration)]
pub enum Gender {
    Unknown = 0,
    Female = 1,
    Male = 2,
}

Nix

The prost project maintains flakes support for local development. Once you have nix and nix flakes set up, you can run nix develop to get a shell configured with the required dependencies to compile the whole project.

Feature Flags

  • std: Enable integration with standard library. Disable this feature for no_std support. This feature is enabled by default.
  • derive: Enable integration with prost-derive. Disable this feature to reduce compile times. This feature is enabled by default.
  • prost-derive: Deprecated. Alias for derive feature.
  • no-recursion-limit: Disable the recursion limit. The recursion limit is 100 and cannot be customized.

FAQ

  1. Could prost be implemented as a serializer for Serde?

Probably not; however, I would like to hear from a Serde expert on the matter. There are two complications with trying to serialize Protobuf messages with Serde:

  • Protobuf fields require a numbered tag, and currently there appears to be no mechanism suitable for this in serde.
  • The mapping of Protobuf type to Rust type is not 1-to-1. As a result, trait-based approaches to dispatching don't work very well. Example: six different Protobuf field types correspond to a Rust Vec<i32>: repeated int32, repeated sint32, repeated sfixed32, and their packed counterparts.

But it is possible to place serde derive tags onto the generated types, so the same structure can support both prost and Serde.

  2. I get errors when trying to run cargo test on macOS

If the errors are about missing autoreconf or similar, you can probably fix them by running

brew install automake
brew install libtool

License

prost is distributed under the terms of the Apache License (Version 2.0).

See LICENSE for details.

Copyright 2022 Dan Burkert & Tokio Contributors

Footnotes

  1. Annotations have been elided for clarity. See below for a full example.

  2. Annotations have been elided for clarity. See below for a full example.

  3. Annotations have been elided for clarity. See below for a full example.


prost's Issues

test-all-types: all_types_proto3_eq is missing a few cases

all_types_proto3_eq tries to compare two TestAllTypes messages ignoring NaN float values. Unfortunately it's missing a few cases: floats nested in Value oneofs in the optional_value and repeated_value fields. The fuzz tests will fail due to this defect after running for many hours.

Using borrowed values

Currently everything is using String and Vec<u8> -- are there any plans in the future to allow supporting borrowed types like Cow<'a, str> and Cow<'a, [u8]>? Maybe it could be toggleable via some flag in build.rs.

A lot of times I'm creating an instance of a Prost-generated struct just to convert it to bytes immediately after, or I'll have a bunch of Prost structs that point to the same data (e.g., a &'static str error message), so in those cases I don't need the struct to own the data.

panicked at 'byte index 1 is not a char boundary; it is inside '\u{1}' (bytes 0..1)

Hi there!

Looks like a regression recently happened. Not sure if rustc is to blame (this is with 1.24.0-nightly from 2018-01-03) or if it is a bug in prost.

Here's a trivial proto file:

syntax = "proto3";

package edgedns.cli;

message Command {
    message ServiceLoad {
        string service_id = 1;
        string library_path = 2;
    }
    message ServiceUnload {
        string service_id = 1;
    }
    oneof action {
        ServiceLoad service_load = 1;
        ServiceUnload service_unload = 2;
    }
}

And an impressive build.rs file:

extern crate prost_build;

fn main() {
    prost_build::compile_protos(&["src/edgedns-cli.proto"], &["src/"]).unwrap();
}

That sadly fails with:

thread 'main' panicked at 'byte index 1 is not a char boundary; it is inside '\u{0}' (bytes 0..1) of `���������dp|��������M��-��-���-��-�@�@8 H�@H�@�@	�|����	�|����]R�P�A�}��������|����`[...]', src/libcore/str/mod.rs:2234:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49

...

  12: core::str::traits::<impl core::slice::SliceIndex<str> for core::ops::range::RangeFrom<usize>>::index
             at /Users/travis/build/rust-lang/rust/src/libcore/str/mod.rs:1987
  13: core::str::traits::<impl core::ops::index::Index<core::ops::range::RangeFrom<usize>> for str>::index
             at /Users/travis/build/rust-lang/rust/src/libcore/str/mod.rs:1734
  14: unicode_segmentation::word::UWordBounds::get_next_cat
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-segmentation-1.2.0/src/word.rs:611
  15: <unicode_segmentation::word::UWordBounds<'a> as core::iter::iterator::Iterator>::next
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-segmentation-1.2.0/src/word.rs:227
  16: <&'a mut I as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/iterator.rs:2380
  17: <core::iter::Filter<I, P> as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/mod.rs:1362
  18: <unicode_segmentation::word::UnicodeWords<'a> as core::iter::iterator::Iterator>::next
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-segmentation-1.2.0/src/word.rs:30
  19: heck::transform
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/heck-0.3.0/src/lib.rs:81
  20: <str as heck::camel::CamelCase>::to_camel_case::{{closure}}
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/heck-0.3.0/src/snake.rs:37
  21: prost_build::ident::to_snake
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/ident.rs:8
  22: core::ops::function::FnMut::call_mut
             at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:146
  23: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &'a mut F>::call_once
             at /Users/travis/build/rust-lang/rust/src/libcore/ops/function.rs:271
  24: <core::option::Option<T>>::map
             at /Users/travis/build/rust-lang/rust/src/libcore/option.rs:404
  25: <core::iter::Map<I, F> as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/mod.rs:1251
  26: <core::iter::Chain<A, B> as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/mod.rs:758
  27: <core::iter::Chain<A, B> as core::iter::iterator::Iterator>::next
             at /Users/travis/build/rust-lang/rust/src/libcore/iter/mod.rs:754
  28: itertools::Itertools::join
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/itertools-0.6.5/src/lib.rs:1203
  29: prost_build::code_generator::CodeGenerator::resolve_ident
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/code_generator.rs:574
  30: prost_build::code_generator::CodeGenerator::resolve_type
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/code_generator.rs:551
  31: prost_build::code_generator::CodeGenerator::append_oneof
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/code_generator.rs:379
  32: prost_build::code_generator::CodeGenerator::append_message
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/code_generator.rs:215
  33: prost_build::code_generator::CodeGenerator::generate
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/code_generator.rs:94
  34: <core::str::Split<'a, P> as core::iter::traits::DoubleEndedIterator>::next_back
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/lib.rs:342
  35: prost_build::Config::compile_protos
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/lib.rs:321
  36: prost_build::compile_protos
             at /Users/j/.cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.2.3/src/lib.rs:401
  37: build_script_build::main
             at ./build.rs:4
  38: std::rt::lang_start::{{closure}}
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  39: std::panicking::try::do_call
             at src/libstd/rt.rs:59
             at src/libstd/panicking.rs:480
  40: backtrace_vector_release
             at src/libpanic_unwind/lib.rs:101
  41: std::sys_common::gnu::libbacktrace::init_state
             at src/libstd/panicking.rs:459
             at src/libstd/panic.rs:365
             at src/libstd/rt.rs:58
  42: std::rt::lang_start
             at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
  43: build_script_build::main

Any idea?

rustdoc does not emit Message impl for generated types

I am not sure if I am too dumb, but I couldn't figure out how to actually read a file using the generated code. An example would be really helpful.

Using prost, I've generated the rust bindings for the OpenStreetMap .pbf files: https://crates.io/crates/osm-proto-rs

And here is a small PBF file, for testing: https://github.com/sharazam/small-test-pbf-file

How do I read the file, for example the header block? Do I use bytes or just File::open or is there a hidden "Reader" somewhere? Thanks in advance.

Oneof is a bit unwieldy

I have some deeply nested messages along the lines of

message A {
    oneof specific {
         B b = 1;
         C c = 2;
    }
}

message B {
    oneof more_specific {
        E e = 1;
        F f = 2;
    }
}

And so on, this is a... pain with prost. Currently, a OneOf is an option over an enum, meaning trying to do this involves repeated checks for malformed messages. It would perhaps be nice to allow an optional directive to prost_build and prost_derive to allow failure on decoding when a oneof isn't set instead of going the option route.

I say optional because I understand that it IS considered a possible field value in protobuf to help ease failure while decoding messages made with old formats, but it's a pretty big ergonomics issue even 2 or 3 nested levels deep.

Auto-detection of local protoc

Hello

I've noticed the prost-build can be told to use local protoc command instead of downloading and building one itself. This is great thing for various offline build environments.

However, would it be possible to detect automatically whether protoc is available, greatly speeding up compilation out of the box?

codegen: add option to box Message fields

If I have a protocol version 1:

    message B {
      int64 num = 1;
    }
    message A {
      B sub = 1;
    }

The generated code will inline the B struct into the A struct, and my code using it will access the field directly.

If I then release version 2 that introduces recursion:

    message B {
      int64 num = 1;
      A recurse = 2;
    }
    message A {
      B sub = 1;
    }

My code using the old generated API won't even compile when updating to the new version, which just adds a field to a proto I might not even be using directly.

As a separate concern, as someone who spends a lot of time optimizing C++ code that uses a lot of protobufs, many proto designs assume that unused submessages will take up minimal space when not used (i.e. by being empty references) and inlining them would lead to a lot more memory usage. It's at least worth mentioning somewhere in the docs that the default of inlining submessages is a very different memory tradeoff than all of the other proto implementations I've seen.

Implement Default on generated structs

Message::clear() is resetting all fields to their default values already, but in some situations it would be nice to be able to just create a "cleared" object.

Add a codegen builder

There are expected to be a few more optional features in the future: map type, service generator, zero copy, etc.

Fail to decode a binary format PerftestData message

Hello,

I did a benchmark about three protobuf crates, rust-protobuf, quick-protobuf and this, prost. :-)


I found an unexpected error when decoding a binary format message, PerftestData.

Error { repr: Custom(Custom { kind: InvalidData, error: StringError("invalid zero tag value") }) }

Snippet:

let mut is = File::open(&Path::new("perftest_data.pbbin")).unwrap();
let mut data = Vec::new();
is.read_to_end(&mut data).unwrap();
let mut bin = bytes::Bytes::from(data).into_buf();

// FIXME: Error { repr: Custom(Custom { kind: InvalidData, error: StringError("invalid zero tag value") }) }
let test_data_prost = perftest_data_prost::PerftestData::decode_length_delimited(&mut bin).unwrap();

See more at https://github.com/overvenus/quick-protobuf/blob/aeef8af0467eb15983f34d5ac1eab17539b19815/benches/rust-protobuf/perftest.rs#L354-L359

Is it a bug or am I missing something?

Thank you!

Consider "inlining" oneof messages

In addition to #41, I was wondering if inlining message fields into oneof generated enums might be something you'd consider.

message Foo {
    oneof foo_type {
        Bar bar = 1;
        Baz baz = 2;  
        float y = 3;
    }
}

Bar {
    required string a = 1;
}

Baz {
    oneof baz_type {
        int32 id = 1;
        float number = 2;
    }
}

Can be interpreted as

// Omitting probably necessary module scoping

enum BazType {
    Id(int),
    Number(float),
}

enum FooType {
    Bar { a: String },
    Baz { baz_type: Option<BazType> },
    Y(f32),
}

struct Foo {
    foo_type: Option<FooType>   
}

Which also reduces stutter a lot. This would allow the code I mentioned in my reply to #41 to be (in addition to the switching to Unknown fields)

match self.specific_msg {
    SpecificMsg::Config { 
        which_module: WhichModule::CoreCfg {
            plugin_type: plugin_type,
        }
    } =>{ /* do things */ },
     SpecificMsg::Config { 
       which_module: WhichModule::CoreCfg {
            plugin_type:  OneOfEnum::Unknown,
        }
    } => panic!("Invalid plugin_type"),
    SpecificMsg::Config { 
        which_module: WhichModule::Unknown
     } => panic!("This module doesn't handle this type of config"),
    SpecificMsg::Unknown => panic!("Invalid variant denoting the specific message type"),
}

It's not quite as notable with this example, but it reduces the fact that all oneof fields have to be double nested which causes a lot of extra levels. It also looks a lot more like the Rust code I'd write to describe these types. Personally, I'd prefer having both an inline enum and a struct I can convert to (if the message occurs outside this context) that always having to use a struct. Because I have a lot of oneofs with marker messages like

message Foo {
    oneof type {
        A a = 1;
        B b = 2;
        C c = 3;
        D d = 4;
    }
}

message A {}
message B {}
message C {}
message D { required string name = 1; }

Because marker messages reduce error handing from the other way of doing this (enums with accompanying optional fields) as well as improving future extensibility.

To me this is basically the exact same thing as writing

enum Foo {
    A,
    B,
    C,
    D { name: String },
}

(Albeit prost would probably generate a struct with one field for Foo which is fine, I don't expect too complex of auto-sugaring).

This is neglecting the Option/Unknown variant of course, but if #41 is not adopted in favor of Unknown variants, there's no harm in just generating struct Foo { type: Option<foo::Type> } and if it is it's just an extra variant.

I'll admit there are some questions about the semantics here with regards to if you can/should do this if a message is used both as a field and in a oneof, but personally I'd prefer having both a standalone and an inlined variant available to having to use the struct because I tend to use deeply-nested messages. At the very least, I think this should be done if it appears a message is only used as a oneof variant.

Implement 'Eq' trait for proto

Is there a way to specify custom trait implementations, such as Eq, in the proto file for prost-build to generate?

If not, what is the correct way to implement a trait for the generated Rust code?

Thanks!

Documentation for the ServiceGenerator trait

Hello

What is the status around services in prost? There seem to be some support in the ServiceGenerator trait in the prost-build crate, but it is unclear how it should work and what the result would be.

Would it be possible to have an example with a service?

Can't use decode_length_delimited from a tokio Decoder implementation.

The signature for tokio's decode function in its codec library is:

fn decode(&mut self, buf: &mut BytesMut)

This is the error I encounter trying to use it specifically:

   |
57 |         match T::decode_length_delimited(buf) {
   |               ^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `bytes::Buf` is not implemented for `bytes::BytesMut`
   |
   = note: required because of the requirements on the impl of `bytes::Buf` for `&mut bytes::BytesMut`
   = note: required because of the requirements on the impl of `bytes::IntoBuf` for `&mut bytes::BytesMut`
   = note: required by `prost::Message::decode_length_delimited`

But decode_length_delimited expects an IntoBuf/Buf as its input. There does not appear to be a way to translate a BytesMut into something with the Buf trait short of copying it into a new buffer? (since the buffer is passed in borrowed you can't freeze() it)

Can't detect partial protobuf message with decode_length_delimited

In a non-blocking context I need to detect if I don't have enough bytes to consume an entire protobuf message so I can wait for more bytes and try again. Right now, decode_length_delimited's interface looks like it can only tell me if parsing succeeded or failed, with failure reasons in a fairly opaque error object.

I don't really have a good suggestion for how to represent this in the function interface, unfortunately. Especially if you want to stick to Result types.

compile_protos produces invalid code

I am compiling the file "protobuf/client.proto" with the build.rs file.

protobuf/client.proto

syntax = "proto3";

package command;

message SessionInformation {
        string email = 1;
        int32 connection_count = 2;
}

message ToCommandClientMessage {
        int32 packet_id = 1;
        oneof content {
                SessionInformation session_information = 2;
        }
}

build.rs

extern crate prost_build;

fn main() {
    prost_build::compile_protos(&["protobuf/client.proto"], &["protobuf/"], None).unwrap();
}

Error thrown during compilation:

error[E0463]: can't find crate for `_bytes`
 --> /Users/janberktold/Desktop/corp/client/target/debug/build/airmagic-1cb7d03e3fe124d2/out/command.rs:1:35
  |
1 | #[derive(Clone, Debug, PartialEq, Message)]
  |                                   ^^^^^^^ can't find crate

error: aborting due to previous error(s)

I am including the generated code like this:

extern crate prost;
#[macro_use] extern crate prost_derive;

pub mod commands {
    include!(concat!(env!("OUT_DIR"), "/command.rs"));
}

This is the generated file:

#[derive(Clone, Debug, PartialEq, Message)]
pub struct SessionInformation {
    #[prost(string, tag="1")]
    pub email: String,
    #[prost(int32, tag="2")]
    pub connection_count: i32,
}
#[derive(Clone, Debug, PartialEq, Message)]
pub struct ToCommandClientMessage {
    #[prost(int32, tag="1")]
    pub packet_id: i32,
    #[prost(oneof="to_command_client_message::Content", tags="2")]
    pub content: Option<to_command_client_message::Content>,
}
pub mod to_command_client_message {
    #[derive(Clone, Debug, Oneof, PartialEq)]
    pub enum Content {
        #[prost(message, tag="2")]
        SessionInformation(super::SessionInformation),
    }
}

Is this a mistake on my side and if so, what am I doing wrong?

Add definitions for well known types

Protobuf well known types can currently be used, but they are just generated from the .proto. Would be nicer if they were built in to the library, and delegated to std types where possible.

Suggestion for enumerations

Hello

I'm still a bit unhappy about the fact enumerations are i32 values, for two reasons:

  • Handling them is cumbersome. I can't just access the field, I have to convert it first…
  • They are interchangeable. Let's say I have enums enum Direction { Unspecified = 0, In = 1, Out = 2 } and enum Proto { Unspecified = 0, Udp = 1, Tcp = 2 }. I can do message.proto = Direction::Out as i32 and it will work. I can do Direction::from_i32(message.proto) and it'll work. And I like Rust's strict type system keeping an eye on me.

So I was wondering if there could be more convenient and safer alternative. What about:

  • Create the enum as it is now, eg enum Direction { Unspecified = 0, In = 1, Out = 2 }
  • Also create a companion struct DirectionRaw(pub i32)

This way, we can have conversion methods between the two, but not to/from i32 directly (type safety). Putting the conversion methods on DirectionRaw would also make them more discoverable.

Furthermore, I think the conversion method would be something like fn enum(&self) -> Option<Direction> ‒ this way it is possible to both know if the input was valid/known and call unwrap_or_default() on it to get the appropriate default.

Does that make sense? Or does it have some downsides I don't see?

I would be willing to implement it, but I wanted to check first if it makes sense.
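The proposal above can be sketched in plain Rust (hand-written, not anything prost generates today; the method is named `variant` here because `enum` is a reserved keyword):

```rust
// Sketch of the suggested companion-struct pattern for type-safe enum fields.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Direction {
    Unspecified = 0,
    In = 1,
    Out = 2,
}

/// Newtype wrapper holding the raw wire value, so it cannot be confused
/// with another enum's raw value.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct DirectionRaw(pub i32);

impl DirectionRaw {
    /// Returns `None` for unknown wire values, so callers can both detect
    /// invalid input and fall back with `unwrap_or(...)`.
    pub fn variant(self) -> Option<Direction> {
        match self.0 {
            0 => Some(Direction::Unspecified),
            1 => Some(Direction::In),
            2 => Some(Direction::Out),
            _ => None,
        }
    }
}

impl From<Direction> for DirectionRaw {
    fn from(d: Direction) -> Self {
        DirectionRaw(d as i32)
    }
}

fn main() {
    assert_eq!(DirectionRaw(2).variant(), Some(Direction::Out));
    // Unknown wire value: detectable, with a safe fallback.
    assert_eq!(DirectionRaw(9).variant().unwrap_or(Direction::Unspecified),
               Direction::Unspecified);
}
```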

optimize for protos containing large blobs

One small optimization prost could make for messages containing large blobs: specialize decoding from reference-counted buffers and return a reference to a slice of the underlying buffer for bytes fields, instead of copying them. Similarly, when encoding, simply chain these big blobs instead of copying them into the output buffer. I believe the C++ implementation does this optimization for bytes fields that specify ctype=cord.
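The idea can be sketched without prost: represent a decoded bytes field as a cheap view into a shared, reference-counted buffer rather than a freshly copied Vec (here `Rc<[u8]>` stands in for `bytes::Bytes`, which implements the same idea for real):

```rust
use std::rc::Rc;

/// A zero-copy "bytes field": a shared buffer plus a range,
/// instead of an owned Vec<u8>.
#[derive(Clone, Debug)]
struct BlobRef {
    buf: Rc<[u8]>,
    start: usize,
    end: usize,
}

impl BlobRef {
    fn as_slice(&self) -> &[u8] {
        &self.buf[self.start..self.end]
    }
}

/// "Decode" a blob at `offset` of length `len` without copying the payload;
/// only the Rc's reference count is bumped.
fn slice_blob(buf: &Rc<[u8]>, offset: usize, len: usize) -> BlobRef {
    BlobRef { buf: Rc::clone(buf), start: offset, end: offset + len }
}

fn main() {
    let wire: Rc<[u8]> = Rc::from(&b"\x04abcd"[..]); // length prefix + payload
    let blob = slice_blob(&wire, 1, 4);
    assert_eq!(blob.as_slice(), b"abcd");
    // Cloning the field clones the Rc, not the underlying bytes.
    let copy = blob.clone();
    assert_eq!(copy.as_slice(), b"abcd");
}
```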

Tuple structs with enumerations don't work

It seems tuple structs are broken when they contain an enumeration. I hit it when implementing tests for https://github.com/danburkert/prost/pull/54, but I can reproduce on master:

/// A special case with a tuple struct
#[test]
fn tuple_struct() {
    #[derive(Clone, PartialEq, Message, Debug)]
    struct NewType(
        #[prost(int32, tag="1")]
        pub i32,
        #[prost(enumeration="BasicEnumeration", tag="5")]
        pub i32,
    );
}

The error is:

   Compiling conformance v0.0.0 (file:///home/vorner/prog/prost/conformance)
error: expected identifier, found `1`
   --> tests/src/lib.rs:152:32
    |
152 |     #[derive(Clone, PartialEq, Message, Debug)]

If I comment the second field out, it compiles. I haven't yet pinpointed where exactly it happens, because cargo expand gives me the same error instead of the (broken) source code.

As the prost-derive code contains code to handle tuple structs, I guess they are supposed to work.

CLI containers needs kubectl

The CLI container can fail to run because kubectl isn't installed.

Do we even need a CLI container since we have the artifacts one (from which the CLI can be extracted)?

compile_protos for multi-lang project

My project is structured such that the .proto files live in a different directory from the rust code and build.rs script. The main driver for this is to allow both the go and rust codebases to share a common set of protos across the project.

|—>go
|	|—>file.go
|
|—>proto
|	|—>my.proto
|
|—>rust
	|—>Cargo.toml
	|—>build.rs
	|—>src
		|—>lib.rs

I haven't figured out the right way to specify the proto path to compile_protos, and I get the following error when providing the absolute path. What is the correct way to do this?

File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file. Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).

(from prost_build::compile_protos in build.rs)
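The usual fix is to pass both arguments relative to the Cargo project, e.g. `prost_build::compile_protos(&["../proto/my.proto"], &["../proto"])` from the `rust/` directory (those paths are assumptions based on the layout above). The rule protoc enforces is a literal prefix check, which can be illustrated:

```rust
use std::path::Path;

/// protoc's rule: a .proto file path must literally start with one of the
/// --proto_path (include) directories. Absolute vs relative spellings of
/// the same file do NOT match each other.
fn resides_within(file: &str, include: &str) -> bool {
    Path::new(file).starts_with(include)
}

fn main() {
    // For the layout above, built from the `rust/` directory:
    assert!(resides_within("../proto/my.proto", "../proto"));
    // An absolute file path with a relative include dir fails the check,
    // even if both name the same file on disk:
    assert!(!resides_within("/home/me/project/proto/my.proto", "../proto"));
}
```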

Make Message object safe

It's not clear what the best way to do this is, but being able to use Message as a trait object is indispensable.

Nicer `Debug` for enums

Hello

I understand why enums are encoded as numbers in the resulting structure. Still, having the Debug trait output the number is a bit uncomfortable, eg:

Message { stuff: Some(3) }

Would it be possible to provide a custom Debug implementation that would provide the enum variant instead of the number if there's one available and fall back to the number if the name is unknown?
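One shape such a Debug implementation could take (a hand-written sketch; `PhoneType` and `from_i32` stand in for prost's generated enum and helper):

```rust
use std::fmt;

#[derive(Clone, Copy, Debug, PartialEq)]
enum PhoneType { Mobile = 0, Home = 1, Work = 2 }

/// Stand-in for the i32 -> enum helper prost generates.
fn from_i32(v: i32) -> Option<PhoneType> {
    match v {
        0 => Some(PhoneType::Mobile),
        1 => Some(PhoneType::Home),
        2 => Some(PhoneType::Work),
        _ => None,
    }
}

/// Wraps the raw field value: Debug prints the variant name when the
/// value is known, and falls back to the raw number otherwise.
struct EnumDebug(i32);

impl fmt::Debug for EnumDebug {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match from_i32(self.0) {
            Some(v) => write!(f, "{:?}", v),
            None => write!(f, "{}", self.0),
        }
    }
}

fn main() {
    assert_eq!(format!("{:?}", EnumDebug(1)), "Home"); // known value
    assert_eq!(format!("{:?}", EnumDebug(9)), "9");    // unknown value
}
```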

Enumerations and CamelCase

I happened to notice that the prost code generator doesn't convert enum items correctly. It seems to assume the items are snake_case and converts them to CamelCase. In our case the enum values are already CamelCase, but they get converted into broken Camelcase, which is hard to read, and now that we're converting our projects from rust-protobuf to prost, this breaks lots of things.

What is the expected behavior? Always having proto enums in snake_case and then CamelCase in Rust? Allowing CamelCase in proto and converting it to Camelcase in Rust? Or detecting that the proto is already CamelCase and keeping it as CamelCase in Rust?
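A conversion that wouldn't mangle already-CamelCase input could look like this (a sketch, not prost's current algorithm; note it deliberately preserves mixed case, so SCREAMING_SNAKE input would need an extra lowercasing step):

```rust
/// Convert a protobuf identifier to UpperCamelCase without destroying
/// input that is already CamelCase: split on '_', uppercase each word's
/// first letter, and leave the remaining characters untouched.
fn to_upper_camel(ident: &str) -> String {
    ident
        .split('_')
        .filter(|w| !w.is_empty())
        .map(|w| {
            let mut chars = w.chars();
            match chars.next() {
                Some(first) => first.to_ascii_uppercase().to_string() + chars.as_str(),
                None => String::new(),
            }
        })
        .collect()
}

fn main() {
    assert_eq!(to_upper_camel("snake_case_value"), "SnakeCaseValue");
    // Already-CamelCase input survives unchanged:
    assert_eq!(to_upper_camel("CamelCase"), "CamelCase");
}
```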

Examples & Suggested Error Handling for Encode/Decode

Are there any examples on how to properly use encode and decode?

I've been using the following structure, but am not sure if it's best to annotate the expected proto type on decode. I'd also appreciate any suggestions on recommended error handling.

let request = protos::my::custom::proto{};
let mut request_buffer = BytesMut::with_capacity(4096);

request.encode(&mut request_buffer).expect("encode error");

let result: Result<protos::my::custom::proto, DecodeError> = Message::decode(request_buffer);
assert_eq!(result.unwrap(), request);

Thanks!

Consider generating idiomatic enum value names

The idiomatic way to define protocol buffer enums is to prefix the enum value names with the name of the enum. This is because in protocol buffers, the enum value names are in the same namespace as the enum itself (which means you aren't allowed to have two different values named GOOD even if they are in different enums). Therefore this is common and idiomatic in protocol buffers:

enum ResultCode {
    RESULT_CODE_GOOD = 0;
    RESULT_CODE_BAD = 1;
    RESULT_CODE_UGLY = 2;
}

Rust doesn't suffer from this unfortunate design and so if you were to code the same thing in idiomatic Rust, it would look like this:

enum ResultCode {
    Good = 0,
    Bad = 1,
    Ugly = 2,
}

Not like this:

enum ResultCode {
    ResultCodeGood = 0,
    ResultCodeBad = 1,
    ResultCodeUgly = 2,
}

And you'll find that all of the newer "official" protocol buffer generators do this (take C# for example).
They will detect if the enum value names are prefixed with the enum's name and strip it off if so.

I've already written this behavior for Prost, controlled by config and will send a PR in case you're interested. Currently it defaults to existing behavior, but I think you might want to consider making idiomatic Rust the default behavior. For those who don't prefix their protobuf enum value names, well they won't notice either way, and for those that do, they'll probably appreciate the improvement to their code.

JSON Mapping

Are there any plans to support the official proto3 JSON mapping in prost-build? When building https://github.com/cretz/prost-twirp I have found a need for it. Even if it's not supported directly, the field/type attributes in the prost-build config are difficult to use for this purpose. I suppose I could parse the protobuf myself with the other lib here and then create the field/type attributes, though it would be nice if prost-build gave me the AST.

Compile errors for enumeration referencing `super`

I was trying to structure my protos directory and ran into a compile error that I haven't been able to fix. I've extended your snazzy shirt example with the below directory structure and sizes.proto and items.proto files respectively:

protos
    |-items
    |      |-items.proto
    |-sizes
           |-sizes.proto

sizes/sizes.proto:

syntax = "proto3";

package snazzy.size;

enum Size {
    SMALL = 0;
    MEDIUM = 1;
    LARGE = 2;
}
items/items.proto:

syntax = "proto3";

package snazzy.items;

import "sizes/sizes.proto";

message Shirt {
	string color = 1;
	snazzy.size.Size size = 2;
}

The Rust compiler cannot find the enumeration via the `super` reference:

#[derive(Clone, Debug, PartialEq, Message)]
                                  ^^^^^^^ Could not find `size` in `super`

The generated code is:

#[derive(Clone, Debug, PartialEq, Message)]
pub struct Shirt {
    #[prost(string, tag="1")]
    pub color: String,
    #[prost(enumeration="super::size::Size", tag="2")]
    pub size: i32,
}

Any thoughts on how I can work around this or if I'm missing something in the setup?

Would it support service

Will it support services one day? It doesn't seem to work for the following situation:

syntax = "proto3";
package grpc;

service HelloService {
  rpc SayHello (HelloRequest) returns (HelloResponse);
}

message HelloRequest {
  string greeting = 1;
}

message HelloResponse {
  string reply = 1;
}

Support for `groups`

While groups were deprecated and never made it into proto3, they are still used in some codebases (by Google, no less; why, I don't know, probably due to legacy issues) because the original C++ and Python implementations can deal with them fine. AFAIK none of the Rust implementations support groups properly (which is understandable, for the reason stated before), but I wonder if there is interest in adding support for them.

My personal use case: I'm trying to implement/partially port a natural language processing model from Google which uses TensorFlow and some custom C++ extensions. Protobuf is used extensively in this setting, but some of the defined messages use "group". Refactoring the code to not use groups would be a major pain, as it permeates a big chunk of the codebase in this case.

prost does not work with groups out of the box: it generates the code just fine, but when it comes time to compile the library, the derive fails to resolve the field's type information.

An example below:
protobuf file:

// LINT: ALLOW_GROUPS

syntax = "proto2";

package syntaxnet;

// Task input descriptor.
message TaskInput {
  // Name of input resource.
  required string name = 1;

  // Name of stage responsible of creating this resource.
  optional string creator = 2;

  // File format for resource.
  repeated string file_format = 3;

  // Record format for resource.
  repeated string record_format = 4;

  // Is this resource multi-file?
  optional bool multi_file = 5 [default = false];

  // An input can consist of multiple file sets.
  repeated group Part = 6 {
    // File pattern for file set.
    optional string file_pattern = 7;

    // File format for file set.
    optional string file_format = 8;

    // Record format for file set.
    optional string record_format = 9;
  }
}

generated code:

/// Task input descriptor.
#[derive(Clone, Debug, PartialEq, Message)]
pub struct TaskInput {
    /// Name of input resource.
    #[prost(string, required, tag="1")]
    pub name: String,
    /// Name of stage responsible of creating this resource.
    #[prost(string, optional, tag="2")]
    pub creator: Option<String>,
    /// File format for resource.
    #[prost(string, repeated, tag="3")]
    pub file_format: Vec<String>,
    /// Record format for resource.
    #[prost(string, repeated, tag="4")]
    pub record_format: Vec<String>,
    /// Is this resource multi-file?
    #[prost(bool, optional, tag="5")]
    pub multi_file: Option<bool>,
    #[prost(group, repeated, tag="6")]
    pub part: Vec<task_input::Part>,
}
pub mod task_input {
    /// An input can consist of multiple file sets.
    #[derive(Clone, Debug, PartialEq, Message)]
    pub struct Part {
        /// File pattern for file set.
        #[prost(string, optional, tag="7")]
        pub file_pattern: Option<String>,
        /// File format for file set.
        #[prost(string, optional, tag="8")]
        pub file_format: Option<String>,
        /// Record format for file set.
        #[prost(string, optional, tag="9")]
        pub record_format: Option<String>,
    }
}

error when running cargo build:

error: proc-macro derive panicked
  |
2 | #[derive(Clone, Debug, PartialEq, Message)]
  |                                   ^^^^^^^
  |
  = help: message: called `Result::unwrap()` on an `Err` value: Error(Msg("invalid message field TaskInput.part"), State { next_error: Some(Error(Msg("no type attribute"), State { next_
error: None, backtrace: None })), backtrace: None })

Or maybe there is a workaround for this?

Allow building prost-build without requiring curl & OpenSSL

For a variety of reasons, including improving the build time of my project, I would like to build prost-build without downloading and building curl and its dependencies, particularly OpenSSL. In particular, I would like to avoid the build failure I get:

error: failed to run custom build command for `openssl-sys v0.9.26`
process didn't exit successfully: `/home/m/go/src/github.com/runconduit/conduit/target/debug/build/openssl-sys-6bc9b54748dee815/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_LIB_DIR
cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_INCLUDE_DIR
cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_DIR
cargo:rerun-if-env-changed=OPENSSL_DIR
run pkg_config fail: "Failed to run `\"pkg-config\" \"--libs\" \"--cflags\" \"openssl\"`: No such file or directory (os error 2)"

--- stderr
thread 'main' panicked at '

Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it,  you can set the `OPENSSL_DIR` environment variable for the
compilation process.

If you're in a situation where you think the directory *should* be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.

    $HOST = x86_64-unknown-linux-gnu
    $TARGET = x86_64-unknown-linux-gnu
    openssl-sys = 0.9.26


It looks like you're compiling on Linux and also targeting Linux. Currently this
requires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`
could not be found. If you have OpenSSL installed you can likely fix this by
installing `pkg-config`.

', /home/m/.cargo/registry/src/github.com-1ecc6299db9ec823/openssl-sys-0.9.26/build.rs:213:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

warning: build failed, waiting for other jobs to finish...
error: build failed

I don't want to require users to install pkg-config or do any manual configuration.

In fact, we already require protoc for the Go code in our project, so it would be ideal for us if we could download protoc ourselves and build prost-build without any downloading logic included at all.

If it's not acceptable to provide a way to completely avoid the downloading logic then maybe #27 might help? However, since our project is security-sensitive it would also be important to verify the SHA-256 checksums of the downloaded files, instead of just relying on HTTPS, so I think additional work would be necessary.

Generating code via a binary

Thanks for this, really excited about this project!

I just tried to see if I could use this in an existing project. The protobuf definition files are in a different repo there and the build setup assumes that the generated code is present. I think the prost-build approach is great, but just generating the code can also have advantages.

The readme seems to imply that it's possible to just generate the code, but I could not find any cargo install-ready binaries in any of the crates. Is it possible to use the classic protoc plugin approach with prost currently?

Investigate deeply nested message reuse

Reddit user nwydo points out that prost doesn't do as good of a job reusing deeply nested message allocations as the reference C++ implementation, due to differences between Vec and RepeatedPtrField. While I think this isn't necessarily as critical for prost since nested message fields are inline (which often has the effect of flattening the hierarchy), it could be significant for encoding/decoding n-ary tree-like structures. Should be relatively easy to write a benchmark for it.

One possible solution is to add a clear+merge all-in-one operation to Message (perhaps called Message::decode_into), which could re-use existing deeply nested allocations.
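The clear+merge idea can be sketched in plain Rust (the name `decode_into` follows the suggestion above; the "decoding" here is a stand-in that just repopulates a field, since the point is allocation reuse, not wire parsing):

```rust
/// Sketch of a clear+merge "decode_into": reuse an existing message's
/// allocations instead of building a fresh message each time.
#[derive(Default, Debug, PartialEq)]
struct Node {
    values: Vec<u64>,
}

impl Node {
    /// Clears logical contents but keeps the Vec's capacity.
    fn clear(&mut self) {
        self.values.clear();
    }

    /// Re-populate in place; a real decode_into would merge fields
    /// parsed from a wire-format buffer.
    fn decode_into(&mut self, input: &[u64]) {
        self.clear();
        self.values.extend_from_slice(input);
    }
}

fn main() {
    let mut node = Node::default();
    node.decode_into(&[1, 2, 3, 4]);
    let cap_before = node.values.capacity();
    node.decode_into(&[5, 6]); // second decode reuses the earlier allocation
    assert_eq!(node.values, vec![5, 6]);
    assert!(node.values.capacity() >= cap_before);
}
```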

Option in Proto3

The field modifiers section of the README explains that optional proto2 fields are wrapped in Option, while required fields and proto3 fields are not. This seems strange to me because the default in proto3 should correspond to optional, not required. It was determined that required fields should be used sparingly, so they were removed completely in proto3.

In PROST's proto3 deserialization, are missing fields converted to default values of the given type, or does deserialization fail?

Prost compile_protos crashes our CI

This is a long shot, since we are unable to stably reproduce the issue. Travis apparently changed something lately, and after that our build.rs started segfaulting. Our build script only contains:

extern crate prost_build;
fn main() {
    prost_build::compile_protos(&["src/proto.proto"], &["src/"]).unwrap();
}

This is our project. And this is a sample failure on travis.

Do you have any ideas where those crashes could come from? Does prost_build contain unsafe code?

Document Enumerations

README says:

All .proto enumeration types convert to the Rust i32 type. Additionally, each enumeration type gets a corresponding Rust enum type, with helper methods to convert i32 values to the enum type. The enum type isn't used directly as a field, because the Protobuf spec mandates that enumerations values are 'open', and decoding unrecognized enumeration values must be possible.

It took me some (rather cheerless) time to discover that I can convert an enumeration like the one in the README example

    #[derive(Clone, Copy, Debug, PartialEq, Eq, Enumeration)]
    pub enum PhoneType {
        Mobile = 0,
        Home = 1,
        Work = 2,
    }

to the value this way:

PhoneType::Mobile as i32

However, for the opposite direction, I'm lost for good. How do I convert an i32 code to PhoneType? Where do I find those "helper methods to convert i32 values to the enum type"? I'd really appreciate an example in the README. Thank you.

Furthermore, what is #[derive(Enumeration)]? I can't find an Enumeration trait anywhere.
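For what it's worth, the helper the README alludes to is an associated function on the enum itself; a hand-written equivalent (the real one comes from prost-derive's `#[derive(Enumeration)]`, so take the exact name and signature as an assumption):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum PhoneType {
    Mobile = 0,
    Home = 1,
    Work = 2,
}

impl PhoneType {
    /// Hand-written equivalent of the generated helper: maps a wire
    /// value back to the enum, or `None` if the value is unrecognized.
    pub fn from_i32(value: i32) -> Option<PhoneType> {
        match value {
            0 => Some(PhoneType::Mobile),
            1 => Some(PhoneType::Home),
            2 => Some(PhoneType::Work),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(PhoneType::Mobile as i32, 0);        // enum -> wire value
    assert_eq!(PhoneType::from_i32(2), Some(PhoneType::Work)); // wire -> enum
    assert_eq!(PhoneType::from_i32(42), None);      // unknown wire value
}
```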

Consider adding `#![allow(dead_code)]` in generated files

Hello,

If a generated file contains a message or field we don't use, rustc complains:

warning: method is never used: `b1`
  --> perftest_data_prost.rs:43:35
   |
43 | #[derive(Clone, Debug, PartialEq, Message)]
   |                                   ^^^^^^^
   |
   = note: #[warn(dead_code)] on by default
#[derive(Clone, Debug, PartialEq, Message)]       // <== line 43.
pub struct TestBytes {
    #[prost(bytes, optional, tag="1")]
    pub b1: Option<Vec<u8>>,
}

Since they are generated, adding #![allow(dead_code)] seems reasonable to me. What's your opinion?

Nested, imported identifier causes "too many initial supers" error

I'm importing google.rpc.status in my proto and then embedding that type within a oneof. When building, I see this error:

error[E0433]: failed to resolve. There are too many initial `super`s.  --> /path/to/target/debug/build/chord-687b2102b1406e1a/out/chord.v1.rs:97:29
   |
97 |         Error(super::super::super::google::rpc::Status),
   |                             ^^^^^ There are too many initial `super`s.

error: Compilation failed, aborting rustdoc

I think these supers are generated by resolve_ident. Perhaps the ident_path has an extra term since it's part of a oneof? I'll try to take a closer look tomorrow.

compile error with recursive types in `oneof`

When a message contains a recursive type in a oneof, prost's generated code fails to compile because the recursive field isn't properly Boxed.

Reproduction: https://github.com/olix0r/_bug_prost-recurse

:; cargo check
   Compiling prost-recurse-bug v0.0.0 (file:///Users/ver/b/rs/_bug_prost-recurse)
error[E0072]: recursive type `bug::Node` has infinite size
 --> /Users/ver/b/rs/_bug_prost-recurse/target/debug/build/prost-recurse-bug-cf7523091859c97f/out/bug.rs:2:1
  |
2 | pub struct Node {
  | ^^^^^^^^^^^^^^^ recursive type has infinite size
3 |     #[prost(oneof="node::Child", tags="1, 2, 3")]
4 |     pub child: ::std::option::Option<node::Child>,
  |     --------------------------------------------- recursive without indirection
  |
  = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `bug::Node` representable

error[E0072]: recursive type `bug::node::Child` has infinite size
  --> /Users/ver/b/rs/_bug_prost-recurse/target/debug/build/prost-recurse-bug-cf7523091859c97f/out/bug.rs:16:5
   |
16 |     pub enum Child {
   |     ^^^^^^^^^^^^^^ recursive type has infinite size
...
22 |         Node(super::Node),
   |              ------------ recursive without indirection
   |
   = help: insert indirection (e.g., a `Box`, `Rc`, or `&`) at some point to make `bug::node::Child` representable

error: aborting due to 2 previous errors

error: Could not compile `prost-recurse-bug`.

To learn more, run the command again with --verbose.

src/bug.proto:

syntax = "proto3";
package bug;

message Node {
    message Empty {}

    message Seq {
        repeated Node nodes = 1;
    }

    oneof child {
        Empty empty = 1;
        Seq nodes = 2;
        Node node = 3;
    }
}
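The shape of the fix is the usual one for E0072: box the directly recursive arm so the type has a finite size. A hand-written sketch of what the generated code would need to look like (names mirror bug.proto; this is not prost's output):

```rust
/// `Node` contains an optional `Child`, and `Child` can contain a `Node`
/// again; `Box<Node>` provides the indirection the compiler asks for.
#[derive(Debug, Default, PartialEq)]
struct Node {
    child: Option<Child>,
}

#[derive(Debug, PartialEq)]
enum Child {
    Empty,
    Seq(Vec<Node>),  // repeated fields already have indirection via Vec
    Node(Box<Node>), // direct recursion must be boxed
}

fn main() {
    let tree = Node {
        child: Some(Child::Node(Box::new(Node { child: Some(Child::Empty) }))),
    };
    assert!(tree.child.is_some());
}
```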
