tradias / asio-grpc
Asynchronous gRPC with Asio/unified executors
Home Page: https://tradias.github.io/asio-grpc/
License: Apache License 2.0
In hello-world-client-cpp20.cpp, how could it be modified to have a timeout getting the response?
Am I correct that
co_await agrpc::finish(*reader, response, status);
would wait indefinitely for a response?
How can I make one 'blocking' (actually co_await) call that would resume execution upon either a timeout or reception of the response?
And how would I then know whether it 'unblocked' due to the timeout or due to response reception?
Also, what if the server later sends the response after the timeout? Would it get thrown away or could it get accidentally processed as the response to the next request?
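One common way to express such a race (a sketch only, assuming Boost 1.78+ for awaitable operators and the agrpc v1-style free-function API; `finish_with_timeout` and its parameters are illustrative names, not part of asio-grpc):

```cpp
// Sketch: race agrpc::finish against a steady_timer. Whichever completes
// first wins; the awaitable operators request cancellation of the loser
// (whether agrpc::finish honors cancellation depends on the version).
#include <chrono>
#include <agrpc/asio_grpc.hpp>
#include <boost/asio/experimental/awaitable_operators.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/asio/use_awaitable.hpp>
#include <grpcpp/grpcpp.h>

using namespace boost::asio::experimental::awaitable_operators;

template <class Reader, class Response>
boost::asio::awaitable<bool> finish_with_timeout(agrpc::GrpcContext& grpc_context, Reader& reader,
                                                 Response& response, grpc::Status& status,
                                                 grpc::ClientContext& client_context)
{
    boost::asio::steady_timer timer{grpc_context, std::chrono::seconds(5)};
    const auto result = co_await (agrpc::finish(reader, response, status, boost::asio::use_awaitable) ||
                                  timer.async_wait(boost::asio::use_awaitable));
    if (result.index() == 1)  // the timer completed first: timeout
    {
        // Ask gRPC to cancel the call so the late response completes with
        // CANCELLED instead of lingering.
        client_context.TryCancel();
        co_return false;
    }
    co_return true;  // finish completed first; inspect `status`
}
```

The variant index tells you which branch 'unblocked'. Alternatively, grpc::ClientContext::set_deadline() lets gRPC itself fail the call with DEADLINE_EXCEEDED, avoiding the race entirely. Either way, a response is matched to its own ClientContext/reader pair, so a late response cannot be mistaken for the reply to a later request.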
Thank you for implementing this excellent project, which provides a consolidated way of executing async gRPC commands and sending/receiving TCP packets asynchronously with the Boost.Asio library. I have just begun using Boost.Asio recently and have a couple of questions about this library.
According to this link: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html, multiple threads may call io_context::run() to set up a thread pool, and the io_context may distribute work across them. Does asio-grpc's execution_context also guarantee thread safety if a thread pool is enabled on it? I am using C++20 coroutines and assuming that each co_spawn will pick a thread from the thread pool and run the composed asynchronous operations there; correct me if my understanding is wrong. If the composed asynchronous operations contain a blocking operation, it may block the running thread, so how can I prevent other co_spawn calls from using the blocked thread for execution? In addition, co_spawn can spawn from both an execution_context and an executor. I am guessing that spawning from an execution_context will pick a new thread to run on, while spawning from an executor will run on the thread that the executor is running on. Is my guess correct?
Meanwhile, #8 mentions that if a non-gRPC async operation like a steady_timer is co_spawned from a grpc_context, it automatically spawns a second io_context thread. So it seems that asio-grpc internally maintains two threads, one for the gRPC execution_context and one for the io_context, to run async gRPC operations and other async non-gRPC operations. The last comment also says version 1.4 would support using an io_context together with an agrpc::GrpcContext. My application would serve many clients, and for each client request it would issue one single composed asynchronous operation containing one async gRPC call and several async TCP reads and writes to the server, then respond back to the client. Will asio-grpc guarantee that there is no interleaving between the gRPC operation and the TCP operations when the single composed asynchronous operation is co_spawned from either the grpc_context or the io_context, given that they are two contexts on two threads? Also, does asio-grpc support having a thread pool for the io_context and a single thread for the grpc_context, or thread pools for both?
one single composed asynchronous operation
/ \
client1 --> { co_await async grpc operation, co_await async tcp operations } --> server
client2 --> { co_await async grpc operation, co_await async tcp operations } --> server
clientN ...
Hope to get some guidance from you. Thanks.
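One topology matching this question could be sketched as follows (an illustration only, not taken from the repository; in asio-grpc v1 the GrpcContext is designed to be run from a single thread, while the io_context may be driven by a pool):

```cpp
// Sketch: a single thread drives the GrpcContext while a small pool of
// threads drives the io_context for TCP work. Handlers then run on the
// context they were spawned on; hop between contexts explicitly (e.g.
// boost::asio::dispatch) rather than assuming a particular thread.
#include <memory>
#include <thread>
#include <vector>
#include <agrpc/asio_grpc.hpp>
#include <boost/asio/executor_work_guard.hpp>
#include <boost/asio/io_context.hpp>
#include <grpcpp/grpcpp.h>

int main()
{
    agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};
    boost::asio::io_context io_context;
    auto guard = boost::asio::make_work_guard(io_context);  // keep the pool alive

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
    {
        pool.emplace_back([&] { io_context.run(); });
    }

    // co_spawn gRPC work onto grpc_context and TCP work onto io_context here.

    grpc_context.run();
    guard.reset();
    for (auto& t : pool)
    {
        t.join();
    }
}
```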
I have both Boost.Asio and standalone Asio installed. I am trying to build the tests and examples so that I can have working examples for my own code, but one particular test fails:
asio_grpc_add_test_util(asio-grpc-test-util-standalone-asio "STANDALONE_ASIO" "17")
[ 59%] Building CXX object test/utils/CMakeFiles/asio-grpc-test-util-standalone-asio.dir/cmake_pch.hxx.gch
In file included from /include/asio/spawn.hpp:902,
fatal error: boost/context/fiber.hpp: No such file or directory
40 | # include <boost/context/fiber.hpp>
My standalone asio installation was configured with: --with-boost=$BOOST_PREFIX, but I don't see boost includes for asio-grpc-test-util-standalone-asio.
How can I get around this problem?
Due to potential race conditions in agrpc::request(&Stub::Async... when switching to another thread after the request and interacting with the responder, it would be good to have overloads for the &Stub::PrepareAsync versions, which avoid the problem. They also make it easier to mock those requests.
Targets v1.8.0.
Thank you for providing this library.
I want to build a generic gRPC server using grpc::AsyncGenericService, and I find that agrpc::RepeatedlyRequestContext does not support grpc::GenericServerContext (maybe this is my misunderstanding).
So, is there any way to build an AsyncGenericService with asio-grpc?
I am seeing a tight loop with the stack trace below. The application is a single-threaded Boost.Asio app which is now integrated with asio-grpc. The Boost version is 1.81. I have followed the client example in the examples directory (share-io-context-client.cpp).
Please shed some light on what could possibly be going wrong.
(gdb) bt
#0 0x00007fb21bf778e9 in __GI___libc_realloc (oldmem=0x557a957fd180, bytes=90) at ./malloc/malloc.c:3496
#1 0x00007fb21cd7d8e5 in gpr_realloc () from /lib/x86_64-linux-gnu/libgrpc.so.10
#2 0x00007fb21cd06fe5 in grpc_error_string(grpc_error*) () from /lib/x86_64-linux-gnu/libgrpc.so.10
#3 0x00007fb21cd60518 in ?? () from /lib/x86_64-linux-gnu/libgrpc.so.10
#4 0x00007fb21ce5015c in grpc_impl::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () from /lib/x86_64-linux-gnu/libgrpc++.so.1
#5 0x0000557a66c7cddd in grpc_impl::CompletionQueue::AsyncNext<gpr_timespec> (this=0x557a6766ddf0, tag=0x7fff16a2e388, ok=0x7fff16a2e390, deadline=...) at /usr/include/grpcpp/impl/codegen/completion_queue_impl.h:198
#6 0x0000557a66c7ccd0 in agrpc::b::detail::get_next_event (cq=0x557a6766ddf0, event=..., deadline=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:165
#7 0x0000557a66c7c6c4 in agrpc::b::detail::GrpcContextImplementation::handle_next_completion_queue_event (grpc_context=..., deadline=..., invoke=agrpc::b::detail::InvokeHandler::YES)
at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:173
#8 0x0000557a66c7c4bd in agrpc::b::detail::GrpcContextImplementation::do_one<agrpc::b::detail::IsGrpcContextStoppedPredicate> (grpc_context=..., deadline=..., invoke=agrpc::b::detail::InvokeHandler::YES, stop_predicate=...)
at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:214
#9 0x0000557a66c7ba9a in agrpc::b::detail::GrpcContextDoOne::poll (grpc_context=..., deadline=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:164
#10 0x0000557a66c7b6fa in agrpc::b::detail::run_impl<agrpc::b::detail::GrpcContextDoOne, agrpc::b::DefaultRunTraits, boost::asio::io_context, agrpc::b::detail::AreContextsStoppedCondition<boost::asio::io_context>> (grpc_context=...,
execution_context=..., stop_condition=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:191
#11 0x0000557a66c7b5bd in agrpc::b::run<agrpc::b::DefaultRunTraits, boost::asio::io_context, agrpc::b::detail::AreContextsStoppedCondition<boost::asio::io_context>> (grpc_context=..., execution_context=..., stop_condition=...)
at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:214
#12 0x0000557a66c7b545 in agrpc::b::run<agrpc::b::DefaultRunTraits, boost::asio::io_context> (grpc_context=..., execution_context=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:207
#13 0x0000557a668c11d2 in main::$_3::operator() (this=0x7fff16a2e718) at
#14 0x0000557a668c1185 in boost::asio::detail::binder0<main::$_3>::operator() (this=0x7fff16a2e718) at opensource/base/boost_1_81_0/boost/asio/detail/bind_handler.hpp:60
#15 0x0000557a668c1165 in boost::asio::asio_handler_invoke<boost::asio::detail::binder0<main::$_3>> (function=...) at opensource/base/boost_1_81_0/boost/asio/handler_invoke_hook.hpp:88
#16 0x0000557a668c113f in boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder0<main::$_3>, main::$_3> (function=..., context=...) at opensource/base/boost_1_81_0/boost/asio/detail/handler_invoke_helpers.hpp:54
#17 0x0000557a668c110d in boost::asio::detail::asio_handler_invoke<boost::asio::detail::binder0<main::$_3>, main::$_3> (function=..., this_handler=0x7fff16a2e718) at opensource/base/boost_1_81_0/boost/asio/detail/bind_handler.hpp:111
#18 0x0000557a668c0ffd in boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder0<main::$_3>, boost::asio::detail::binder0<main::$_3>> (function=..., context=...)
at opensource/base/boost_1_81_0/boost/asio/detail/handler_invoke_helpers.hpp:54
#19 0x0000557a668c12d3 in boost::asio::detail::executor_op<boost::asio::detail::binder0<main::$_3>, std::allocator<void>, boost::asio::detail::scheduler_operation>::do_complete (owner=0x557a67661f00, base=0x557a97e2c740)
at opensource/base/boost_1_81_0/boost/asio/detail/executor_op.hpp:70
#20 0x0000557a6697293e in boost::asio::detail::scheduler_operation::complete (this=0x557a97e2c740, owner=0x557a67661f00, ec=..., bytes_transferred=0) at opensource/base/boost_1_81_0/boost/asio/detail/scheduler_operation.hpp:40
#21 0x0000557a669722a7 in boost::asio::detail::scheduler::do_run_one (this=0x557a67661f00, lock=..., this_thread=..., ec=...) at opensource/base/boost_1_81_0/boost/asio/detail/impl/scheduler.ipp:492
#22 0x0000557a66971d77 in boost::asio::detail::scheduler::run (this=0x557a67661f00, ec=...) at opensource/base/boost_1_81_0/boost/asio/detail/impl/scheduler.ipp:210
#23 0x0000557a66954e9e in boost::asio::io_context::run (this=0x7fff16a2ff78) at opensource/base/boost_1_81_0/boost/asio/impl/io_context.ipp:63
#24 0x0000557a668bf69c in main (argc=40, argv=0x7fff16a33c18)
Hi, would you mind supporting a usage mode in which the sources are included by the compiler directly, not consumed as a CMake package?
For example, let's say I have a server.cpp file, compiled with the following command without installing:
g++ -std=c++17 -fcoroutines \
-DAGRPC_STANDALONE_ASIO \
-DASIO_HAS_CO_AWAIT -DASIO_HAS_STD_COROUTINE -DASIO_HAS_CO_AWAIT \
-Iasio-grpc/src -Iasio/asio/include -I</path/to/any/other/includes> \
server.cpp \
-lgrpc++ -lgpr -lgrpc++_reflection -lprotobuf -lpthread
The changes to support this seem small. The only problem is that "memory_resource.hpp" is not found, since it is generated by CMake from src/agrpc/detail/memory_resource.hpp.in. A seemingly reasonable way is to use a preprocessor definition instead of a CMake variable:
// src/agrpc/detail/memory_resource.hpp.in
@ASIO_GRPC_MEMORY_RESOURCE_INCLUDES@
// =>
// src/agrpc/detail/memory_resource.hpp
#ifdef ASIO_GRPC_USE_BOOST_CONTAINER
namespace pmr = boost::container::pmr;
namespace container = boost::container;
#else
namespace pmr = std::pmr;
namespace container = std;
#endif
And then compile with g++ -DASIO_GRPC_USE_BOOST_CONTAINER ...
I am using WSL Ubuntu 20.04.x. I have tried all the ways of compiling and building that I could find, but nothing works. The most recent attempt, with vcpkg, fails with the errors below.
I can also post the hunter and conan errors here if that is helpful.
I am sure that, being a newbie, I must have missed something.
CMakeLists.txt
cmake_minimum_required(VERSION 3.16)
project(expr)
add_compile_options(-Wall -ggdb -std=c++14 -pthread)
find_package(asio-grpc CONFIG REQUIRED)
add_executable(boo boo.cpp)
target_link_libraries(boo PUBLIC asio-grpc::asio-grpc)
ERROR
➜ build cmake -DCMAKE_TOOLCHAIN_FILE=/home/bbhushan/tools/vcpkg/scripts/buildsystems/vcpkg.cmake ..
CMake Error at CMakeLists.txt:4 (find_package):
Could not find a package configuration file provided by "asio-grpc" with
any of the following names:
asio-grpcConfig.cmake
asio-grpc-config.cmake
Add the installation prefix of "asio-grpc" to CMAKE_PREFIX_PATH or set
"asio-grpc_DIR" to a directory containing one of the above files. If
"asio-grpc" provides a separate development package or SDK, be sure it has
been installed.
-- Configuring incomplete, errors occurred!
See also "/home/bbhushan/work/expr/build/CMakeFiles/CMakeOutput.log".
vcpkg list
➜ vcpkg git:(master) ./vcpkg list | grep "asio\|grpc"
asio-grpc:x64-linux 2.3.0 Asynchronous gRPC with Asio/unified executors
asio:x64-linux 1.24.0 Asio is a cross-platform C++ library for network...
boost-asio:x64-linux 1.80.0#2 Boost asio module
grpc:x64-linux 1.50.1 An RPC library and framework
grpc[codegen]:x64-linux Build code generator machinery
cmake VERSION
➜ vcpkg git:(master) cmake --version
cmake version 3.16.3
I've been trying to understand how it is possible for your excellent library to be able to execute async operations (for example steady_timer) on the same main thread as grpc.
I ran the hello-world-server-cpp20.cpp in a debugger to help my understanding.
Part of my initial confusion is because I see in grpcContext.ipp/get_next_event() that it seems to be blocking only on the grpc completion queue (call to get_completion_queue()->AsyncNext()), so how could it unblock on other async events that are not grpc?
Then using the debugger I found that at the start of main() of hello-world-server-cpp20.cpp that this statement:
boost::asio::basic_signal_set signals{grpc_context, SIGINT, SIGTERM};
spawns a 2nd ASIO thread.
Then I see that when I have another non-gRPC async operation, such as a steady_timer, its expiry somehow wakes up this second Asio thread, which in turn posts a gRPC alarm with an immediate deadline to the gRPC completion queue. That unblocks the main thread and allows the handler for the steady_timer to execute. Is this the proper understanding?
So is it the case that any non-gRPC async completion handler (not just steady_timer's) wakes up that second thread, which then posts an immediate gRPC alarm so the completion handler runs on the main thread? Is that how you get non-gRPC async completion handlers to execute?
Not knowing boost::asio very well, I suppose this second thread is always there to wake up the main thread, even when using a basic io_context and not an overridden execution_context? Or is this second thread somehow created because you have overridden the basic io_context/execution_context?
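The wake-up mechanism described above can be illustrated with plain gRPC, independent of asio-grpc (a sketch; `wake` is an illustrative name): an alarm whose deadline is already in the past completes immediately, which makes a thread blocked in AsyncNext return.

```cpp
// Sketch: wake a thread blocked in CompletionQueue::AsyncNext/Next by
// setting a grpc::Alarm with an already-expired deadline from another
// thread. The alarm's tag is then delivered with ok == true.
#include <grpc/support/time.h>
#include <grpcpp/alarm.h>
#include <grpcpp/grpcpp.h>

void wake(grpc::CompletionQueue& cq, grpc::Alarm& alarm, void* tag)
{
    // gpr_now(...) is "now", i.e. a deadline that has already passed by
    // the time the completion queue examines it, so the alarm fires at once.
    alarm.Set(&cq, gpr_now(GPR_CLOCK_MONOTONIC), tag);
}
```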
Which compile definitions are recommended for clang build on Linux?
I already figured out that I need to set on my cmake line:
-DASIO_GRPC_USE_BOOST_CONTAINER=1
But I wonder if there are any others?
I'm using a slightly older version of asio-grpc (commit a17b559); it would take me a little effort to update, so I would rather not have to do that. I don't know if this is the reason.
The reason that I ask is that when I use clang 10.0.1 I get a clang crash when trying to build hello-world-server-cpp20:
/gitworkspace/distributions/clang/10.0.1/bin/clang++ -DBOOST_ALL_NO_LIB -DCARES_STATICLIB -I/gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/native.build/src/generated -isystem /gitworkspace/rbresali/mdt/mdt_stage/native.stage/usr/local/include -stdlib=libc++ -fPIC -g -Wall -Werror -DBOOST_THREAD_VERSION=4 -save-temps=obj -std=gnu++2a -MD -MT src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -MF src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o.d -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -c /gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/src/hello-world-server-cpp20.cpp
Stack dump:
0. Program arguments: /vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10 -cc1 -triple x86_64-unknown-linux-gnu -S -save-temps=obj -disable-free -disable-llvm-verifier -discard-value-names -main-file-name hello-world-server-cpp20.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mframe-pointer=all -fmath-errno -fno-rounding-math -masm-verbose -mconstructor-aliases -munwind-tables -target-cpu x86-64 -dwarf-column-info -fno-split-dwarf-inlining -debug-info-kind=limited -dwarf-version=4 -debugger-tuning=gdb -resource-dir /vol/dwdmgit_distributions/clang/10.0.1/lib64/clang/10.0.1 -Wall -Werror -std=gnu++2a -fdebug-compilation-dir /gitworkspace/rbresali-mdt_211028.110446/mdt_example/asio_grpc_example/native.build -ferror-limit 19 -fmessage-length 0 -fgnuc-version=4.2.1 -fobjc-runtime=gcc -fdiagnostics-show-option -faddrsig -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.s -x ir src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc
1. Code generation
2. Running pass 'Function Pass Manager' on module 'src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc'.
3. Running pass 'X86 DAG->DAG Instruction Selection' on function '@"_ZN5boost4asio6detail20co_spawn_entry_pointINS0_15any_io_executorEZ4mainE3$_0NS1_16detached_handlerEEENS0_9awaitableINS1_28awaitable_thread_entry_pointET_EEPNS6_IvS8_EES8_T0_T1_"'
#0 0x00000000016b6e24 PrintStackTraceSignalHandler(void*) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b6e24)
#1 0x00000000016b4b8e llvm::sys::RunSignalHandlers() (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b4b8e)
#2 0x00000000016b7225 SignalHandler(int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b7225)
#3 0x00007f5f529b1630 __restore_rt (/lib64/libpthread.so.0+0xf630)
#4 0x00000000021de359 llvm::DAGTypeLegalizer::getTableId(llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de359)
#5 0x00000000021de216 llvm::DAGTypeLegalizer::RemapValue(llvm::SDValue&) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de216)
#6 0x00000000021dd97f llvm::DAGTypeLegalizer::ReplaceValueWith(llvm::SDValue, llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21dd97f)
#7 0x00000000021e01ac llvm::DAGTypeLegalizer::DisintegrateMERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21e01ac)
#8 0x0000000002236e99 llvm::DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x2236e99)
#9 0x00007ffe5377a490
clang-10: error: unable to execute command: Segmentation fault (core dumped)
clang-10: error: clang frontend command failed due to signal (use -v to see invocation)
clang version 10.0.1
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /gitworkspace/distributions/clang/10.0.1/bin
clang-10: note: diagnostic msg: PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace, preprocessed source, and associated run script.
clang-10: note: diagnostic msg:
********************
PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.cpp
clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.sh
clang-10: note: diagnostic msg:
********************
Hi,
I am trying to run the example-server.cpp and example-client.cpp examples from version 1.1.2 (installed using vcpkg with the boost-container feature).
I got an assertion error, GRPC_CALL_ERROR_TOO_MANY_OPERATIONS, at the following line in the server code.
Here is the stack trace when the assertion occurs.
Here is the detailed information about the assertion.
Do you have any idea why this bug happens?
In hello-world-server-cpp20.cpp you show an example with just 1 rpc type, the HelloRequest.
What if I have multiple RPCs that I want to handle in my server? Let's say I want to add a "GoodbyeRequest".
Would the proper way to do this be to spawn another coroutine, i.e. a second call to co_spawn() that looks very similar to the one for HelloRequest, but with a GoodbyeRequest instead? Basically, replace all "hello" with "goodbye" (case-insensitive)?
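For illustration, such a second coroutine might look like the fragment below, placed next to the existing co_spawn in main. The GoodbyeRequest/GoodbyeReply messages and the RequestSayGoodbye method are hypothetical names, not part of the real helloworld proto:

```cpp
// Sketch: a second, near-identical coroutine for a hypothetical
// "SayGoodbye" rpc, spawned alongside the SayHello one. Assumes the
// same grpc_context and service variables as the example's main().
boost::asio::co_spawn(
    grpc_context,
    [&]() -> boost::asio::awaitable<void>
    {
        grpc::ServerContext server_context;
        helloworld::GoodbyeRequest request;
        grpc::ServerAsyncResponseWriter<helloworld::GoodbyeReply> writer{&server_context};
        co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayGoodbye, service,
                                server_context, request, writer, boost::asio::use_awaitable);
        helloworld::GoodbyeReply response;
        response.set_message("Goodbye " + request.name());
        co_await agrpc::finish(writer, response, grpc::Status::OK, boost::asio::use_awaitable);
    },
    boost::asio::detached);
```

Note that, mirroring the example, each such coroutine handles exactly one request and then finishes; to keep serving, wrap the body in a loop or look at agrpc::repeatedly_request.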
Here it is https://github.com/Tradias/asio-grpc#usage
Section: As a subdirectory using standalone Asio:
find_package(gRPC)
find_package(asio)
add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC gRPC::grpc++_unsecure asio-grpc::asio-grpc-standalone-asio asio::asio)
First of all, there is no such find_package(asio) among the CMake modules. I replaced this line with:
# Standalone Asio
find_path(ASIO_INCLUDE_PATH asio.hpp HINTS
"/usr/include"
"/usr/local/include"
"/opt/local/include"
)
if (NOT ASIO_INCLUDE_PATH)
message(FATAL_ERROR "${Green}No ASIO found${Reset}")
endif()
message(STATUS "${Yellow}Found ASIO${Reset}: ${ASIO_INCLUDE_PATH}")
add_library(asio INTERFACE)
target_include_directories(asio
INTERFACE ${ASIO_INCLUDE_PATH}
)
target_compile_definitions(asio
INTERFACE ASIO_STANDALONE
)
Then I tried to test the option ASIO_GRPC_USE_BOOST_CONTAINER by adding -DASIO_GRPC_USE_BOOST_CONTAINER=ON to the CMake call.
Nothing changes. It looks like the file asio-grpc/cmake/AsioGrpcOptionDefaults.cmake is never executed.
Please test your library using add_subdirectory() and a pure CMake generate call.
To make working with Alarm safer and more flexible, we should have an I/O object similar to agrpc::RPC:
agrpc::Alarm alarm{grpc_context};
bool ok = co_await alarm.wait(deadline, asio::use_awaitable);
Additionally, it would be nice to have a && overload where the alarm keeps itself alive for the duration of the asynchronous operation. The sizeof a grpc::Alarm is 24 bytes, which seems affordable.
auto [alarm, ok] = co_await agrpc::Alarm(grpc_context).wait(deadline, asio::use_awaitable);
Feature request: Add support for systemd socket activation (LISTEN_FDS, LISTEN_FDNAMES)
The advantages are for instance
Native network performance over the socket-activated socket when running rootless Podman. Network traffic is normally passed through the program slirp4netns which comes with a performance penalty.
There is also a security advantage when running socket-activated containers with Podman. It's sometimes possible to run the container with --network=none. I wrote a blog about it https://www.redhat.com/sysadmin/socket-activation-podman
The possibility to stop on inactivity and the starting when the next client connects. (The functionality to "stop on inactivity" would need to be implemented in asio-grpc though)
In the future it might be possible to run a socket-activated container with Podman as a systemd system service that makes use of (User=
). Then it would be possible to use port numbers below 1024 without having to run the command sudo sh -c "echo 0 > /proc/sys/net/ipv4/ip_unprivileged_port_start"
. Unfortunately, I think there are some hurdles to get this working still.
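For reference, the socket-activation handshake itself is small. Below is a sketch of the sd_listen_fds(3) protocol in plain C++ (the function name sd_listen_fds_sketch is made up for this example): systemd passes the inherited sockets starting at file descriptor 3, sets LISTEN_PID to the PID of the intended recipient, and sets LISTEN_FDS to the number of descriptors.

```cpp
// Sketch of the systemd socket-activation check: returns the number of
// listening sockets passed in (fds 3 .. 3 + n - 1), or 0 when the
// process was not socket-activated or the fds were meant for another PID.
#include <cstdlib>
#include <string>
#include <unistd.h>

constexpr int SD_LISTEN_FDS_START = 3;  // first inherited socket fd

int sd_listen_fds_sketch()
{
    const char* pid = std::getenv("LISTEN_PID");
    const char* fds = std::getenv("LISTEN_FDS");
    if (!pid || !fds)
    {
        return 0;  // not socket-activated
    }
    if (std::stol(pid) != static_cast<long>(getpid()))
    {
        return 0;  // the fds were meant for a different process
    }
    return std::stoi(fds);
}
```

The accepted sockets could then be wrapped, e.g. in grpc::ServerBuilder or an Asio acceptor, instead of binding a new port.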
@c6supper I have noticed that you started making changes to asio-grpc to support QNX. Is there anything I can do to help?
I am not familiar with QNX, what are its limitations? Does it support C++17?
I'm trying to use a promise as the completion token to agrpc methods, and getting a compiler error using gcc 11. Here is a minimal example, based off your streaming-server.cpp example:
// additional includes required:
#include <asio/experimental/promise.hpp>
#include <asio/this_coro.hpp>
asio::awaitable<void> handle_bidirectional_streaming_request(example::v1::Example::AsyncService& service)
{
grpc::ServerContext server_context;
grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};
bool request_ok = co_await agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming,
service, server_context, reader_writer);
if (!request_ok)
{
// Server is shutting down.
co_return;
}
example::v1::Request request;
// none of the below work to put as COMPLETIONTOKEN - the following line fails to compile:
// asio::experimental::use_promise
// asio::experimental::use_promise_t<agrpc::GrpcContext>{}
// asio::experimental::use_promise_t<agrpc::GrpcContext::executor_type>{}
// asio::experimental::use_promise_t<agrpc::s::BasicGrpcExecutor<>>{}
// asio::experimental::use_promise_t<asio::this_coro::executor_t>{}
auto&& read_promise = agrpc::read(reader_writer, COMPLETIONTOKEN);
co_await read_promise.async_wait(asio::use_awaitable);
}
The use case is that later in the function I would simultaneously await any of 3 conditions:
New request from client, finished writing response to client, or new response ready from data processing thread pool:
auto&& write_promise = agrpc::write(rw, response, COMPLETIONTOKEN);
auto&& data_ready_promise = // asynchronously dispatch work to data processing thread pool
auto rwd_promise = asio::experimental::promise<>::all(
std::forward<decltype(read_promise)>(read_promise),
std::forward<decltype(write_promise)>(write_promise),
std::forward<decltype(data_ready_promise)>(data_ready_promise)
);
std::tie(read_ok, write_ok, data_ready_ok) = co_await rwd_promise.async_wait(asio::use_awaitable);
Hi, thanks for creating a wonderful framework. It has made my life much easier.
I have used your framework for a few months, and now I need to set up our project on a new machine. The installation succeeds with the following command:
./vcpkg install asio-grpc[boost-container]:x64-linux
However, when I compile my project, I get the following errors.
Do you have any idea why this error happens?
I look forward to hearing from you soon.
Thanks
Thanks for a wonderful library. I am in the process of migrating some code to asio-grpc.
How do I hook up an existing boost::asio::io_context with a GrpcContext, for something like boost::asio::posix::stream_descriptor?
The API is not clear to me. Please advise.
The functions in rpc.hpp are currently missing overloads for the -Interface versions of the streaming readers/writers.
Work takes place on the responder-interface branch.
Targets v1.7.0.
Hi, any ideas on how to handle bidirectional streaming requests in a generic server?
(I notice that there is only a unary handler in the example/generic-server.cpp file.)
Many thanks
Add convenience overloads to agrpc::request and agrpc::repeatedly_request for grpc::GenericServerAsyncReaderWriter, grpc::GenericClientAsyncReaderWriter and the like.
Also consider adding a generic server benchmark to grpc_bench.
Work is being done on the generic-rpcs branch.
Targets v1.7.0.
Find a minimal, flexible interface for read, write and bidirectional streams. Make sure the different flavors of write are supported, like write_last. Ensure or document thread-safety guarantees.
For semantics discussion see also: #17
Consider aligning interface with unifex Stream: https://github.com/facebookexperimental/libunifex/blob/main/doc/concepts.md#streams
Work is being done on the read-write-stream branch.
When invoking a synchronous function, it is possible to initiate some work on the GrpcContext and wait for the result using a std::promise:
Response get_synchronous()
{
    std::promise<void> promise;
    auto future = promise.get_future();
    Response response;
    agrpc::read(reader, response,
                [&](bool)
                {
                    promise.set_value();
                });
    future.get();  // wait for the read to complete
    return response;
}
But if we invoke get_synchronous from the thread that runs the GrpcContext, then we end up in a deadlock. The proposed GrpcContext.run_while() can be used to prevent that:
Response get_synchronous()
{
if (grpc_context.get_executor().running_in_this_thread()) {
// use promise based get_synchronous
} else {
bool is_result_ready{};
// Initiate some IO work
Response response;
agrpc::read(reader, response,
[&](bool)
{
is_result_ready = true;
});
// Wait for the work to finish
grpc_context.run_while([&]() { return !is_result_ready; });
return response;
}
}
Hi,
I tried the example/multi-threaded-server with DefaultHealthCheckService enabled and found that after I call grpc::EnableDefaultHealthCheckService(true) before starting the server, all handlers run in only one thread. What could be causing this, and how do I fix it?
asio-grpc/src/agrpc/detail/rpcs.hpp, line 38 in e10e42b
asio-grpc/src/agrpc/detail/rpcs.hpp, line 42 in e10e42b
When comparing these two RPCs, I have no idea when the ServerSingleArgRequest case would happen. It would mean there is no request but there is a response. Could you provide a proto example of that?
The problem is: if the io_context has not been used before agrpc::run, then for asio::post(io_context, cb) from another thread, cb never gets called.
void test_func(){
asio::io_context io_context{1};
example::v1::Example::Stub stub{grpc::CreateChannel(host, grpc::InsecureChannelCredentials())};
agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};
asio::post(io_context,
[&]
{
auto worker = asio::make_work_guard(io_context);// without this line , asio::post(io_context, cb) from another thread can't work....
io_context.get_executor().on_work_finished();
agrpc::run(grpc_context, io_context);
io_context.get_executor().on_work_started();
});
io_context.run();
}
Hi,
Asio typically comes with Boost, and Boost does not install a "Findasio.cmake" file. This causes a CMake failure. (I'm using v3.25.1.) Chris Kohlhoff does not provide CMake files either.
To install through CMake, I ended up patching cmake/AsioGrpcFindPackages.cmake, replacing find_package(asio) with:
SET(_asio_grpc_asio_root "${CMAKE_PREFIX_PATH}/include/boost")
Note that CMAKE_PREFIX_PATH/include/boost
is the most likely place to find the header asio.hpp.
Hi,
first, I would like to thank you for a great library.
By looking at the examples, it seems that in order to share the asio::io_context between gRPC and non-gRPC operations, we should replace all io_context uses with a GrpcContext. Is that a correct assumption?
thank you.
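The alternative used by the share-io-context examples can be sketched as follows (assuming an asio-grpc version that provides agrpc::run): the GrpcContext is driven from the same thread that runs the io_context, so existing io_context-based code need not be replaced.

```cpp
// Sketch, modeled on the share-io-context examples: agrpc::run processes
// both the GrpcContext and the io_context on the calling thread.
#include <memory>
#include <agrpc/asio_grpc.hpp>
#include <boost/asio/io_context.hpp>
#include <grpcpp/grpcpp.h>

int main()
{
    boost::asio::io_context io_context{1};
    agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

    // co_spawn gRPC work onto grpc_context and socket work onto
    // io_context here, before starting the combined event loop.

    agrpc::run(grpc_context, io_context);  // replaces both .run() calls
}
```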
Hi:
To test mixing generic asio with asio-grpc I did the following:
I modified hello-world-server-cpp20.cpp to add a steady_timer that would invoke an asynchronous timer callback 5 seconds after the server gets the request.
But the timer callback is getting called right away, not after 5 seconds. I tried this without asio-grpc, following the Boost tutorial, and there it does delay 5 seconds: https://www.boost.org/doc/libs/1_77_0/doc/html/boost_asio/tutorial/tuttimer2/src.html
So I added the 3 lines below marked with a *:
...
bool request_ok = co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service,
server_context, request, writer);
* printf("Got request\n");
* boost::asio::steady_timer t(grpc_context, boost::asio::chrono::seconds(5));
* t.async_wait(&timer_callback);
if (!request_ok)
{
co_return;
}
...
My timer_callback() is simply this:
void timer_callback(const boost::system::error_code& /*e*/)
{
printf("Inside timer callback\n");
}
When I launch the client and it sends the request then the timer callback is immediately called and it doesn't wait 5 seconds.
Do you have any idea why? Is there a problem using asio-grpc with steady_timer?
The typical use of PollContext is with asio::io_context{1}. This should be as performant and easy to use as possible.
Also add an option to run only the grpc::CompletionQueue of the GrpcContext.
Work takes place on the improve-poll-context branch.
Targets v1.7.0.
Recently I ran grpc_bench to compare the performance of different settings. I found that the coroutine-based one is slower than both the Boost.Fiber version and the gRPC multi-threaded version. Do you have any insight into this?
I'm hitting multiple issues using generic CMake (no package managers) on Fedora 36 and Ubuntu 22. I can make it work with changes.
Are you open to CMake changes/PRs?
I tried extensively with the following code and get a compilation error, shown in this post.
Any help is appreciated. Thank you @Tradias
Code
15 #include "zprobe.grpc.pb.h"
16
17 #include <agrpc/asio_grpc.hpp>
18 #include <boost/asio/co_spawn.hpp>
19 #include <boost/asio/detached.hpp>
20 #include <boost/asio/signal_set.hpp>
21 #include <grpcpp/server.h>
22 #include <grpcpp/server_builder.h>
23
24 #include <optional>
25 #include <thread>
26 namespace asio = boost::asio;
27
28
29 // begin-snippet: server-side-helloworld
30 // ---------------------------------------------------
31 // Server-side hello world which handles exactly one request from the client before shutting down.
32 // ---------------------------------------------------
33 // end-snippet
34 int main(int argc, const char** argv)
35 {
36 const auto port = argc >= 2 ? argv[1] : "50051";
37 const auto host = std::string("0.0.0.0:") + port;
38
39 std::unique_ptr<grpc::Server> server;
40
41 grpc::ServerBuilder builder;
42 agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
43 builder.AddListeningPort(host, grpc::InsecureServerCredentials());
44 zprobe::ProbeService::AsyncService service;
45 builder.RegisterService(&service);
46 server = builder.BuildAndStart();
47
48 asio::co_spawn(
49 grpc_context,
50 [&]() -> asio::awaitable<void>
51 {
52 grpc::ServerContext server_context;
53 helloworld::HelloRequest request;
54 grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
55 co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service, server_context,
56 request, writer, asio::use_awaitable);
57 helloworld::HelloReply response;
58 response.set_message("Hello " + request.name());
59 co_await agrpc::finish(writer, response, grpc::Status::OK, asio::use_awaitable);
60 },
61 asio::detached);
62
63 grpc_context.run();
64
65 server->Shutdown();
66 }
```
**CMAKE -- Success**
cmake .. "-DCMAKE_TOOLCHAIN_FILE=~/tools/vcpkg/scripts/buildsystems/vcpkg.cmake" "-DCMAKE_PREFIX_PATH=$MY_INSTALL_DIR"
**CMakeLists.txt**
```cmake
target_link_libraries(zprobe
PUBLIC zprobe_grpc_proto
${_REFLECTION}
${_GRPC_GRPCPP}
${_PROTOBUF_LIBPROTOBUF}
asio-grpc::asio-grpc-standalone-asio)
```
**ERROR on make**
[ 83%] Building CXX object CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:26:29: error: ‘namespace asio = boost::boost::asio;’ conflicts with a previous declaration
26 | namespace asio = boost::asio;
| ^
In file included from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution/allocator.hpp:19,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution.hpp:18,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/any_io_executor.hpp:22,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/asio_forward.hpp:24,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/default_completion_token.hpp:18,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/default_completion_token.hpp:19,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/alarm.hpp:18,
from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/asio_grpc.hpp:33,
from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:17:
/home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/detail/type_traits.hpp:51:11: note: previous declaration ‘namespace asio { }’
51 | namespace asio {
| ^~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp: In function ‘int main(int, const char**)’:
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:48:11: error: ‘co_spawn’ is not a member of ‘asio’
48 | asio::co_spawn(
| ^~~~~~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:24: error: ‘awaitable’ in namespace ‘asio’ does not name a template type
50 | [&]() -> asio::awaitable<void>
| ^~~~~~~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:33: error: expected ‘{’ before ‘<’ token
50 | [&]() -> asio::awaitable<void>
| ^
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:34: error: expected primary-expression before ‘void’
50 | [&]() -> asio::awaitable<void>
| ^~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:61:15: error: ‘detached’ is not a member of ‘asio’; did you mean ‘boost::asio::detached’?
61 | asio::detached);
| ^~~~~~~~
In file included from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:19:
/home/bbhushan/tools/vcpkg/installed/x64-linux/include/boost/asio/detached.hpp:103:22: note: ‘boost::asio::detached’ declared here
103 | constexpr detached_t detached;
| ^~~~~~~~
make[2]: *** [CMakeFiles/zprobe.dir/build.make:76: CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/zprobe.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
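For what it's worth, the `'namespace asio = boost::boost::asio;' conflicts` and `'co_spawn' is not a member of 'asio'` errors suggest that standalone-Asio headers are on the include path while the source file includes Boost.Asio. A hedged sketch of one possible fix (the `asio-grpc::asio-grpc` target name is taken from the asio-grpc README and is an assumption here, not verified against this project): link the Boost.Asio backend target instead of the standalone one.

```cmake
# Sketch: when the source uses <boost/asio/...> headers and the boost::asio
# namespace, link the Boost.Asio backend target rather than the
# standalone-Asio one (which pulls in the top-level `asio` namespace).
target_link_libraries(zprobe
    PUBLIC zprobe_grpc_proto
           ${_REFLECTION}
           ${_GRPC_GRPCPP}
           ${_PROTOBUF_LIBPROTOBUF}
           asio-grpc::asio-grpc)  # was: asio-grpc::asio-grpc-standalone-asio
```

Alternatively, the source could be switched to standalone Asio headers (`#include <asio/...>`) and the existing target kept; mixing the two backends in one translation unit is what produces these errors.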
It's exciting to find an existing repo that integrates gRPC with Asio and C++20 coroutines. I have always appreciated async gRPC interfaces like the ones grpc-dotnet provides.
By the way, is there any plan to adapt this repo to the (maybe C++23) executors/networking facilities once they land?
When the user generates client-side mock code (e.g., through the GENERATE_MOCK_CODE option of asio_grpc_protobuf_generate), they need a way of dealing with the asio-grpc-provided void* tags. Implement a function that completes those tags immediately.
Work takes place on the client-mock-test-utils branch.
Targets v1.7.0.
Hi, we have tried to install asio-grpc 1.3.1 using https://vcpkg.info. Building it works, but importing the library into our C++14 project caused a compilation problem: the header file "version" does not exist in C++14.
#include <version>
We are using MSVC on Windows 10.
We cannot easily upgrade our big project with many libs to C++20. We would like to ask if there is a way to install asio-grpc 1.21 using vcpkg?
Thanks
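For reference, vcpkg's manifest mode can pin a specific port version via overrides. A sketch of the manifest shape (the version string and baseline commit are placeholders; whether the desired asio-grpc version is available in the vcpkg registry is not verified here):

```json
{
  "name": "my-project",
  "version": "0.1.0",
  "dependencies": [ "asio-grpc" ],
  "overrides": [
    { "name": "asio-grpc", "version": "<desired-version>" }
  ],
  "builtin-baseline": "<vcpkg-commit-containing-that-version>"
}
```

With this `vcpkg.json` next to the project, `vcpkg install` in manifest mode resolves asio-grpc to the pinned version instead of the latest one in the baseline.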
Hi,
Thank you for your awesome library.
It is a very convenient way to use Asio for writing single-threaded (but concurrent) applications without worrying about the problems of multi-threaded applications.
If I'm not mistaken, right now the only way to use agrpc is to instantiate a GrpcContext and run it on its own thread, which means we need to run the asio::io_context on a separate thread and deal with concurrency problems between the two.
Is there any plan to make it possible to reuse an asio::io_context for agrpc services?
Hi,
Thank you for writing this library.
I'm currently trying to use asio-grpc to implement a service that, as part of a request, can call back to a connected client to get additional data, dependency-injection style. This dependency-injection channel is a long-lived bidirectional streaming gRPC call. My problem is that the server logic calls into a normal pure virtual class (an interface) to request these values. AFAIK this rules out using co_await/co_return, since that would imply my interface has to return a coroutine. So I'm trying to figure out whether I can implement such an interface using co_yield, where the consumer of the values does not need to be a coroutine.
The server logic is triggered by another async gRPC call, but the server logic itself is not async.
I hope someone is able to help me to figure out if and how this is possible. Let me know if my description is not clear enough.
Best regards
grpc::CompletionQueue and grpc::Server to provide clean shutdown and multi-threading

Hello,
Firstly, great job adapting an un-ergonomic library to Asio. I have been debugging a problem all day where combining a streaming agrpc::read with any other awaitable using the || operator leads to the other awaitables not completing. It can be reproduced in the streaming-client bidirectional-stream example by changing the server so that it does not write to the stream, and awaiting something like an alarm alongside the read/write calls on the client, like below (after disabling the context timeout):
// ... line 86
std::variant<bool, std::tuple<bool, bool>> res =
co_await (agrpc::wait(alarm, expiry) ||
(agrpc::read(*reader_writer, response) && agrpc::write(*reader_writer, request)));
// ...
This will hang forever, beyond the expiry of the timer. Is this expected behavior? I understand there are timeouts on client contexts; however, as far as I can tell, using them leaves the stream in an unrecoverable state. Are reads "cancel safe", meaning they can be re-initiated without ill effects like here in Rust, or is this just not possible? The use case is a recurring ping-type request being sent while simultaneously receiving data. The only solution I can think of would be to co_spawn another task to do this, but I would like to do it with these operators if at all possible.
Thanks
Suppose I have a server streaming rpc:
rpc ServerStream(Req) returns (stream Resp);
When a client calls ServerStream, the server does some bookkeeping; when the client disconnects, the bookkeeping needs to be removed. Is there an API, let's say on_recv_client_close(release_function), that can register a callback for a client-closed event?
Thank you.
P.S. I know agrpc::write can indicate that the client is closed, but I want to get notified even when the server doesn't send anything. There is also the GRPC_OP_RECV_CLOSE_ON_SERVER op, but I don't know whether it helps.

First, let me be clear that I don't know much about the Boost library, so maybe my mistake lies there. The issue I'm facing: I have a server-streaming service that needs to send data every x seconds to a client. There can be multiple requests from the same client (as long as the request parameters differ), as well as multiple clients connected at the same time. To do that, I started a repeatedly_request as shown in issue #14:
boost::asio::system_context ctx;
auto guard = boost::asio::make_work_guard(ctx);
agrpc::repeatedly_request(&MarketDataAlert::AsyncService::Requestsubscribe, service,
boost::asio::bind_executor(grpc_context,
[&]<class T>(agrpc::RepeatedlyRequestContext<T>&& context)
{
boost::asio::co_spawn(
ctx,
[&, context = std::move(context)]()
{
auto args = context.args();
return std::invoke(handle_request, std::get<0>(args), std::get<1>(args), std::get<2>(args), grpc_context);
},
boost::asio::detached);
}));
In the handle_request coroutine, if the request is valid, I tried two different variations. The first is this:
co_await processRequest(server_context,
request,
writer, instrumentId, grpc_context);
This kind of works as I expected, except that I can only handle one request at a time, which makes it unviable for my use case.
The second attempt was to co_spawn processRequest, like so:
boost::asio::system_context ctx;
boost::asio::co_spawn(
ctx,
[&]() -> boost::asio::awaitable<void>
{
auto guard = boost::asio::make_work_guard(ctx);
co_await processRequest(server_context,
request,
writer, instrumentId, grpc_context);
},
boost::asio::detached);
But now, upon calling agrpc::write, I get a "pure virtual method called" exception. As stated in the documentation, since I'm using another context instead of the agrpc::GrpcContext, I always use bind_executor(grpc_context, asio::use_awaitable).
The processRequest method has the following structure:
while(request_ok) {
// do some work
co_await fill_response(server_context, request, response, instrumentId);
// do some work
request_ok = co_await agrpc::write(writer, response,
boost::asio::bind_executor(grpc_context, boost::asio::use_awaitable));
}
//after done work
bool finish_ok = co_await agrpc::finish(writer, grpc::Status::OK, boost::asio::bind_executor(grpc_context, boost::asio::use_awaitable));
I don't know how I should send you the stack trace, so I'll attach a screenshot from the IDE.
What am I doing wrong?
Thank you in advance
Hello. Thanks for the library!
asio-grpc/src/agrpc/detail/memory_resource.hpp:26:10: fatal error: 'memory_resource' file not found
#include <memory_resource>
^~~~~~~~~~~~~~~~~
1 error generated.
On my system the header is only available as #include <experimental/memory_resource>.
Make asio-grpc available in the conan-center-index. Figure out how to handle the different CMake targets/backends: Boost.Asio, standalone Asio, and libunifex. And of course the Boost.Container feature, which should be straightforward.
@sanblch, for your information.
Hi,
we have a project that already uses gRPC and links against gRPC::grpc++. We now want to introduce asio-grpc, which links against gRPC::grpc++_unsecure when using the generated asio-grpcConfig.cmake. This leads to problems: apparently it is not possible to link against both targets in the same binary.
I found this commit: 2731e5b
So first of all the question would be: why was this changed? Why is the _unsecure target the default?
We can fix our build by manually patching the asio-grpcConfig.cmake.in file and making sure that asio-grpc also links against gRPC::grpc++, but this is quite ugly and will break on any change to that file.
Would it be possible to add a CMake config variable so that we can optionally link asio-grpc against gRPC::grpc++ instead of the default gRPC::grpc++_unsecure?
I was wondering whether it is possible to use this library with callbacks, if desired, instead of coroutines.
I understand coroutines are more developer-friendly, but sometimes we may want callbacks instead.
Examples:
a callback on the server when it receives an RPC request
a callback on the client when it receives a response to a request
Are there existing APIs to do this? If so, how?