
tradias / asio-grpc


Asynchronous gRPC with Asio/unified executors

Home Page: https://tradias.github.io/asio-grpc/

License: Apache License 2.0

Languages: CMake 6.87% · C++ 93.12% · Shell 0.01%
Topics: cpp, grpc, asio, executors, asynchronous, coroutine, cpp17, cpp20, asynchronous-programming, sender-reciever

asio-grpc's People

Contributors

actions-user, tradias



asio-grpc's Issues

"Pure virtual method called" exception

First, let me be clear that I don't know much about the Boost library, so maybe my mistake lies there. The issue I'm facing: I have a server-streaming service that needs to send data every x seconds to a client. There can be multiple requests from the same client, as long as the parameters of the requests differ, as well as multiple clients connected at the same time. To do that, I start a repeatedly_request as shown in issue #14:

boost::asio::system_context ctx;
auto guard = boost::asio::make_work_guard(ctx);

agrpc::repeatedly_request(
    &MarketDataAlert::AsyncService::Requestsubscribe, service,
    boost::asio::bind_executor(
        grpc_context,
        [&]<class T>(agrpc::RepeatedlyRequestContext<T>&& context)
        {
            boost::asio::co_spawn(
                ctx,
                [&, context = std::move(context)]()
                {
                    auto args = context.args();
                    return std::invoke(handle_request, std::get<0>(args), std::get<1>(args),
                                       std::get<2>(args), grpc_context);
                },
                boost::asio::detached);
        }));

In the handle_request coroutine, if the request is valid, I tried two different variations. The first is this:

co_await processRequest(server_context, request, writer, instrumentId, grpc_context);

This kind of works as I expected, except that I can only handle one request at a time, which makes it unviable for my use case.
The second attempt was to co_spawn processRequest, like this:

boost::asio::system_context ctx;
boost::asio::co_spawn(
    ctx,
    [&]() -> boost::asio::awaitable<void>
    {
        auto guard = boost::asio::make_work_guard(ctx);
        co_await processRequest(server_context, request, writer, instrumentId, grpc_context);
    },
    boost::asio::detached);

But now, upon calling agrpc::write, I get a "pure virtual method called" exception. As stated in the documentation, since I'm using another context instead of the agrpc::GrpcContext, I always use bind_executor(grpc_context, asio::use_awaitable).
The processRequest method has the following structure:

while (request_ok)
{
    // do some work
    co_await fill_response(server_context, request, response, instrumentId);

    // do some work
    request_ok = co_await agrpc::write(writer, response,
                                       boost::asio::bind_executor(grpc_context, boost::asio::use_awaitable));
}
// after the work is done
bool finish_ok = co_await agrpc::finish(writer, grpc::Status::OK,
                                        boost::asio::bind_executor(grpc_context, boost::asio::use_awaitable));

I don't know how I should send you the stack trace, so I'll attach a screenshot from the IDE (Screenshot from 2022-05-06 10-17-50).

What am I doing wrong?
Thank you in advance

Improve PollContext for typical use case

The typical use of PollContext is with an asio::io_context{1}. This should be as performant and as easy to use as possible.

Also add option to run only the grpc::CompletionQueue of the GrpcContext.

Work takes place on the improve-poll-context branch.

Targets v1.7.0.

Add support for generic stub/service

Add convenience overloads to agrpc::request and agrpc::repeatedly_request for grpc::GenericServerAsyncReaderWriter, grpc::GenericClientAsyncReaderWriter and the like.

Also consider adding a generic server benchmark to grpc_bench.

Work is being done on the generic-rpcs branch.

Targets v1.7.0.

Threads and asio-grpc

Thank you for implementing this excellent project, which provides a consolidated way of executing async gRPC commands and sending/receiving TCP packets asynchronously with the Boost.Asio library. I just started using Boost.Asio recently and have a couple of questions about this library.

According to this link: https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html, multiple threads may call io_context::run() to set up a thread pool, and the io_context may distribute work across them. Does asio-grpc's execution_context also guarantee thread safety if a thread pool is enabled on it? I am using C++20 coroutines and assuming that each co_spawn will pick a thread from the pool and run the composed asynchronous operations on it. Correct me if my understanding is wrong. What if the composed asynchronous operations contain a blocking operation? It may block the running thread, so how can I prevent other co_spawn calls from using the blocked thread for execution? In addition, co_spawn can spawn from both an execution_context and an executor. I am guessing that spawning from an execution_context picks a new thread to run on, while spawning from an executor just runs on the thread that the executor is running on. Is my guess correct?

Meanwhile, #8 mentions that if you co_spawn a non-gRPC async operation like steady_timer from the grpc_context, it automatically spawns a second io_context thread. So it seems that asio-grpc internally maintains two threads, one for the gRPC execution_context and one for the io_context, to run async gRPC operations and other async non-gRPC operations. The last comment there says version 1.4 would also support asking an io_context to run an agrpc::GrpcContext. My application will serve many clients and, for each client's request, issue one single composed asynchronous operation containing one async gRPC call and several async TCP reads and writes to the server, then respond back to the client. Will asio-grpc guarantee that the gRPC operation and the TCP operations do not interleave when this single composed asynchronous operation is co_spawned from either the grpc_context or the io_context, given that they are two contexts on two threads? Also, does asio-grpc support a mode with a thread pool for the io_context and a single thread for the grpc_context, or with thread pools for both?

                      one single composed asynchronous operation
                     /                                           \
client1 --> { co_await async grpc operation, co_await async tcp operations } --> server
client2 --> { co_await async grpc operation, co_await async tcp operations } --> server
clientN ...

Hope to get some guidance from you. Thanks.

Asio-grpc as a subdirectory asio standalone example issue

Here it is https://github.com/Tradias/asio-grpc#usage

Section: As a subdirectory using standalone Asio:

find_package(gRPC)
find_package(asio)
add_subdirectory(/path/to/repository/root)
target_link_libraries(your_app PUBLIC gRPC::grpc++_unsecure asio-grpc::asio-grpc-standalone-asio asio::asio)

First of all, there is no find_package(asio) among the CMake modules. I replaced that line with:

# Standalone Asio
find_path(ASIO_INCLUDE_PATH asio.hpp HINTS
    "/usr/include"
    "/usr/local/include"
    "/opt/local/include"
)
if (NOT ASIO_INCLUDE_PATH)
    message(FATAL_ERROR "${Green}No ASIO found${Reset}")
endif()
message(STATUS "${Yellow}Found ASIO${Reset}: ${ASIO_INCLUDE_PATH}")
add_library(asio INTERFACE)
target_include_directories(asio
    INTERFACE ${ASIO_INCLUDE_PATH}
)
target_compile_definitions(asio
    INTERFACE ASIO_STANDALONE
)

Then I tried to test the option ASIO_GRPC_USE_BOOST_CONTAINER by adding -DASIO_GRPC_USE_BOOST_CONTAINER=ON to the cmake call.
Nothing changes. It looks like the file asio-grpc/cmake/AsioGrpcOptionDefaults.cmake is never executed.

Please test your library using add_subdirectory() and a pure CMake generate call.

cmake "find asio" weirdness

Hi,

Asio typically comes with Boost, and Boost does not install a "Findasio.cmake" file. This causes a CMake failure. (I'm using v3.25.1.) Chris Kohlhoff does not provide CMake files either.

To install through cmake, I ended up patching cmake/AsioGrpcFindPackages.cmake, replacing find_package(asio) with:

SET(_asio_grpc_asio_root "${CMAKE_PREFIX_PATH}/include/boost")

Note that CMAKE_PREFIX_PATH/include/boost is the most likely place to find the header asio.hpp.
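As an alternative to patching cmake/AsioGrpcFindPackages.cmake: when Asio comes bundled with Boost anyway, using the Boost.Asio backend sidesteps find_package(asio) entirely. A sketch, assuming the asio-grpc::asio-grpc target name shown elsewhere in the README:

```cmake
# Hedged sketch: link the Boost.Asio backend so no Findasio.cmake is needed.
find_package(Boost REQUIRED)
find_package(asio-grpc CONFIG REQUIRED)
target_link_libraries(your_app PUBLIC asio-grpc::asio-grpc Boost::headers)
```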

Conan support

Make asio-grpc available in the conan-center-index. Figure out how to handle the different CMake targets/backends: Boost.Asio, standalone Asio and libunifex. And of course the Boost.Container feature, which should be straightforward.

@sanblch for your information.

Question: is it possible to implement a server to client request, using a bidirectional-streaming channel and exposed as a standard C++ class/interface?

Hi,

Thank you for writing this library.

I'm currently trying to use asio-grpc to implement a service that, as part of a request, can call back to a connected client to get additional data, dependency-injection style. This dependency-injection channel is a long-lived bidirectional streaming gRPC call. My problem is that the server logic calls into a normal pure virtual class (interface) to request these values. AFAIK this rules out co_await/co_return, since they would imply that my interface returns a coroutine. So I'm trying to figure out whether I can implement such an interface using co_yield, where the consumer of the values does not need to be a coroutine.

The server logic is triggered by another async gRPC call, but the server logic itself is not async.

I hope someone is able to help me figure out if and how this is possible. Let me know if my description is not clear enough.

Best regards

Trying to understand how other Boost.Asio non-gRPC async work runs in the same thread as gRPC

I've been trying to understand how your excellent library is able to execute async operations (for example, a steady_timer) on the same main thread as gRPC.

I ran the hello-world-server-cpp20.cpp in a debugger to help my understanding.

Part of my initial confusion is that I see in grpcContext.ipp/get_next_event() that it seems to block only on the gRPC completion queue (the call to get_completion_queue()->AsyncNext()), so how could it unblock on other async events that are not gRPC?

Then using the debugger I found that at the start of main() of hello-world-server-cpp20.cpp that this statement:

    boost::asio::basic_signal_set signals{grpc_context, SIGINT, SIGTERM};

spawns a 2nd ASIO thread.

Then I see that when another non-gRPC async operation, such as a steady_timer, expires, it somehow wakes up this 2nd Asio thread. That thread then posts a gRPC alarm with an immediate deadline to the gRPC completion queue, which unblocks the main thread and allows the handler for the steady_timer to execute. Is this the proper understanding?

So would any (not just steady_timer) non-gRPC async completion handler wake up that 2nd thread, which then sends an immediate gRPC alarm to wake up the completion handler in the main thread? Is that how you get non-gRPC async completion handlers to execute?

Not knowing Boost.Asio so well, I suppose this 2nd thread is always there and wakes up the main thread, even when using a basic io_context and not an overridden execution_context? Or is this 2nd thread somehow created because you have overridden the basic io_context/execution_context?

User-facing mock code test utilities

When the user generates client-side mock code (e.g. through the GENERATE_MOCK_CODE option of asio_grpc_protobuf_generate), they need a way of dealing with the asio-grpc-provided void* tags. Implement a function that completes those tags immediately.

Work takes place on the client-mock-test-utils branch.

Targets v1.7.0.

How to get callbacks if desired instead of coroutines?

I was wondering if it is possible to use this library to get callbacks if desired instead of coroutines?

I understand coroutines are more developer friendly, but sometimes we may want callbacks instead.

Examples:
a callback on server when server receives an rpc request
a callback on client when client receives a response to a request

Are there existing APIs to do this? If so how?

shared-io-context usage problem.

The problem: if the io_context has not been used before agrpc::run, then a callback submitted with asio::post(io_context, cb) from another thread is never invoked...

void test_func()
{
    asio::io_context io_context{1};

    example::v1::Example::Stub stub{grpc::CreateChannel(host, grpc::InsecureChannelCredentials())};
    agrpc::GrpcContext grpc_context{std::make_unique<grpc::CompletionQueue>()};

    asio::post(io_context,
               [&]
               {
                   // Without this line, asio::post(io_context, cb) from another thread can't work.
                   auto worker = asio::make_work_guard(io_context);
                   io_context.get_executor().on_work_finished();
                   agrpc::run(grpc_context, io_context);
                   io_context.get_executor().on_work_started();
               });
    io_context.run();
}

assertion GRPC_CALL_ERROR_TOO_MANY_OPERATIONS in the example server code

Hi,
I am trying to run the example-server.cpp and example-client.cpp examples from version 1.1.2 (installed using vcpkg with the boost-container feature).

I got an assertion error GRPC_CALL_ERROR_TOO_MANY_OPERATIONS at a line in the server code (screenshot attached).

Here is the stack trace when the assertion occurs (screenshot attached), along with detailed information about the assertion (screenshot attached).

Do you have any idea why the bug happens?

Multi-threaded server and health check

Hi,

I tried the example/multi-threaded-server with DefaultHealthCheckService enabled and found that after I call grpc::EnableDefaultHealthCheckService(true) before starting the server, all handlers run in only one thread. What could be causing this and how can I fix it?

compiling problem with versions installed by vcpkg

Hi, thanks for creating a wonderful framework. It has made my life much easier.

I have used your framework for a few months and now need to set up our project on a new machine. The installation succeeds with the following command:

./vcpkg install asio-grpc[boost-container]:x64-linux

However, when I compile my project, I get the following errors (screenshots attached).

Do you have any idea why this error happens?

I look forward to hearing from you soon.

Thanks

GrpcContext.run_while(Condition)

Motivation

When invoking a synchronous function, it is possible to initiate some work on the GrpcContext and wait for the result using a std::promise:

Response get_synchronous()
{
    std::promise<void> promise;
    auto future = promise.get_future();
    Response response;
    agrpc::read(reader, response,
                [&](bool)
                {
                    promise.set_value();
                });
    future.get();
    return response;
}

But if we invoke get_synchronous from the thread that runs the GrpcContext, then we end up in a deadlock. The proposed GrpcContext.run_while() can be used to prevent that:

Response get_synchronous()
{
    if (grpc_context.get_executor().running_in_this_thread()) {
        // use promise based get_synchronous
    } else {
        bool is_result_ready{};

        // Initiate some IO work
        Response response;
        agrpc::read(reader, response, 
                    [&](bool)
                    {
                        is_result_ready = true;
                    });

        // Wait for the work to finish
        grpc_context.run_while([&]() { return !is_result_ready; });

        return response;
    }
}

Nice to find this work to make async grpc in c++ more user-friendly

It's exciting to find an existing repo that integrates gRPC with Asio and C++20 coroutines. I have always appreciated async gRPC interfaces like the ones gRPC for .NET provides.

By the way, is there any plan to adapt this repo to the (maybe) C++23 executors/networking facilities once they land?

Streaming reads incompatible with `||` awaitable operator

Hello,
Firstly, great job adapting an un-ergonomic library to asio. I have been debugging a problem all day where combining a streaming agrpc::read with any other future with the awaitable || operator leads to the other futures not completing. It can be reproduced in the streaming-client bidirectional stream by changing the server such that it does not write to the stream, and awaiting on something like an alarm alongside the read/write calls on the client like below (after disabling the context timeout):

// ... line 86
        std::variant<bool, std::tuple<bool, bool>> res =
            co_await (agrpc::wait(alarm, expiry) || 
                (agrpc::read(*reader_writer, response) && agrpc::write(*reader_writer, request)));
// ...

This hangs forever, beyond the expiry of the timer. Is this expected behavior? I understand there are timeouts on client contexts, but as far as I can tell this leaves the stream in an unrecoverable state. Are reads "cancel safe", meaning they can be re-initiated without ill effects like in Rust, or is this just not possible? The use case is a recurring ping-type request being sent while simultaneously receiving data. The only solution I can think of would be to co_spawn another task to do this, but I would like to do it with these operators if at all possible.

Thanks

Add support for systemd socket activation

Feature request: Add support for systemd socket activation (LISTEN_FDS, LISTEN_FDNAMES)

The advantages are for instance

  • Native network performance over the socket-activated socket when running rootless Podman. Network traffic is normally passed through the program slirp4netns, which comes with a performance penalty.

  • There is also a security advantage when running socket-activated containers with Podman. It's sometimes possible to run the container with --network=none. I wrote a blog about it https://www.redhat.com/sysadmin/socket-activation-podman

  • The possibility to stop on inactivity and the starting when the next client connects. (The functionality to "stop on inactivity" would need to be implemented in asio-grpc though)

  • In the future it might be possible to run a socket-activated container with Podman as a systemd system service that makes use of User=. Then it would be possible to use port numbers below 1024 without having to run sudo sh -c "echo 0 > /proc/sys/net/ipv4/ip_unprivileged_port_start". Unfortunately, I think there are still some hurdles to get this working.

How to modify the hello-world-client-cpp20 example to unblock on timeout or response?

In hello-world-client-cpp20.cpp, how could it be modified to have a timeout getting the response?

Am I correct that the:

co_await agrpc::finish(*reader, response, status);

Would wait indefinitely for a response?

How can I make one 'blocking' (actually co_await) call that resumes execution upon either timeout or reception of the response?

Then, how would I know whether it 'unblocked' due to timeout or response reception?

Also, what if the server later sends the response after the timeout? Would it get thrown away or could it get accidentally processed as the response to the next request?

High-level server API

  • Create I/O object for server-side requests: unary and streaming. Similar to the high-level client API.
  • Figure out what API to provide for attaching request handler. E.g. introspect one user-provided ServiceHandler class and register repeatedly_request for all of them automatically. Or let the user register a handler per endpoint themselves, like with repeatedly_request at the moment.
  • Allow users to put implementation of their request handler into cpp file.
  • Nicely integrate with tracing/metrics/logging/load balancing, like opentelemetry, opencensus, ORCA, xDS, etc.
  • Consider owning the grpc::CompletionQueue and grpc::Server to provide clean shutdown and multi-threading
  • Allow per-request ServerContext configuration, e.g. to enable compression
  • Support AsyncNotifyWhenDone

None of the methods works for compilation/build

I am using WSL Ubuntu 20.04.x. I have tried all the possible ways of compiling and building.
Nothing works. The most recent attempt, with vcpkg, fails with the errors below.
I can also post the hunter and conan errors, if that is helpful.
Being a newbie, I am sure I must have missed something.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(expr)
add_compile_options(-Wall -ggdb -std=c++14 -pthread)
find_package(asio-grpc CONFIG REQUIRED)
add_executable(boo boo.cpp)
target_link_libraries(boo PUBLIC asio-grpc::asio-grpc)

ERROR

➜  build cmake -DCMAKE_TOOLCHAIN_FILE=/home/bbhushan/tools/vcpkg/scripts/buildsystems/vcpkg.cmake ..
CMake Error at CMakeLists.txt:4 (find_package):
  Could not find a package configuration file provided by "asio-grpc" with
  any of the following names:

    asio-grpcConfig.cmake
    asio-grpc-config.cmake

  Add the installation prefix of "asio-grpc" to CMAKE_PREFIX_PATH or set
  "asio-grpc_DIR" to a directory containing one of the above files.  If
  "asio-grpc" provides a separate development package or SDK, be sure it has
  been installed.


-- Configuring incomplete, errors occurred!
See also "/home/bbhushan/work/expr/build/CMakeFiles/CMakeOutput.log".

vcpkg list

➜  vcpkg git:(master) ./vcpkg list | grep "asio\|grpc"
asio-grpc:x64-linux                               2.3.0               Asynchronous gRPC with Asio/unified executors
asio:x64-linux                                    1.24.0              Asio is a cross-platform C++ library for network...
boost-asio:x64-linux                              1.80.0#2            Boost asio module
grpc:x64-linux                                    1.50.1              An RPC library and framework
grpc[codegen]:x64-linux                                               Build code generator machinery

cmake VERSION

➜  vcpkg git:(master) cmake --version
cmake version 3.16.3

I/O-object for grpc::Alarm

To make working with grpc::Alarm safer and more flexible, we should have an I/O object similar to agrpc::RPC.

agrpc::Alarm alarm{grpc_context};
bool ok = co_await alarm.wait(deadline, asio::use_awaitable);

Additionally, it would be nice to have a && overload where the alarm keeps itself alive for the duration of the asynchronous operation. The sizeof a grpc::Alarm is 24 bytes, which seems affordable.

auto [alarm, ok] = co_await agrpc::Alarm(grpc_context).wait(deadline, asio::use_awaitable);

generic CMake on Linux

I'm hitting multiple issues using generic CMake (no package managers) on Fedora 36 and Ubuntu 22. I can make it work with changes.

Are you open to CMake changes/PRs?

Is there a way to build helloworld server code

I tried extensively with following code. Get compilation error as in this post.
any help is appreciated. Thanks you @Tradias

Code

    15  #include "zprobe.grpc.pb.h"
    16
    17  #include <agrpc/asio_grpc.hpp>
    18  #include <boost/asio/co_spawn.hpp>
    19  #include <boost/asio/detached.hpp>
    20  #include <boost/asio/signal_set.hpp>
    21  #include <grpcpp/server.h>
    22  #include <grpcpp/server_builder.h>
    23
    24  #include <optional>
    25  #include <thread>
    26  namespace asio = boost::asio;
    27
    28
    29  // begin-snippet: server-side-helloworld
    30  // ---------------------------------------------------
    31  // Server-side hello world which handles exactly one request from the client before shutting down.
    32  // ---------------------------------------------------
    33  // end-snippet
    34  int main(int argc, const char** argv)
    35  {
    36      const auto port = argc >= 2 ? argv[1] : "50051";
    37      const auto host = std::string("0.0.0.0:") + port;
    38
    39      std::unique_ptr<grpc::Server> server;
    40
    41      grpc::ServerBuilder builder;
    42      agrpc::GrpcContext grpc_context{builder.AddCompletionQueue()};
    43      builder.AddListeningPort(host, grpc::InsecureServerCredentials());
    44      zprobe::ProbeService::AsyncService service;
    45      builder.RegisterService(&service);
    46      server = builder.BuildAndStart();
    47
    48      asio::co_spawn(
    49          grpc_context,
    50          [&]() -> asio::awaitable<void>
    51          {
    52              grpc::ServerContext server_context;
    53              helloworld::HelloRequest request;
    54              grpc::ServerAsyncResponseWriter<helloworld::HelloReply> writer{&server_context};
    55              co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service, server_context,
    56                                      request, writer, asio::use_awaitable);
    57              helloworld::HelloReply response;
    58              response.set_message("Hello " + request.name());
    59              co_await agrpc::finish(writer, response, grpc::Status::OK, asio::use_awaitable);
    60          },
    61          asio::detached);
    62
    63      grpc_context.run();
    64
    65      server->Shutdown();
    66  }

CMAKE -- Success

cmake .. "-DCMAKE_TOOLCHAIN_FILE=~/tools/vcpkg/scripts/buildsystems/vcpkg.cmake" "-DCMAKE_PREFIX_PATH=$MY_INSTALL_DIR"

CMakeLists.txt

target_link_libraries(zprobe
  PUBLIC zprobe_grpc_proto
  ${_REFLECTION}
  ${_GRPC_GRPCPP}
  ${_PROTOBUF_LIBPROTOBUF}
  asio-grpc::asio-grpc-standalone-asio)

ERROR on make

[ 83%] Building CXX object CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:26:29: error: 'namespace asio = boost::boost::asio;' conflicts with a previous declaration
   26 | namespace asio = boost::asio;
      |                             ^
In file included from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution/allocator.hpp:19,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/execution.hpp:18,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/any_io_executor.hpp:22,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/asio_forward.hpp:24,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/detail/default_completion_token.hpp:18,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/default_completion_token.hpp:19,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/alarm.hpp:18,
                 from /home/bbhushan/tools/vcpkg/installed/x64-linux/include/agrpc/asio_grpc.hpp:33,
                 from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:17:
/home/bbhushan/tools/vcpkg/installed/x64-linux/include/asio/detail/type_traits.hpp:51:11: note: previous declaration 'namespace asio { }'
   51 | namespace asio {
      |           ^~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp: In function 'int main(int, const char**)':
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:48:11: error: 'co_spawn' is not a member of 'asio'
   48 |     asio::co_spawn(
      |           ^~~~~~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:24: error: 'awaitable' in namespace 'asio' does not name a template type
   50 |         [&]() -> asio::awaitable<void>
      |                        ^~~~~~~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:33: error: expected '{' before '<' token
   50 |         [&]() -> asio::awaitable<void>
      |                                 ^
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:50:34: error: expected primary-expression before 'void'
   50 |         [&]() -> asio::awaitable<void>
      |                                  ^~~~
/home/bbhushan/work/zprobe/grpc_asio_server.cpp:61:15: error: 'detached' is not a member of 'asio'; did you mean 'boost::asio::detached'?
   61 |         asio::detached);
      |               ^~~~~~~~
In file included from /home/bbhushan/work/zprobe/grpc_asio_server.cpp:19:
/home/bbhushan/tools/vcpkg/installed/x64-linux/include/boost/asio/detached.hpp:103:22: note: 'boost::asio::detached' declared here
  103 | constexpr detached_t detached;
      |                      ^~~~~~~~
make[2]: *** [CMakeFiles/zprobe.dir/build.make:76: CMakeFiles/zprobe.dir/grpc_asio_server.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/zprobe.dir/all] Error 2
make: *** [Makefile:91: all] Error 2

Which compile definitions are recommended for clang?

Which compile definitions are recommended for a clang build on Linux?

I already figured out that I need to set on my cmake line:

-DASIO_GRPC_USE_BOOST_CONTAINER=1

But I wonder if there are any others?

I'm using a slightly older version of asio-grpc; it would take me a little effort to update, so I don't want to have to do that. I'm using commit a17b559. I don't know if this is the reason?

The reason I ask is that when I use clang 10.0.1, I get a clang crash when trying to build hello-world-server-cpp20:

/gitworkspace/distributions/clang/10.0.1/bin/clang++ -DBOOST_ALL_NO_LIB -DCARES_STATICLIB -I/gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/native.build/src/generated -isystem /gitworkspace/rbresali/mdt/mdt_stage/native.stage/usr/local/include -stdlib=libc++ -fPIC -g -Wall -Werror -DBOOST_THREAD_VERSION=4 -save-temps=obj -std=gnu++2a -MD -MT src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -MF src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o.d -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.cpp.o -c /gitworkspace/rbresali/mdt/mdt_example/asio_grpc_example/src/hello-world-server-cpp20.cpp
Stack dump:
0.	Program arguments: /vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10 -cc1 -triple x86_64-unknown-linux-gnu -S -save-temps=obj -disable-free -disable-llvm-verifier -discard-value-names -main-file-name hello-world-server-cpp20.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mframe-pointer=all -fmath-errno -fno-rounding-math -masm-verbose -mconstructor-aliases -munwind-tables -target-cpu x86-64 -dwarf-column-info -fno-split-dwarf-inlining -debug-info-kind=limited -dwarf-version=4 -debugger-tuning=gdb -resource-dir /vol/dwdmgit_distributions/clang/10.0.1/lib64/clang/10.0.1 -Wall -Werror -std=gnu++2a -fdebug-compilation-dir /gitworkspace/rbresali-mdt_211028.110446/mdt_example/asio_grpc_example/native.build -ferror-limit 19 -fmessage-length 0 -fgnuc-version=4.2.1 -fobjc-runtime=gcc -fdiagnostics-show-option -faddrsig -o src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.s -x ir src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc 
1.	Code generation
2.	Running pass 'Function Pass Manager' on module 'src/CMakeFiles/asio-grpc-hello-world-server-cpp20.dir/hello-world-server-cpp20.bc'.
3.	Running pass 'X86 DAG->DAG Instruction Selection' on function '@"_ZN5boost4asio6detail20co_spawn_entry_pointINS0_15any_io_executorEZ4mainE3$_0NS1_16detached_handlerEEENS0_9awaitableINS1_28awaitable_thread_entry_pointET_EEPNS6_IvS8_EES8_T0_T1_"'
 #0 0x00000000016b6e24 PrintStackTraceSignalHandler(void*) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b6e24)
 #1 0x00000000016b4b8e llvm::sys::RunSignalHandlers() (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b4b8e)
 #2 0x00000000016b7225 SignalHandler(int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x16b7225)
 #3 0x00007f5f529b1630 __restore_rt (/lib64/libpthread.so.0+0xf630)
 #4 0x00000000021de359 llvm::DAGTypeLegalizer::getTableId(llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de359)
 #5 0x00000000021de216 llvm::DAGTypeLegalizer::RemapValue(llvm::SDValue&) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21de216)
 #6 0x00000000021dd97f llvm::DAGTypeLegalizer::ReplaceValueWith(llvm::SDValue, llvm::SDValue) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21dd97f)
 #7 0x00000000021e01ac llvm::DAGTypeLegalizer::DisintegrateMERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x21e01ac)
 #8 0x0000000002236e99 llvm::DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(llvm::SDNode*, unsigned int) (/vol/dwdmgit_distributions/clang/10.0.1/bin/clang-10+0x2236e99)
 #9 0x00007ffe5377a490 
clang-10: error: unable to execute command: Segmentation fault (core dumped)
clang-10: error: clang frontend command failed due to signal (use -v to see invocation)
clang version 10.0.1 
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /gitworkspace/distributions/clang/10.0.1/bin
clang-10: note: diagnostic msg: PLEASE submit a bug report to https://bugs.llvm.org/ and include the crash backtrace, preprocessed source, and associated run script.
clang-10: note: diagnostic msg: 
********************

PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.cpp
clang-10: note: diagnostic msg: /gitworkspace/rbresali/tmp/hello-world-server-cpp20-31228c.sh
clang-10: note: diagnostic msg: 

********************

How to get notified when the client closes

Suppose I have a server streaming rpc:

rpc ServerStream(Req) returns (stream Resp);

When a client calls ServerStream, the server does some bookkeeping; when the client disconnects, that bookkeeping needs to be removed. Is there an API, say on_recv_client_close(release_function), that registers a callback for a client-closed event?

Thank you.
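For reference, plain gRPC exposes a cancellation hook that could cover this. A schematic sketch using the raw API (the tag handling below is simplified, and `done_tag` is a hypothetical completion-queue tag; newer asio-grpc versions may wrap this, so check the docs of your version):

```cpp
grpc::ServerContext server_context;
// Must be called before the RPC is started with agrpc::request:
// done_tag is put on the completion queue once the call finishes
// or the client disconnects.
server_context.AsyncNotifyWhenDone(done_tag);
// ... agrpc::request(...) and the usual read/write loop ...
// When done_tag completes, server_context.IsCancelled() tells you
// whether the client went away before the RPC finished normally,
// which is the point to run the bookkeeping cleanup.
```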


Compiler error trying to use asio::experimental::use_promise as completion token

I'm trying to use a promise as the completion token to agrpc methods, and getting a compiler error using gcc 11. Here is a minimal example, based off your streaming-server.cpp example:

// additional includes required:
#include <asio/experimental/promise.hpp>
#include <asio/this_coro.hpp>

asio::awaitable<void> handle_bidirectional_streaming_request(example::v1::Example::AsyncService& service)
{
    grpc::ServerContext server_context;
    grpc::ServerAsyncReaderWriter<example::v1::Response, example::v1::Request> reader_writer{&server_context};
    bool request_ok = co_await agrpc::request(&example::v1::Example::AsyncService::RequestBidirectionalStreaming,
                                              service, server_context, reader_writer);
    if (!request_ok)
    {
        // Server is shutting down.
        co_return;
    }
    example::v1::Request request;

    // none of the following work as COMPLETIONTOKEN; this line fails to compile with each of them:
    // asio::experimental::use_promise
    // asio::experimental::use_promise_t<agrpc::GrpcContext>{}
    // asio::experimental::use_promise_t<agrpc::GrpcContext::executor_type>{}
    // asio::experimental::use_promise_t<agrpc::s::BasicGrpcExecutor<>>{}
    // asio::experimental::use_promise_t<asio::this_coro::executor_t>{}
    auto&& read_promise = agrpc::read(reader_writer, COMPLETIONTOKEN);

    co_await read_promise.async_wait(asio::use_awaitable);
}

The use case is that later in the function I want to simultaneously await any of three conditions:
a new request from the client, a finished write of a response to the client, or a new response ready from the data-processing thread pool:

auto&& write_promise = agrpc::write(rw, response, COMPLETIONTOKEN);
auto&& data_ready_promise = // asynchronously dispatch work to data processing thread pool
auto rwd_promise = asio::experimental::promise<>::all(
    std::forward<decltype(read_promise)>(read_promise),
    std::forward<decltype(write_promise)>(write_promise),
    std::forward<decltype(data_ready_promise)>(data_ready_promise)
);
std::tie(read_ok, write_ok, data_ready_ok) = co_await rwd_promise.async_wait(asio::use_awaitable);

Clang 14 and 15 build error

Hello. Thanks for the library!

asio-grpc/src/agrpc/detail/memory_resource.hpp:26:10: fatal error: 'memory_resource' file not found
#include <memory_resource>
         ^~~~~~~~~~~~~~~~~
1 error generated.

On my system this header is only available as #include <experimental/memory_resource>

Failed to compile the latest version with c++17

Hi, we have tried to install asio-grpc 1.3.1 using https://vcpkg.info. Building it works, but importing the lib into our C++14 project caused a compile problem: the header file <version> does not exist in C++14.

#include <version>

we are using msvc in windows 10

We cannot easily upgrade our big project, with its many libraries, to C++20. Is there a way to install asio-grpc 1.21 using vcpkg?
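For what it's worth, vcpkg's manifest mode can pin a port to an older release via an override. A sketch of a vcpkg.json (the version string and baseline commit are placeholders; substitute the exact release you need and a vcpkg commit whose registry knows that version):

```json
{
  "name": "my-project",
  "version": "0.1.0",
  "dependencies": [ "asio-grpc" ],
  "builtin-baseline": "<a-vcpkg-git-commit>",
  "overrides": [
    { "name": "asio-grpc", "version": "<the-version-you-need>" }
  ]
}
```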
Thanks

Using asio::io_context in single threaded applications

Hi,
Thank you for your awesome library.
It is a very convenient way to use Asio for writing single-threaded (but concurrent) applications without worrying about the problems of multi-threaded ones.
If I'm not mistaken, right now the only way to use agrpc is to instantiate a GrpcContext and run it on its own thread, which means we need to run the asio::io_context on a separate thread and deal with the concurrency problems between them.
Is there any plan to make it possible to reuse an asio::io_context for agrpc services?
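For what it's worth, the repository's share-io-context examples suggest this is possible via agrpc::run, which drives both contexts from a single thread. A rough sketch (details vary by asio-grpc version; see example/share-io-context-client.cpp):

```cpp
boost::asio::io_context io_context{1};
agrpc::GrpcContext grpc_context;
// ... register gRPC work on grpc_context and Asio work on io_context ...
// Polls the GrpcContext from within the io_context's thread, so one
// thread serves both; returns when both contexts run out of work.
agrpc::run(grpc_context, io_context);
```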

share io_context example

Hi,
first, I would like to thank you for a great library.
By looking at the examples, it seems that in order to share the asio::io_context between gRPC and non-gRPC operations, we should replace every io_context with a GrpcContext. Is that a correct assumption?
thank you.

Build with only the compiler (no cmake or any other build tools)

Hi, would you mind supporting a usage where the sources are included by the compiler directly, rather than consumed as a CMake package?

For example, let's say I have a server.cpp file, compiling with the following command without installing:

g++ -std=c++17 -fcoroutines \
  -DAGRPC_STANDALONE_ASIO \
  -DASIO_HAS_STD_COROUTINE -DASIO_HAS_CO_AWAIT \
  -Iasio-grpc/src -Iasio/asio/include -I</path/to/any/other/includes> \
  server.cpp \
  -lgrpc++ -lgpr -lgrpc++_reflection -lprotobuf -lpthread

The changes needed to support this seem small. The only problem is that "memory_resource.hpp" is not found, since it is generated by CMake from src/agrpc/detail/memory_resource.hpp.in. A seemingly reasonable fix is to use a preprocessor definition instead of a CMake variable:

// src/agrpc/detail/memory_resource.hpp.in
@ASIO_GRPC_MEMORY_RESOURCE_INCLUDES@

// =>

// src/agrpc/detail/memory_resource.hpp
#ifdef ASIO_GRPC_USE_BOOST_CONTAINER
namespace pmr = boost::container::pmr;
namespace container = boost::container;
#else
namespace pmr = std::pmr;
namespace container = std;
#endif

And then compile with g++ -DASIO_GRPC_USE_BOOST_CONTAINER ...

single-threaded asio client falls into an infinite loop when integrated with asio-grpc

I am seeing a tight loop with the stack trace below. The application is a Boost.Asio single-threaded app which is now integrated with asio-grpc. The Boost version is 1.81. I have followed the client example in the examples directory (share-io-context-client.cpp).
Please shed some light on what could possibly be going wrong.

(gdb) bt
#0 0x00007fb21bf778e9 in __GI___libc_realloc (oldmem=0x557a957fd180, bytes=90) at ./malloc/malloc.c:3496
#1 0x00007fb21cd7d8e5 in gpr_realloc () from /lib/x86_64-linux-gnu/libgrpc.so.10
#2 0x00007fb21cd06fe5 in grpc_error_string(grpc_error*) () from /lib/x86_64-linux-gnu/libgrpc.so.10
#3 0x00007fb21cd60518 in ?? () from /lib/x86_64-linux-gnu/libgrpc.so.10
#4 0x00007fb21ce5015c in grpc_impl::CompletionQueue::AsyncNextInternal(void**, bool*, gpr_timespec) () from /lib/x86_64-linux-gnu/libgrpc++.so.1
#5 0x0000557a66c7cddd in grpc_impl::CompletionQueue::AsyncNext<gpr_timespec> (this=0x557a6766ddf0, tag=0x7fff16a2e388, ok=0x7fff16a2e390, deadline=...) at /usr/include/grpcpp/impl/codegen/completion_queue_impl.h:198
#6 0x0000557a66c7ccd0 in agrpc::b::detail::get_next_event (cq=0x557a6766ddf0, event=..., deadline=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:165
#7 0x0000557a66c7c6c4 in agrpc::b::detail::GrpcContextImplementation::handle_next_completion_queue_event (grpc_context=..., deadline=..., invoke=agrpc::b::detail::InvokeHandler::YES)
at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:173
#8 0x0000557a66c7c4bd in agrpc::b::detail::GrpcContextImplementation::do_one<agrpc::b::detail::IsGrpcContextStoppedPredicate> (grpc_context=..., deadline=..., invoke=agrpc::b::detail::InvokeHandler::YES, stop_predicate=...)
at opensource/base/asio-grpc-2.4.0/src/agrpc/detail/grpc_context_implementation.ipp:214
#9 0x0000557a66c7ba9a in agrpc::b::detail::GrpcContextDoOne::poll (grpc_context=..., deadline=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:164
#10 0x0000557a66c7b6fa in agrpc::b::detail::run_impl<agrpc::b::detail::GrpcContextDoOne, agrpc::b::DefaultRunTraits, boost::asio::io_context, agrpc::b::detail::AreContextsStoppedCondition<boost::asio::io_context> > (grpc_context=...,
execution_context=..., stop_condition=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:191
#11 0x0000557a66c7b5bd in agrpc::b::run<agrpc::b::DefaultRunTraits, boost::asio::io_context, agrpc::b::detail::AreContextsStoppedCondition<boost::asio::io_context> > (grpc_context=..., execution_context=..., stop_condition=...)
at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:214
#12 0x0000557a66c7b545 in agrpc::b::run<agrpc::b::DefaultRunTraits, boost::asio::io_context> (grpc_context=..., execution_context=...) at opensource/base/asio-grpc-2.4.0/src/agrpc/run.hpp:207
#13 0x0000557a668c11d2 in main::$_3::operator() (this=0x7fff16a2e718) at
#14 0x0000557a668c1185 in boost::asio::detail::binder0<main::$_3>::operator() (this=0x7fff16a2e718) at opensource/base/boost_1_81_0/boost/asio/detail/bind_handler.hpp:60
#15 0x0000557a668c1165 in boost::asio::asio_handler_invoke<boost::asio::detail::binder0<main::$_3> > (function=...) at opensource/base/boost_1_81_0/boost/asio/handler_invoke_hook.hpp:88
#16 0x0000557a668c113f in boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder0<main::$_3>, main::$_3> (function=..., context=...) at opensource/base/boost_1_81_0/boost/asio/detail/handler_invoke_helpers.hpp:54
#17 0x0000557a668c110d in boost::asio::detail::asio_handler_invoke<boost::asio::detail::binder0<main::$_3>, main::$_3> (function=..., this_handler=0x7fff16a2e718) at opensource/base/boost_1_81_0/boost/asio/detail/bind_handler.hpp:111
#18 0x0000557a668c0ffd in boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder0<main::$_3>, boost::asio::detail::binder0<main::$_3> > (function=..., context=...)
at opensource/base/boost_1_81_0/boost/asio/detail/handler_invoke_helpers.hpp:54
#19 0x0000557a668c12d3 in boost::asio::detail::executor_op<boost::asio::detail::binder0<main::$_3>, std::allocator<void>, boost::asio::detail::scheduler_operation>::do_complete (owner=0x557a67661f00, base=0x557a97e2c740)
at opensource/base/boost_1_81_0/boost/asio/detail/executor_op.hpp:70
#20 0x0000557a6697293e in boost::asio::detail::scheduler_operation::complete (this=0x557a97e2c740, owner=0x557a67661f00, ec=..., bytes_transferred=0) at opensource/base/boost_1_81_0/boost/asio/detail/scheduler_operation.hpp:40
#21 0x0000557a669722a7 in boost::asio::detail::scheduler::do_run_one (this=0x557a67661f00, lock=..., this_thread=..., ec=...) at opensource/base/boost_1_81_0/boost/asio/detail/impl/scheduler.ipp:492
#22 0x0000557a66971d77 in boost::asio::detail::scheduler::run (this=0x557a67661f00, ec=...) at opensource/base/boost_1_81_0/boost/asio/detail/impl/scheduler.ipp:210
#23 0x0000557a66954e9e in boost::asio::io_context::run (this=0x7fff16a2ff78) at opensource/base/boost_1_81_0/boost/asio/impl/io_context.ipp:63
#24 0x0000557a668bf69c in main (argc=40, argv=0x7fff16a33c18)

Question: why does the steady_timer callback happen right away?

Hi:

To test mixing generic asio with asio-grpc I did the following:

I modified hello-world-server-cpp20.cpp to add a steady_timer that would call an asynchronous timer callback 5 seconds after the server gets the request.

But the timer callback is getting called right away, not waiting 5 seconds. I tried this without asio-grpc following the boost tutorial and it does delay 5 seconds: https://www.boost.org/doc/libs/1_77_0/doc/html/boost_asio/tutorial/tuttimer2/src.html

So I added the 3 lines below marked with a *:

...
                bool request_ok = co_await agrpc::request(&helloworld::Greeter::AsyncService::RequestSayHello, service,
                                                          server_context, request, writer);
*               printf("Got request\n");
*               boost::asio::steady_timer t(grpc_context, boost::asio::chrono::seconds(5));
*               t.async_wait(&timer_callback);
                if (!request_ok)
                {
                    co_return;
                }
...

My timer_callback() is simply this:

void timer_callback(const boost::system::error_code& /*e*/)
{
  printf("Inside timer callback\n");
}

When I launch the client and it sends the request, the timer callback is called immediately; it doesn't wait 5 seconds.

Do you have any idea why? Is there a problem using asio-grpc with steady_timer?
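One likely culprit, shown as a sketch: an Asio timer that goes out of scope cancels any pending wait and invokes the handler immediately with operation_aborted, and the handler above never checks the error code. A handler that distinguishes the two cases might look like this (assuming the timer stays alive, e.g. via co_await t.async_wait(boost::asio::use_awaitable), the wait completes only after the full delay):

```cpp
void timer_callback(const boost::system::error_code& e)
{
    if (e == boost::asio::error::operation_aborted)
    {
        // The timer was cancelled, e.g. destroyed before it expired.
        printf("timer cancelled\n");
        return;
    }
    printf("Inside timer callback after the full delay\n");
}
```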

Linking against gRPC::grpc++ vs gRPC::grpc++_unsecure

Hi,

we have a project that already uses gRPC and we link against gRPC::grpc++. We now want to introduce asio-grpc, which links against gRPC::grpc++_unsecure when using the generated asio-grpcConfig.cmake. This leads to problems; apparently it is not possible to link against both targets in the same binary.

I found this commit: 2731e5b
So first of all the question would be: Why was this changed? Why is the default the _unsecure target?

We can fix our build by manually patching the asio-grpcConfig.cmake.in file and making sure that asio-grpc also links against gRPC::grpc++, but this is quite ugly and will break on any changes to that file.
Would it be possible to add a cmake config variable so that we can optionally link asio-grpc against gRPC::grpc++ instead of the default of gRPC::grpc++_unsecure?

Client-side request overloads for PrepareAsync

Due to potential race-conditions in agrpc::request(&Stub::Async... when switching to another thread after the request and interacting with the responder, it would be good to have overloads for the &Stub::PrepareAsync versions which avoid the problem. They also make it easier to mock those requests.

Targets v1.8.0.

Building with -DASIO_GRPC_BUILD_TESTS=ON fails

I have both Boost.Asio and standalone Asio installed. I am trying to build the tests and examples so that I can have working examples for my own code, but one particular test fails:

asio_grpc_add_test_util(asio-grpc-test-util-standalone-asio "STANDALONE_ASIO" "17")

[ 59%] Building CXX object test/utils/CMakeFiles/asio-grpc-test-util-standalone-asio.dir/cmake_pch.hxx.gch

In file included from /include/asio/spawn.hpp:902,

fatal error: boost/context/fiber.hpp: No such file or directory
40 | # include <boost/context/fiber.hpp>

My standalone asio installation was configured with: --with-boost=$BOOST_PREFIX, but I don't see boost includes for asio-grpc-test-util-standalone-asio.

How can I get around this problem?

Question about ServerSingleArgRequest

using ServerMultiArgRequest = void (RPC::*)(grpc::ServerContext*, Request*, Responder*, grpc::CompletionQueue*, grpc::ServerCompletionQueue*, void*);

using ServerSingleArgRequest = void (RPC::*)(grpc::ServerContext*, Responder*, grpc::CompletionQueue*, grpc::ServerCompletionQueue*, void*);

When comparing these two RPC signatures, I have no idea when the ServerSingleArgRequest case would happen. That would mean there is no request but there is a response. Could you provide a proto example of that?
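To illustrate the single-argument case: client-streaming (and bidirectional-streaming) RPCs have no single initial request message, so the generated Request… method takes only a responder, through which the server later reads the requests. A hedged proto sketch (Req/Resp are placeholder message names):

```proto
service Example {
  // Generates a two-argument RequestUnary(ServerContext*, Request*, ...).
  rpc Unary(Req) returns (Resp);

  // Generates a single-argument RequestClientStream(ServerContext*, ...):
  // there is no initial Req message; the server reads each Req through
  // the grpc::ServerAsyncReader<Resp, Req> responder instead.
  rpc ClientStream(stream Req) returns (Resp);
}
```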

What is the proper way to handle 2 different rpcs in the server example?

In hello-world-server-cpp20.cpp you show an example with just 1 rpc type, the HelloRequest.

What if I have multiple rpcs that I want to handle in my server? Lets say I want to add a "GoodbyeRequest"?

Would the proper way to do this be to spawn another coroutine, i.e. a second call to co_spawn() that looks very similar to the one handling HelloRequest, but handling a GoodbyeRequest instead? Basically replace all "hello" with "goodbye" (case-insensitive)?
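For what it's worth, that approach matches the examples: one request-handling coroutine per RPC method, spawned side by side on the same GrpcContext. A sketch (handle_say_hello and handle_say_goodbye are hypothetical coroutines shaped like the one in hello-world-server-cpp20.cpp):

```cpp
boost::asio::co_spawn(grpc_context, handle_say_hello(service), boost::asio::detached);
boost::asio::co_spawn(grpc_context, handle_say_goodbye(service), boost::asio::detached);
grpc_context.run();
```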
