zhiming99 / rpc-frmwrk

This is an asynchronous and event-driven RPC framework

License: GNU General Public License v3.0

C++ 82.67% C 0.16% Makefile 0.57% M4 0.27% Python 6.75% Shell 0.63% Lex 0.22% Yacc 0.50% Dockerfile 0.03% Roff 0.19% Java 1.86% SWIG 2.65% JavaScript 3.51% HTML 0.01%
rpc iot-gateway rpcfs rpc-framework

rpc-frmwrk's Introduction

[δΈ­ζ–‡]


This is an asynchronous, event-driven RPC implementation for embedded systems with a small footprint. It targets both IoT platforms and mainstream servers that need high throughput and high availability. It is easy to use, with a versatile skeleton generator that can instantly produce C++, Python, or Java skeletons. You are welcome to give it a try!

Concept

Here is an introduction to the concept of rpc-frmwrk.

Features

  1. Synchronous/asynchronous request handling
  2. Active/passive request canceling
  3. Server-push events
  4. Keep-alive for time-consuming requests
  5. Simultaneous object access over network and IPC
  6. Peer online/offline awareness
  7. Publishing multiple local/remote object services via a single network port
  8. Full-duplex streaming channels
  9. Both OpenSSL and GmSSL support
  10. WebSocket support
  11. Object access via multihop routing
  12. Authentication support with Kerberos 5
  13. Node redundancy/load balancing
  14. A skeleton generator for C++, Python, and Java
  15. A GUI config tool for rpcrouter
  16. rpcfs, a filesystem interface for rpc-frmwrk

Building rpc-frmwrk

Installation

  1. Run sudo make install from the root directory of the rpc-frmwrk source tree.
  2. Configure the runtime parameters for rpc-frmwrk as described on this page.
  3. Start the daemon process rpcrouter -dr 2 on the server side, and the daemon process rpcrouter -dr 1 on the client side. Now we are ready to run the helloworld program. For more information about rpcrouter, please follow this link.
  4. Smoke-test with HelloWorld: start hwsvrsmk, the helloworld server, on the server side, and start hwclismk on the client side.
  5. This wiki has more detailed information.
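
The router-and-smoketest sequence above can be sketched as two small shell functions, one per host. The binary names (rpcrouter, hwsvrsmk, hwclismk) and the -dr role flags come straight from the steps above; everything else is a hedged sketch, not an official launch script.

```shell
#!/bin/sh
# Sketch of the smoke test described above. -dr 2 starts the
# server-side router daemon, -dr 1 the client-side one.

server_side() {
    rpcrouter -dr 2 &   # router daemon on the server host
    hwsvrsmk            # helloworld server
}

client_side() {
    rpcrouter -dr 1 &   # router daemon on the client host
    hwclismk            # helloworld client
}
```

Run server_side on the server host first, then client_side on the client host.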

Development

rpc-frmwrk can generate skeleton systems for different system architectures.

  1. Micro-service RPC. rpc-frmwrk has an interface description language, ridl, to help you generate the skeleton code in seconds. Examples can be found here. The advantage is that you can deploy new services on the fly, as well as shut some of them down.
  2. Single-app RPC. ridlc can also generate skeleton code in the form of a classic client/server program. The advantage is much better performance.
  3. Programming with rpcfs. From a ridl file, ridlc can generate a pair of filesystems, one for the server and one for the client, and all the RPC traffic goes through file read/write and other file operations. Moreover, the rpcfs hosted by rpcrouter provides information for runtime monitoring and management.
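
As a concrete example of the micro-service path, skeleton generation can be sketched as below. The ridlc -bpO invocation is the one that appears in this page's asynctst reproduction steps (Python skeleton emitted into the current directory); treat the flag meanings as an assumption and consult ridlc's own help for other target languages.

```shell
#!/bin/sh
# Sketch: generate a Python skeleton from a ridl file, then run the
# generated server. Paths follow the asynctst example on this page.

generate_skeleton() {
    ridlc -bpO . ../../asynctst.ridl   # emit Python skeleton into .
}

run_server() {
    python3 mainsvr.py                 # generated server entry point
}
```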

Runtime Dependency

This project depends on the following 3rd-party packages at runtime:

  1. dbus-1.0 (dbus-devel)
  2. libjson-cpp (jsoncpp-devel)
  3. lz4 (lz4-devel)
  4. cppunit-1 (for the test cases; cppunit and cppunit-devel)
  5. openssl-1.1 for SSL communication (optional)
  6. MIT krb5 for authentication and access control (optional)
  7. C++11 is required; make sure GCC is 5.x or higher.
  8. Python 3.5+ for Python support (optional)
  9. Java OpenJDK 8 or higher for Java support (optional)
  10. FUSE-3 for rpcfs support (optional)
  11. GmSSL 3.0 (optional)
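
On a Fedora-style system, the package names given in parentheses above suggest an install command like the following. This is a hedged sketch: the names for the optional Python, Java, and FUSE packages (python3-devel, java-latest-openjdk-devel, fuse3-devel) are assumptions and may differ per distribution.

```shell
#!/bin/sh
# Sketch: install the dependencies listed above on a Fedora-style
# system; optional packages (SSL, krb5, Python, Java, FUSE) included.

install_deps() {
    sudo dnf install -y \
        dbus-devel jsoncpp-devel lz4-devel \
        cppunit cppunit-devel \
        openssl-devel krb5-devel \
        python3-devel java-latest-openjdk-devel fuse3-devel
}
```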

Todo

  1. Please refer to issues.

rpc-frmwrk's People

Contributors: datang99, zhiming99

rpc-frmwrk's Issues

Python test case `asynctst`: the proxy could sometimes time out waiting for a response during stress tests.

It turned out that dbus_message_get_interface could return null for a dbus message, while calling it from the GDB command line on the same message returns the expected value, so it seems to be an upstream issue.
To reproduce in a bash shell on a Debian/Fedora desktop:

  1. run pushd examples/python/asynctst
  2. run ridlc -bpO . ../../asynctst.ridl
  3. run python3 mainsvr.py
  4. in another bash shell, run for((j=0;j<5;j++));do for((i=0;i<500;i++));do python3 maincli.py & done; wait $(jobs -p); banner $j;done
    If there is a long pause with no activity for about two minutes, this bug has likely been triggered.
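
The one-liner in step 4 can be written out as a script for readability. This is a POSIX rendering of the same logic: five rounds of 500 concurrent maincli.py proxies, draining each round before the next; `banner` is replaced by `echo` here on the assumption that it is only a progress marker and may not be installed everywhere.

```shell
#!/bin/sh
# The stress loop from step 4, expanded: 5 rounds x 500 concurrent
# maincli.py proxies, waiting for each round to exit before the next.

run_stress() {
    j=0
    while [ "$j" -lt 5 ]; do
        i=0
        while [ "$i" -lt 500 ]; do
            python3 maincli.py &    # one proxy instance
            i=$((i + 1))
        done
        wait                        # block until this round's clients exit
        echo "round $j done"        # progress marker ('banner' in the original)
        j=$((j + 1))
    done
}
```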

Memory leaks in python stress test

The Python test case asynctst shows that mainsvr.py leaked some CTcpStreamPdo2 objects after several rounds of stress tests. Probably the CTcpStreamPdo2 was removed from the port stack without being stopped.

a rare segmentation fault on the server side

Under extremely high load, the server could run into a segmentation fault due to a mysterious release of the CIfRootTaskGroup from CRpcServices::OnPostStop of a CUnixSockStmProxyRelay object.

The auth-enabled server can be blocked by a non-auth connection.

This is a severe issue and needs to be fixed ASAP. The cause is that all login requests are processed sequentially, and a non-auth connection does not follow the handshake protocol of an auth-enabled connection; only when the offending login times out does the server return to normal.

EchoMany response lost

  • There is a slim chance the test script fusetest.sh fails while running the workflow c-cpp.yml or c-cpp-2.yml. The investigation shows that when the load is very high (200 proxy processes on one host), the first read of the response (EchoMany's response) from a proxy instance can hang in the read syscall, although rpcfs has already sent back the data block via fuse_reply_data.

known bugs 2

  1. The dbus proxypdolpbk could still be alive after the dbusbus port has stopped, which results in a segmentation fault on the client side. --- fixed
  2. The KeepAlive task could be leaked in CRpcReqForwarderProxy due to the rewritten RunManagedTask. --- fixed

Further reducing the failure number of FetchData request under high system load

The latest tests indicate that dbus can become a bottleneck, unable to grab enough CPU time when the system is very busy, because dbus message processing uses just one thread driven by a poll loop, which is much weaker than stream message processing with its many threads and simple message queues. So dbus requests such as FetchData or Login could be starved into timing out.

known bugs

  1. Leaked many CRouterRemoteMatch objects on sudden network disruptions. --- fixed
  2. Leaked many callback objects of OnRWComplete when the server pauses for several minutes. --- fixed

server crashed in stress test with `asynctst`

To reproduce in a bash shell on a debian/fedora desktop:

  1. run pushd examples/python/asynctst
  2. run ridlc -bpO . ../../asynctst.ridl
  3. run python3 mainsvr.py
  4. in another bash shell, run for((j=0;j<5;j++));do for((i=0;i<500;i++));do python3 maincli.py & done; wait $(jobs -p); banner $j;done
  5. run pkill -f maincli.py

With some luck, the server crashes.

unexpected disconnection from `rpcrouter` during handshake in the stress test of `asynctst`

To reproduce in a bash shell on an E5-2680v4 server:

  1. run pushd examples/python/asynctst
  2. run ridlc -bpO . ../../asynctst.ridl
  3. run python3 mainsvr.py
  4. in another bash shell, run for((j=0;j<5;j++));do for((i=0;i<500;i++));do python3 maincli.py & done; wait $(jobs -p); banner $j;done
  5. run pkill -f maincli.py

In Wireshark, there are some sessions with fewer than ten packets, which indicates an unexpected disconnection from rpcrouter.

rpcfg.py cannot configure the web server successfully.

The steps to reproduce this issue:

  1. in a bash shell, run /usr/local/bin/rpcf/rpcfg.py
  2. switch to the security tab
  3. press the config webserver button.
  4. start a testcase, and the client is rejected immediately.

Unsafe shutdown of listening socket

It is unsafe to shut down the listening socket from a thread other than the sock-loop thread, which creates a potential race condition on the CRpcListeningSock between the task thread and the sock-loop thread.

Unexpected disconnections from the proxy in `asynctst` stress test

The issue can be reproduced with the following steps:

  1. start Wireshark
  2. start the server with python3 mainsvr.py
  3. type the command for((j=0;j<5;j++));do for((i=0;i<500;i++));do python3 maincli.py & done; wait $(jobs -p); banner $j;done in another shell.
  4. when the command completes, use Wireshark's Conversations view to find the short conversations.
