
realm's People

Contributors

billzhong, howardjohn, i18n-now, meteorsliu, noahziheng, sabify, soniccube, zephyrchien, zhboner


realm's Issues

f

f

Port-range forwarding spikes CPU when more than 128 ports are forwarded

OS: Debian 9.13 (reproducible on a fresh environment)
Config file: config.json

{
    "listening_addresses": ["0.0.0.0"],
    "listening_ports": ["10000-10127"],
    "remote_addresses": ["1.2.3.4"],
    "remote_ports": ["10000-10127"]
}

When the port range covers 1-127 ports, there is no CPU load.

[Screenshot: 2021-01-13 03:31]

When the port range covers 128 ports or more, a single CPU core runs at full load.

[Screenshot: 2021-01-13 03:32]

Abnormal exit while running

thread 'tokio-runtime-worker' panicked at 'called Result::unwrap() on an Err value: AddrParseError(())', src/relay.rs:73:22
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

Thanks for your hard work.

Please publish a Windows exe

Could you publish a Windows .exe build?
Also, could TCP and UDP forwarding be separated, with an option to choose between them?

The Go project I used before consumed too much CPU and memory.

Feature suggestion

Will encrypted forwarding be considered later, similar to what gost provides?

no errors reported when the config file is incorrect

An example of an incorrect config file is listed below; note the "," instead of a ";" between the server cert and key.
{
    "log": {
        "level": "info",
        "output": "/var/log/realm-tls.log"
    },
    "network": {
        "zero_copy": true
    },
    "endpoints": [
        {
            "listen": "0.0.0.0:443",
            "listen_transport": "tls;cert=cert.cer,key=cert.key",
            "remote": "127.0.0.1:444"
        }
    ]
}

the correct config file should be:

{
    "log": {
        "level": "info",
        "output": "/var/log/realm-tls.log"
    },
    "network": {
        "zero_copy": true
    },
    "endpoints": [
        {
            "listen": "0.0.0.0:443",
            "listen_transport": "tls;cert=cert.cer;key=cert.key",
            "remote": "127.0.0.1:444"
        }
    ]
}
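
For what it's worth, a minimal sketch (hypothetical, not realm's actual parser) of how the ';'-separated transport options could be validated strictly, so that a value still containing a ',' such as cert=cert.cer,key=cert.key is rejected loudly instead of accepted silently:

    // Hypothetical strict parser for a "tls;cert=...;key=..." spec string.
    fn parse_transport(spec: &str) -> Result<Vec<(String, String)>, String> {
        let mut opts = Vec::new();
        for token in spec.split(';').filter(|t| !t.is_empty()) {
            // A token is either a bare flag ("tls") or a key=value pair.
            let (key, value) = token.split_once('=').unwrap_or((token, ""));
            if value.contains(',') {
                return Err(format!("suspicious value for '{}': '{}'", key, value));
            }
            opts.push((key.to_string(), value.to_string()));
        }
        Ok(opts)
    }

With the input "tls;cert=cert.cer,key=cert.key" this returns an error instead of treating "cert.cer,key=cert.key" as the certificate path.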

an attempt to impl zero-copy

I think it's difficult to be compatible with tokio's AsyncRead/AsyncWrite traits, because AsyncRead/AsyncWrite works with a &[u8] buffer, whereas splice works through a pipe.

So I've tried another approach. We can still use TcpStream and make use of its inner event loop. Instead of calling read/write, we first get the inner fd via as_raw_fd() (TcpStream implements AsRawFd), then call libc::splice on that fd directly.

The problem is that TcpStream::ready does not clear the readiness on the fd, and there is no public fn to do this, so I have to invoke TcpStream::try_read/try_write with an empty buffer to consume the read/write event.

Here's the demo
https://github.com/zephyrchien/realm/blob/bfed56a66d7c91126a41a05ad2bac7b92c728f61/src/relay.rs#L134-L215
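
For reference, a rough sketch of the read half of that idea on Linux (simplified from the linked demo; assumes the libc crate and a pipe created beforehand with libc::pipe, and omits error handling and the pipe-to-destination splice):

    use std::os::unix::io::AsRawFd;
    use tokio::io::Interest;
    use tokio::net::TcpStream;

    async fn splice_from(src: &TcpStream, pipe_wr: i32) -> std::io::Result<isize> {
        // Wait for readiness through tokio's event loop...
        src.ready(Interest::READABLE).await?;
        // ...then consume the readiness event, since ready() alone does not clear it.
        let _ = src.try_read(&mut []);

        // Move bytes from the socket into the pipe without copying through user space.
        let n = unsafe {
            libc::splice(
                src.as_raw_fd(),
                std::ptr::null_mut(),
                pipe_wr,
                std::ptr::null_mut(),
                65536,
                libc::SPLICE_F_MOVE | libc::SPLICE_F_NONBLOCK,
            )
        };
        Ok(n)
    }

A second splice from the pipe's read end into the destination socket's fd, driven by WRITABLE readiness, would complete the transfer.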

Cannot add local hosts entries

Describe the bug
I set the IP for the domain in the local hosts file, but the domain cannot be resolved and it fails immediately.

Screenshots
(screenshot attached)

Realm dies on its own when there are no connections

Describe the bug
When running as a daemon, if there are no connections to the relay server for a long time (5-15 min), the service becomes inactive and can only be brought back manually with:
systemctl restart realm

OS:
CentOS 7 and Ubuntu 18.04 LTS

propose to set tcp_nodelay flag

From https://www.extrahop.com/company/blog/2016/tcp-nodelay-nagle-quickack-best-practices/

Enabling the TCP_NODELAY option turns Nagle's algorithm off. In the case of interactive applications or chatty protocols with a lot of handshakes such as SSL, Citrix and Telnet, Nagle's algorithm can cause a drop in performance, whereas enabling TCP_NODELAY can improve the performance.

In any request-response application protocols where request data can be larger than a packet, this can artificially impose a few hundred milliseconds latency between the requester and the responder, even if the requester has properly buffered the request data. Nagle's algorithm should be disabled by enabling TCP_NODELAY by the requester in this case. If the response data can be larger than a packet, the responder should also disable Nagle's algorithm by enabling TCP_NODELAY so the requester can promptly receive the whole response.

Considering that we perform a lot of TLS/HTTP handshakes over Realm, I think it's a good trade-off.

Tokio provides a convenient API: tokio::net::TcpStream::set_nodelay. To set the TCP_NODELAY socket option, we just need to invoke that fn right after connect() or accept().
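
For illustration, a minimal sketch of where the call could go (hypothetical names, not realm's actual code):

    use tokio::net::{TcpListener, TcpStream};

    async fn open_relay_pair(listener: &TcpListener, remote: &str) -> std::io::Result<(TcpStream, TcpStream)> {
        // Disable Nagle's algorithm on both sides right after the sockets exist.
        let (inbound, _) = listener.accept().await?;
        inbound.set_nodelay(true)?;

        let outbound = TcpStream::connect(remote).await?;
        outbound.set_nodelay(true)?;

        Ok((inbound, outbound))
    }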

Feature suggestions

1. Add a config file to support multiple ports.

2. Add a daemon mode to keep the process from dying.

No response after starting on 64-bit Debian 10

On 64-bit Debian 10 I downloaded the latest release and made it executable.
After configuring port forwarding, nothing happens: the realm process shows 0 memory and 0 CPU usage, and a port check shows the port is not open.
Forwarding actually fails; the program never really started.

Multi-IP, multi-port forwarding config fails

I want local port 443 to forward to remote 1.1.1.1:443,
and local port 1443 to forward to remote 2.2.2.2:1443.

I adapted the example given on the blog, but with these two forwards configured, only one of them works.
{
    "listening_addresses": ["0.0.0.0"],
    "listening_ports": ["443", "1443"],
    "remote_addresses": ["1.1.1.1", "2.2.2.2"],
    "remote_ports": ["443", "1443"]
}

Could you give a working config that achieves the above?

Realm crashed due to a panic after a few hours

Describe the bug
Realm crashed due to a panic after a few hours, with the message below:

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 24, kind: Other, message: "Too many open files" }', src/relay/udp.rs:30:88
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I'm in the process of verifying to get a backtrace...

To Reproduce

  1. Run ./realm -l 0.0.0.0:50000 -r x.x.x.x:50000 (udp)
  2. Get error

Expected behavior
It should not panic.

Screenshots
None

Environment

  • Kernel: Linux version 5.13.12-200.fc34.x86_64 ([email protected]) (gcc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1), GNU ld version 2.35.2-4.fc34) #1 SMP Wed Aug 18 13:27:18 UTC 2021
  • OS: Fedora release 34 (Thirty Four)
  • rustc: rustc 1.54.0 (Fedora 1.54.0-1.fc34)
  • Cargo: cargo 1.54.0


Support TFO (TCP Fast Open)

You could use tokio-tfo, the library open-sourced by the ss-rust author; it supports Linux/Windows/BSD and is the same implementation ss-rust uses.
I suggest adding this as a feature, disabled by default. Users who need it can build with --features tfo themselves.
#13

Can this format be supported?

{
    "listening_addresses": ["0.0.0.0"],        # listening IP
    "listening_ports": ["20000"],              # listening port
    "remote_addresses": ["azure.alibaba.com"], # remote address
    "remote_ports": ["20002"]                  # remote port
}

{
    "listening_addresses": ["0.0.0.0"],          # listening IP
    "listening_ports": ["20002-30000"],          # listening port range
    "remote_addresses": ["azure-a.alibaba.com"], # remote address
    "remote_ports": ["20002-30000"]              # remote port range
}

Can a config file like this be supported? The first entry forwards a single port, the second a port range.

Errors after forwarding for a long time

The error message is:
TCP forward error <domain>:<port>, No file descriptors available (os error 24)
(screenshot attached)

UDP forwarding seems broken: NAT errors

Describe the bug
After relaying, TCP traffic is fine, but UDP seems broken: when playing games through SSR-like programs over the relay, a NAT error is shown.

Hope this can be fixed, thanks!

Is the VIRT allocation too high?

Testing forwarding of the 10000-20000 port range on a VM fails with memory allocation of 24 bytes failed.
top shows VIRT at over 30 GB, although the actual memory in use is not that much.
The VM has 4 GB RAM + 3 GB swap; I don't know what to change to lower the VIRT allocation, and as it stands it cannot run properly.

Problem with UDP

As has been mentioned in #21, sometimes UDP forwarding does not work properly.

realm/src/relay.rs

Lines 115 to 131 in 3243f54

match from != remote_socket {
    true => {
        // forward
        sender_vec.push(from);
        packet_sender
            .send((buf, size, remote_socket.clone()))
            .unwrap();
    }
    false => {
        // backward
        if sender_vec.len() < 1 {
            continue;
        }
        let client_socket = sender_vec.remove(0);
        packet_sender.send((buf, size, client_socket)).unwrap();
    }
}

When sending a UDP packet, we need to specify its destination. According to L115-L131, sender_vec stores the SockAddr from each incoming packet (usually from a client) one by one, then consumes them when responses arrive.

However, this only works under ideal conditions: a single client sending, with exactly as many response packets as requests. Otherwise, the program drops some of the response packets, sends them to the wrong peer, or leaves stale SockAddrs behind in sender_vec...

Instead of counting the incoming SockAddrs one by one, I think we should record the associated pairs of incoming SockAddr and remote SockAddr for a period of time. To deal with multiple clients, maybe we need to allocate a unique local SockAddr for each incoming SockAddr, and record that as well. In the end, the UDP part would work somewhat like a NAT device.
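
A minimal sketch of that NAT-like approach with tokio's UdpSocket (hypothetical names; timeouts and session eviction are omitted, so the map grows unbounded):

    use std::collections::HashMap;
    use std::net::SocketAddr;
    use std::sync::Arc;
    use tokio::net::UdpSocket;

    async fn relay_udp(listen: SocketAddr, remote: SocketAddr) -> std::io::Result<()> {
        let inbound = Arc::new(UdpSocket::bind(listen).await?);
        // One upstream socket per client, so replies always reach the right peer.
        let mut sessions: HashMap<SocketAddr, Arc<UdpSocket>> = HashMap::new();
        let mut buf = vec![0u8; 2048];

        loop {
            let (n, client) = inbound.recv_from(&mut buf).await?;
            let upstream = match sessions.get(&client) {
                Some(s) => s.clone(),
                None => {
                    // Allocate a unique local socket for this client.
                    let s = Arc::new(UdpSocket::bind("0.0.0.0:0").await?);
                    sessions.insert(client, s.clone());
                    // Copy replies from the remote back to this client.
                    let (inbound, upstream) = (inbound.clone(), s.clone());
                    tokio::spawn(async move {
                        let mut buf = vec![0u8; 2048];
                        while let Ok((n, _)) = upstream.recv_from(&mut buf).await {
                            let _ = inbound.send_to(&buf[..n], client).await;
                        }
                    });
                    s
                }
            };
            upstream.send_to(&buf[..n], remote).await?;
        }
    }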

Panic at src\relay.rs:100:32

Describe the bug

thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 10022, kind: InvalidInput, message: "提供了一个无效的参数。" }', src\relay.rs:100:32

When this bug is triggered, realm cannot relay UDP traffic. (The OS error message translates to "An invalid argument was supplied.")
Screenshots
(screenshot attached)

Desktop :

  • OS: Windows Server 2016
  • Version: The latest

Does the config file support a log section?

Realm fails to start as soon as this section is added to the config file:
[log]
level = "warn"
output = "/var/log/realm.log"

The service status shows:
● realm.service - realm
Loaded: loaded (/etc/systemd/system/realm.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: core-dump) since Mon 2022-04-25 15:39:00 CST; 2s ago
Process: 9710 ExecStart=/etc/realm/realm -c /etc/realm/config.toml (code=dumped, signal=ABRT)
Main PID: 9710 (code=dumped, signal=ABRT)

Apr 25 15:39:00 OLink systemd[1]: realm.service: Main process exited, code=dumped, status=6/ABRT
Apr 25 15:39:00 OLink systemd[1]: realm.service: Failed with result 'core-dump'.

Error at runtime

root@ubuntu:~# /usr/bin/realm -l 127.0.0.1:7890 -r x.x.x.x:443
thread 'tokio-runtime-worker' panicked at 'called Result::unwrap() on an Err value: Os { code: 98, kind: AddrInUse, message: "Address in use" }', src/relay.rs:51:69
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

How should this be resolved?

What about spawning fewer threads?

Threads are expensive. In my opinion, spawning too many threads should be avoided.

In this program, each laddr/raddr pair creates 4 threads (1 tcp, 2 udp, 1 dns) plus 2 extra threads. When more pairs are added, the resource consumption and context-switch overhead become unacceptable (#15, #20).

And in most situations, network I/O is the bottleneck. I'm afraid multi-threading would not improve performance; instead, the program could be slowed down by context switching and mutex contention.

I have replaced thread::spawn with tokio::spawn and used trust_dns_resolver::TokioAsyncResolver as the async DNS resolver. Would that be OK?
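
A rough sketch of what that looks like for the TCP part (hypothetical names, not the actual patch): every listen/remote pair becomes a lightweight tokio task on one shared multi-threaded runtime, and each accepted connection is yet another task instead of a thread:

    use tokio::net::{TcpListener, TcpStream};

    async fn run_endpoints(endpoints: Vec<(String, String)>) {
        for (listen, remote) in endpoints {
            tokio::spawn(async move {
                let listener = match TcpListener::bind(&listen).await {
                    Ok(l) => l,
                    Err(e) => {
                        eprintln!("bind {} failed: {}", listen, e);
                        return;
                    }
                };
                loop {
                    if let Ok((mut inbound, _)) = listener.accept().await {
                        let remote = remote.clone();
                        // One task per connection as well.
                        tokio::spawn(async move {
                            if let Ok(mut outbound) = TcpStream::connect(&remote).await {
                                let _ = tokio::io::copy_bidirectional(&mut inbound, &mut outbound).await;
                            }
                        });
                    }
                }
            });
        }
    }

The caller just needs to keep the runtime alive (for example, await a shutdown signal) after calling run_endpoints.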

benchmark: realm vs gost

NO WARRANTY

Data are roughly collected. You should never rely on these results for any serious purpose.

Tool

Realm:

realm -v
Realm 2.0.0 [udp][zero-copy][trust-dns][proxy-protocol][multi-thread]

Gost:

gost -V
gost 3.0.0-beta.2 (go1.18.1 linux/amd64)

Environment

Run these tools in a container:

docker run -it --cpus=0.5 --name=relay bench /bin/bash

We simply limit CPU usage to make sure the network does not become the bottleneck during the benchmark. There is no extra restriction on memory usage.

Command

A(host) => B(docker) => C(docker) => D(host)

A:

iperf3 -c 172.17.0.2 -p 8080 -t 60 -P [1,10,30,50,100]

D:

iperf3 -s -p 5201

TCP

Realm:

realm -l 0.0.0.0:8080 -r 172.17.0.1:5201 -z

Gost:

gost -L tcp://:8080/172.17.0.1:5201 2>/dev/null

WS

Realm:

realm -l 0.0.0.0:8080 -r 172.17.0.3:8080 -b 'ws;host=abc;path=/'
realm -l 0.0.0.0:8080 -r 172.17.0.1:5201 -a 'ws;host=abc;path=/'

Gost:

gost -L tcp://:8080 -F relay+ws://172.17.0.3:8080 2>/dev/null
gost -L relay+ws://:8080/172.17.0.1:5201 2>/dev/null

WSS

Realm:

realm -l 0.0.0.0:8080 -r 172.17.0.3:8080 -b 'ws;host=abc;path=/;tls;insecure;sni=abc'
realm -l 0.0.0.0:8080 -r 172.17.0.1:5201 -a 'ws;host=abc;path=/;tls;servername=abc'

Gost:

gost -L tcp://:8080 -F relay+wss://172.17.0.3:8080 2>/dev/null
gost -L relay+wss://:8080/172.17.0.1:5201 2>/dev/null

Result

TCP:

[chart: TCP bandwidth]
[chart: TCP memory]

WS:

[chart: WS bandwidth]
[chart: WS memory]

WSS:

[chart: WSS bandwidth]
[chart: WSS memory]

UDP problem with realm v2.2.2

System info:
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Linux 5.15.32-xanmod1 x86_64 GNU/Linux

Realm 2.2.2 [proxy][transport][multi-thread]
[Config file contents]:

[log]
level = "warn"

[network]
use_udp = true
udp_timeout = 30

[[endpoints]]
listen = "0.0.0.0:8888"
remote = "6.7.8.9:3333"
remote_transport = "ws;host=abc.com;path=/wesboot;tls;sni=abc.com"

[[endpoints]]
listen = "0.0.0.0:2222"
remote = "1.2.3.4:5555"

After starting, the following log output appears:

log: level=warn, output=stdout
dns: mode=ipv4_and_ipv6, protocol=tcp+udp, min-ttl=0, max-ttl=86400, cache-size=32, servers=system
inited: x.x.x.x:xxxx -> x.x.x.x:xxxx; options: udp-forward=on, tcp-fast-open=off, tcp-zero-copy=off; send-proxy=off, send-proxy-version=2, accept-proxy=off, accept-proxy-timeout=5s; tcp-timeout=300s, udp-timeout=30s; transport=kaminari
inited: x.x.x.x:xxxx -> x.x.x.x:xxxx; options: udp-forward=on, tcp-fast-open=off, tcp-zero-copy=off; send-proxy=off, send-proxy-version=2, accept-proxy=off, accept-proxy-timeout=5s; tcp-timeout=300s, udp-timeout=30s; transport=none
thread 'tokio-runtime-worker' panicked at '[udp]unable to bind x.x.x.x:xxxx: Address already in use (os error 98)', /project/src/relay/mod.rs:81:29
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted

For the "[udp]unable to bind x.x.x.x:xxxx: Address already in use" error: the process that was using that port had already been killed.

increase buffer size

tokio::io::copy internally uses a hard-coded buffer (only 2k), which is too small. I would like to extend it to at least 4k (or better, 16k), so that many (or at least some) syscalls could be saved.

So we need to write our own copy fn. It is not complex, something like:

use tokio::io::{AsyncReadExt, AsyncWriteExt};

// 4 KiB buffer (0x1000); use 0x4000 for 16 KiB.
let mut buf = vec![0u8; 0x1000];
loop {
    let n = r.read(&mut buf).await?;
    if n == 0 { break; }
    // write_all handles short writes, unlike a bare write().
    w.write_all(&buf[..n]).await?;
}

Usage question

Is it suitable for long-running, high-traffic forwarding scenarios? How stable is it?

Docker image error: Error response from daemon: Head https://ghcr.io/v2/zhboner/realm/manifests/latest: unauthorized.

Hi, does the realm Docker image perhaps lack public read permission? I ran the README example directly:

docker run -d -p 9000:9000 ghcr.io/zhboner/realm:latest -l 0.0.0.0:9000 -r 192.168.233.2:9000

and got the following error:

Unable to find image 'ghcr.io/zhboner/realm:latest' locally
docker: Error response from daemon: Head https://ghcr.io/v2/zhboner/realm/manifests/latest: unauthorized.
See 'docker run --help'.

A Google search suggests a personal access token (PAT) is required, but after creating one and pulling the image again, it still fails:

Error response from daemon: unauthorized

Feature suggestion

I suggest adding a config file (haproxy could be a reference); with a config file, later migration becomes much more convenient. I also hope TLS/WS transports will be added later: client (encrypted data) → server decrypts → shadowsocksR.
